3,900
<ASSISTANT_TASK:>
Python Code:
# Imports assumed by this notebook (not present in the flattened source):
import numpy as np
from bqplot import (ColorScale, OrdinalScale, LinearScale,
                    GridHeatMap, Axis, Figure)

np.random.seed(0)
data = np.random.randn(10, 10)

col_sc = ColorScale()
grid_map = GridHeatMap(color=data, scales={'color': col_sc})
Figure(marks=[grid_map], padding_y=0.0)

x_sc, y_sc, col_sc = OrdinalScale(), OrdinalScale(reverse=True), ColorScale()
grid_map = GridHeatMap(color=data, scales={'column': x_sc, 'row': y_sc, 'color': col_sc})
ax_x, ax_y = Axis(scale=x_sc), Axis(scale=y_sc, orientation='vertical')
Figure(marks=[grid_map], axes=[ax_x, ax_y], padding_y=0.0)

x_sc, y_sc, col_sc = LinearScale(), LinearScale(reverse=True), ColorScale()
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical')

## The data along the rows is not uniform. Hence the 5th row (from the top)
## of the map is twice the height of the remaining rows.
row_data = np.arange(10)
row_data[5:] = np.arange(6, 11)
column_data = np.arange(10, 20)

grid_map = GridHeatMap(row=row_data, column=column_data, color=data,
                       scales={'row': y_sc, 'column': x_sc, 'color': col_sc})
Figure(marks=[grid_map], padding_y=0.0, axes=[ax_x, ax_y])

print(row_data.shape)
print(column_data.shape)
print(data.shape)

x_sc, y_sc, col_sc = LinearScale(), LinearScale(reverse=True), ColorScale()
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical')
row_data = np.arange(11)
column_data = np.arange(10, 21)
grid_map = GridHeatMap(row=row_data, column=column_data, color=data,
                       scales={'row': y_sc, 'column': x_sc, 'color': col_sc})
Figure(marks=[grid_map], padding_y=0.0, axes=[ax_x, ax_y])

x_sc, y_sc, col_sc = LinearScale(), LinearScale(reverse=True, max=15), ColorScale()
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical')
row_data = np.arange(10)
column_data = np.arange(10, 20)
grid_map = GridHeatMap(row=row_data, column=column_data, color=data,
                       scales={'row': y_sc, 'column': x_sc, 'color': col_sc})
Figure(marks=[grid_map], padding_y=0.0, axes=[ax_x, ax_y])

x_sc, y_sc, col_sc = LinearScale(), LinearScale(reverse=True, min=-5, max=15), ColorScale()
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical')
row_data = np.arange(10)
column_data = np.arange(10, 20)
grid_map = GridHeatMap(row=row_data, column=column_data, color=data,
                       scales={'row': y_sc, 'column': x_sc, 'color': col_sc},
                       row_align='end')
Figure(marks=[grid_map], padding_y=0.0, axes=[ax_x, ax_y])

x_sc, y_sc, col_sc = LinearScale(), LinearScale(reverse=True, min=-5, max=15), ColorScale()
ax_x = Axis(scale=x_sc)
ax_y = Axis(scale=y_sc, orientation='vertical')
row_data = np.arange(9)
column_data = np.arange(10, 20)
grid_map = GridHeatMap(row=row_data, column=column_data, color=data,
                       scales={'row': y_sc, 'column': x_sc, 'color': col_sc},
                       row_align='end')
Figure(marks=[grid_map], padding_y=0.0, axes=[ax_x, ax_y])

col_sc = ColorScale()
grid_map = GridHeatMap(color=data, scales={'color': col_sc}, opacity=0.3, stroke='white')
Figure(marks=[grid_map], padding_y=0.0)

data = np.random.randn(10, 10)
col_sc = ColorScale()
grid_map = GridHeatMap(color=data, scales={'color': col_sc},
                       interactions={'click': 'select'},
                       selected_style={'opacity': '1.0'},
                       unselected_style={'opacity': 0.4})
Figure(marks=[grid_map], padding_y=0.0)

grid_map.selected
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic Heat map
Step2: Heat map with axes
Step3: Non-Uniform Heat map
Step4: Alignment of the data with respect to the grid
Step5: By default, for N points along any dimension, data aligns to the start of the rectangles in the grid.
Step6: By changing the row_align and column_align properties, the grid can extend in the opposite direction
Step7: For N+1 points in any direction, the grid extends infinitely in both directions
Step8: Changing opacity and stroke
Step9: Selections on the grid map
Step10: The selected trait of a GridHeatMap contains a list of lists, with each sub-list containing the row and column index of a selected cell.
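
The Step10 note above can be exercised directly: bqplot marks are traitlets objects, so a callback can watch the `selected` trait of the `grid_map` defined above. A minimal sketch (the handler name is illustrative):

def on_select(change):
    # change['new'] holds the list of [row, column] pairs of selected cells
    print(change['new'])

grid_map.observe(on_select, 'selected')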
3,901
<ASSISTANT_TASK:>
Python Code:
import random    # needed for random.randint below (not present in the flattened source)
# `ranumlis` is assumed to be a list of guesses defined in an earlier cell.

#for ronum in ranumlis:
#    print ronum

randict = dict()
othgues = []
othlow = 0
othhigh = 9
for ranez in range(10):
    randxz = random.randint(othlow, othhigh)
    othgues.append(randxz)
    othlow = (othlow + 10)
    othhigh = (othhigh + 10)

#print othgues

tenlis = ['zero', 'ten', 'twenty', 'thirty', 'fourty', 'fifty',
          'sixty', 'seventy', 'eighty', 'ninety']

#for telis in tenlis:
#    for diez in dieci:
#        print telis

#randict

for ronum in ranumlis:
    #print ronum
    if ronum in othgues:
        print(str(ronum) + ' You Win!')
    else:
        print(str(ronum) + ' You Lose!')

#dieci = dict()
#for ranz in range(10):
#    #print str(ranz) + str(1)
#    dieci.update({str(ranz) + str(1): str(ranz)})
#    for numz in range(10):
#        #print str(ranz) + str(numz)
#        print numz

#print zetoo

#for diez in dieci:
#    print diez

#for sinum in ranumlis:
#    print str(sinum) + (str('\n'))
#if str(sinum) in othhigh:
#    print 'Win'

#import os
#os.system('sudo adduser joemanz --disabled-login --quiet -D')
#uslis = os.listdir('/home/wcmckee/signinlca/usernames/')
#print ('User List: ')
#for usl in uslis:
#    print usl
#    os.system('sudo adduser ' + usl + ' ' + '--disabled-login --quiet')
#    os.system('sudo mv /home/wcmckee/signinlca/usernames/' + usl + ' ' + '/home/' + usl + ' ')

#print dieci
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Makes dict with keys pointing to the 10s numbers.
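
The commented-out `dieci` block above appears to be sketching the dict the description asks for, keyed by decade. A hedged reconstruction of that idea using the `tenlis` labels from the cell (the name `tens` is illustrative):

tens = {label: list(range(i * 10, i * 10 + 10))
        for i, label in enumerate(tenlis)}
tens['twenty']    # -> [20, 21, 22, 23, 24, 25, 26, 27, 28, 29]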
3,902
<ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())

# For plotting graphs
from pylab import *
%matplotlib inline

import numpy as np
import pandas as pd
import statsmodels as st
import sys
import warnings

if not sys.warnoptions:
    warnings.simplefilter("ignore")
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
warnings.filterwarnings("ignore")

import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std

np.random.seed(9876789)

np.__version__
pd.__version__
st.__version__

# Creating artificial data
nsample = 100
x = np.linspace(0, 10, 100)
X = np.column_stack((x, x**2))
beta = np.array([1, 0.1, 10])
e = np.random.normal(size=nsample)

X = sm.add_constant(X)
y = np.dot(X, beta) + e

model = sm.OLS(y, X)
results = model.fit()
print(results.summary())

print('Parameters: ', results.params)
print('R2: ', results.rsquared)

nsample = 50
sig = 0.5
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, np.sin(x), (x-5)**2, np.ones(nsample)))
beta = [0.5, 0.5, -0.02, 5.]

y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)

res = sm.OLS(y, X).fit()
print(res.summary())

print('Parameters: ', res.params)
print('Standard errors: ', res.bse)
print('Predicted values: ', res.predict())

prstd, iv_l, iv_u = wls_prediction_std(res)

fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="data")
ax.plot(x, y_true, 'b-', label="True")
ax.plot(x, res.fittedvalues, 'r--.', label="OLS")
ax.plot(x, iv_u, 'r--')
ax.plot(x, iv_l, 'r--')
ax.legend(loc='best')

from statsmodels.tsa.arima_process import arma_generate_sample

# Generating data
np.random.seed(12345)
arparams = np.array([.75, -.25])
maparams = np.array([.65, .35])

# Parameters
arparams = np.r_[1, -arparams]
maparams = np.r_[1, maparams]    # fixed: was assigned to `maparam`, leaving the lag-0 term unused
nobs = 250
y = arma_generate_sample(arparams, maparams, nobs)

dates = sm.tsa.datetools.dates_from_range('1980m1', length=nobs)
y = pd.Series(y, index=dates)
arma_mod = sm.tsa.ARMA(y, order=(2,2))
arma_res = arma_mod.fit(trend='nc', disp=-1)
print(arma_res.summary())
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Statsmodels
Step2: Time-Series Analysis
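
The final cell above relies on `sm.tsa.ARMA`, which was removed in recent statsmodels releases (0.13+). A hedged equivalent for newer versions uses the ARIMA class with differencing order 0 and no trend term (mirroring `trend='nc'`), reusing the series `y` built above:

from statsmodels.tsa.arima.model import ARIMA

arma_mod = ARIMA(y, order=(2, 0, 2), trend='n')
arma_res = arma_mod.fit()
print(arma_res.summary())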
3,903
<ASSISTANT_TASK:>
Python Code:
import tensorflow as tf

lengths = [4, 3, 5, 2]

def g(lengths):
    # Column vector of sequence lengths, shape (4, 1)
    lengths_transposed = tf.expand_dims(lengths, 1)
    # Row vector of positions 0..7, shape (1, 8); renamed from `range`
    # to avoid shadowing the built-in
    positions = tf.range(0, 8, 1)
    range_row = tf.expand_dims(positions, 0)
    # Broadcast comparison: True where position < sequence length
    mask = tf.less(range_row, lengths_transposed)
    # 1.0 inside each sequence, 0.0 in the padding
    result = tf.where(mask, tf.ones([4, 8]), tf.zeros([4, 8]))
    return result

result = g(lengths.copy())
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
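
For reference, TensorFlow ships a one-call equivalent of the hand-rolled mask above; a sketch assuming the same `lengths` and a maximum length of 8:

import tensorflow as tf

lengths = [4, 3, 5, 2]
mask = tf.sequence_mask(lengths, maxlen=8, dtype=tf.float32)  # shape (4, 8): 1.0 inside each sequence, 0.0 in the padding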
3,904
<ASSISTANT_TASK:>
Python Code:
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Update the following variables for your project
PROJECT_ID = '<project-id>'
VERSION_DIR = 'gs://<bucket-name>/<folder-name>/'
MODEL_NAME = '<model-name>'
VERSION_NAME = '<version-name>'

import argparse
import pandas as pd
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor
import urllib.request

try:
    from sklearn.preprocessing import Imputer
except ImportError:
    from sklearn.impute import SimpleImputer as Imputer

import logging
logging.getLogger().setLevel(logging.INFO)

TRAINING_URL = "https://raw.githubusercontent.com/kubeflow/examples/master/xgboost_ames_housing/ames_dataset/train.csv"
TRAINING_FILE = "train.csv"
ESTIMATORS = 1000
LEARNING_RATE = 0.1
TEST_FRACTION_SIZE = 0.25
EARLY_STOPPING_ROUNDS = 50

def run_training_and_eval():
    (train_X, train_y), (test_X, test_y) = read_input()
    model = train_model(train_X, train_y, test_X, test_y,
                        ESTIMATORS, LEARNING_RATE)
    eval_model(model, test_X, test_y)

def download(url, file_name):
    with urllib.request.urlopen(url) as response, open(file_name, "wb") as file:
        file.write(response.read())

def read_input(test_size=TEST_FRACTION_SIZE):
    """Read input data and split it into train and test."""
    download(TRAINING_URL, TRAINING_FILE)
    data = pd.read_csv(TRAINING_FILE)
    data.dropna(axis=0, subset=['SalePrice'], inplace=True)

    y = data.SalePrice
    X = data.drop(['SalePrice'], axis=1).select_dtypes(exclude=['object'])

    train_X, test_X, train_y, test_y = train_test_split(X.values,
                                                        y.values,
                                                        test_size=test_size,
                                                        shuffle=False)
    imputer = Imputer()
    train_X = imputer.fit_transform(train_X)
    test_X = imputer.transform(test_X)
    return (train_X, train_y), (test_X, test_y)

def train_model(train_X, train_y, test_X, test_y, n_estimators, learning_rate):
    """Train the model using XGBRegressor."""
    model = XGBRegressor(n_estimators=n_estimators, learning_rate=learning_rate)
    model.fit(train_X, train_y,
              early_stopping_rounds=EARLY_STOPPING_ROUNDS,
              eval_set=[(test_X, test_y)])
    logging.info("Best RMSE on eval: %.2f with %d rounds",
                 model.best_score, model.best_iteration + 1)
    return model

def eval_model(model, test_X, test_y):
    """Evaluate the model performance."""
    predictions = model.predict(test_X)
    logging.info("mean_absolute_error=%.2f",
                 mean_absolute_error(predictions, test_y))

(train_X, train_y), (test_X, test_y) = read_input()
model = train_model(train_X, train_y, test_X, test_y, ESTIMATORS, LEARNING_RATE)
eval_model(model, test_X, test_y)

import joblib
joblib.dump(model, 'model.joblib')

!gsutil cp model.joblib {VERSION_DIR}

from kubeflow.fairing.deployers.gcp.gcpserving import GCPServingDeployer
deployer = GCPServingDeployer()
deployer.deploy(VERSION_DIR, MODEL_NAME, VERSION_NAME)

from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
resource_name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME)

ml.projects().predict(
    name=resource_name,
    body={
        'instances': [
            [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37]
        ]
    }
).execute()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Define training and evaluation functions
Step5: Train and evaluate the model locally
Step6: Export the model using the joblib library
Step7: Deploy the model to GCP
Step8: Send a prediction to the deployed model
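
Before trusting the deployed version, a quick local round-trip of the exported artifact is cheap; a minimal sanity check reusing the objects created above:

import joblib

loaded = joblib.load('model.joblib')
print(loaded.predict(test_X[:1]))    # should match the in-memory model's prediction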
3,905
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")

import pandas as pd

names = ['Position','I','J','H','K_s','[3.6]','[4.5]','[5.8]','[8.0]','A_v','Luminosity']
link = 'http://iopscience.iop.org/0004-637X/720/2/1374/suppdata/apj363663t2_ascii.txt'
harvey2010_raw = pd.read_csv(link, sep='\t', names=names, skiprows=5,
                             na_values=' sdotsdotsdot ')
harvey2010_raw.dropna(how='all', inplace=True)

fv = lambda a: a.split(' +or- ')[0] if a == a else np.NaN
fe = lambda a: a.split(' +or- ')[-1] if a == a else np.NaN  # awesome little hack here!

bands = ['I','J','H','K_s','[3.6]','[4.5]','[5.8]','[8.0]']
harvey2010_tbl2 = harvey2010_raw.drop(bands, axis=1)
for band in bands:
    harvey2010_tbl2[band] = harvey2010_raw[band].apply(fv)
    harvey2010_tbl2[band+'_e'] = harvey2010_raw[band].apply(fe)

harvey2010_tbl2.insert(0, 'RA', harvey2010_tbl2.Position.str.slice(0, 11).copy())
harvey2010_tbl2.insert(1, 'DEC', harvey2010_tbl2.Position.str.slice(12).copy())
harvey2010_tbl2.drop('Position', axis=1, inplace=True)

harvey2010_tbl2.to_csv('../data/Harvey2010/tbl2.csv')
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Table 2 - Photometry of Selected Candidates
Step2: Data cleaning.
Step3: Split [value +or- error] into separate columns
Step4: Split the position into RA and DEC.
Step5: Save data locally
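
The `fv`/`fe` splitters in Step3 can be sanity-checked on a sample cell; the table stores each measurement as "value +or- error":

fv('12.34 +or- 0.05')    # -> '12.34'
fe('12.34 +or- 0.05')    # -> '0.05'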
3,906
<ASSISTANT_TASK:>
Python Code:
import pandas as pd    # assumed import; the flattened source starts mid-notebook
# SUBDIR is assumed to be defined in an earlier cell (path to the data folder).

skieval2d_periodic = pd.read_excel(SUBDIR + "ski-eval_2d_periodic_abridged.xlsx")
skieval2d_periodic

ax1 = skieval2d_periodic.ix[skieval2d_periodic['inch']==1].plot.area(x="m/s", y="drag", color="Red", label="1 in")
ax2 = skieval2d_periodic.ix[skieval2d_periodic['inch']==2].plot.area(x="m/s", y="drag", color="Green", label="2 in", ax=ax1)
ax3 = skieval2d_periodic.ix[skieval2d_periodic['inch']==3].plot.area(x="m/s", y="drag", color="Blue", label="3 in", ax=ax2)
ax5 = skieval2d_periodic.ix[skieval2d_periodic['inch']==5].plot.area(x="m/s", y="drag", color="Purple", label="5 in", ax=ax3)

skieval2d_periodic.ix[skieval2d_periodic['inch']==1]["lift"]
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then I can immediately make some quick plots. For instance, for each width in inch, I can plot drag or lift vs. velocity (m/s)
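
The `.ix` indexer used above was removed in pandas 1.0; a hedged equivalent of the first plot using `.loc` with the same boolean mask (assuming the `skieval2d_periodic` DataFrame above):

ax1 = skieval2d_periodic.loc[skieval2d_periodic['inch'] == 1].plot.area(
    x="m/s", y="drag", color="Red", label="1 in")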
3,907
<ASSISTANT_TASK:>
Python Code:
def f(x):
    """Function to compute the value of 5x+2"""
    return 5*x+2

f(3)

lambda x: 5*x+2

g = lambda x: 5*x+2
g(3)

# Calculating the harmonic mean using a lambda function
# (fixed: removed a stray **0.5; the harmonic mean is n / sum of reciprocals)
harmonic_mean = lambda x, y, z: 3/(1/x + 1/y + 1/z)
harmonic_mean(1, 2, 3)

# Sorting a list by the last name using a lambda expression
presidents_usa = ["George Washington", "John Adams", "Thomas Jefferson",
                  "James Madison", "James Monroe", "John Quincy Adams",
                  "Andrew Jackson"]
presidents_usa.sort(key=lambda name: name.split(" ")[-1].lower())
presidents_usa

def volume(a):
    """Volume of a cube with edge 'a'"""
    return a**3

# Edge length in cm
edges = [1, 2, 3, 4, 5]

# Calculating the volume of the given cubes using the direct method
volumes = []
for a in edges:
    v = volume(a)
    volumes.append(v)
volumes

# Calculating the volume of the given cubes using the map function
map(volume, edges)

list(map(volume, edges))

# Convert height from cms to feet: 1 cm = 0.0328 feet
height_in_cms = [('Tom', 183), ('Daisy', 171), ('Margaret', 179),
                 ('Michael', 190), ('Nick', 165)]

# Writing the converter function using a lambda expression
height_in_feet = lambda data: (data[0], round(data[1]*0.0328, 1))

# Using the 'map' function
list(map(height_in_feet, height_in_cms))

# Filter out all the numbers greater than 5 from a list
my_list = [1, 2, 3, 4, 5, 6, 7, 8, 9]
output_list = filter(lambda x: x > 5, my_list)
list(output_list)

# Removing missing values from a list
countries_asia = ["Afghanistan", "", "Bhutan", "", "China", "", "Georgia", "", "", "India"]
list(filter(None, countries_asia))

# Compute the product of a list of integers using the 'reduce' function
from functools import reduce
product = reduce((lambda x, y: x*y), [1, 2, 3, 4, 5])
product

# Compute the product of a list of integers using a 'for' loop
product = 1
nums = [1, 2, 3, 4, 5]    # renamed from `list` to avoid shadowing the built-in
for num in nums:
    product = product*num
product

# Determining the maximum number in a given list
from functools import reduce
f = lambda a, b: a if (a > b) else b
reduce(f, [58, 69, 12, 158, 698])

# Creating a list of the squares of the first 10 numbers using loops
squares = []
for x in range(10):
    squares.append(x**2)
squares

# Creating a list of the squares of the first 10 numbers using a list comprehension
squares = [x**2 for x in range(10)]
squares

# Convert height from cms to feet using a list comprehension: 1 cm = 0.0328 feet
height_in_cms = [('Tom', 183), ('Daisy', 171), ('Margaret', 179),
                 ('Michael', 190), ('Nick', 165)]
height_in_feet = [(height[0], round(height[1]*0.0328, 1)) for height in height_in_cms]
height_in_feet

# Removing missing values from a list
countries_asia = ["Afghanistan", "", "Bhutan", "", "China", "", "Georgia", "", "", "India"]
[country for country in countries_asia if country != ""]

# Determining the maximum number in a given list
numbers = [58, 69, 12, 158, 698, 956]
max((x) for x in numbers)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Elements of Functional Programming in Python
Step2: Now we would compute the same expression using Lambda functions. To create a lambda expression, we type in the keyword lambda, followed by the inputs. Next, we enter a colon followed by the expression that will be the return value.
Step3: This lambda function will take the input x and return 5x + 2, just like the earlier function f. There is a problem, however. The lambda is not the name of the function. It is a Python keyword that says - what follows is an anonymous function. So how do we use it? One way is to give it a name.
Step4: Lambda expression with multiple inputs.
Step5: Lambda expression without inputs.
Step7: The Map Function
Step8: What if we need to compute the volumes for many different cubes with different edge lengths?
Step9: There are two ways to do this. One by using the direct method and the other by using the map function.
Step10: Now let's see how we can accomplish this task using a single line of code with the map function.
Step11: The map function takes in two arguments.
Step12: Example
Step13: The Filter Function
Step14: Example
Step15: This filters out all values that are treated as false in a boolean setting.
Step16: The following diagram shows the intermediate steps of the calculation
Step17: Example
Step18: List Comprehensions
Step19: Now let's use list comprehension to achieve the same result in a one-liner
Step20: Usage
Step21: List Comprehensions vs Filter function
Step22: List Comprehensions vs Reduce function
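
The diagram referenced in Step16 did not survive extraction; the intermediate steps it illustrated can be written out by hand, since reduce folds the list pairwise from the left:

# reduce(lambda x, y: x*y, [1, 2, 3, 4, 5]) evaluates as ((((1*2)*3)*4)*5):
#   1*2  = 2
#   2*3  = 6
#   6*4  = 24
#   24*5 = 120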
3,908
<ASSISTANT_TASK:>
Python Code:
# Import all necessary libraries; this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
from gll import gll
from lagrange1st import lagrange1st
from flux_hetero import flux

# Show the plots in the Notebook.
plt.switch_backend("nbagg")

# Initialization of setup
# --------------------------------------------------------------------------
tmax = 2.5           # Length of seismogram [s]
xmax = 10000         # Length of domain [m]
vs0 = 2500           # Advection velocity
rho0 = 2500          # Density [kg/m^3]
mu0 = rho0*vs0**2    # Shear modulus
N = 2                # Order of Lagrange polynomials
ne = 200             # Number of elements
sig = 100            # Width of Gaussian initial condition
x0 = 4000            # x location of Gaussian
eps = 0.2            # Courant criterion
iplot = 20           # Plotting frequency
imethod = 'RK'       # 'Euler', 'RK'

nx = ne*N + 1
dx = xmax/(nx-1)     # Space increment

# --------------------------------------------------------------------
# Initialization of GLL points and integration weights
[xi, w] = gll(N)     # xi: N+1 coordinates [-1, 1] of GLL points
                     # w: integration weights at GLL locations

# Space domain
le = xmax/ne         # Length of elements, here equidistant
ng = ne*N + 1

# Vector with GLL points
k = 0
xg = np.zeros((N+1)*ne)
for i in range(0, ne):
    for j in range(0, N+1):
        k += 1
        xg[k-1] = i*le + .5*(xi[j] + 1)*le

x = np.reshape(xg, (N+1, ne), order='F').T

# Calculation of time step according to the Courant criterion
dxmin = np.min(np.diff(xg[1:N+1]))
dt = eps*dxmin/vs0             # Global time step
nt = int(np.floor(tmax/dt))

# Mapping - Jacobian
J = le/2       # Jacobian
Ji = 1/J       # Inverse Jacobian

# 1st derivative of Lagrange polynomials
l1d = lagrange1st(N)

# Initialization of system matrices
# -----------------------------------------------------------------
# Elemental mass matrix
M = np.zeros((N+1, N+1))
for i in range(0, N+1):
    M[i, i] = w[i] * J

# Inverse matrix of M (M is diagonal!)
Minv = np.identity(N+1)
for i in range(0, N+1):
    Minv[i,i] = 1. / M[i,i]

# Elemental stiffness matrix
K = np.zeros((N+1, N+1))
for i in range(0, N+1):
    for j in range(0, N+1):
        K[i,j] = w[j] * l1d[i,j]    # NxN matrix for every element

# Initialize flux-related matrices
# ---------------------------------------------------------------
# Initialize heterogeneous A
Ap = np.zeros((ne, 2, 2))
Am = np.zeros((ne, 2, 2))
Z = np.zeros(ne)
rho = np.zeros(ne)
mu = np.zeros(ne)

# Initialize c, rho, mu, and Z
rho = rho + rho0
rho[int(ne/2):ne] = .25 * rho[int(ne/2):ne]    # Introduce discontinuity
mu = mu + mu0
c = np.sqrt(mu/rho)
Z = rho * c

# Initialize flux matrices
for i in range(1, ne-1):
    # Left side, positive direction
    R = np.array([[Z[i], -Z[i]], [1, 1]])
    Lp = np.array([[0, 0], [0, c[i]]])
    Ap[i,:,:] = R @ Lp @ np.linalg.inv(R)

    # Right side, negative direction
    R = np.array([[Z[i], -Z[i]], [1, 1]])
    Lm = np.array([[-c[i], 0], [0, 0]])
    Am[i,:,:] = R @ Lm @ np.linalg.inv(R)

# DG solution, time extrapolation
# ---------------------------------------------------------------
# Initialize solution vectors
Q = np.zeros((ne, N+1, 2))
Qnew = np.zeros((ne, N+1, 2))

k1 = np.zeros((ne, N+1, 2))
k2 = np.zeros((ne, N+1, 2))

Q[:,:,0] = np.exp(-1/sig**2*((x-x0))**2)

Qs = np.zeros(xg.size)    # for plotting
Qv = np.zeros(xg.size)    # for plotting

# Initialize animated plot
# ---------------------------------------------------------------
fig = plt.figure(figsize=(10,6))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
line1 = ax1.plot(x, Q[:,:,0], 'k', lw=1.5)
line2 = ax2.plot(x, Q[:,:,1], 'r', lw=1.5)
ax1.axvspan(((nx-1)/2+1)*dx, nx*dx, alpha=0.2, facecolor='b')
ax2.axvspan(((nx-1)/2+1)*dx, nx*dx, alpha=0.2, facecolor='b')
ax1.set_xlim([0, xmax])
ax2.set_xlim([0, xmax])
ax1.set_ylabel('Stress')
ax2.set_ylabel('Velocity')
ax2.set_xlabel(' x ')
plt.suptitle('Heterogeneous Disc. Galerkin - %s method' % imethod, size=16)

plt.ion()    # set interactive mode
plt.show()

# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
for it in range(nt):
    if imethod == 'Euler':
        # Calculate fluxes
        Flux = flux(Q, N, ne, Ap, Am)
        for i in range(1, ne-1):
            Qnew[i,:,0] = dt * Minv @ (-mu[i] * K @ Q[i,:,1].T - Flux[i,:,0].T) + Q[i,:,0].T
            Qnew[i,:,1] = dt * Minv @ (-1/rho[i] * K @ Q[i,:,0].T - Flux[i,:,1].T) + Q[i,:,1].T

    elif imethod == 'RK':
        # Calculate fluxes
        Flux = flux(Q, N, ne, Ap, Am)
        for i in range(1, ne-1):
            k1[i,:,0] = Minv @ (-mu[i] * K @ Q[i,:,1].T - Flux[i,:,0].T)
            k1[i,:,1] = Minv @ (-1/rho[i] * K @ Q[i,:,0].T - Flux[i,:,1].T)

        for i in range(1, ne-1):
            Qnew[i,:,0] = dt * Minv @ (-mu[i] * K @ Q[i,:,1].T - Flux[i,:,0].T) + Q[i,:,0].T
            Qnew[i,:,1] = dt * Minv @ (-1/rho[i] * K @ Q[i,:,0].T - Flux[i,:,1].T) + Q[i,:,1].T

        Flux = flux(Qnew, N, ne, Ap, Am)
        for i in range(1, ne-1):
            k2[i,:,0] = Minv @ (-mu[i] * K @ Qnew[i,:,1].T - Flux[i,:,0].T)
            k2[i,:,1] = Minv @ (-1/rho[i] * K @ Qnew[i,:,0].T - Flux[i,:,1].T)

        # Extrapolate
        Qnew = Q + 0.5 * dt * (k1 + k2)

    else:
        raise NotImplementedError

    Q, Qnew = Qnew, Q

    # --------------------------------------
    # Animation plot. Display solution
    if not it % iplot:
        for l in line1:
            l.remove()
            del l
        for l in line2:
            l.remove()
            del l

        # Stretch for plotting
        k = 0
        for i in range(ne):
            for j in range(N+1):
                Qs[k] = Q[i,j,0]
                Qv[k] = Q[i,j,1]
                k = k + 1

        # --------------------------------------
        # Display lines
        line1 = ax1.plot(xg, Qs, 'k', lw=1.5)
        line2 = ax2.plot(xg, Qv, 'r', lw=1.5)
        plt.gcf().canvas.draw()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Initialization of setup
Step2: 2. Elemental Mass and Stiffness matrices
Step3: 3. Flux Matrices
Step4: 4. Discontinuous Galerkin Solution
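
Plugging the defaults into the Courant step of the setup gives a concrete check of the numbers: with ne = 200 elements over 10 km each element is 50 m long, and for N = 2 the GLL points sit at 0, 25 and 50 m inside it, so dxmin = 25 m:

dt = 0.2 * 25 / 2500    # = 0.002 s global time step
nt = int(2.5 / dt)      # about 1250 time steps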
3,909
<ASSISTANT_TASK:> Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'csir-csiro', 'vresm-1-0', 'atmos') # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) <END_TASK>
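Every block in the scaffold above reduces to the same two-call pattern: select a property with DOC.set_id and record a value with DOC.set_value, drawn from the listed valid choices where one is given. Below is a minimal illustrative completion; the ids and choices are taken from the scaffold itself, but the chosen values are placeholders rather than any real model's documentation, and DOC is the pyesdoc notebook helper assumed to be in scope.
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
DOC.set_value("maximum-random")   # must be one of the valid choices listed above
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
DOC.set_value("Rayleigh friction")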
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Document Authors Step2: Document Contributors Step3: Document Publication Step4: Document Table of Contents Step5: 1.2. Model Name Step6: 1.3. Model Family Step7: 1.4. Basic Approximations Step8: 2. Key Properties --&gt; Resolution Step9: 2.2. Canonical Horizontal Resolution Step10: 2.3. Range Horizontal Resolution Step11: 2.4. Number Of Vertical Levels Step12: 2.5. High Top Step13: 3. Key Properties --&gt; Timestepping Step14: 3.2. Timestep Shortwave Radiative Transfer Step15: 3.3. Timestep Longwave Radiative Transfer Step16: 4. Key Properties --&gt; Orography Step17: 4.2. Changes Step18: 5. Grid --&gt; Discretisation Step19: 6. Grid --&gt; Discretisation --&gt; Horizontal Step20: 6.2. Scheme Method Step21: 6.3. Scheme Order Step22: 6.4. Horizontal Pole Step23: 6.5. Grid Type Step24: 7. Grid --&gt; Discretisation --&gt; Vertical Step25: 8. Dynamical Core Step26: 8.2. Name Step27: 8.3. Timestepping Type Step28: 8.4. Prognostic Variables Step29: 9. Dynamical Core --&gt; Top Boundary Step30: 9.2. Top Heat Step31: 9.3. Top Wind Step32: 10. Dynamical Core --&gt; Lateral Boundary Step33: 11. Dynamical Core --&gt; Diffusion Horizontal Step34: 11.2. Scheme Method Step35: 12. Dynamical Core --&gt; Advection Tracers Step36: 12.2. Scheme Characteristics Step37: 12.3. Conserved Quantities Step38: 12.4. Conservation Method Step39: 13. Dynamical Core --&gt; Advection Momentum Step40: 13.2. Scheme Characteristics Step41: 13.3. Scheme Staggering Type Step42: 13.4. Conserved Quantities Step43: 13.5. Conservation Method Step44: 14. Radiation Step45: 15. Radiation --&gt; Shortwave Radiation Step46: 15.2. Name Step47: 15.3. Spectral Integration Step48: 15.4. Transport Calculation Step49: 15.5. Spectral Intervals Step50: 16. Radiation --&gt; Shortwave GHG Step51: 16.2. ODS Step52: 16.3. Other Flourinated Gases Step53: 17. Radiation --&gt; Shortwave Cloud Ice Step54: 17.2. Physical Representation Step55: 17.3. Optical Methods Step56: 18. Radiation --&gt; Shortwave Cloud Liquid Step57: 18.2. Physical Representation Step58: 18.3. Optical Methods Step59: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Step60: 20. Radiation --&gt; Shortwave Aerosols Step61: 20.2. Physical Representation Step62: 20.3. Optical Methods Step63: 21. Radiation --&gt; Shortwave Gases Step64: 22. Radiation --&gt; Longwave Radiation Step65: 22.2. Name Step66: 22.3. Spectral Integration Step67: 22.4. Transport Calculation Step68: 22.5. Spectral Intervals Step69: 23. Radiation --&gt; Longwave GHG Step70: 23.2. ODS Step71: 23.3. Other Flourinated Gases Step72: 24. Radiation --&gt; Longwave Cloud Ice Step73: 24.2. Physical Reprenstation Step74: 24.3. Optical Methods Step75: 25. Radiation --&gt; Longwave Cloud Liquid Step76: 25.2. Physical Representation Step77: 25.3. Optical Methods Step78: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Step79: 27. Radiation --&gt; Longwave Aerosols Step80: 27.2. Physical Representation Step81: 27.3. Optical Methods Step82: 28. Radiation --&gt; Longwave Gases Step83: 29. Turbulence Convection Step84: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Step85: 30.2. Scheme Type Step86: 30.3. Closure Order Step87: 30.4. Counter Gradient Step88: 31. Turbulence Convection --&gt; Deep Convection Step89: 31.2. Scheme Type Step90: 31.3. Scheme Method Step91: 31.4. Processes Step92: 31.5. Microphysics Step93: 32. 
Turbulence Convection --&gt; Shallow Convection Step94: 32.2. Scheme Type Step95: 32.3. Scheme Method Step96: 32.4. Processes Step97: 32.5. Microphysics Step98: 33. Microphysics Precipitation Step99: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Step100: 34.2. Hydrometeors Step101: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Step102: 35.2. Processes Step103: 36. Cloud Scheme Step104: 36.2. Name Step105: 36.3. Atmos Coupling Step106: 36.4. Uses Separate Treatment Step107: 36.5. Processes Step108: 36.6. Prognostic Scheme Step109: 36.7. Diagnostic Scheme Step110: 36.8. Prognostic Variables Step111: 37. Cloud Scheme --&gt; Optical Cloud Properties Step112: 37.2. Cloud Inhomogeneity Step113: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Step114: 38.2. Function Name Step115: 38.3. Function Order Step116: 38.4. Convection Coupling Step117: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Step118: 39.2. Function Name Step119: 39.3. Function Order Step120: 39.4. Convection Coupling Step121: 40. Observation Simulation Step122: 41. Observation Simulation --&gt; Isscp Attributes Step123: 41.2. Top Height Direction Step124: 42. Observation Simulation --&gt; Cosp Attributes Step125: 42.2. Number Of Grid Points Step126: 42.3. Number Of Sub Columns Step127: 42.4. Number Of Levels Step128: 43. Observation Simulation --&gt; Radar Inputs Step129: 43.2. Type Step130: 43.3. Gas Absorption Step131: 43.4. Effective Radius Step132: 44. Observation Simulation --&gt; Lidar Inputs Step133: 44.2. Overlap Step134: 45. Gravity Waves Step135: 45.2. Sponge Layer Step136: 45.3. Background Step137: 45.4. Subgrid Scale Orography Step138: 46. Gravity Waves --&gt; Orographic Gravity Waves Step139: 46.2. Source Mechanisms Step140: 46.3. Calculation Method Step141: 46.4. Propagation Scheme Step142: 46.5. Dissipation Scheme Step143: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Step144: 47.2. Source Mechanisms Step145: 47.3. Calculation Method Step146: 47.4. Propagation Scheme Step147: 47.5. Dissipation Scheme Step148: 48. Solar Step149: 49. Solar --&gt; Solar Pathways Step150: 50. Solar --&gt; Solar Constant Step151: 50.2. Fixed Value Step152: 50.3. Transient Characteristics Step153: 51. Solar --&gt; Orbital Parameters Step154: 51.2. Fixed Reference Date Step155: 51.3. Transient Method Step156: 51.4. Computation Method Step157: 52. Solar --&gt; Insolation Ozone Step158: 53. Volcanos Step159: 54. Volcanos --&gt; Volcanoes Treatment
3,910
<ASSISTANT_TASK:> Python Code: %run ~/Dropbox/pythonCode/LCSanalyzeblue.py t = s.galfitflag & s.lirflag & s.sizeflag & ~s.agnflag & s.sbflag galfitnogim = t & ~s.gim2dflag sum(galfitnogim) s.s.ISDSS[galfitnogim] print sum(s.s.ISDSS[galfitnogim] == -1) galfitsdssnogim = galfitnogim & (s.s.ISDSS != -1) sum(galfitsdssnogim) s.s.NSAID[galfitsdssnogim] from astropy import units as u from astropy.coordinates import SkyCoord from astropy.table import Table try: # Python 3.x from urllib.parse import urlencode from urllib.request import urlretrieve except ImportError: # Python 2.x from urllib import urlencode from urllib import urlretrieve import IPython.display r = 22.5 - 2.5*log10(s.s.NMGY[:,4]) flag = galfitnogim & (r >= 14.) & (r <= 18.) print sum(flag) ra = s.s.RA[flag] dec = s.s.DEC[flag] ids = s.s.NSAID[flag] coords = SkyCoord(ra*u.deg, dec*u.deg, frame='icrs') testcoord = coords[0] impix = 100 imsize = 1*u.arcmin cutoutbaseurl = 'http://skyservice.pha.jhu.edu/DR12/ImgCutout/getjpeg.aspx' for i in range(len(coords.ra)): query_string = urlencode(dict(ra=coords[i].ra.deg, dec=coords[i].dec.deg, width=impix, height=impix, scale=imsize.to(u.arcsec).value/impix)) url = cutoutbaseurl + '?' + query_string # this downloads the image to your disk urlretrieve(url, 'images/'+str(ids[i])+'_SDSS_cutout.jpg') print 'NSAID %i (%10.6f, %10.6f)'%(ids[i],ra[i],dec[i]) t = IPython.display.Image('images/'+str(ids[i])+'_SDSS_cutout.jpg') IPython.display.display(t) for i in range(10,len(ids)): print '* NSAID %i (%10.6f, %10.6f)'%(ids[i],ra[i],dec[i]) print 'http://cas.sdss.org/dr7/en/tools/explore/obj.asp?ra=%.5f&dec=%.5f'%(ra[i],dec[i]) for i in range(len(coords.ra)): query_string = urlencode(dict(ra=coords[i].ra.deg, dec=coords[i].dec.deg, width=impix, height=impix, scale=imsize.to(u.arcsec).value/impix)) url = cutoutbaseurl + '?' + query_string # this downloads the image to your disk urlretrieve(url, 'images/'+str(nsaids[i])+'_SDSS_cutout.jpg') print i, nsaids[i],coords[i].ra,coords[i].dec print 'NSAID %i (%10.6f, %10.6f)'%(nsaids[i],coords[i].ra.deg,coords[i].dec) t = IPython.display.Image('images/'+str(nsaids[i])+'_SDSS_cutout.jpg') IPython.display.display(t) for i in range(len(coords.ra)): print 'NSAID %i (%10.6f, %10.6f)'%(ids[i],ra[i],dec[i]) ids = where(galfitnogim & (s.s.ISDSS == -1)) print ids lcs = fits.getdata('/Users/rfinn/research/LocalClusters/NSAmastertables/LCS_all_size.fits') gim = fits.getdata('/Users/rfinn/research/SimardSDSS2011/table1.fits') virgocat = SkyCoord(vdat.RA*u.degree,vdat.DEC*u.degree,frame='icrs') from astropy.coordinates import SkyCoord from astropy import units as u %matplotlib inline lcat = SkyCoord(lcs.RA*u.degree,lcs.DEC*u.degree,frame='icrs') gcat = SkyCoord(gim._RAJ2000*u.degree,gim._DEJ2000*u.degree,frame='icrs') index,dist2d,dist3d = lcat.match_to_catalog_sky(gcat) plt.figure() plt.plot # only keep matches with matched RA and Dec w/in 3 arcsec matchflag = dist2d.degree < 3./3600 matchedarray1=np.zeros(len(lcat),dtype=gim.dtype) matchedarray1[matchflag] = gim[index[matchflag]] print 'percent of LCS galaxies matched = %.1f'%(sum(matchflag)*1./len(matchflag)*100.)
# get rid of names that start with __ # these cause trouble in the analysis program t = [] for a in matchedarray1.dtype.names: t.append(a) for i in range(len(t)): if t[i].startswith('__'): t[i] = t[i][2:] t = tuple(t) #print t matchedarray1.dtype.names = t outfile = '/Users/rfinn/research/LocalClusters/NSAmastertables/LCS_all.gim2d.tab1.fits' fits.writeto(outfile,matchedarray1,overwrite=True) diff = (lcs.B_T_r - matchedarray1['B_T_r']) bad_matches = (abs(diff) > .01) & matchflag print 'number of bad matches = ',sum(bad_matches) s.s.NSAID[bad_matches] plt.figure() plt.plot(lcs.RA[bad_matches],lcs.DEC[bad_matches],'ko') print lcs.CLUSTER[bad_matches] print sum(s.galfitflag[bad_matches]) print sum(diff < 0.) outfile = '/Users/rfinn/research/LocalClusters/NSAmastertables/LCS_all.gim2d.tab1.fits' gdat = fits.getdata(outfile) gdat.__B_T_r <END_TASK>
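The dtype-renaming loop above can be packaged as a small helper. This is a sketch with a name of our choosing, not part of the original notebook; it strips the leading '__' that the Simard/VizieR columns carry, which makes attribute access on FITS record arrays awkward.
def strip_double_underscores(recarray):
    # numpy allows assigning a new tuple of field names in place
    recarray.dtype.names = tuple(n[2:] if n.startswith('__') else n
                                 for n in recarray.dtype.names)
    return recarray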
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Galaxies not in SDSS phot catalog Step2: Galaxies in SDSS but no B/T fit Step3: Download SDSS Images Step4: NSAID 69538 (244.060699, 34.258434) Step5: NSAID 143514 (202.163284, 11.387049)
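The heart of this row is astropy's catalog matching. Here is a self-contained sketch on synthetic coordinates, mirroring only the match_to_catalog_sky call and the 3-arcsec tolerance used above; the catalogs themselves are invented for illustration.
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

rng = np.random.RandomState(0)
ra = rng.uniform(150, 250, 100)
dec = rng.uniform(0, 40, 100)
cat1 = SkyCoord(ra * u.deg, dec * u.deg, frame='icrs')
cat2 = SkyCoord((ra + 1e-4) * u.deg, dec * u.deg, frame='icrs')  # slightly offset copy
idx, sep2d, _ = cat1.match_to_catalog_sky(cat2)
good = sep2d < 3. * u.arcsec   # same tolerance as the notebook's 3./3600 degrees
print('matched fraction: %.2f' % good.mean())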
3,911
<ASSISTANT_TASK:> Python Code: # Numpy, JAX, Matplotlib and h5py should all be correctly installed and on the python path. from __future__ import print_function, division, absolute_import import datetime import h5py import jax.numpy as np from jax import random from jax.experimental import optimizers from jax.config import config #config.update("jax_debug_nans", True) # Useful for finding numerical errors import matplotlib.pyplot as plt import numpy as onp # original CPU-backed NumPy import scipy.signal import scipy.stats import os import sys import time # You must change this to the location of the computation-thru-dynamics directory. HOME_DIR = '/home/sussillo/' sys.path.append(os.path.join(HOME_DIR,'computation-thru-dynamics')) import lfads_tutorial.lfads as lfads import lfads_tutorial.plotting as plotting import lfads_tutorial.utils as utils from lfads_tutorial.optimize import optimize_lfads, get_kl_warmup_fun onp_rng = onp.random.RandomState(seed=0) # For CPU-based numpy randomness INTEGRATOR_RNN_DATA_FILE = \ '/tmp/vrnn/pure_int/trained_data_vrnn_pure_int_0.00002_2019-06-19_15:12:45.h5' lfads_dir = '/tmp/lfads/' # where to save lfads data and parameters to rnn_type = 'lfads' task_type = 'integrator' # Make directories data_dir = os.path.join(lfads_dir, 'data/') output_dir = os.path.join(lfads_dir, 'output/') figure_dir = os.path.join(lfads_dir, os.path.join(output_dir, 'figures/')) if not os.path.exists(output_dir): os.makedirs(output_dir) if not os.path.exists(figure_dir): os.makedirs(figure_dir) # Load synthetic data data_dict = utils.read_file(INTEGRATOR_RNN_DATA_FILE) f = plotting.plot_data_pca(data_dict) f = plotting.plot_data_example(data_dict['inputs'], data_dict['hiddens'], data_dict['outputs'], data_dict['targets']) data_dt = 1.0/25.0 # define our dt in a physiological range # If data is normed between 0 and 1, then a 1 yields this many # spikes per second. Pushing this downwards makes the problem harder. max_firing_rate = 80 train_fraction = 0.9 # Train with 90% of the synthetic data renormed_fun = lambda x : (x + 1) / 2.0 renormed_data = renormed_fun(data_dict['hiddens']) # When dimensions are relevant, I use a variable naming scheme like # name_dim1xdim2x... so below, here is the synthetic data with # 3 dimensions of batch, time and unit, in that order. 
data_bxtxn = utils.spikify_data(renormed_data, onp_rng, data_dt, max_firing_rate=max_firing_rate) nexamples, ntimesteps, data_dim = data_bxtxn.shape train_data, eval_data = utils.split_data(data_bxtxn, train_fraction=train_fraction) eval_data_offset = int(train_fraction * data_bxtxn.shape[0]) eval_data.shape f = plotting.plot_data_stats(data_dict, data_bxtxn, data_dt) my_example_bidx = eval_data_offset + 0 my_example_hidx = 0 scale = max_firing_rate * data_dt my_signal = scale*renormed_data[my_example_bidx, :, my_example_hidx] my_signal_spikified = data_bxtxn[my_example_bidx, :, my_example_hidx] plt.plot(my_signal, 'r'); plt.stem(my_signal_spikified); nfilt = 3 my_filtered_spikes = scipy.signal.filtfilt(onp.ones(nfilt)/nfilt, 1, my_signal_spikified) plt.plot(my_signal, 'r'); plt.plot(my_filtered_spikes); plt.title("This looks terrible"); plt.legend(('True rate', 'Filtered spikes')); import sklearn.decomposition ncomponents = 100 full_pca = sklearn.decomposition.PCA(ncomponents) full_pca.fit(onp.reshape(data_bxtxn, [-1, data_dim])) plt.stem(full_pca.explained_variance_) plt.title('Those top 2 PCs sure look promising!'); ncomponents = 2 pca = sklearn.decomposition.PCA(ncomponents) pca.fit(onp.reshape(data_bxtxn[0:eval_data_offset,:,:], [-1, data_dim])) my_example_pca = pca.transform(data_bxtxn[my_example_bidx,:,:]) my_example_ipca = pca.inverse_transform(my_example_pca) plt.plot(my_signal, 'r') plt.plot(my_example_ipca[:,my_example_hidx]) plt.legend(('True rate', 'PCA smoothed spikes')) plt.title('This is a bit better.'); # LFADS Hyper parameters data_dim = train_data.shape[2] # input to lfads should have dimensions: ntimesteps = train_data.shape[1] # (batch_size x ntimesteps x data_dim) batch_size = 128 # batch size during optimization # LFADS architecture - The size of the numbers is rather arbitrary, # but relatively small because we know the integrator RNN isn't too high # dimensional in its activity. enc_dim = 128 # encoder dim con_dim = 128 # controller dim ii_dim = 1 # inferred input dim, we know there is 1 dim in integrator RNN gen_dim = 128 # generator dim, should be large enough to generate integrator RNN dynamics factors_dim = 32 # factors dim, should be large enough to capture most variance of dynamics # Numerical stability var_min = 0.001 # Minimal variance any gaussian can become. # Optimization HPs that percolate into model l2reg = 0.00002 # Initial state prior parameters # the mean is set to zero in the code ic_prior_var = 0.1 # this is $\sigma^2_p$ in above paragraph # Inferred input autoregressive prior parameters # Again, these hyper parameters are set "in the ballpark" but otherwise # pretty randomly. ar_mean = 0.0 # process mean ar_autocorrelation_tau = 1.0 # seconds, how correlated each time point is, related to $\phi$ above. 
ar_noise_variance = 0.1 # noise variance lfads_hps = {'data_dim' : data_dim, 'ntimesteps' : ntimesteps, 'enc_dim' : enc_dim, 'con_dim' : con_dim, 'var_min' : var_min, 'ic_prior_var' : ic_prior_var, 'ar_mean' : ar_mean, 'ar_autocorrelation_tau' : ar_autocorrelation_tau, 'ar_noise_variance' : ar_noise_variance, 'ii_dim' : ii_dim, 'gen_dim' : gen_dim, 'factors_dim' : factors_dim, 'l2reg' : l2reg, 'batch_size' : batch_size} num_batches = 20000 # how many batches do we train print_every = 100 # give information every so often # Learning rate HPs step_size = 0.05 # initial learning rate decay_factor = 0.9999 # learning rate decay param decay_steps = 1 # learning rate decay param # Regularization HPs keep_rate = 0.98 # dropout keep rate during training # Numerical stability HPs max_grad_norm = 10.0 # gradient clipping above this value # The fact that the start and end values are required to be floats is something I need to fix. kl_warmup_start = 500.0 # batch number to start KL warm-up, explicitly float kl_warmup_end = 1000.0 # batch number to be finished with KL warm-up, explicitly float kl_min = 0.01 # The minimum KL value, non-zero to make sure KL doesn't grow crazy before kicking in. kl_max = 1.0 lfads_opt_hps = {'num_batches' : num_batches, 'step_size' : step_size, 'decay_steps' : decay_steps, 'decay_factor' : decay_factor, 'kl_min' : kl_min, 'kl_max' : kl_max, 'kl_warmup_start' : kl_warmup_start, 'kl_warmup_end' : kl_warmup_end, 'keep_rate' : keep_rate, 'max_grad_norm' : max_grad_norm, 'print_every' : print_every, 'adam_b1' : 0.9, 'adam_b2' : 0.999, 'adam_eps' : 1e-1} assert num_batches >= print_every and num_batches % print_every == 0 # Plot the warmup function and the learning rate decay function. plt.figure(figsize=(16,4)) plt.subplot(121) x = onp.arange(0, num_batches, print_every) kl_warmup_fun = get_kl_warmup_fun(lfads_opt_hps) plt.plot(x, [kl_warmup_fun(i) for i in onp.arange(1,lfads_opt_hps['num_batches'], print_every)]); plt.title('KL warmup function') plt.xlabel('Training batch'); plt.subplot(122) decay_fun = optimizers.exponential_decay(lfads_opt_hps['step_size'], lfads_opt_hps['decay_steps'], lfads_opt_hps['decay_factor']) plt.plot(x, [decay_fun(i) for i in range(1, lfads_opt_hps['num_batches'], print_every)]); plt.title('learning rate function') plt.xlabel('Training batch'); # Initialize parameters for LFADS key = random.PRNGKey(onp.random.randint(0, utils.MAX_SEED_INT)) init_params = lfads.lfads_params(key, lfads_hps) key = random.PRNGKey(onp.random.randint(0, utils.MAX_SEED_INT)) trained_params, opt_details = \ optimize_lfads(key, init_params, lfads_hps, lfads_opt_hps, train_data, eval_data) # Plot the training details x = onp.arange(0, num_batches, print_every) plt.figure(figsize=(20,6)) plt.subplot(251) plt.plot(x, opt_details['tlosses']['total'], 'k') plt.ylabel('Training') plt.title('Total loss') plt.subplot(252) plt.plot(x, opt_details['tlosses']['nlog_p_xgz'], 'b') plt.title('Negative log p(z|x)') plt.subplot(253) plt.plot(x, opt_details['tlosses']['kl_ii'], 'r') plt.title('KL inferred inputs') plt.subplot(254) plt.plot(x, opt_details['tlosses']['kl_g0'], 'g') plt.title('KL initial state') plt.subplot(255) plt.plot(x, opt_details['tlosses']['l2'], 'c') plt.xlabel('Training batch') plt.title('L2 loss') plt.subplot(256) plt.plot(x, opt_details['elosses']['total'], 'k') plt.xlabel('Training batch') plt.ylabel('Evaluation') plt.subplot(257) plt.plot(x, opt_details['tlosses']['nlog_p_xgz'], 'b') plt.xlabel('Training batch') plt.subplot(258) plt.plot(x, 
opt_details['elosses']['kl_ii'], 'r') plt.xlabel('Training batch') plt.subplot(259) plt.plot(x, opt_details['elosses']['kl_g0'], 'g') plt.xlabel('Training batch'); # See the effect of the KL warmup, which is shown # by the KL penalties without the warmup scaling. plt.figure(figsize=(7,4)) plt.subplot(221) plt.plot(x, opt_details['tlosses']['kl_ii_prescale'], 'r--') plt.ylabel('Training') plt.subplot(222) plt.plot(x, opt_details['tlosses']['kl_g0_prescale'], 'g--') plt.subplot(223) plt.plot(x, opt_details['elosses']['kl_ii_prescale'], 'r--') plt.ylabel('Evaluation') plt.xlabel('Training batch') plt.subplot(224) plt.plot(x, opt_details['elosses']['kl_g0_prescale'], 'g--') plt.xlabel('Training batch'); fname_uniquifier = datetime.datetime.now().strftime("%Y-%m-%d_%H:%M:%S") network_fname = ('trained_params_' + rnn_type + '_' + task_type + '_' + \ fname_uniquifier + '.npz') network_path = os.path.join(output_dir, network_fname) # Note we are just using numpy save instead of h5 because the LFADS parameter # is nested dictionaries, something I couldn't get h5 to save down easily. print("Saving parameters: ", network_path) onp.savez(network_path, trained_params) # After training, you can load these up, after locating the save file. if False: loaded_params = onp.load(network_path, allow_pickle=True) trained_params = loaded_params['arr_0'].item() # Plot a bunch of examples of eval trials run through LFADS. from importlib import reload reload(plotting) #reload(lfads) def plot_rescale_fun(a): fac = max_firing_rate * data_dt return renormed_fun(a) * fac bidx = my_example_bidx - eval_data_offset bidx = 0 nexamples_to_save = 1 for eidx in range(nexamples_to_save): fkey = random.fold_in(key, eidx) psa_example = eval_data[bidx,:,:].astype(np.float32) psa_dict = lfads.posterior_sample_and_average_jit(trained_params, lfads_hps, fkey, psa_example) # The inferred input and true input are rescaled and shifted via # linear regression to match, as there is an identifiability issue there. plotting.plot_lfads(psa_example, psa_dict, data_dict, eval_data_offset+bidx, plot_rescale_fun) plt.figure(figsize=(16,4)) plt.subplot(141) plt.plot(my_signal, 'r'); plt.stem(my_signal_spikified); _, _, r2_spike, _, _ = scipy.stats.linregress(my_signal_spikified, my_signal) plt.title('Raw spikes R^2=%.3f' % (r2_spike)) plt.legend(('True rate', 'Spikes')); plt.subplot(142) plt.plot(my_signal, 'r'); plt.plot(my_filtered_spikes); _, _, c_tfilt, _, _ = scipy.stats.linregress(my_filtered_spikes, my_signal) plt.title("Temporal filtering R^2=%.3f" % (c_tfilt**2)); plt.legend(('True rate', 'Filtered spikes')); plt.subplot(143) plt.plot(my_signal, 'r') plt.plot(my_example_ipca[:,my_example_hidx]) _, _, c_sfilt, _, _ = scipy.stats.linregress(my_example_ipca[:,my_example_hidx], my_signal) plt.legend(('True rate', 'PCA smoothed spikes')) plt.title('Spatial filtering R^2=%.3f' % (c_sfilt**2)); plt.subplot(144) plt.plot(my_signal, 'r') my_lfads_rate = onp.exp(psa_dict['lograte_t'][:,my_example_hidx]) plt.plot(my_lfads_rate) _, _, r2_lfads, _, _ = scipy.stats.linregress(my_lfads_rate, my_signal) plt.legend(('True rate', 'LFADS rate')) plt.title('LFADS "filtering" R^2=%.3f' % (r2_lfads)); nexamples = 1000 r2_sfilts = onp.zeros(nexamples*data_dim) r2_lfadss = onp.zeros(nexamples*data_dim) eidx = 0 for bidx in range(nexamples): ebidx = eval_data_offset + bidx # Get the LFADS decode for this trial. 
fkey = random.fold_in(key, bidx) psa_example = eval_data[bidx,:,:].astype(np.float32) psa_dict = lfads.posterior_sample_and_average_jit(trained_params, lfads_hps, fkey, psa_example) # Get the spatially smoothed trial. trial_rates = scale*renormed_data[ebidx, :, :] trial_spikes = data_bxtxn[ebidx, :, :] spikes_pca = pca.transform(trial_spikes) spikes_ipca = pca.inverse_transform(spikes_pca) for hidx in range(data_dim): sig = trial_rates[:, hidx] ipca_rate = spikes_ipca[:,hidx] lfads_rate = onp.exp(psa_dict['lograte_t'][:,hidx]) _, _, cc_sfilt, _, _ = scipy.stats.linregress(ipca_rate, sig) _, _, cc_lfads, _, _ = scipy.stats.linregress(lfads_rate, sig) r2_sfilts[eidx] = cc_sfilt**2 r2_lfadss[eidx] = cc_lfads**2 eidx += 1 plt.figure(figsize=(8,4)) plt.subplot(121) plt.hist(r2_sfilts, 50) plt.title('Spatial filtering, hist of R^2, <%.3f>' % (onp.mean(r2_sfilts))) plt.xlim([-.5, 1.0]) plt.subplot(122) plt.hist(r2_lfadss, 50); plt.title('LFADS filtering, hist of R^2, <%.3f>' % (onp.mean(r2_lfadss))); plt.xlim([-.5, 1.0]); r2_iis = [] nexamples = 1000 for bidx in range(nexamples): ebidx = eval_data_offset + bidx # Get the LFADS decode for this trial. psa_example = eval_data[bidx,:,:].astype(np.float32) fkey = random.fold_in(key, bidx) psa_dict = lfads.posterior_sample_and_average_jit(trained_params, lfads_hps, fkey, psa_example) # Get the true input and inferred input true_input = onp.squeeze(data_dict['inputs'][ebidx]) inferred_input = onp.squeeze(psa_dict['ii_t']) slope, intercept, _, _, _ = scipy.stats.linregress(inferred_input, true_input) _, _, cc_ii, _, _ = scipy.stats.linregress(slope * inferred_input + intercept, true_input) r2_iis.append(cc_ii**2) r2_iis = onp.array(r2_iis) plt.hist(r2_iis, 20) plt.title('Correlation between rescaled inferred inputs and true inputs, hist of R^2, <%.3f>' % (onp.mean(r2_iis))) plt.xlim([0.0, 1.0]); ntrials = 1000 true_h0s = onp.zeros([ntrials, data_dim]) ic_means = onp.zeros([ntrials, gen_dim]) colors = onp.zeros(ntrials) for bidx in range(ntrials): ebidx = eval_data_offset + bidx # Get the LFADS decode for this trial. psa_example = eval_data[bidx,:,:].astype(np.float32) fkey = random.fold_in(key, bidx) #psa_dict = lfads.posterior_sample_and_average_jit(trained_params, lfads_hps, fkey, psa_example) lfads_results = lfads.lfads_jit(trained_params, lfads_hps, fkey, psa_example, 1.0) # Get the true initial condition (and the readout of the true h0 for color) # Get the inferred input from LFADS true_h0s[bidx,:] = data_dict['h0s'][ebidx] colors[bidx] = data_dict['outputs'][ebidx][0] ic_means[bidx,:] = lfads_results['ic_mean'] from sklearn.manifold import TSNE plt.figure(figsize=(16,8)) plt.subplot(121) h0s_embedded = TSNE(n_components=2).fit_transform(true_h0s) plt.scatter(h0s_embedded[:,0], h0s_embedded[:,1], c=colors) plt.title('TSNE visualization of integrator RNN initial state') plt.subplot(122) ic_means_embedded = TSNE(n_components=2).fit_transform(ic_means) plt.scatter(ic_means_embedded[:,0], ic_means_embedded[:,1], c=colors); plt.title('TSNE visualization of LFADS inferred initial generator state.') <END_TASK>
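The spiking data above come from utils.spikify_data, whose internals this row never shows. A plausible minimal re-implementation follows; treating each normalized rate as a Poisson intensity is an assumption on our part, so consult the tutorial's utils module for the authoritative version.
import numpy as onp

def spikify(rates_bxtxn, rng, dt, max_firing_rate):
    # assumed behavior: scale normalized rates to spikes/second, then draw
    # Poisson counts for each time bin of width dt
    lam = onp.clip(rates_bxtxn, 0.0, None) * max_firing_rate * dt
    return rng.poisson(lam)

demo_rng = onp.random.RandomState(0)
demo_rates = demo_rng.rand(2, 100, 10)          # batch x time x neurons in [0, 1]
demo_spikes = spikify(demo_rates, demo_rng, 1.0 / 25.0, 80)
print(demo_spikes.shape, demo_spikes.max())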
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Preliminaries - notes on using JAX Step2: Load the data Step3: Plot examples and statistics about the integrator RNN data. Step4: The goal of this tutorial is to infer the hiddens (blue), and input to the integrator RNN (umm... also blue). Step5: Spikify the synthetic data Step6: Plot the statistics of the data. Step7: Let's study this single example of a single neuron's true firing rate (red) and the spikified version in the blue stem plot. Step8: If you were to increase max_firing_rate to infinity, the stem plot would approach the red line. This plot gives you an idea of how challenging the data set is, at least on single trials. We can think about this a little bit. If you were to simply filter the spikes, it definitely would not look like the red trace, at this low maximum firing rate. This means that if any technique were to have Step9: This would force us to think about ways in which the population can be filtered. The first idea is naturally PCA. Perhaps there is a low-d subspace of signal that can be found in the high-variance top PCs. Using the entire trial, it's likely this should do better. Step10: So temporal filtering is not great, and spatial filtering helps only a bit. What to do? The idea LFADS explores is that if you knew the system that generated the data, you would be able to separate signal from noise, the signal being what a system can generate, the noise being the rest. Step11: Hyperparameters for Priors Step12: Hyperparameters for inferred inputs Step13: LFADS Optimization hyperparameters Step14: Warming up the KL penalties Step15: Note, there is currently a HUGE amount of debate about what the correct parameter value here is for the KL penalty. kl_max = 1 is what creates a lower bound on the (marginal) log likelihood of the data, but folks argue it could be higher or lower than 1. Myself, I have never played around with this HP, but I have the idea that LFADS may benefit from < 1 values, as LFADS is not really being used for random sampling from the distribution of spiking data. Step16: Train the LFADS model Step17: Note that the first loop could take a few minutes to run, because the LFADS model is unrolled, and therefore the JIT (just in time) compilation is slow, and happens "just in time", which is the first training loop iteration. On my computer, the JIT compilation takes a few minutes. Step18: Save the LFADS model parameters Step19: LFADS Visualization Step20: And coming back to our example signal, how well does LFADS do on it, compared to the other much easier to implement methods? A noticeable improvement on inferring the underlying rate. Step21: That single example can't tell the whole story so let us look at the average. LFADS is much better than spatial averaging across a large set of trials. Step22: Compare the inferred inputs learned by LFADS to the actual inputs to the integrator RNN. Step23: Compare the inferred initial state for the LFADS generator to the actual initial state of the integrator RNN.
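The KL warm-up mentioned in the steps is just a clipped linear ramp over training batches. Here is a sketch consistent with the hyperparameters set in the code above (kl_warmup_start/end, kl_min, kl_max); the authoritative schedule lives in lfads_tutorial.optimize.get_kl_warmup_fun and may differ in detail.
import numpy as onp

def kl_warmup(batch, start=500.0, end=1000.0, kl_min=0.01, kl_max=1.0):
    frac = (batch - start) / (end - start)          # 0 before start, 1 after end
    return kl_min + (kl_max - kl_min) * float(onp.clip(frac, 0.0, 1.0))

print([round(kl_warmup(b), 3) for b in (0.0, 500.0, 750.0, 1000.0, 2000.0)])
# -> [0.01, 0.01, 0.505, 1.0, 1.0]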
3,912
<ASSISTANT_TASK:> Python Code: print(__doc__) import numpy as np np.random.seed(123) import matplotlib.pyplot as plt from skopt.benchmarks import branin as _branin def branin(x, noise_level=0.): return _branin(x) + noise_level * np.random.randn() from matplotlib.colors import LogNorm def plot_branin(): fig, ax = plt.subplots() x1_values = np.linspace(-5, 10, 100) x2_values = np.linspace(0, 15, 100) x_ax, y_ax = np.meshgrid(x1_values, x2_values) vals = np.c_[x_ax.ravel(), y_ax.ravel()] fx = np.reshape([branin(val) for val in vals], (100, 100)) cm = ax.pcolormesh(x_ax, y_ax, fx, norm=LogNorm(vmin=fx.min(), vmax=fx.max()), cmap='viridis_r') minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]]) ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14, lw=0, label="Minima") cb = fig.colorbar(cm) cb.set_label("f(x)") ax.legend(loc="best", numpoints=1) ax.set_xlabel("X1") ax.set_xlim([-5, 10]) ax.set_ylabel("X2") ax.set_ylim([0, 15]) plot_branin() from functools import partial from skopt import gp_minimize, forest_minimize, dummy_minimize func = partial(branin, noise_level=2.0) bounds = [(-5.0, 10.0), (0.0, 15.0)] n_calls = 60 def run(minimizer, n_iter=5): return [minimizer(func, bounds, n_calls=n_calls, random_state=n) for n in range(n_iter)] # Random search dummy_res = run(dummy_minimize) # Gaussian processes gp_res = run(gp_minimize) # Random forest rf_res = run(partial(forest_minimize, base_estimator="RF")) # Extra trees et_res = run(partial(forest_minimize, base_estimator="ET")) from skopt.plots import plot_convergence plot = plot_convergence(("dummy_minimize", dummy_res), ("gp_minimize", gp_res), ("forest_minimize('rf')", rf_res), ("forest_minimize('et')", et_res), true_minimum=0.397887, yscale="log") plot.legend(loc="best", prop={'size': 6}, numpoints=1) <END_TASK>
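Each run() call above returns a list of scipy-style OptimizeResult objects, so the best point over the repeated runs can be read off directly. A usage sketch for the objects just created:
best = min(gp_res, key=lambda r: r.fun)
print('best x:', best.x, 'best f(x):', best.fun)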
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Toy model Step2: This shows the value of the two-dimensional branin function and the three minima Step3: Note that this can take a few minutes.
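What plot_convergence draws is the running minimum of the recorded objective values; the same trace can be recovered by hand from any single result. A sketch using the gp_res list built above:
import numpy as np
trace = np.minimum.accumulate(gp_res[0].func_vals)
print(trace[:10])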
3,913
<ASSISTANT_TASK:> Python Code: from dcgpy import expression_gdual_vdouble as expression from dcgpy import kernel_set_gdual_vdouble as kernel_set from pyaudi import gdual_vdouble as gdual from matplotlib import pyplot as plt import numpy as np from numpy import sin, cos from random import randint, random np.seterr(all='ignore') # avoids numpy complaining about malformed expressions being evaluated early on %matplotlib inline kernels = kernel_set(["sum", "mul", "pdiv", "diff"])() # note the call operator (returns the list of kernels) dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213)) n_points = 50 x = [] v = [] k = [] for i in range(n_points): x.append(random()*2 + 2) v.append(random()*2 + 2) k.append(random()*2 + 2) x = gdual(x,"x",1) v = gdual(v,"v",1) k = gdual(k) def fitness_call(dCGP, x, v, k): res = dCGP([x,v,k])[0] dPdx = np.array(res.get_derivative({"dx": 1})) dPdv = np.array(res.get_derivative({"dv": 1})) xcoeff = np.array(x.constant_cf) vcoeff = np.array(v.constant_cf) kcoeff = np.array(k.constant_cf) err = dPdx/dPdv - kcoeff * xcoeff / vcoeff return sum(err * err), 3 # We run an evolutionary strategy ES(1 + offspring) def run_experiment(max_gen, offsprings, dCGP, x, v, k, screen_output=False): chromosome = [1] * offsprings fitness = [1] *offsprings best_chromosome = dCGP.get() best_fitness = 1e10 for g in range(max_gen): for i in range(offsprings): check = 0 while(check < 1e-3): dCGP.set(best_chromosome) dCGP.mutate_active(i+1) # we mutate a number of increasingly higher active genes fitness[i], check = fitness_call(dCGP, x,v,k) chromosome[i] = dCGP.get() for i in range(offsprings): if fitness[i] <= best_fitness: if (fitness[i] != best_fitness) and screen_output: dCGP.set(chromosome[i]) print("New best found: gen: ", g, " value: ", fitness[i], " ", dCGP.simplify(["x","v","k"])) best_chromosome = chromosome[i] best_fitness = fitness[i] if best_fitness < 1e-12: break dCGP.set(best_chromosome) return g, dCGP # We run nexp experiments to accumulate statistics for the ERT nexp = 100 offsprings = 10 stop = 2000 res = [] print("restart: \t gen: \t expression:") for i in range(nexp): dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213)) g, dCGP = run_experiment(stop, 10, dCGP, x,v,k, False) res.append(g) if g < (stop-1): print(i, "\t\t", res[i], "\t", dCGP(["x","v","k"]), " a.k.a ", dCGP.simplify(["x","v","k"])) one_sol = dCGP res = np.array(res) ERT = sum(res) / sum(res<(stop-1)) print("ERT Expected run time - avg. 
number of function evaluations needed: ", ERT * offsprings) print(one_sol.simplify(["x","v","k"])) plt.rcParams["figure.figsize"] = [20,20] one_sol.visualize(["x","v","k"]) kernels = kernel_set(["sum", "mul", "pdiv", "diff","sin","cos"])() # note the call operator (returns the list of kernels) dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213)) n_points = 50 omega = [] theta = [] c = [] for i in range(n_points): omega.append(random()*10 - 5) theta.append(random()*10 - 5) c.append(random()*10) omega = gdual(omega,"omega",1) theta = gdual(theta,"theta",1) c = gdual(c) def fitness_call(dCGP, theta, omega, c): res = dCGP([theta, omega, c])[0] dPdtheta = np.array(res.get_derivative({"dtheta": 1})) dPdomega = np.array(res.get_derivative({"domega": 1})) thetacoeff = np.array(theta.constant_cf) omegacoeff = np.array(omega.constant_cf) ccoeff = np.array(c.constant_cf) err = dPdtheta/dPdomega + (-ccoeff * np.sin(thetacoeff)) / omegacoeff check = sum(dPdtheta*dPdtheta + dPdomega*dPdomega) return sum(err * err ), check # We run an evolutionary strategy ES(1 + offspring) def run_experiment(max_gen, offsprings, dCGP, theta, omega, c, screen_output=False): chromosome = [1] * offsprings fitness = [1] *offsprings best_chromosome = dCGP.get() best_fitness = 1e10 for g in range(max_gen): for i in range(offsprings): check = 0 while(check < 1e-3): dCGP.set(best_chromosome) dCGP.mutate_active(i+1) # we mutate a number of increasingly higher active genes fitness[i], check = fitness_call(dCGP, theta, omega, c) chromosome[i] = dCGP.get() for i in range(offsprings): if fitness[i] <= best_fitness: if (fitness[i] != best_fitness) and screen_output: dCGP.set(chromosome[i]) print("New best found: gen: ", g, " value: ", fitness[i], " ", dCGP.simplify(["theta","omega","c"])) best_chromosome = chromosome[i] best_fitness = fitness[i] if best_fitness < 1e-12: break dCGP.set(best_chromosome) return g, dCGP # We run nexp experiments to accumulate statistic for the ERT nexp = 100 offsprings = 10 stop = 2000 res = [] print("restart: \t gen: \t expression:") for i in range(nexp): dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213)) g, dCGP = run_experiment(stop, 10, dCGP, theta, omega, c, False) res.append(g) if g < (stop-1): print(i, "\t\t", res[i], "\t", dCGP(["theta","omega","c"]), " a.k.a ", dCGP.simplify(["theta","omega","c"])) one_sol = dCGP res = np.array(res) ERT = sum(res) / sum(res<(stop-1)) print("ERT Expected run time - avg. 
number of function evaluations needed: ", ERT * offsprings) print(one_sol.simplify(["theta","omega","c"])) plt.rcParams["figure.figsize"] = [20,20] one_sol.visualize(["theta","omega","c"]) kernels = kernel_set(["sum", "mul", "pdiv", "diff"])() # note the call operator (returns the list of kernels) dCGP = expression(inputs=3, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213)) n_points = 50 v = [] omega = [] r = [] theta = [] mu = [] for i in range(n_points): v.append(random()*2 + 2) omega.append(random()*1 + 1) r.append(random() + 0.1) theta.append(random()*2 + 2) mu.append(random() + 1) r = gdual(r,"r",1) omega = gdual(omega,"omega",1) v = gdual(v,"v",1) theta = gdual(theta,"theta",1) mu = gdual(mu) ## Use this fitness if energy conservation is to be found (it basically forces the expression to depend on v) def fitness_call(dCGP, r, v, theta, omega, mu): res = dCGP([r, v, theta, omega, mu])[0] dPdr = np.array(res.get_derivative({"dr": 1})) dPdv = np.array(res.get_derivative({"dv": 1})) dPdtheta = np.array(res.get_derivative({"dtheta": 1})) dPdomega = np.array(res.get_derivative({"domega": 1})) rcoeff = np.array(r.constant_cf) vcoeff = np.array(v.constant_cf) thetacoeff = np.array(theta.constant_cf) omegacoeff = np.array(omega.constant_cf) mucoeff = np.array(mu.constant_cf) err = dPdr / dPdv + (-mucoeff/rcoeff**2 + rcoeff*omegacoeff**2) / vcoeff + dPdtheta / dPdv / vcoeff * omegacoeff + dPdomega / dPdv / vcoeff * (-2*vcoeff*omegacoeff/rcoeff) check = sum(dPdr*dPdr + dPdv*dPdv + dPdomega*dPdomega + dPdtheta*dPdtheta) return sum(err * err), check ## Use this fitness if any conservation is to be found (will always converge to angular momentum) def fitness_call_free(dCGP, r, v, theta, omega, mu): res = dCGP([r, v, theta, omega, mu])[0] dPdr = np.array(res.get_derivative({"dr": 1})) dPdv = np.array(res.get_derivative({"dv": 1})) dPdtheta = np.array(res.get_derivative({"dtheta": 1})) dPdomega = np.array(res.get_derivative({"domega": 1})) rcoeff = np.array(r.constant_cf) vcoeff = np.array(v.constant_cf) thetacoeff = np.array(theta.constant_cf) omegacoeff = np.array(omega.constant_cf) mucoeff = np.array(mu.constant_cf) err = dPdr * vcoeff + dPdv * (-mucoeff/rcoeff**2 + rcoeff*omegacoeff**2) + dPdtheta * omegacoeff + dPdomega * (-2*vcoeff*omegacoeff/rcoeff) check = sum(dPdr*dPdr + dPdv*dPdv +dPdomega*dPdomega+ dPdtheta*dPdtheta) return sum(err * err ), check # We run an evolutionary strategy ES(1 + offspring) def run_experiment(max_gen, offsprings, dCGP, r, v, theta, omega, mu, obj_fun, screen_output=False): chromosome = [1] * offsprings fitness = [1] *offsprings best_chromosome = dCGP.get() best_fitness = 1e10 for g in range(max_gen): for i in range(offsprings): check = 0 while(check < 1e-3): dCGP.set(best_chromosome) dCGP.mutate_active(i+1) # we mutate a number of increasingly higher active genes fitness[i], check = obj_fun(dCGP, r, v, theta, omega, mu) chromosome[i] = dCGP.get() for i in range(offsprings): if fitness[i] <= best_fitness: if (fitness[i] != best_fitness) and screen_output: dCGP.set(chromosome[i]) print("New best found: gen: ", g, " value: ", fitness[i], " ", dCGP.simplify(["r","v","theta","omega","mu"])) best_chromosome = chromosome[i] best_fitness = fitness[i] if best_fitness < 1e-12: break dCGP.set(best_chromosome) return g, dCGP # We run nexp experiments to accumulate statistic for the ERT (angular momentum case) nexp = 100 offsprings = 10 stop = 2000 #100000 res = [] print("restart: \t gen: \t expression:") for i in range(nexp): 
dCGP = expression(inputs=5, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213)) g, dCGP = run_experiment(stop, 10, dCGP, r, v, theta, omega, mu, fitness_call_free, False) res.append(g) if g < (stop-1): print(i, "\t\t", res[i], "\t", dCGP(["r","v","theta","omega","mu"]), " a.k.a ", dCGP.simplify(["r","v","theta","omega","mu"])) one_sol = dCGP res = np.array(res) ERT = sum(res) / sum(res<(stop-1)) print("ERT Expected run time - avg. number of function evaluations needed: ", ERT * offsprings) print(one_sol.simplify(["r","v","theta","omega","mu"])) plt.rcParams["figure.figsize"] = [20,20] one_sol.visualize(["r","v","theta","omega","mu"]) # We run nexp experiments to accumulate statistics for the ERT (energy conservation case) nexp = 100 offsprings = 10 stop = 100000 res = [] print("restart: \t gen: \t expression:") for i in range(nexp): dCGP = expression(inputs=5, outputs=1, rows=1, cols=15, levels_back=16, arity=2, kernels=kernels, seed = randint(0,234213213)) g, dCGP = run_experiment(stop, 10, dCGP, r, v, theta, omega, mu, fitness_call, False) res.append(g) if g < (stop-1): print(i, "\t\t", res[i], "\t", dCGP(["r","v","theta","omega","mu"]), " a.k.a ", dCGP.simplify(["r","v","theta","omega","mu"])) one_sol = dCGP res = np.array(res) ERT = sum(res) / sum(res<(stop-1)) print("ERT Expected run time - avg. number of function evaluations needed: ", ERT * offsprings) <END_TASK>
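A quick independent check of the run above: the first integral that fitness_call_free reliably recovers is the angular momentum L = r**2 * omega, and its time derivative along the two-body flow vanishes identically (mu drops out, since L does not depend on v or theta).
import numpy as np

rng = np.random.RandomState(0)
r_ = rng.rand(1000) + 1.0
v_ = rng.rand(1000) + 1.0
om_ = rng.rand(1000) + 1.0
# dL/dt = (dL/dr) * rdot + (dL/domega) * omegadot, with rdot = v and
# omegadot = -2*v*omega/r; the dL/dtheta and dL/dv terms are zero
dLdt = (2 * r_ * om_) * v_ + (r_**2) * (-2 * v_ * om_ / r_)
print(np.abs(dLdt).max())   # ~1e-15, conserved to round-off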
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: We consider a set of differential equations in the form Step2: We define 50 random control points where we check that the prime integral holds Step3: Simple pendulum Step4: We define 50 random control points where we check that the prime integral holds Step5: The two-body problem Step6: We define 50 random control points where we check that the prime integral holds
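For the pendulum block the analogous first integral is the energy E = omega**2 / 2 - c*cos(theta); along thetadot = omega, omegadot = -c*sin(theta) its time derivative cancels exactly, which is the property the fitness above rewards. A quick numerical confirmation:
import numpy as np

rng = np.random.RandomState(1)
theta = rng.rand(1000) * 10 - 5
omega = rng.rand(1000) * 10 - 5
c = rng.rand(1000) * 10
# dE/dt = (dE/domega) * omegadot + (dE/dtheta) * thetadot
dEdt = omega * (-c * np.sin(theta)) + (c * np.sin(theta)) * omega
print(np.abs(dEdt).max())   # exactly 0.0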
3,914
<ASSISTANT_TASK:> Python Code: bigfile = open('/Users/chengjun/百度云同步盘/Writing/OWS/ows-raw.txt', 'rb') chunkSize = 1000000 chunk = bigfile.readlines(chunkSize) print len(chunk) with open("/Users/chengjun/GitHub/cjc/data/ows_tweets_sample.txt", 'w') as f: for i in chunk: f.write(i) with open("/Users/chengjun/GitHub/cjc/data/ows_tweets_sample.txt", 'rb') as f: lines = f.readlines() # total number of lines len(lines) # inspect the first line lines[0] varNames = lines[0].replace('\n', '').split(',') varNames len(varNames) lines[1344] with open("/Users/chengjun/GitHub/cjc/data/ows_tweets_sample_clean.txt", 'w') as f: right_line = '' # the repaired line, initialized as an empty string blocks = [] # rows confirmed complete get appended to blocks for line in lines: right_line += line.replace('\n', ' ') line_length = len(right_line.split(',')) if line_length >= 14: blocks.append(right_line) right_line = '' for i in blocks: f.write(i + '\n') len(blocks) blocks[1344] with open("/Users/chengjun/GitHub/cjc/data/ows_tweets_sample_clean4.txt", 'w') as f: right_line = '' # the repaired line, initialized as an empty string blocks = [] # rows confirmed complete get appended to blocks for line in lines: right_line += line.replace('\n', ' ').replace('\r', ' ') line_length = len(right_line.split(',')) if line_length >= 14: blocks.append(right_line) right_line = '' for i in blocks: f.write(i + '\n') blocks[1344] # hint: you may need to change the path below with open("/Users/chengjun/GitHub/cjc/data/ows_tweets_sample_clean.txt", 'rb') as f: chunk = f.readlines() len(chunk) chunk[:3] import csv clean_lines = (line.replace('\x00','') \ for line in chunk[1:]) lines = csv.reader(clean_lines, delimiter=',', \ quotechar='"') import pandas as pd df = pd.read_csv("/Users/chengjun/GitHub/cjc/data/ows_tweets_sample_clean.txt",\ sep = ',', quotechar='"') df[:3] df.Text[1] df['From User'] from collections import defaultdict data_dict = defaultdict(int) line_num = 0 lines = csv.reader((line.replace('\x00','') for line in chunk[1:]), delimiter=',', quotechar='"') for i in lines: line_num +=1 data_dict[i[8]] +=1 # i[8] is the user field data_dict.items()[:5] print line_num %matplotlib inline from matplotlib.font_manager import FontProperties import matplotlib.pyplot as plt import matplotlib #matplotlib.rcParams['font.sans-serif'] = ['Microsoft YaHei'] # set the default font matplotlib.rc("savefig", dpi=100) font = FontProperties(fname=r'/Users/chengjun/github/cjc/data/msyh.ttf', size=14) # note: change this path for your machine plt.hist(data_dict.values()) #plt.yscale('log') #plt.xscale('log') plt.xlabel(u'发帖数', fontproperties=font) plt.ylabel(u'人数', fontproperties=font) plt.show() tweet_dict = defaultdict(int) for i in data_dict.values(): tweet_dict[i] += 1 plt.loglog(tweet_dict.keys(), tweet_dict.values(), 'ro',linewidth=2) plt.xlabel(u'推特数', fontproperties=font) plt.ylabel(u'人数', fontproperties=font ) plt.show() import numpy as np import statsmodels.api as sm def powerPlot(d_value, d_freq, color, marker): d_freq = [i + 1 for i in d_freq] d_prob = [float(i)/sum(d_freq) for i in d_freq] #d_rank = ss.rankdata(d_value).astype(int) x = np.log(d_value) y = np.log(d_prob) xx = sm.add_constant(x, prepend=True) res = sm.OLS(y,xx).fit() constant,beta = res.params r2 = res.rsquared plt.plot(d_value, d_prob, linestyle = '', color = color, marker = marker) plt.plot(d_value, np.exp(constant+x*beta),"red") plt.xscale('log'); plt.yscale('log') plt.text(max(d_value)/2,max(d_prob)/10, r'$\beta$ = ' + str(round(beta,2)) +'\n' + r'$R^2$ = ' + str(round(r2, 2))) histo, bin_edges = np.histogram(data_dict.values(), 15) bin_center = 0.5*(bin_edges[1:] + bin_edges[:-1]) powerPlot(bin_center,histo, 'r', 'o') #lg=plt.legend(labels = [u'Tweets', u'Fit'], loc=3, fontsize=20) 
plt.ylabel(u'概率', fontproperties=font) plt.xlabel(u'推特数', fontproperties=font) plt.show() import statsmodels.api as sm from collections import defaultdict import numpy as np def powerPlot(data): d = sorted(data, reverse = True ) d_table = defaultdict(int) for k in d: d_table[k] += 1 d_value = sorted(d_table) d_value = [i+1 for i in d_value] d_freq = [d_table[i]+1 for i in d_value] d_prob = [float(i)/sum(d_freq) for i in d_freq] #d_rank = ss.rankdata(d_value).astype(int) x = np.log(d_value) y = np.log(d_prob) xx = sm.add_constant(x, prepend=True) res = sm.OLS(y,xx).fit() constant,beta = res.params r2 = res.rsquared plt.plot(d_value, d_prob, 'ro') plt.plot(d_value, np.exp(constant+x*beta),"red") plt.xscale('log'); plt.yscale('log') plt.text(max(d_value)/2,max(d_prob)/5, 'Beta = ' + str(round(beta,2)) +'\n' + 'R squared = ' + str(round(r2, 2))) plt.title('Distribution') plt.ylabel('P(K)') plt.xlabel('K') plt.show() powerPlot(data_dict.values()) import powerlaw def plotPowerlaw(data,ax,col,xlab): fit = powerlaw.Fit(data,xmin=2) #fit = powerlaw.Fit(data) fit.plot_pdf(color = col, linewidth = 2) a,x = (fit.power_law.alpha,fit.power_law.xmin) fit.power_law.plot_pdf(color = col, linestyle = 'dotted', ax = ax, \ label = r"$\alpha = %d \:\:, x_{min} = %d$" % (a,x)) ax.set_xlabel(xlab, fontsize = 20) ax.set_ylabel('$Probability$', fontsize = 20) plt.legend(loc = 0, frameon = False) from collections import defaultdict data_dict = defaultdict(int) for i in df['From User']: data_dict[i] += 1 import matplotlib.cm as cm cmap = cm.get_cmap('rainbow_r',6) fig = plt.figure(figsize=(6, 4),facecolor='white') ax = fig.add_subplot(1, 1, 1) plotPowerlaw(data_dict.values(), ax,cmap(1), '$Gold\;Metals$') tweet = '''RT @AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!! #OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com http://ccc.nju.edu.cn RT !!HELP!!!!''' import re import twitter_text import re tweet = '''RT @AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!! #OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com http://ccc.nju.edu.cn RT !!HELP!!!!''' rt_patterns = re.compile(r"(RT|via)((?:\b\W*@\w+)+)", \ re.IGNORECASE) rt_user_name = rt_patterns.findall(tweet)[0][1].strip(' @') rt_user_name import re tweet = '''@AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!! #OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com http://ccc.nju.edu.cn RT !!HELP!!!!''' rt_patterns = re.compile(r"(RT|via)((?:\b\W*@\w+)+)", re.IGNORECASE) rt_user_name = rt_patterns.findall(tweet) print rt_user_name if rt_user_name: print 'it exits.' else: print 'None' import re def extract_rt_user(tweet): rt_patterns = re.compile(r"(RT|via)((?:\b\W*@\w+)+)", re.IGNORECASE) rt_user_name = rt_patterns.findall(tweet) if rt_user_name: rt_user_name = rt_user_name[0][1].strip(' @') else: rt_user_name = None return rt_user_name tweet = '''@AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!! 
#OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com http://ccc.nju.edu.cn RT !!HELP!!!!''' print extract_rt_user(tweet) def extract_tweet_text(tweet, at_names, urls): for i in at_names: tweet = tweet.replace(i, '') for j in urls: tweet = tweet.replace(j, '') marks = ['RT @', '@', '&quot;', '#', '\n', '\t', ' '] for k in marks: tweet = tweet.replace(k, '') return tweet import twitter_text tweet = '''RT @AnonKitsu: ALERT!!!!!!!!!!COPS ARE KETTLING PROTESTERS IN PARK W HELICOPTERS AND PADDYWAGONS!!!! #OCCUPYWALLSTREET #OWS #OCCUPYNY PLEASE @chengjun @mili http://computational-communication.com http://ccc.nju.edu.cn RT !!HELP!!!!''' ex = twitter_text.Extractor(tweet) at_names = ex.extract_mentioned_screen_names() urls = ex.extract_urls() hashtags = ex.extract_hashtags() rt_user = extract_rt_user(tweet) tweet_text = extract_tweet_text(tweet, at_names, urls) print at_names, urls, hashtags, rt_user,'-------->', tweet_text import csv lines = csv.reader((line.replace('\x00','') for line in chunk[1:]), delimiter=',', quotechar='"') tweets = [i[1] for i in lines] for tweet in tweets[:5]: ex = twitter_text.Extractor(tweet) at_names = ex.extract_mentioned_screen_names() urls = ex.extract_urls() hashtags = ex.extract_hashtags() rt_user = extract_rt_user(tweet) tweet_text = extract_tweet_text(tweet, at_names, urls) print at_names, urls, hashtags, rt_user, print tweet_text import csv lines = csv.reader((line.replace('\x00','') \ for line in chunk[1:]), \ delimiter=',', quotechar='"') tweet_user_data = [(i[1], i[8]) for i in lines] for tweet,user in tweet_user_data: rt_user = extract_rt_user(tweet) if rt_user: print rt_user, ',', user <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 2. Cleaning wrongly wrapped lines Step2: Problem: the first line holds the variable names Step3: How do we handle the bad line breaks? Step4: 3. Reading the data and splitting the columns correctly Step5: 4. Counting tweets per user Step6: 5. Cleaning the tweet text Step7: Installing twitter_text Step8: Getting the cleaned tweet text Step9: Food for thought:
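A minimal, self-contained sketch of the line-repair heuristic described in Steps 3-4 (the threshold of 14 comma-separated fields comes from the code above and is specific to this dataset):

def repair_wrapped_rows(lines, n_fields=14):
    # Glue physical lines together until the logical record has at
    # least n_fields comma-separated fields, then start a new record.
    records, buf = [], ''
    for line in lines:
        buf += line.replace('\n', ' ').replace('\r', ' ')
        if len(buf.split(',')) >= n_fields:
            records.append(buf)
            buf = ''
    return records

sample = ['1,2,3,4,5,6,7,8,9,10,11,12,broken\n', ' tail,14\n']
print(repair_wrapped_rows(sample))  # one repaired 14-field record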
3,915
<ASSISTANT_TASK:> Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import tensorflow as tf print(tf.config.list_physical_devices('GPU')) # In the tf.keras.layers package, layers are objects. To construct a layer, # simply construct the object. Most layers take as a first argument the number # of output dimensions / channels. layer = tf.keras.layers.Dense(100) # The number of input dimensions is often unnecessary, as it can be inferred # the first time the layer is used, but it can be provided if you want to # specify it manually, which is useful in some complex models. layer = tf.keras.layers.Dense(10, input_shape=(None, 5)) # To use a layer, simply call it. layer(tf.zeros([10, 5])) # Layers have many useful methods. For example, you can inspect all variables # in a layer using `layer.variables` and trainable variables using # `layer.trainable_variables`. In this case a fully-connected layer # will have variables for weights and biases. layer.variables # The variables are also accessible through nice accessors layer.kernel, layer.bias class MyDenseLayer(tf.keras.layers.Layer): def __init__(self, num_outputs): super(MyDenseLayer, self).__init__() self.num_outputs = num_outputs def build(self, input_shape): self.kernel = self.add_weight("kernel", shape=[int(input_shape[-1]), self.num_outputs]) def call(self, inputs): return tf.matmul(inputs, self.kernel) layer = MyDenseLayer(10) _ = layer(tf.zeros([10, 5])) # Calling the layer `.builds` it. print([var.name for var in layer.trainable_variables]) class ResnetIdentityBlock(tf.keras.Model): def __init__(self, kernel_size, filters): super(ResnetIdentityBlock, self).__init__(name='') filters1, filters2, filters3 = filters self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1)) self.bn2a = tf.keras.layers.BatchNormalization() self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same') self.bn2b = tf.keras.layers.BatchNormalization() self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1)) self.bn2c = tf.keras.layers.BatchNormalization() def call(self, input_tensor, training=False): x = self.conv2a(input_tensor) x = self.bn2a(x, training=training) x = tf.nn.relu(x) x = self.conv2b(x) x = self.bn2b(x, training=training) x = tf.nn.relu(x) x = self.conv2c(x) x = self.bn2c(x, training=training) x += input_tensor return tf.nn.relu(x) block = ResnetIdentityBlock(1, [1, 2, 3]) _ = block(tf.zeros([1, 2, 3, 3])) block.layers len(block.variables) block.summary() my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1), input_shape=( None, None, 3)), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2D(2, 1, padding='same'), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2D(3, (1, 1)), tf.keras.layers.BatchNormalization()]) my_seq(tf.zeros([1, 2, 3, 3])) my_seq.summary() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Custom layers Step2: Layers: a common set of useful operations Step3: See the documentation for the full list of existing layers. It includes many layers such as Dense (fully connected), Conv2D, LSTM, BatchNormalization, and Dropout. Step4: Implementing a custom layer Step5: Using standard layers wherever possible generally keeps code easier to read and maintain, because readers are familiar with how standard layers behave. If you want a layer that is not in tf.keras.layers, consider opening a GitHub issue or, better yet, sending a pull request. Step6: In most cases, however, a model is built by calling layers one after another. The tf.keras.Sequential class lets you implement this in very little code.
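A small, hedged extension of the custom-layer pattern above: the same dense layer plus a get_config method so the layer survives model serialization (the method name and update pattern follow the standard tf.keras.layers.Layer API):

import tensorflow as tf

class MyDenseWithConfig(tf.keras.layers.Layer):
  def __init__(self, num_outputs, **kwargs):
    super(MyDenseWithConfig, self).__init__(**kwargs)
    self.num_outputs = num_outputs

  def build(self, input_shape):
    self.kernel = self.add_weight("kernel",
                                  shape=[int(input_shape[-1]), self.num_outputs])

  def call(self, inputs):
    return tf.matmul(inputs, self.kernel)

  def get_config(self):
    # Everything needed to rebuild the layer from a plain dict.
    config = super(MyDenseWithConfig, self).get_config()
    config.update({"num_outputs": self.num_outputs})
    return config

layer = MyDenseWithConfig(10)
_ = layer(tf.zeros([10, 5]))
print(layer.get_config())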
3,916
<ASSISTANT_TASK:> Python Code: import string def norm_words(words): words = words.lower().translate(None, string.punctuation) return words jeopardy["clean_question"] = jeopardy["Question"].apply(norm_words) jeopardy["clean_answer"] = jeopardy["Answer"].apply(norm_words) jeopardy.head() def norm_value(value): try: value = int(value.translate(None, string.punctuation)) except: value = 0 return value jeopardy["clean_value"] = jeopardy["Value"].apply(norm_value) jeopardy["Air Date"] = pd.to_datetime(jeopardy["Air Date"]) print(jeopardy.dtypes) jeopardy.head() def ans_in_q(row): match_count = 0 split_answer = row["clean_answer"].split(" ") split_question = row["clean_question"].split(" ") try: split_answer.remove("the") except: pass if len(split_answer) == 0: return 0 else: for word in split_answer: if word in split_question: match_count += 1 return match_count / len(split_answer) jeopardy["answer_in_question"] = jeopardy.apply(ans_in_q, axis=1) print(jeopardy["answer_in_question"].mean()) jeopardy[jeopardy["answer_in_question"] > 0].head() jeopardy[(jeopardy["answer_in_question"] > 0) & (jeopardy["clean_question"].apply(string.split).apply(len) > 6)].head() jeopardy = jeopardy.sort_values(by="Air Date") question_overlap = [] terms_used = set() for index, row in jeopardy.iterrows(): match_count = 0 split_question = row["clean_question"].split(" ") for word in split_question: if len(word) < 6: split_question.remove(word) for word in split_question: if word in terms_used: match_count += 1 terms_used.add(word) if len(split_question) > 0: match_count /= float(len(split_question)) question_overlap.append(match_count) jeopardy["question_overlap"] = question_overlap print(jeopardy["question_overlap"].mean()) jeopardy.tail() def value(row): if row["clean_value"] > 800: value = 1 else: value = 0 return value jeopardy["high_value"] = jeopardy.apply(value, axis=1) jeopardy.head() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Normalizing columns Step2: Answers in questions Step3: Only 0.6% of the answers appear in the question itself. Within this 0.6%, a sample of the questions shows that they are all multiple-choice questions, which suggests it is very unlikely for the answer to appear in the question itself. Step4: Low-value vs. high-value questions
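A quick spot-check of the two normalizers defined above (this assumes the Python 2 str.translate(None, ...) semantics used in the snippet, and reuses norm_words/norm_value):

print(norm_words("What's an APPLE?"))  # -> 'whats an apple'
print(norm_value('$2,000'))            # -> 2000
print(norm_value('None'))              # -> 0, via the except branch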
3,917
<ASSISTANT_TASK:> Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import tensorflow as tf import numpy as np import tensorflow_probability as tfp import matplotlib.pyplot as plt import seaborn as sns tfd = tfp.distributions # Use try/except so we can easily re-execute the whole notebook. try: tf.enable_eager_execution() except: pass num_vars = 2 # Number of variables (`n` in formula). var_dim = 1 # Dimensionality of each variable `x[i]`. num_components = 3 # Number of components for each mixture (`K` in formula). sigma = 5e-2 # Fixed standard deviation of each component. # Choose some random (component) modes. component_mean = tfd.Uniform().sample([num_vars, num_components, var_dim]) factorial_mog = tfd.Independent( tfd.MixtureSameFamily( # Assume uniform weight on each component. mixture_distribution=tfd.Categorical( logits=tf.zeros([num_vars, num_components])), components_distribution=tfd.MultivariateNormalDiag( loc=component_mean, scale_diag=[sigma])), reinterpreted_batch_ndims=1) plt.figure(figsize=(6,5)) # Compute density. nx = 250 # Number of bins per dimension. x = np.linspace(-3 * sigma, 1 + 3 * sigma, nx).astype('float32') vals = tf.reshape(tf.stack(np.meshgrid(x, x), axis=2), (-1, num_vars, var_dim)) probs = factorial_mog.prob(vals).numpy().reshape(nx, nx) # Display as image. from matplotlib.colors import ListedColormap cmap = ListedColormap(sns.color_palette("Blues", 256)) p = plt.pcolor(x, x, probs, cmap=cmap) ax = plt.axis('tight'); # Plot locations of means. means_np = component_mean.numpy().squeeze() for mu_x in means_np[0]: for mu_y in means_np[1]: plt.scatter(mu_x, mu_y, s=150, marker='*', c='r', edgecolor='none'); plt.axis(ax); plt.xlabel('$x_1$') plt.ylabel('$x_2$') plt.title('Density of factorial mixture of Gaussians'); samples = factorial_mog.sample(1000).numpy() g = sns.jointplot( x=samples[:, 0, 0], y=samples[:, 1, 0], kind="scatter", marginal_kws=dict(bins=50)) g.set_axis_labels("$x_1$", "$x_2$"); <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Factorial mixtures Step2: Building a factorial mixture of Gaussians with TFP Step3: Note the use of tfd.Independent. This "meta-distribution" applies a reduce_sum in log_prob over the rightmost reinterpreted_batch_ndims batch dimensions. Here, that sums out the variable dimension when computing log_prob, leaving only the batch dimension. It has no effect on sampling. Step4: Plotting samples and estimates of the marginal densities
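A small sanity check on the distribution built above (reuses factorial_mog from the snippet; shapes follow from num_vars=2 and var_dim=1):

s = factorial_mog.sample(5)
print(s.shape)                          # (5, 2, 1): (samples, num_vars, var_dim)
print(factorial_mog.log_prob(s).shape)  # (5,): the variable dimension is summed out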
3,918
<ASSISTANT_TASK:> Python Code:
import pandas as pd

df = pd.DataFrame({'Date': ['2019-01-01', '2019-02-08', '2019-02-08', '2019-03-08']})
df['Date'] = pd.to_datetime(df['Date'])
df['Date'] = df['Date'].dt.strftime('%d-%b-%Y')
<END_TASK>
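For reference, the expected frame after the conversion above (a sketch; reuses df):

print(df)
#           Date
# 0  01-Jan-2019
# 1  08-Feb-2019
# 2  08-Feb-2019
# 3  08-Mar-2019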
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description:
3,919
<ASSISTANT_TASK:> Python Code: import os from dotenv import load_dotenv, find_dotenv # find .env automagically by walking up directories until it's found dotenv_path = find_dotenv() # load up the entries as environment variables load_dotenv(dotenv_path) PROJECT_DIR = os.path.dirname(dotenv_path) RAW_DATA_DIR = PROJECT_DIR + os.environ.get("RAW_DATA_DIR") INTERIM_DATA_DIR = PROJECT_DIR + os.environ.get("INTERIM_DATA_DIR") files=os.environ.get("FILES").split() print("Project directory is : {0}".format(PROJECT_DIR)) print("Raw data directory is : {0}".format(RAW_DATA_DIR)) print("Interim directory is : {0}".format(INTERIM_DATA_DIR)) # The following jupyter notebook magic makes the plots appear in the notebook. # If you run in batch mode, you have to save your plots as images. %matplotlib notebook # matplotlib.pyplot is traditionally imported as plt import matplotlib.pyplot as plt # Pandas is traditionaly imported as pd. import pandas as pd from pylab import rcParams # some display options to size the figures. feel free to experiment pd.set_option('display.max_columns', 25) rcParams['figure.figsize'] = (17, 7) samchild=pd.read_csv(RAW_DATA_DIR+'/samchild.csv') samchild.columns.values.tolist() features=[x for x in samadult.columns.values.tolist() if x.startswith('ALDURA')] import numpy as np np.sum(samadult.WKDAYR.notnull() & (samadult['WKDAYR']<900)) features=[ 'ALDURA3', 'ALDURA4', 'ALDURA6', 'ALDURA7', 'ALDURA8', 'ALDURA11', 'ALDURA17', 'ALDURA20', 'ALDURA21', 'ALDURA22', 'ALDURA23', 'ALDURA24', 'ALDURA27', 'ALDURA28', 'ALDURA29', 'ALDURA33'] target='WKDAYR' from sklearn.linear_model import LinearRegression from sklearn.metrics import r2_score reg=LinearRegression() data=samadult[samadult.WKDAYR.notnull() & (samadult.WKDAYR<900)] X=data[features] X.shape X=X+1 X.shape X=X.fillna(0) X.shape #X=np.log1p(X) #X.shape y=np.log1p(data[target]) indicator_names=[x+"_I" for x in features] indicators=pd.DataFrame() for feature in features: indicators[feature+"_I"]=data[feature].notnull().astype(int) indicators X=pd.concat([X, indicators], axis=1, join_axes=[X.index]) X reg.fit(X,y) y_pred=reg.predict(X) score=r2_score(y, y_pred) import matplotlib.pyplot as plt plt.plot(y,y_pred,marker='.', linestyle='None', alpha=0.5 ) plt.xlabel('Y Train') plt.ylabel('Y Predict') plt.show() score from sklearn.linear_model import Ridge ridge=Ridge(alpha=0.07, normalize=True) ridge.fit(X,y) y_pred=ridge.predict(X) def display_plot(cv_scores, cv_scores_std): fig = plt.figure() ax = fig.add_subplot(1,1,1) ax.plot(alpha_space, cv_scores) std_error = cv_scores_std / np.sqrt(10) ax.fill_between(alpha_space, cv_scores + std_error, cv_scores - std_error, alpha=0.2) ax.set_ylabel('CV Score +/- Std Error') ax.set_xlabel('Alpha') ax.axhline(np.max(cv_scores), linestyle='--', color='.5') ax.set_xlim([alpha_space[0], alpha_space[-1]]) ax.set_xscale('log') plt.show() # Import necessary modules import numpy as np from sklearn.linear_model import Ridge from sklearn.model_selection import cross_val_score # Setup the array of alphas and lists to store scores alpha_space = np.logspace(-4, 0, 50) ridge_scores = [] ridge_scores_std = [] # Create a ridge regressor: ridge ridge = Ridge(normalize=True) # Compute scores over range of alphas for alpha in alpha_space: # Specify the alpha value to use: ridge.alpha ridge.alpha = alpha # Perform 10-fold CV: ridge_cv_scores ridge_cv_scores = cross_val_score(ridge, X, y, cv=10) # Append the mean of ridge_cv_scores to ridge_scores ridge_scores.append(np.mean(ridge_cv_scores)) # Append the std of 
ridge_cv_scores to ridge_scores_std ridge_scores_std.append(np.std(ridge_cv_scores)) # Display the plot display_plot(ridge_scores, ridge_scores_std) from sklearn.decomposition import PCA from mpl_toolkits.mplot3d import Axes3D from sklearn.cluster import KMeans fig = plt.figure(1, figsize=(8, 6)) plt.clf() ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=40, azim=20) plt.cla() pca = PCA(n_components=3) pca.fit(X) X_pca = pca.transform(X) kmean=KMeans(n_clusters=8) kmean.fit(X_pca) y_lab=kmean.labels_ # Reorder the labels to have colors matching the cluster results ax.scatter(X_pca[:, 0], X_pca[:, 1], X_pca[:, 2], label=y_lab,c=y_lab, cmap=plt.cm.spectral) ax.w_xaxis.set_ticklabels([]) ax.w_yaxis.set_ticklabels([]) ax.w_zaxis.set_ticklabels([]) plt.legend(bbox_to_anchor=(0, 1), loc='upper right', ncol=7) plt.show() y_lab <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: First some environment variables Step2: Importing pandas and matplotlib.pyplot Step3: Reading a file in Pandas
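For context, a hypothetical .env file matching the keys read above (the values are placeholders, not from the original project):

# .env (illustrative)
# RAW_DATA_DIR=/data/raw
# INTERIM_DATA_DIR=/data/interim
# FILES=samadult.csv samchild.csv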
3,920
<ASSISTANT_TASK:> Python Code: import folium m = folium.Map([45, 0], zoom_start=4) folium.Marker([45, -30], popup="inline implicit popup").add_to(m) folium.CircleMarker( location=[45, -10], radius=25, fill=True, popup=folium.Popup("inline explicit Popup"), ).add_to(m) ls = folium.PolyLine( locations=[[43, 7], [43, 13], [47, 13], [47, 7], [43, 7]], color="red" ) ls.add_child(folium.Popup("outline Popup on Polyline")) ls.add_to(m) gj = folium.GeoJson( data={"type": "Polygon", "coordinates": [[[27, 43], [33, 43], [33, 47], [27, 47]]]} ) gj.add_child(folium.Popup("outline Popup on GeoJSON")) gj.add_to(m) m m = folium.Map([45, 0], zoom_start=2) folium.Marker( location=[45, -10], popup=folium.Popup("Let's try quotes", parse_html=True, max_width=100), ).add_to(m) folium.Marker( location=[45, -30], popup=folium.Popup(u"Ça c'est chouette", parse_html=True, max_width="100%"), ).add_to(m) m import json import numpy as np import vincent scatter_points = { "x": np.random.uniform(size=(100,)), "y": np.random.uniform(size=(100,)), } # Let's create the vincent chart. scatter_chart = vincent.Scatter(scatter_points, iter_idx="x", width=600, height=300) # Let's convert it to JSON. scatter_json = scatter_chart.to_json() # Let's convert it to dict. scatter_dict = json.loads(scatter_json) m = folium.Map([43, -100], zoom_start=4) popup = folium.Popup() folium.Vega(scatter_chart, height=350, width=650).add_to(popup) folium.Marker([30, -120], popup=popup).add_to(m) # Let's create a Vega popup based on scatter_json. popup = folium.Popup(max_width=0) folium.Vega(scatter_json, height=350, width=650).add_to(popup) folium.Marker([30, -100], popup=popup).add_to(m) # Let's create a Vega popup based on scatter_dict. popup = folium.Popup(max_width=650) folium.Vega(scatter_dict, height=350, width=650).add_to(popup) folium.Marker([30, -80], popup=popup).add_to(m) m import branca m = folium.Map([43, -100], zoom_start=4) html = <h1> This is a big popup</h1><br> With a few lines of code... <p> <code> from numpy import *<br> exp(-2*pi) </code> </p> folium.Marker([30, -100], popup=html).add_to(m) m m = folium.Map([43, -100], zoom_start=4) html = <h1> This popup is an Iframe</h1><br> With a few lines of code... <p> <code> from numpy import *<br> exp(-2*pi) </code> </p> iframe = branca.element.IFrame(html=html, width=500, height=300) popup = folium.Popup(iframe, max_width=500) folium.Marker([30, -100], popup=popup).add_to(m) m import pandas as pd df = pd.DataFrame( data=[["apple", "oranges"], ["other", "stuff"]], columns=["cats", "dogs"] ) m = folium.Map([43, -100], zoom_start=4) html = df.to_html( classes="table table-striped table-hover table-condensed table-responsive" ) popup = folium.Popup(html) folium.Marker([30, -100], popup=popup).add_to(m) m # Let's create a Figure, with a map inside. f = branca.element.Figure() folium.Map([-25, 150], zoom_start=3).add_to(f) # Let's put the figure into an IFrame. iframe = branca.element.IFrame(width=500, height=300) f.add_to(iframe) # Let's put the IFrame in a Popup popup = folium.Popup(iframe, max_width=2650) # Let's create another map. m = folium.Map([43, -100], zoom_start=4) # Let's put the Popup on a marker, in the second map. folium.Marker([30, -100], popup=popup).add_to(m) # We get a map in a Popup. Not really useful, but powerful. m <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Vega Popup Step3: Fancy HTML popup Step5: You can also put any HTML code inside a Popup, thanks to the IFrame object. Step6: Note that you can put another Figure into an IFrame; this should let you do strange things...
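One extra step that is often useful outside a notebook: any of the maps above can be written to a standalone HTML page (m is whichever map was built last):

m.save('popups.html')  # open in a browser to interact with the popups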
3,921
<ASSISTANT_TASK:> Python Code:
from ipynb.fs.full.article_quality.db_monthly_stats import DBMonthlyStats, dump_aggregation

import configparser
config = configparser.ConfigParser()
config.read('../settings.cfg')

import os

def write_once(path, write_to):
    if not os.path.exists(path):
        print("Writing out " + path)
        with open(path, "w") as f:
            write_to(f)

dbms = DBMonthlyStats(config)

write_once(
    "../data/processed/enwiki.full_wiki_aggregation.tsv",
    lambda f: dump_aggregation(dbms.all_wiki_aggregation(), f))

write_once(
    "../data/processed/enwiki.wikiproject_women_scientists_aggregation.tsv",
    lambda f: dump_aggregation(dbms.wikiproject_aggregation("WikiProject_Women_scientists"), f))

write_once(
    "../data/processed/enwiki.wikiproject_oregon_aggregation.tsv",
    lambda f: dump_aggregation(dbms.wikiproject_aggregation("WikiProject_Oregon"), f))
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Read the configuration Step2: Utility to make sure we only generate files once Step3: Dump the monthly aggregations
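A quick, hedged way to verify one of the dumps (the column layout depends on dump_aggregation's schema, which is not shown here):

import pandas as pd

check = pd.read_csv('../data/processed/enwiki.full_wiki_aggregation.tsv', sep='\t')
print(check.shape)
print(check.head())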
3,922
<ASSISTANT_TASK:> Python Code:
import random

def rnd(n):
    return [random.uniform(-1, 1) for _ in range(n)]

SHOTS = 5000
x = rnd(SHOTS)
y = rnd(SHOTS)

def pairs(seq1, seq2):
    yes1, yes2, no1, no2 = [], [], [], []
    for a, b in zip(seq1, seq2):
        if (a*a + b*b)**.5 <= 1:
            yes1.append(a)
            yes2.append(b)
        else:
            no1.append(a)
            no2.append(b)
    return yes1, yes2, no1, no2

x_sim, y_sim, x_nao, y_nao = pairs(x, y)

%matplotlib inline
import matplotlib.pyplot as plt

plt.figure()
plt.axes().set_aspect('equal')
plt.grid()
plt.scatter(x_sim, y_sim, 3, color='b')
plt.scatter(x_nao, y_nao, 3, color='r')

4 * len(x_sim) / SHOTS

def pi(n):
    uni = random.uniform
    c = 0
    i = 0
    while i < n:
        if abs(complex(uni(-1, 1), uni(-1, 1))) <= 1:
            c += 1
        i += 1
    return c * 4.0 / n

res = [
    (1, 4.0),
    (10, 2.8),
    (100, 3.24),
    (1000, 3.096),
    (10000, 3.1248),
    (100000, 3.14144),
    (1000000, 3.142716),
    (10000000, 3.1410784),
    (100000000, 3.14149756),
    (1000000000, 3.141589804)
]

import math

plt.figure()
x, y = zip(*res)
x = [round(math.log(n, 10)) for n in x]
plt.plot(x, y)
plt.axhline(math.pi, color='r')
plt.grid()
plt.xticks(x, ['10**%1.0f' % a for a in x])
x
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Now we can select coordinate pairs inside the circle Step2: We now plot the shots inside the circle in blue, outside in red Step3: We can now see the π approximation Step4: The next function abstracts the process so far. Given n, pi(n) will compute an approximation of π by generating random coordinates and counting those that fall inside the circle Step5: Using this loop, I tried the pi() function with n at different orders of magnitude Step6: Now we can graph how the results of pi() approach the actual π (the red line)
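The table above also lets you check the expected Monte Carlo convergence: the error of pi(n) should shrink roughly like 1/sqrt(n). A short sketch reusing res:

import math

for n, est in res:
    print('n=%-10d error=%.6f  1/sqrt(n)=%.6f' % (n, abs(est - math.pi), 1 / math.sqrt(n)))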
3,923
<ASSISTANT_TASK:> Python Code:
# Test here the my_auth implementation.
def my_auth(username, password, required_scopes=None):
    """A dummy authentication function.

    :params: username, the username
    :params: password, the password
    :params: scopes, the scope
    :returns: `{"sub": username, "scope": ""}` on success, None on failure
    """
    raise NotImplementedError

def get_echo(tz, user=None):
    """
    :param: tz, the timezone
    :param: user, the authenticated user passed by `connexion`
    """
    raise NotImplementedError

render_markdown(
    f'''
    Play a bit with [Swagger UI]({api_server_url("ui")})
    '''
)

# Try to curl some requests!
!curl http://localhost:5000/datetime/v1/echo -kv

def decode_token(token):
    """
    :param: token, a generic token
    :return: a json object compliant with RFC7662 OAUTH2
    """
    if token_is_valid:
        return {
            'iss': 'http://api.example.com',
            'iat': 1561296569,
            'exp': 1561296571,
            'sub': 'ioggstream',
            'scope': ['local']
        }
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Authorization & securitySchemes Step3: Add user support in get_echo Step4: Test the my_auth implementation Step6: Bearer token & JWT Security
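A minimal sketch of a working decode_token built on PyJWT (the library choice and the shared HS256 secret are assumptions; per the docstrings above, connexion only needs an RFC 7662-style dict on success and None on failure):

import jwt  # PyJWT

SECRET = 'change-me'  # assumption: HS256 shared secret

def decode_token(token):
    try:
        return jwt.decode(token, SECRET, algorithms=['HS256'])
    except jwt.InvalidTokenError:
        return None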
3,924
<ASSISTANT_TASK:> Python Code: # import packages import pandas as pd # data management import matplotlib.pyplot as plt # graphics import datetime as dt # check today's date import sys # check Python version # IPython command, puts plots in notebook %matplotlib inline print('Today is', dt.date.today()) print('Python version:\n', sys.version, sep='') # experiment in this box a = 2*3 b = 2.0*3 c = 'abc' d = ['This', "is", 'not', "a", 'string'] e = d[3] # do this with a function because we're lazy and value our time def valuetype(x): print value and type of input x print('Value and type: ', x, ', ', type(x), sep='') # (a) valuetype(a) # (b) valuetype(b) # (c) valuetype(c) # (d) valuetype(d) # (e) valuetype(e) f = (1, 2, 3) g = {1: 'Chase', 2: 'Dave', 3: 'Spencer'} h = 'foo' + 'bar' i = (1 != 0) # (f) valuetype(f) # (g) valuetype(g) # (h) valuetype(h) # (i) valuetype(i) torf = True if torf: x = 1 else: x = 2 print('x =', x) first = 'Sarah' last = 'Beckett-Hile' # (a) firstoffirst = first[0] firstoffirst # (b) last.split('-') # (c) combo = first + ' ' + last[0] + '.' combo # (d) def lastinitial(name1, name2): combo = name1 + ' ' + name2[0] + '.' return combo lastinitial(first, last) lastinitial('Chase', 'Coleman') things = [1, '2', 3.0, 'four'] # (a) for thing in things: print('Value and type: ', thing, ', ', type(thing), sep='') # (b) for thing in things: if type(thing) == int: print('Value and type: ', thing, ', ', type(thing), sep='') # (c) [print('Value and type: ', thing, ', ', type(thing), sep='') for thing in things] [print('Value and type: ', thing, ', ', type(thing), sep='') for thing in things if type(thing) == int] data = {'Size': ['1 to 4', '5 to 9', '10 to 19', '20 to 49', '50 to 99', '100 to 249', '250 to 499', '500 to 999', '1000 to 2499', '2500 to 4999', '5000 to 9999', '10000+'], 'Firms': [2846416, 1020772, 598153, 373345, 115544, 63845, 19389, 9588, 6088, 2287, 1250, 1357], 'Emp': [5998912, 6714924, 8151891, 11425545, 8055535, 9788341, 6611734, 6340775, 8321486, 6738218, 6559020, 32556671]} bds = pd.DataFrame(data) bds = bds.set_index('Size') # (a) type(bds) # (b) bds.shape (c) list(bds) # or bsd.columns bds.index # (d) bds.dtypes # (a) bds['AvgEmp'] = bds['Emp']/bds['Firms'] bds.head(3) # (b) bds = bds.rename(columns={'Emp': 'Employment'}) bds.head(3) # (c) bds['Employment'].plot.bar() # everything has to be in same cell to apply to the same figure plt.style.use('fivethirtyeight') # (e) fig, ax = plt.subplots() # (a) bds['Firms'].plot.barh(ax=ax, color='red') # (b,c) ax.set_title('Numbers of firms by employment category') # (d) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step2: Question 0 Step3: Question 2 Step4: Question 3 Step5: Changed cell to Markdown with menu at top Step6: Question 5 Step7: Question 6 Step8: Question 7 Step9: Question 8
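Not required by the questions, but handy once the chart looks right: persist it to disk (reuses fig from the last answer):

fig.savefig('firms_by_size.png', dpi=150, bbox_inches='tight')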
3,925
<ASSISTANT_TASK:> Python Code: from __future__ import print_function import sys sys.path.append('../') import numpy as np import matplotlib.pyplot as plt import matplotlib.gridspec as gridspec import seaborn as sns from hopfield import Hopfield %matplotlib inline sns.set(font_scale=2.0) prng = np.random.RandomState(seed=100) normalize = True T = 1.0 n_store = 7 N_samples = 1000 n_dim = 400 nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_dim) nn.train(list_of_patterns, normalize=normalize) x = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) y = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) ax.hist(x) ax.hist(y) print(np.std(x)) fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) for i in range(10): x = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) ax.hist(x, alpha=0.5) n_dim = 400 nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_dim) nn.train(list_of_patterns, normalize=normalize) x = np.empty(n_dim) for i in range(N_samples): h = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) x = np.concatenate((x, h)) fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) n, bins, patches = ax.hist(x, bins=30) print(np.var(x)) print(nn.sigma) n_dimensions = [200, 800, 2000, 5000] fig = plt.figure(figsize=(16, 12)) gs = gridspec.GridSpec(2, 2) for index, n_dim in enumerate(n_dimensions): nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_store) nn.train(list_of_patterns, normalize=normalize) x = np.empty(n_dim) for i in range(N_samples): h = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) x = np.concatenate((x, h)) ax = fig.add_subplot(gs[index//2, index%2]) ax.set_xlim([-1, 1]) ax.set_title('n_dim = ' + str(n_dim) + ' std = ' + str(np.std(x))) weights = np.ones_like(x)/float(len(x)) n, bins, patches = ax.hist(x, bins=30, weights=weights, normed=False) n_dimensions = np.logspace(1, 4, num=20) variances = [] standar_deviations = [] for index, n_dim in enumerate(n_dimensions): print('number', index, 'of', n_dimensions.size, ' n_dim =', n_dim) n_dim = int(n_dim) nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_store) nn.train(list_of_patterns, normalize=normalize) x = np.empty(n_dim) for i in range(N_samples): h = np.dot(nn.w, np.sign(prng.normal(size=n_dim))) x = np.concatenate((x, h)) variances.append(np.var(x)) standar_deviations.append(np.std(x)) fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(111) ax.semilogx(n_dimensions, variances,'*-', markersize=16, label='var') ax.semilogx(n_dimensions, standar_deviations, '*-', markersize=16, label='std') ax.axhline(y=nn.sigma, color='k', label='nn.sigma') ax.legend() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Dependency on how many states are added Step2: We generate two random patterns and test whether the field (h) depends on the initial state. We see very similar normal behavior for both of them Step3: We then try this with 10 patterns to see how the distributions behave Step4: We see that the normal distribution is maintained; we then calculate the field h (the result of the np.dot(w, s) calculation) for a number of different initial random states and concatenate the results to see what the whole distribution looks like Step5: Dependence on network size Step6: Now we calculate the variance of the h vector as a function of the dimension
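For reference, a rough theoretical curve to compare against (this assumes the usual 1/n_dim Hebbian normalization inside Hopfield.train when normalize=True, which is not shown here): the crosstalk field variance should scale like n_store / n_dim.

import numpy as np

predicted_std = np.sqrt(n_store / n_dimensions)
print(predicted_std)  # compare with standar_deviations from the last cell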
3,926
<ASSISTANT_TASK:> Python Code: from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import time from tensorflow.python.client import timeline %matplotlib inline FLAGS = tf.app.flags.FLAGS # number of device count tf.app.flags.DEFINE_integer('num_cpu_core', 1, 'Number of CPU cores to use') tf.app.flags.DEFINE_integer('intra_op_parallelism_threads', 1, 'How many ops can be launched in parallel') tf.app.flags.DEFINE_integer('num_gpu_core', 0, 'Number of GPU cores to use') device_id = -1 # Global Variable Counter for device_id used def next_device(use_cpu = True): ''' See if there is available next device; Args: use_cpu, global device_id Return: new device id ''' global device_id if (use_cpu): if ((device_id + 1) < FLAGS.num_cpu_core): device_id += 1 device = '/cpu:%d' % device_id else: if ((device_id + 1) < FLAGS.num_gpu_core): device_id += 1 device = '/gpu:%d' % device_id return device def mnist_loader(): Load MNIST data in tensorflow readable format The script comes from: https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/examples/tutorials/mnist/input_data.py import gzip import os import tempfile import numpy from six.moves import urllib from six.moves import xrange from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets mnist = read_data_sets('MNIST_data', one_hot=True) n_samples = mnist.train.num_examples return (mnist, n_samples) (mnist, n_samples) = mnist_loader() print('Number of available data: %d' % n_samples) x_sample = mnist.test.next_batch(100)[0] plt.figure(figsize=(8, 4)) for i in range(6): plt.subplot(2, 3, i + 1) plt.imshow(x_sample[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray") plt.title("MNIST Data") plt.colorbar() plt.tight_layout() def xavier_init(neuron_in, neuron_out, constant=1): low = -constant*np.sqrt(6/(neuron_in + neuron_out)) high = constant*np.sqrt(6/(neuron_in + neuron_out)) return tf.random_uniform((neuron_in, neuron_out), minval=low, maxval=high, dtype=tf.float32) sess_ = tf.Session() weights = [] for i in range(1000): weights.append(sess_.run(xavier_init(3,3))) weights = np.array(weights).reshape((-1,1)) n, bins, patches = plt.hist(weights, bins=20) plt.xlabel('weight value') plt.ylabel('counts') plt.title('Histogram of Weights Initialized by Xavier') plt.show() def vae_init(batch_size=100, learn_rate=0.001, x_in=784, encoder_1=500, encoder_2=500, decoder_1=500, decoder_2=500, z=20): This function build a varational autoencoder based on https://jmetzen.github.io/2015-11-27/vae.html In consideration of simplicity and future work on optimization, we removed the class structure A tensorflow session, optimizer and cost function as well as input data will be returned # configuration of network # x_in = 784 # encoder_1 = 500 # encoder_2 = 500 # decoder_1 = 500 # decoder_2 = 500 # z = 20 # input x = tf.placeholder(tf.float32, [None, x_in]) # initialize weights # two layers encoder encoder_h1 = tf.Variable(xavier_init(x_in, encoder_1)) encoder_h2 = tf.Variable(xavier_init(encoder_1, encoder_2)) encoder_mu = tf.Variable(xavier_init(encoder_2, z)) encoder_sigma = tf.Variable(xavier_init(encoder_2, z)) encoder_b1 = tf.Variable(tf.zeros([encoder_1], dtype=tf.float32)) encoder_b2 = tf.Variable(tf.zeros([encoder_2], dtype=tf.float32)) encoder_bias_mu = tf.Variable(tf.zeros([z], dtype=tf.float32)) encoder_bias_sigma = tf.Variable(tf.zeros([z], dtype=tf.float32)) # two layers decoder decoder_h1 = 
tf.Variable(xavier_init(z, decoder_1)) decoder_h2 = tf.Variable(xavier_init(decoder_1, decoder_2)) decoder_mu = tf.Variable(xavier_init(decoder_2, x_in)) decoder_sigma = tf.Variable(xavier_init(decoder_2, x_in)) decoder_b1 = tf.Variable(tf.zeros([decoder_1], dtype=tf.float32)) decoder_b2 = tf.Variable(tf.zeros([decoder_2], dtype=tf.float32)) decoder_bias_mu = tf.Variable(tf.zeros([x_in], dtype=tf.float32)) decoder_bias_sigma = tf.Variable(tf.zeros([x_in], dtype=tf.float32)) # compute mean and sigma of z with tf.device(next_device()): layer_1 = tf.nn.softplus(tf.add(tf.matmul(x, encoder_h1), encoder_b1)) with tf.device(next_device()): layer_2 = tf.nn.softplus(tf.add(tf.matmul(layer_1, encoder_h2), encoder_b2)) z_mean = tf.add(tf.matmul(layer_2, encoder_mu), encoder_bias_mu) z_sigma = tf.add(tf.matmul(layer_2, encoder_sigma), encoder_bias_sigma) # compute z by drawing sample from normal distribution eps = tf.random_normal((batch_size, z), 0, 1, dtype=tf.float32) z_val = tf.add(z_mean, tf.multiply(tf.sqrt(tf.exp(z_sigma)), eps)) # use z to reconstruct the network with tf.device(next_device()): layer_1 = tf.nn.softplus(tf.add(tf.matmul(z_val, decoder_h1), decoder_b1)) with tf.device(next_device()): layer_2 = tf.nn.softplus(tf.add(tf.matmul(layer_1, decoder_h2), decoder_b2)) x_prime = tf.nn.sigmoid(tf.add(tf.matmul(layer_2, decoder_mu), decoder_bias_mu)) # define loss function # reconstruction lost recons_loss = -tf.reduce_sum(x * tf.log(1e-10 + x_prime) + (1-x) * tf.log(1e-10 + 1 - x_prime), 1) # KL distance latent_loss = -0.5 * tf.reduce_sum(1 + z_sigma - tf.square(z_mean) - tf.exp(z_val), 1) # summing two loss terms together cost = tf.reduce_mean(recons_loss + latent_loss) # use ADAM to optimize optimizer = tf.train.AdamOptimizer(learning_rate=learn_rate).minimize(cost) # initialize all variables init = tf.global_variables_initializer() # config_ = tf.ConfigProto(device_count={"CPU": FLAGS.num_cpu_core}, # limit to num_cpu_core CPU usage inter_op_parallelism_threads = 1, intra_op_parallelism_threads = FLAGS.intra_op_parallelism_threads, log_device_placement=True) # define and return the session sess = tf.Session(config=config_) sess.run(init) return (sess, optimizer, cost, x, x_prime) def vae_train(sess, optimizer, cost, x, batch_size=100, learn_rate=0.001, x_in=784, encoder_1=500, encoder_2=500, decoder_1=500, decoder_2=500, z=20, train_epoch=1, verb=1, verb_step=5): start_time = time.time() for epoch in range(train_epoch): avg_cost = 0 total_batch = int(n_samples / batch_size) for i in range(total_batch): batch_x, _ = mnist.train.next_batch(batch_size) run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE) run_metadata = tf.RunMetadata() _, c = sess.run((optimizer, cost), feed_dict={x: batch_x}, options=run_options, run_metadata=run_metadata) avg_cost += c / n_samples * batch_size elapsed_time = (time.time() - start_time)* 1000 / verb_step start_time = time.time() if verb: if epoch % verb_step == 0: # print('Epoch:%04d\tCost=%.2f' % (epoch+1, avg_cost)) print('Epoch:%04d' % (epoch+1), 'cost=', '{:.9f}'.format(avg_cost), 'Elapsed time: ','%.9f' % elapsed_time) # Create the Timeline object, and write it to a json tl = timeline.Timeline(run_metadata.step_stats) ctf = tl.generate_chrome_trace_format() with open('timeline.json', 'w') as f: f.write(ctf) (sess, optimizer, cost, x, x_prime) = vae_init() vae_train(sess, optimizer, cost, x, train_epoch=10) x_sample = mnist.test.next_batch(100)[0] x_reconstruct = sess.run(x_prime, feed_dict={x: x_sample}) plt.figure(figsize=(8, 12)) for i 
in range(5): plt.subplot(5, 2, 2*i + 1) plt.imshow(x_sample[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray") plt.title("Test input") plt.colorbar() plt.subplot(5, 2, 2*i + 2) plt.imshow(x_reconstruct[i].reshape(28, 28), vmin=0, vmax=1, cmap="gray") plt.title("Reconstruction") plt.colorbar() plt.tight_layout() sess.close() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step2: mnist_loader Step3: Test mnist data Step4: We are generating synthetic data in this project, so all the 55000 samples can be used for training Step5: xavier_init Step6: Test xavier_init Step8: vae_init Step9: vae_train
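The reparameterization trick used inside vae_init, in isolation (a NumPy sketch; mu and log_var stand in for z_mean and z_sigma):

import numpy as np

mu, log_var = 0.0, 0.0           # pretend encoder outputs
eps = np.random.normal(size=5)   # the randomness lives only in eps
z = mu + np.sqrt(np.exp(log_var)) * eps
print(z)  # gradients can flow through mu and log_var, not through eps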
3,927
<ASSISTANT_TASK:> Python Code: from babi_loader import BabiDataset, pad_collate from torch.utils.data import DataLoader # There are 20 tasks, we should control which task we would like to load task_id = 1 dataset = BabiDataset( task_id ) dataloader = DataLoader(dataset, batch_size=4, shuffle=False, collate_fn=pad_collate) contexts, questions, answers = next(iter(dataloader)) print(contexts.shape) print(questions.shape) print(answers.shape) print(contexts) print(questions) print(answers) references = dataset.QA.IVOCAB print(references) def interpret_indexed_tensor(contexts, questions, answers): for n, data in enumerate(zip(contexts, questions, answers)): context = data[0] question = data[1] answer = data[2] print(n) for i, sentence in enumerate(context): s = ' '.join([references[elem.item()] for elem in sentence]) print(f'{i}th sentence: {s}') s = ' '.join([references[elem.item()] for elem in question]) print(f'question: {s}') s = references[answer.item()] print(f'answer: {s}') interpret_indexed_tensor(contexts, questions, answers) import os import torch import torch.nn as nn import torch.nn.functional as F import torch.nn.init as init from torch.autograd import Variable class InputModule(nn.Module): def __init__(self, vocab_size, hidden_size): super(InputModule, self).__init__() self.hidden_size = hidden_size ################################################################################## # # nn.GRU use the bidirectional parameter to control the type of GRU # ################################################################################## self.gru = nn.GRU(hidden_size, hidden_size, bidirectional=True, batch_first=True) self.dropout = nn.Dropout(0.1) # we should all the word_embedding the same, while we using the different module def forward(self, contexts, word_embedding): ''' contexts.size() -> (#batch, #sentence, #token) word_embedding() -> (#batch, #sentence x #token, #embedding) position_encoding() -> (#batch, #sentence, #embedding) facts.size() -> (#batch, #sentence, #hidden = #embedding) ''' batch_num, sen_num, token_num = contexts.size() contexts = contexts.view(batch_num, -1) contexts = word_embedding(contexts) contexts = contexts.view(batch_num, sen_num, token_num, -1) contexts = self.position_encoding(contexts) contexts = self.dropout(contexts) ######################################################################### # # if you change the gru type, you should also change the initial hidden state # as bidirectional gru's shape is (2, *, *), while normal gru just need( 1, *, *) # ######################################################################### h0 = Variable(torch.zeros(2, batch_num, self.hidden_size).cuda()) facts, hdn = self.gru(contexts, h0) ######################################################################### # # if you use bi-directional GRU, you should fusion the output, # if you use normal GRU, commont the following code. 
# acconding to the equation (6) # ######################################################################### facts = facts[:, :, :self.hidden_size] + facts[:, :, self.hidden_size:] return facts def position_encoding(self, embedded_sentence): ''' embedded_sentence.size() -> (#batch, #sentence, #token, #embedding) l.size() -> (#sentence, #embedding) output.size() -> (#batch, #sentence, #embedding) ''' _, _, slen, elen = embedded_sentence.size() l = [[(1 - s/(slen-1)) - (e/(elen-1)) * (1 - 2*s/(slen-1)) for e in range(elen)] for s in range(slen)] l = torch.FloatTensor(l) l = l.unsqueeze(0) # for #batch l = l.unsqueeze(1) # for #sen l = l.expand_as(embedded_sentence) weighted = embedded_sentence * Variable(l.cuda()) return torch.sum(weighted, dim=2).squeeze(2) # sum with tokens class QuestionModule(nn.Module): def __init__(self, vocab_size, hidden_size): super(QuestionModule, self).__init__() self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True) def forward(self, questions, word_embedding): ''' questions.size() -> (#batch, #token) word_embedding() -> (#batch, #token, #embedding) gru() -> (1, #batch, #hidden) ''' questions = word_embedding(questions) _, questions = self.gru(questions) questions = questions.transpose(0, 1) return questions class AttentionGRUCell(nn.Module): def __init__(self, input_size, hidden_size): super(AttentionGRUCell, self).__init__() self.hidden_size = hidden_size self.Wr = nn.Linear(input_size, hidden_size) self.Ur = nn.Linear(hidden_size, hidden_size) self.W = nn.Linear(input_size, hidden_size) self.U = nn.Linear(hidden_size, hidden_size) def forward(self, fact, C, g): r = torch.sigmoid(self.Wr(fact) + self.Ur(C)) h_tilda = torch.tanh(self.W(fact) + r * self.U(C)) g = g.unsqueeze(1).expand_as(h_tilda) h = g * h_tilda + (1 - g) * C return h class AttentionGRU(nn.Module): def __init__(self, input_size, hidden_size): super(AttentionGRU, self).__init__() self.hidden_size = hidden_size self.AGRUCell = AttentionGRUCell(input_size, hidden_size) def forward(self, facts, G): batch_num, sen_num, embedding_size = facts.size() C = Variable(torch.zeros(self.hidden_size)).cuda() for sid in range(sen_num): fact = facts[:, sid, :] g = G[:, sid] if sid == 0: C = C.unsqueeze(0).expand_as(fact) C = self.AGRUCell(fact, C, g) return C class EpisodicMemory(nn.Module): def __init__(self, hidden_size): super(EpisodicMemory, self).__init__() self.AGRU = AttentionGRU(hidden_size, hidden_size) self.z1 = nn.Linear(4 * hidden_size, hidden_size) self.z2 = nn.Linear(hidden_size, 1) self.next_mem = nn.Linear(3 * hidden_size, hidden_size) def make_interaction(self, facts, questions, prevM): ''' facts.size() -> (#batch, #sentence, #hidden = #embedding) questions.size() -> (#batch, 1, #hidden) prevM.size() -> (#batch, #sentence = 1, #hidden = #embedding) z.size() -> (#batch, #sentence, 4 x #embedding) G.size() -> (#batch, #sentence) ''' batch_num, sen_num, embedding_size = facts.size() questions = questions.expand_as(facts) prevM = prevM.expand_as(facts) z = torch.cat([ facts * questions, facts * prevM, torch.abs(facts - questions), torch.abs(facts - prevM) ], dim=2) z = z.view(-1, 4 * embedding_size) G = torch.tanh(self.z1(z)) G = self.z2(G) G = G.view(batch_num, -1) G = F.softmax(G, dim=1) return G def forward(self, facts, questions, prevM): ''' facts.size() -> (#batch, #sentence, #hidden = #embedding) questions.size() -> (#batch, #sentence = 1, #hidden) prevM.size() -> (#batch, #sentence = 1, #hidden = #embedding) G.size() -> (#batch, #sentence) C.size() -> (#batch, #hidden) concat.size() -> 
(#batch, 3 x #embedding) ''' G = self.make_interaction(facts, questions, prevM) C = self.AGRU(facts, G) concat = torch.cat([prevM.squeeze(1), C, questions.squeeze(1)], dim=1) next_mem = F.relu(self.next_mem(concat)) next_mem = next_mem.unsqueeze(1) return next_mem class AnswerModule(nn.Module): def __init__(self, vocab_size, hidden_size): super(AnswerModule, self).__init__() self.z = nn.Linear(2 * hidden_size, vocab_size) self.dropout = nn.Dropout(0.1) def forward(self, M, questions): M = self.dropout(M) concat = torch.cat([M, questions], dim=2).squeeze(1) z = self.z(concat) return z class DMNPlus(nn.Module): ''' This class combine all the module above. The data flow is showed as the above image. ''' def __init__(self, hidden_size, vocab_size, num_hop=3, qa=None): super(DMNPlus, self).__init__() self.num_hop = num_hop self.qa = qa self.word_embedding = nn.Embedding(vocab_size, hidden_size, padding_idx=0, sparse=True).cuda() self.criterion = nn.CrossEntropyLoss(reduction='sum') self.input_module = InputModule(vocab_size, hidden_size) self.question_module = QuestionModule(vocab_size, hidden_size) self.memory = EpisodicMemory(hidden_size) self.answer_module = AnswerModule(vocab_size, hidden_size) def forward(self, contexts, questions): ''' contexts.size() -> (#batch, #sentence, #token) -> (#batch, #sentence, #hidden = #embedding) questions.size() -> (#batch, #token) -> (#batch, 1, #hidden) ''' facts = self.input_module(contexts, self.word_embedding) questions = self.question_module(questions, self.word_embedding) M = questions for hop in range(self.num_hop): M = self.memory(facts, questions, M) preds = self.answer_module(M, questions) return preds # train the index into a word, it's similar to the part 1 data explore def interpret_indexed_tensor(self, var): if len(var.size()) == 3: # var -> n x #sen x #token for n, sentences in enumerate(var): for i, sentence in enumerate(sentences): s = ' '.join([self.qa.IVOCAB[elem.data[0]] for elem in sentence]) print(f'{n}th of batch, {i}th sentence, {s}') elif len(var.size()) == 2: # var -> n x #token for n, sentence in enumerate(var): s = ' '.join([self.qa.IVOCAB[elem.data[0]] for elem in sentence]) print(f'{n}th of batch, {s}') elif len(var.size()) == 1: # var -> n (one token per batch) for n, token in enumerate(var): s = self.qa.IVOCAB[token.data[0]] print(f'{n}th of batch, {s}') # calculate the loss of the network def get_loss(self, contexts, questions, targets): output = self.forward(contexts, questions) loss = self.criterion(output, targets) reg_loss = 0 for param in self.parameters(): reg_loss += 0.001 * torch.sum(param * param) preds = F.softmax(output, dim=1) _, pred_ids = torch.max(preds, dim=1) corrects = (pred_ids.data == answers.cuda().data) acc = torch.mean(corrects.float()) return loss + reg_loss, acc task_id = 10 epochs = 3 if task_id in range(1, 21): #def train(task_id): dset = BabiDataset(task_id) vocab_size = len(dset.QA.VOCAB) hidden_size = 80 model = DMNPlus(hidden_size, vocab_size, num_hop=3, qa=dset.QA) model.cuda() best_acc = 0 optim = torch.optim.Adam(model.parameters()) for epoch in range(epochs): dset.set_mode('train') train_loader = DataLoader( dset, batch_size=32, shuffle=True, collate_fn=pad_collate ) model.train() total_acc = 0 cnt = 0 for batch_idx, data in enumerate(train_loader): contexts, questions, answers = data batch_size = contexts.size()[0] contexts = Variable(contexts.long().cuda()) questions = Variable(questions.long().cuda()) answers = Variable(answers.cuda()) loss, acc = model.get_loss(contexts, questions, 
answers) optim.zero_grad() loss.backward() optim.step() total_acc += acc * batch_size cnt += batch_size if batch_idx % 20 == 0: print(f'[Task {task_id}, Epoch {epoch}] [Training] loss : {loss.item(): {10}.{8}}, acc : {total_acc / cnt: {5}.{4}}, batch_idx : {batch_idx}') dset.set_mode('test') test_loader = DataLoader( dset, batch_size=100, shuffle=False, collate_fn=pad_collate ) test_acc = 0 cnt = 0 model.eval() for batch_idx, data in enumerate(test_loader): contexts, questions, answers = data batch_size = contexts.size()[0] contexts = Variable(contexts.long().cuda()) questions = Variable(questions.long().cuda()) answers = Variable(answers.cuda()) _, acc = model.get_loss(contexts, questions, answers) test_acc += acc * batch_size cnt += batch_size print(f'Task {task_id}, Epoch {epoch}] [Test] Accuracy : {test_acc / cnt: {5}.{4}}') def interpret_indexed_tensor(references, contexts, questions, answers, predictions): for n, data in enumerate(zip(contexts, questions, answers, predictions)): context = data[0] question = data[1] answer = data[2] predict = data[3] print(n) for i, sentence in enumerate(context): s = ' '.join([references[elem.item()] for elem in sentence]) print(f'{i}th sentence: {s}') q = ' '.join([references[elem.item()] for elem in question]) print(f'question: {q}') a = references[answer.item()] print(f'answer: {a}') p = references[predict.argmax().item()] print(f'predict: {p}') dset.set_mode('test') dataloader = DataLoader(dset, batch_size=4, shuffle=False, collate_fn=pad_collate) contexts, questions, answers = next(iter(dataloader)) contexts = Variable(contexts.long().cuda()) questions = Variable(questions.long().cuda()) # prediction model.eval() predicts = model(contexts, questions) references = dset.QA.IVOCAB #contexts, questions, answers = next(iter(dataloader)) #interpret_indexed_tensor(contexts, questions, answers) interpret_indexed_tensor(references, contexts, questions, answers, predicts) # default dataset is training set dataset = BabiDataset( task_id ) print('train set:', len(dataset)) # change the mode into "test", now dataset is testset dataset.set_mode("test") print('test set:', len(dataset)) class InputModule(nn.Module): def __init__(self, vocab_size, hidden_size): super(InputModule, self).__init__() self.hidden_size = hidden_size self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True) self.dropout = nn.Dropout(0.1) def forward(self, contexts, word_embedding): ''' contexts.size() -> (#batch, #sentence, #token) word_embedding() -> (#batch, #sentence x #token, #embedding) position_encoding() -> (#batch, #sentence, #embedding) facts.size() -> (#batch, #sentence, #hidden = #embedding) ''' batch_num, sen_num, token_num = contexts.size() contexts = contexts.view(batch_num, -1) contexts = word_embedding(contexts) contexts = contexts.view(batch_num, sen_num, token_num, -1) contexts = self.position_encoding(contexts) contexts = self.dropout(contexts) facts, hdn = self.gru(contexts) return facts def position_encoding(self, embedded_sentence): ''' embedded_sentence.size() -> (#batch, #sentence, #token, #embedding) l.size() -> (#sentence, #embedding) output.size() -> (#batch, #sentence, #embedding) ''' _, _, slen, elen = embedded_sentence.size() l = [[(1 - s/(slen-1)) - (e/(elen-1)) * (1 - 2*s/(slen-1)) for e in range(elen)] for s in range(slen)] l = torch.FloatTensor(l) l = l.unsqueeze(0) # for #batch l = l.unsqueeze(1) # for #sen l = l.expand_as(embedded_sentence) weighted = embedded_sentence * Variable(l.cuda()) return torch.sum(weighted, dim=2).squeeze(2) # sum with 
tokens task_id = 10 epochs = 3 if task_id in range(1, 21): #def train(task_id): dset = BabiDataset(task_id) vocab_size = len(dset.QA.VOCAB) hidden_size = 80 model = DMNPlus(hidden_size, vocab_size, num_hop=3, qa=dset.QA) model.cuda() best_acc = 0 optim = torch.optim.Adam(model.parameters()) for epoch in range(epochs): dset.set_mode('train') train_loader = DataLoader( dset, batch_size=32, shuffle=True, collate_fn=pad_collate ) model.train() total_acc = 0 cnt = 0 for batch_idx, data in enumerate(train_loader): contexts, questions, answers = data batch_size = contexts.size()[0] contexts = Variable(contexts.long().cuda()) questions = Variable(questions.long().cuda()) answers = Variable(answers.cuda()) loss, acc = model.get_loss(contexts, questions, answers) optim.zero_grad() loss.backward() optim.step() total_acc += acc * batch_size cnt += batch_size if batch_idx % 20 == 0: print(f'[Task {task_id}, Epoch {epoch}] [Training] loss : {loss.item(): {10}.{8}}, acc : {total_acc / cnt: {5}.{4}}, batch_idx : {batch_idx}') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Now that the dataloader is imported, let's print the data in each batch to see what it looks like Step2: As you can see, what gets printed are numeric indices rather than the actual text, so before output we need to map these indices back to recover the original text. Let's first look at the lookup table Step3: With this lookup table we can restore our data straightforwardly; we only need to map the indices back. Step4: From the test above it is clear how to use this DataLoader. Step5: The DMN pipeline uses many GRUs as encoders; as a quick refresher, the general GRU update is given by the following equations: Step6: In the module above you may have noticed the self.position_encoding() function, whose job is to add positional information to the input sentence. That is, position_encoding turns an input sentence {I, love, you} into output of the form {I1, love2, you3}. The motivation for this is given in the quotation below. Step7: 2.4 Memory Module Step8: <img src="./images/memory_equation.png" width = "50%" /> Step9: 2.5 Answer Module Step10: 2.6 Combine Step11: 3. Training and Test Step12: 3.2 Test Step13: 3.3 Show example Step14: 4. Exercise Step15: Answer: task 10 has 9,000 training examples and 1,000 test examples.
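A tiny sketch of the attention gate computed in make_interaction (Step 7's memory module): the per-sentence scores G are softmax-normalized along the sentence axis.

import torch
import torch.nn.functional as F

scores = torch.randn(2, 5)   # (#batch, #sentence) raw gate scores
G = F.softmax(scores, dim=1)
print(G.sum(dim=1))          # each batch row sums to 1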
3,928
<ASSISTANT_TASK:> Python Code:
!python --version
!cat ~/.keras/keras.json

import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
import keras

import numpy
print('numpy:', numpy.__version__)
import scipy
print('scipy:', scipy.__version__)
import matplotlib
print('matplotlib:', matplotlib.__version__)
import IPython
print('iPython:', IPython.__version__)
import sklearn
print('scikit-learn:', sklearn.__version__)
import keras
print('keras: ', keras.__version__)

# optional
import theano
print('Theano: ', theano.__version__)
import tensorflow as tf
print('Tensorflow: ', tf.__version__)
<END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Configure Keras with tensorflow Step2: Test if everything is up & running Step3: 2. Check installed versions
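If any of the imports above fails, the whole stack can usually be installed in one go (exact packages and pins are an assumption and depend on your Python version):

# pip install numpy scipy pandas matplotlib ipython scikit-learn keras tensorflow theano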
3,929
<ASSISTANT_TASK:> Python Code: %%bash rm $(pwd)/crash-* from nipype import SelectFiles, Node, Workflow from os.path import abspath as opap from nipype.interfaces.fsl import MCFLIRT, IsotropicSmooth # Create SelectFiles node templates={'func': '{subject_id}/ses-test/func/{subject_id}_ses-test_task-fingerfootlips_bold.nii.gz'} sf = Node(SelectFiles(templates), name='selectfiles') sf.inputs.base_directory = opap('/data/ds000114') sf.inputs.subject_id = 'sub-11' # Create Motion Correction Node mcflirt = Node(MCFLIRT(mean_vol=True, save_plots=True), name='mcflirt') # Create Smoothing node smooth = Node(IsotropicSmooth(fwhm=4), name='smooth') # Create a preprocessing workflow wf = Workflow(name="preprocWF") wf.base_dir = 'working_dir' # Connect the three nodes to each other wf.connect([(sf, mcflirt, [("func", "in_file")]), (mcflirt, smooth, [("out_file", "in_file")])]) # Let's run the workflow try: wf.run() except(RuntimeError) as err: print("RuntimeError:", err) else: raise !nipypecli crash $(pwd)/crash-*selectfiles-*.pklz !nipypecli crash -r $(pwd)/crash-*selectfiles-*.pklz wf.config['execution']['crashfile_format'] = 'txt' try: wf.run() except(RuntimeError) as err: print("RuntimeError:", err) else: raise !cat $(pwd)/crash-*selectfiles-*.txt from nipype.interfaces.fsl import IsotropicSmooth try: smooth = IsotropicSmooth(fwhm='4') except(Exception) as err: if "TraitError" in str(err.__class__): print("TraitError:", err) else: raise else: raise IsotropicSmooth.help() from nipype.interfaces.fsl import IsotropicSmooth try: smooth = IsotropicSmooth(output_type='NIFTIiii') except(Exception) as err: if "TraitError" in str(err.__class__): print("TraitError:", err) else: raise else: raise from nipype.algorithms.misc import Gunzip from nipype import Node files = ['/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz', '/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'] gunzip = Node(Gunzip(), name='gunzip',) try: gunzip.inputs.in_file = files except(Exception) as err: if "TraitError" in str(err.__class__): print("TraitError:", err) else: raise else: raise from nipype import MapNode gunzip = MapNode(Gunzip(), name='gunzip', iterfield=['in_file']) gunzip.inputs.in_file = files files = ['/data/ds000114/sub-01/func/sub-01_task-fingerfootlips_bold.nii.gz', '/data/ds000114/sub-03/func/sub-03_task-fingerfootlips_bold.nii.gz'] try: gunzip.inputs.in_file = files except(Exception) as err: if "TraitError" in str(err.__class__): print("TraitError:", err) else: raise else: raise from nipype.interfaces.spm import Smooth try: smooth = Smooth(in_files='/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz') except(Exception) as err: if "TraitError" in str(err.__class__): print("TraitError:", err) else: raise else: raise from nipype.interfaces.spm import Realign try: realign = Realign(in_files='/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz') except(Exception) as err: if "TraitError" in str(err.__class__): print("TraitError:", err) else: raise else: raise from nipype.interfaces.spm import Realign realign = Realign(register_to_mean=True) try: realign.run() except(ValueError) as err: print("ValueError:", err) else: raise realign.help() from nipype.interfaces.afni import Despike try: despike = Despike(in_file='/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz', output_type='NIFTI') except(Exception) as err: if "TraitError" in str(err.__class__): print("TraitError:", err) else: 
raise else: raise from nipype import SelectFiles, Node, Workflow from os.path import abspath as opap from nipype.interfaces.fsl import MCFLIRT, IsotropicSmooth # Create SelectFiles node templates={'func': '{subject_id}/func/{subject_id}_task-fingerfootlips_bold.nii.gz'} sf = Node(SelectFiles(templates), name='selectfiles') sf.inputs.base_directory = opap('/data/ds000114') sf.inputs.subject_id = 'sub-01' # Create Motion Correction Node mcflirt = Node(MCFLIRT(mean_vol=True, save_plots=True), name='mcflirt') # Create Smoothing node smooth = Node(IsotropicSmooth(fwhm=4), name='smooth') # Create a preprocessing workflow wf = Workflow(name="preprocWF") wf.base_dir = 'working_dir' # Connect the three nodes to each other wf.connect([(sf, mcflirt, [("func", "in_file")]), (mcflirt, smooth, [("out_file", "in_file")])]) # Create a new node mcflirt_NEW = Node(MCFLIRT(mean_vol=True), name='mcflirt_NEW') # Connect it to an already connected input field try: wf.connect([(mcflirt_NEW, smooth, [("out_file", "in_file")])]) except(Exception) as err: print("Exception:", err) else: raise <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Example Crash 1 Step2: Investigating the crash Step3: nipypecli allows you to rerun the crashed node using an additional option -r. Step4: When running in the terminal you can also try options that enable the Python or IPython debugger when re-executing Step5: Now you should have a new text file with your crash report. Step6: Example Crash 2 Step7: This will give you the error Step8: In a similar way, you will also get an error message if the input type is correct but you have a typo in the name Step9: Example Crash 3 Step10: This can be solved by using a MapNode Step11: Now, make sure that you specify files that actually exist, otherwise you will have a TraitError again Step12: By the way, note that those crashes don't create a crashfile, because they didn't happen during runtime, but still during workflow building. Step13: SPM problem 2 with *.nii.gz files Step14: This issue can be solved by unzipping the compressed NIfTI file before giving it as an input to an SPM node. This can either be done by using the Gunzip interface from Nipype or, even better, if the input is coming from an FSL interface, most of them have an input field output_type that you can set to 'NIFTI'. Step15: This gives you the error Step16: Using input fields that don't exist Step17: This results in the TraitError Step18: Now, let's create a new node and connect it to the already occupied input field in_file of the smooth node
3,930
<ASSISTANT_TASK:> Python Code: notes_file = 'synthnotes/data/note-events.parquet' pq_root_path = 'synthnotes/data/xml_extracted' pf = ParquetFile(notes_file) df = pf.to_pandas() xml_dir = 'synthnotes/data/xml_files' xml_files = os.listdir(xml_dir) ids = [int(f.split('.txt.xmi')[0]) for f in xml_files] notes = df[df.ROW_ID.isin(ids)] notes = notes.reset_index(drop=True) notes = notes.drop(['CHARTDATE','CHARTTIME','STORETIME','CGID','ISERROR'],axis=1) def get_notes_sample(df, n=100, category='Nursing'): notes = df[notes_df['CATEGORY'] == 'Nursing'] notes = notes[notes['ISERROR'].isnull()] notes = notes[notes['DESCRIPTION'] == 'Generic Note'] notes = notes.sample(n=n) notes = notes.reset_index(drop=True) return notes # parser = CtakesXmlParser() # schemas = list() # for file in xml_files: # xml_out = parser.parse(f'{xml_dir}/{file}') # for k, v in xml_out.items(): # feature_df = pd.DataFrame(list(v)) # if feature_df.shape[0] > 0: # table = pa.Table.from_pandas(feature_df) # pq.write_to_dataset(table, f'{pq_root_path}/{k}') # else: # print(f"{k} was empty for {file}") def get_df_from_pq(root, name): return pq.read_table(f'{root}/{name}').to_pandas() def transform_preds(df): df['frameset'] = df['frameset'].apply(lambda x: x.split('.')[0]) return df def transform_mentions(mentions): # Don't want this to fail if these have already been removed try: mentions = mentions.drop( ['conditional', 'history_of', 'generic', 'polarity', 'discovery_technique', 'subject'], axis=1) except: pass sorted_df = mentions.groupby(['sent_id', 'begin']) \ .apply(lambda x: x.sort_values(['begin', 'end'])) # Drop the mentions that are parts of a larger span. Only keep the containing span that holds multiple # mentions deduped = sorted_df.drop_duplicates(['sent_id', 'begin'], keep='last') deduped = deduped.drop_duplicates(['sent_id', 'end'], keep='first') return deduped.reset_index(drop=True) def set_template_token(df, column): df['template_token'] = df[column] return df def get_template_tokens(row): return pd.Series({ 'doc_id': row['doc_id'], 'sent_id': row['sent_id'], 'token': row['template_token'], 'begin': row['begin'], 'end': row['end'] }) # def merge_mentions_umls(mentions, umls): # umls['umls_xmi_id'] = umls['xmi_id'] # mentions = mentions.merge(umls[['umls_xmi_id', 'cui']], on='umls_xmi_id') # return mentions # def umls_dedup(umls): # return umls.drop_duplicates(subset=['cui']) # def set_umls_join_key(umls): # umls['umls_xmi_id'] = umls['xmi_id'] # return umls def set_sentence_pos(df): df = df.groupby(["doc_id"]).apply(lambda x: x.sort_values(["begin"])).reset_index(drop=True) df['sentence_number'] = df.groupby("doc_id").cumcount() return df def get_root_verb(row): pass def extract_sent(row): begin = row['begin'] end = row['end'] row['TEXT'] = row['TEXT'][begin:end] return row def write_notes(row): fn = f'raw_notes/{row["ROW_ID"]}' with open(fn, 'w') as f: f.write(row['TEXT']) def get_text_from_sentence(row, notes): doc = notes[notes['ROW_ID'] == row['doc_id']] b = row['begin'] e = row['end'] return doc['TEXT'].iloc[0][b:e] def edit_dist(row, term2): term1 = row.loc['preferred_text'] return lev_norm(term1, term2) def get_cui( mention, umls_df): ont_arr = list(map(int, mention['ontology_arr'].split())) or None ment_text = mention['text'] concepts = umls_df[umls_df['xmi_id'].isin(ont_arr)].loc[:, ['cui', 'preferred_text', 'xmi_id']] concepts['dist'] = concepts.apply(edit_dist, args=(ment_text,), axis=1) sorted_df = concepts.sort_values(by='dist', ascending=True).reset_index(drop=True) cui = sorted_df['cui'].iloc[0] 
xmi_id = sorted_df['xmi_id'].iloc[0] pref_text = sorted_df['preferred_text'].iloc[0] return cui, xmi_id, pref_text preds = get_df_from_pq(pq_root_path, 'predicates') mentions = get_df_from_pq(pq_root_path, 'mentions') umls = get_df_from_pq(pq_root_path, 'umls_concepts') sents = get_df_from_pq(pq_root_path, 'sentences') tokens = get_df_from_pq(pq_root_path, 'tokens') sents = sents.rename({'id': 'sent_id'}, axis=1) sents.head() sents = sents.rename({'id': 'sent_id'}, axis=1) sents = sents.merge(notes[['ROW_ID', 'TEXT']], left_on='doc_id', right_on='ROW_ID').drop('ROW_ID', axis=1) sents = sents.apply(extract_sent, axis=1) sents = sents.rename({'TEXT': 'text'}, axis=1) sents = set_sentence_pos(sents) sents_with_mentions = sents[ sents['sent_id'].isin( mentions.drop_duplicates(subset='sent_id')['sent_id'] ) ] umls = umls[~umls['preferred_text'].isna()] mentions = get_df_from_pq(pq_root_path, 'mentions') mentions = transform_mentions(mentions) mentions.head() mentions['text'] = mentions.apply(get_text_from_sentence, args=(notes,), axis=1) mentions.head() mentions = mentions.merge(sents_with_mentions[['sent_id', 'sentence_number']], on='sent_id') mentions.head() preds = transform_preds(preds) print(preds.shape) preds = preds[ preds['sent_id'].isin( sents_with_mentions['sent_id'] ) ] print(preds.shape) preds['text'] = preds.apply(get_text_from_sentence, args=(notes,), axis=1) mentions[['cui', 'umls_xmi_id', 'preferred_text']] = mentions. \ apply(get_cui, args=(umls,), axis=1, result_type='expand') mentions.head() mentions['template_token'] = mentions['mention_type'] preds['template_token'] = preds['frameset'] preds_toks = preds.apply(get_template_tokens, axis=1) mentions_toks = mentions.apply(get_template_tokens, axis=1) mentions_toks.groupby(['sent_id', 'end']).head() preds_toks.groupby(['sent_id', 'end']).head() template_tokens = preds_toks.append(mentions_toks) temp_tokens = template_tokens.groupby(['sent_id']).apply(lambda x: x.sort_values(['begin'])) temp_tokens.head() sem_templates = template_tokens.sort_values('begin').groupby('sent_id')['token'].apply(' '.join) sem_templates.head() temp_tokens.token.unique().shape sem_df = pd.DataFrame(sem_templates) # What is this? 
sem_df.head() sem_df.reset_index(level=0, inplace=True) sem_df = sem_df.rename(columns={'token': 'sem_template'}) sem_df = sem_df.merge(sents[['sent_id', 'sentence_number', 'doc_id', 'begin', 'end']], left_on='sent_id', right_on='sent_id' )#.drop('id', axis=1) sem_df.head() avg_sents_per_doc = sents.groupby('doc_id').size().mean() print(avg_sents_per_doc) avg_sents_with_ents_per_doc = sents_with_mentions.groupby('doc_id').size().mean() print(avg_sents_with_ents_per_doc) print(mentions['cui'].nunique()) mentions.groupby('doc_id').size().mean() mentions.groupby('sent_id').size().mean() tokens = tokens[(~tokens['sent_id'].isnull()) & (tokens['token_type'] != 'NewlineToken')] wc_by_doc = tokens.groupby('doc_id').count()['id'].reset_index(name='count') wc_by_doc['count'].mean() wc_by_sentence = tokens.groupby('sent_id')['id'].count().reset_index(name='count') wc_by_sentence['count'].mean() mention_counts = mentions.groupby('mention_type').size().reset_index(name='count') mention_counts mention_counts['frequency'] = mention_counts['count'] / mention_counts['count'].sum() mention_counts mentions_by_pos = pd.crosstab( mentions['mention_type'], mentions['sentence_number']).apply(lambda x: x / x.sum(), axis=0) mentions_by_pos cui_counts = mentions.groupby('cui').size().reset_index(name='count') cui_counts = cui_counts.sort_values('count', ascending=False).reset_index(drop=True) cui_counts.head(10) cui_counts['frequency'] = cui_counts['count'] / cui_counts['count'].sum() cui_counts.head(10) cui_counts_with_text = cui_counts.merge(mentions[['cui', 'preferred_text']], on='cui') \ .drop_duplicates('cui') \ .reset_index(drop=True) cui_counts_with_text.head(10) cui_by_pos = pd.crosstab(mentions['cui'], mentions['sentence_number']).apply(lambda x: x / x.sum(), axis=0) cui_by_pos.head() cui_by_pos.loc[:, 0].sort_values(ascending=False)[:10] sem_df.head() sem_df['sem_template'].nunique() count_temps = sem_df.groupby('sem_template').size().reset_index(name='count') count_temps = count_temps.sort_values('count', ascending=False).reset_index(drop=True) count_temps.head(10) count_temps['frequency'] = count_temps['count'] / count_temps['count'].sum() count_temps.head(10) sem_df.head() sem_df['sentence_number'].shape temp_by_pos = pd.crosstab(sem_df['sem_template'], sem_df['sentence_number']).apply(lambda x: x / x.sum(), axis=0) temp_by_pos.head() df_dir = 'data/processed_dfs' # Write sentences, mentions, predicates, and umls concepts to parquet, sem_df sents_with_mentions.to_parquet(f'{df_dir}/sentences.parquet') mentions.to_parquet(f'{df_dir}/mentions.parquet') preds.to_parquet(f'{df_dir}/predicates.parquet') umls.to_parquet(f'{df_dir}/umls.parquet') sem_df.to_parquet(f'{df_dir}/templates.parquet') temp_by_pos.to_parquet(f'{df_dir}/templates_by_pos.parquet') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Get the list of ids from the processed xml files so we can select a subset of the mimic notes Step2: Select the subset of notes that we have xml output from ctakes for. Reset the index and drop some unnecessary columns Step3: process the xml files and store in parquet locally Step4: Creating templates Step5: Pull in the dataframes for elements we need for processing Step6: Prep sentences DF Step7: Add position of sentence in document to sentences df Step8: remove sentences without entities Step9: Prep UMLS DF Step10: Pref Mentions DF Step11: Add original text to mentions Step12: Add sentence position to mentions Step13: Prep Predicates DF Step14: Remove predicates not in sentences with mentions Step15: Add original text to predicates Step16: Linking CUI codes to entities (mentions) Step17: Set the template tokens we're going to use Step18: Append the two template tokens dataframes Step19: Get the semantic templates Step20: Gather corpus statistics Step21: Average sentences w/ entities per doc Step22: Count of unique cuis (When removing overlapping text spans) Step23: Average # of cuis per doc Step24: Average # of cuis per sentence Step25: Average # of words per doc (excluding newline tokens and symbols) Step26: Average # of words per sentence Step27: Get frequency of mentions Step28: Frequency of mentions by sentence position Step29: Frequency of CUIs Step30: Frequency with preferred text Step31: Frequency of CUIs by sentence position Step32: Number of unique templates Step33: Frequency of templates (identified by sentence number) Step34: Frequency of templates by sentence position Step35: Write dataframes to parquet
3,931
<ASSISTANT_TASK:> Python Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import sys import numpy as np import pandas as pd import matplotlib.pyplot as plt data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() rides[:24*10].plot(x='dteday', y='cnt') dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std # Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] # Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] train_targets.head() # each element of x is an array with 53 features and each element of y is an array with 3 targets # each x is one hour features def get_batches(x, y, n_seqs, n_steps): '''Create a generator that returns batches of size n_seqs x n_steps from arr. 
Arguments --------- array x and array y: Array you want to make batches from n_seqs: Batch size, the number of sequences per batch n_steps: Number of sequence steps per batch ''' # Get the number of hours per batch and number of batches we can make hours_per_batch = n_seqs * n_steps n_batches = len(x)//hours_per_batch # convert from Pandas to np remove the index column x = x.reset_index().values[:,1:] y = y.reset_index().values[:,1:] # make only full batches x, y = x[:n_batches*hours_per_batch], y[:n_batches*hours_per_batch] # TODO: this needs to be optmized # x_temp will be ( n rows x n_steps wide) where each element is an array of 53 features # this first look splits the x with n rows and n_steps wide x_temp = [] y_temp = [] for st in range(0, n_batches*hours_per_batch, n_steps ): x_temp.append( x[st:st+n_steps] ) y_temp.append( y[st:st+n_steps] ) x = np.asarray(x_temp ) y = np.asarray(y_temp ) # this splits x in n_seqs rows so the return is a batch of n_seqs rows with n_steps wide # where each element is an array of 53 features (one hour from our data) for sq in range(0,(n_batches*hours_per_batch)//n_steps, n_seqs ): yield x[sq:sq+n_seqs,:,:], y[sq:sq+n_seqs,:,:] print(train_features.tail()) batches = get_batches(train_features, train_targets, 20, 96) x, y = next(batches) print(x.shape) # x, y = next(batches) # print(x.shape) import tensorflow as tf num_features = 56 num_targets = 3 batch_size = 10 # one step for each hour that we want the sequence to remember num_steps = 50 lstm_size = 256 num_layers = 2 learning_rate = 0.0005 keep_prob_val = 0.75 # Declare placeholders we'll feed into the graph inputs = tf.placeholder(tf.float32, [batch_size, None, num_features], name='inputs') targets = tf.placeholder(tf.float32, [batch_size, None, num_targets], name='targets') # Keep probability placeholder for drop out layers keep_prob = tf.placeholder(tf.float32, name='keep_prob') learningRate = tf.placeholder(tf.float32, name='learningRate') # # Use a basic LSTM cell # lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size) # # Add dropout to the cell # drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) # # Stack up multiple LSTM layers, for deep learning # #cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers) # initial_state = cell.zero_state(batch_size, tf.float32) #Replaced the code above because TF with GPU was complaining def lstm_cell(): cell = tf.contrib.rnn.BasicLSTMCell(lstm_size) return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)], state_is_tuple = True) initial_state = cell.zero_state(batch_size, tf.float32) outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32) # this is one thing that I still dont fully understood, I had to set the activation_fn=None so the # fully connected layer dont use any activation funcition, this seems to work predictions = tf.contrib.layers.fully_connected(outputs, 3, activation_fn=None) cost = tf.losses.mean_squared_error(targets, predictions) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), tf.cast(tf.round(targets), tf.int32)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) epochs = 100 saver = tf.train.Saver() #validation accuracy to plot val_accuracy=[] training_loss=[] with tf.Session() as sess: sess.run(tf.global_variables_initializer()) iteration = 1 for e in range(epochs): state = sess.run(initial_state) for ii, (x, y) in 
enumerate(get_batches(train_features, train_targets, batch_size, num_steps), 1): feed = {inputs: x, targets: y, keep_prob: keep_prob_val, initial_state: state} loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed) if iteration%5==0: print("Epoch: {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Train loss: {:.3f}".format(loss)) training_loss.append(loss) if iteration%25==0: val_acc = [] val_state = sess.run(cell.zero_state(batch_size, tf.float32)) for x, y in get_batches(val_features, val_targets, batch_size, num_steps): feed = {inputs: x, targets: y, keep_prob: 1, initial_state: val_state} batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed) val_acc.append(batch_acc) val_accuracy.append( np.mean(val_acc) ) print("Val acc: {:.3f}".format(np.mean(val_acc))) iteration +=1 saver.save(sess, "checkpoints/bike-sharing.ckpt") plt.plot(val_accuracy, label='Accuracy') plt.legend() _ = plt.ylim() plt.plot(training_loss, label='Loss') plt.legend() _ = plt.ylim() test_acc = [] #with tf.Session(graph=graph) as sess: with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) test_state = sess.run(cell.zero_state(batch_size, tf.float32)) for ii, (x, y) in enumerate(get_batches(test_features, test_targets, batch_size, num_steps), 1): feed = {inputs: x, targets: y, keep_prob: 1, initial_state: test_state} batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed) test_acc.append(batch_acc) print("Test accuracy: {:.3f}".format(np.mean(test_acc))) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) test_state = sess.run(cell.zero_state(batch_size, tf.float32)) fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] batch = get_batches(test_features, test_targets, batch_size, num_steps) x,y = next(batch) feed = {inputs: x, targets: y, keep_prob: 1, initial_state: test_state} pred = sess.run([predictions], feed_dict=feed) pred = pred[0].reshape(500,-1) pred[:,0] *= std pred[:,0] += mean lf = pred[:,0] # predictions = network.run(test_features).T*std + mean ax.plot(lf, label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(lf)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load and prepare the data Step2: Checking out the data Step3: Dummy variables Step4: Scaling target variables Step5: Splitting the data into training, testing, and validation sets Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). Step7: Let's build the get_batches function Step8: Let's test the batches above Step9: Build the network Step10: LSTM Step11: Output Step12: Validation Accuracy Step13: Training Step14: Check out your predictions Step15: Create a graph to compare the data and predictions
3,932
<ASSISTANT_TASK:> Python Code: import random list10 = [] for x in range(10): list10.append(random.randrange(100)) list100 = [] for x in range(100): list100.append(random.randrange(100)) list1000 = [] for x in range(1000): list1000.append(random.randrange(100)) def sort_list(old_list): def find_new_index(old_i): for new_i in range(1, len(new_list)): if new_list[new_i] > old_list[old_i]: return new_i new_list = [old_list[0]] for old_i in range(1, len(old_list)): if old_list[old_i] <= new_list[0]: new_list.insert(0, old_list[old_i]) elif old_list[old_i] >= new_list[len(new_list)-1]: new_list.insert(len(new_list), old_list[old_i]) else: new_list.insert(find_new_index(old_i), old_list[old_i]) return new_list %time sort_list(list10) %time sort_list(list100) %time sort_list(list1000) def bsort_list(old_list): new_list = [old_list[0]] def find_new_index(old_i): start_index = 0 end_index = len(new_list) - 1 while end_index - start_index > 1: middle_index = int((end_index - start_index) / 2 + start_index) if old_list[old_i] == new_list[start_index]: new_i = start_index return new_i elif old_list[old_i] == new_list[end_index]: new_i = end_index return new_i elif old_list[old_i] == new_list[middle_index]: new_i = middle_index return new_i elif old_list[old_i] < new_list[middle_index]: end_index = middle_index else: start_index = middle_index new_i = end_index return new_i for old_i in range(1, len(old_list)): if old_list[old_i] < new_list[0]: new_list.insert(0, old_list[old_i]) elif old_list[old_i] > new_list[len(new_list) - 1]: new_list.insert(len(new_list), old_list[old_i]) else: new_list.insert(find_new_index(old_i), old_list[old_i]) return new_list print(list10) print(bsort_list(list10)) %time bsort_list(list10) %time bsort_list(list100) %time bsort_list(list1000) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: binary sorting Step2: sort_list(list10) was 29 µs, so this one is slower Step3: sort_list() was total
3,933
<ASSISTANT_TASK:> Python Code: import matplotlib.pyplot as plt %matplotlib inline import nengo import numpy as np import scipy.ndimage from scipy.ndimage.interpolation import rotate import matplotlib.animation as animation from matplotlib import pylab from PIL import Image import nengo.spa as spa import cPickle import random from nengo_extras.data import load_mnist from nengo_extras.vision import Gabor, Mask from skimage.measure import compare_ssim as ssim # --- load the data img_rows, img_cols = 28, 28 (X_train, y_train), (X_test, y_test) = load_mnist() X_train = 2 * X_train - 1 # normalize to -1 to 1 X_test = 2 * X_test - 1 # normalize to -1 to 1 random.seed(1) '''Didn't work n_imgs = len(X_train) imgs = np.ndarray((n_imgs+1000,784*2)) for i in range(n_imgs): imgs[i] = np.append(X_train[i],scipy.ndimage.interpolation.rotate(np.reshape(X_train[i],(28,28)), random.randint(1,360),reshape=False,mode="nearest").ravel()) #Add some examples with no rotation for i in range(1000): imgs[n_imgs+i] = np.append(X_train[i],X_train[i]) #List of calculated similarities similarities = np.ndarray((len(imgs),1)) for i in range(len(imgs)): similarities[i] = ssim(imgs[i][:28**2].reshape(28,28),imgs[i][28**2:].reshape(28,28)) ''' #List of images imgs = X_train.copy() #Rotated images rot_imgs = X_train.copy() for img in rot_imgs: img[:] = scipy.ndimage.interpolation.rotate(np.reshape(img,(28,28)), random.randint(1,360),reshape=False,mode="nearest").ravel() #List of calculated similarities similarities = np.ndarray((len(imgs),1)) for i in range(len(imgs)): similarities[i] = ssim(imgs[i].reshape(28,28),rot_imgs[i].reshape(28,28)) #Remove negative values, doesn't really change output #similarities[similarities<0]=0 #Check to see if images and similarity generated correctly index = np.random.randint(1,60000) plt.subplot(121) plt.imshow(np.reshape(imgs[index],(28,28)),cmap="gray") plt.subplot(122) plt.imshow(np.reshape(rot_imgs[index],(28,28)),cmap="gray") #plt.imshow(np.reshape(imgs[index],(28*2,28)),cmap="gray") #similarity = ssim(imgs[index][:28**2].reshape(28,28),imgs[index][28**2:].reshape(28,28)) similarity = similarities[index] print(similarity) rng = np.random.RandomState(9) # --- set up network parameters #Want to map from images to similarity n_vis = X_train.shape[1] #imgs.shape[1] n_out = similarities.shape[1] #number of neurons/dimensions of semantic pointer n_hid = 1000 #Try with more neurons for more accuracy #Want the encoding/decoding done on the training images ens_params = dict( eval_points=X_train, #imgs, neuron_type=nengo.LIF(), #originally used LIFRate() intercepts=nengo.dists.Choice([-0.5]), max_rates=nengo.dists.Choice([100]), ) #Least-squares solver with L2 regularization. 
solver = nengo.solvers.LstsqL2(reg=0.01) #solver = nengo.solvers.LstsqL2(reg=0.0001) #network that generates the weight matrices between neuron activity and images and the labels with nengo.Network(seed=3) as model: a = nengo.Ensemble(n_hid, n_vis, seed=3, **ens_params) v = nengo.Node(size_in=n_out) conn = nengo.Connection( a, v, synapse=None, eval_points=imgs, function=similarities,#want the similarities out solver=solver) # linear filter used for edge detection as encoders, more plausible for human visual system encoders = Gabor().generate(n_hid, (11, 11), rng=rng) encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True) #Set the ensembles encoders to this a.encoders = encoders #Check the encoders were correctly made plt.imshow(encoders[0].reshape(28, 28), vmin=encoders[0].min(), vmax=encoders[0].max(), cmap='gray') #Get the neuron activity of an image or group of images (this is the semantic pointer in this case) def get_activities(sim, images): _, acts = nengo.utils.ensemble.tuning_curves(a, sim, inputs=images) return acts #Get similarity of activity using dot product def get_dots(imgs): dots = np.ndarray((60000,1)) for i in range(len(imgs)): dots[i] = np.dot(imgs[i][:1000],imgs[i][1000:]) return dots with nengo.Simulator(model) as sim: #Neuron activities of different mnist image pairs orig_acts = get_activities(sim,imgs) rot_acts = get_activities(sim,rot_imgs) acts = np.ndarray((orig_acts.shape[0],orig_acts.shape[1]*2)) for i in range(len(acts)): acts[i] = np.append(orig_acts[i],rot_acts[i]) dot_similarities = get_dots(acts) #solvers for a learning rule solver = nengo.solvers.LstsqL2(reg=1e-8) solver_ssim = nengo.solvers.LstsqL2(reg=1e-8) #find weight matrix between neuron activity of the original image pair and the dot product of activities #weights returns a tuple including information about learning process, just want the weight matrix weights,_ = solver(acts, dot_similarities) weights_ssim,_ = solver_ssim(acts,similarities) test1 = X_test[random.randint(1,10000)] test2 = scipy.ndimage.interpolation.rotate(np.reshape(test1,(28,28)), random.randint(0,0),reshape=False,mode="nearest").ravel() pylab.subplot(121) pylab.imshow(test1.reshape(28,28),cmap='gray') pylab.subplot(122) pylab.imshow(test2.reshape(28,28),cmap='gray') _,act1 = nengo.utils.ensemble.tuning_curves(a, sim, inputs=test1) _,act2 = nengo.utils.ensemble.tuning_curves(a, sim, inputs=test2) act = np.append(act1,act2) print(np.dot(act,weights)) print(np.dot(act,weights_ssim)) #filename = "two_img_similarity_dot_weights" + str(n_hid*2) +".p" #cPickle.dump(weights.T, open( filename, "wb" ) ) filename = "two_img_similarity_ssim_weights2" + str(n_hid*2) +".p" cPickle.dump(weights_ssim.T, open( filename, "wb" ) ) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load the MNIST training and testing images Step2: Create array of images and rotated pairs and list of structural similarities Step3: The Network Step4: Evaluating the network statically Step5: Simulator Step6: Testing the outputs Step7: Saving weight matrices
3,934
<ASSISTANT_TASK:> Python Code: import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LogNorm %matplotlib inline import seawater as sw from pyspec import spectrum as spec fni = "data/synthetic_uv.npz" uv_synthetic = np.load(fni) up = uv_synthetic['up'] # We may also want to calculate the wavenumber spectrum of a 3d-array along two dimensions, and # then average along the third dimension. Here we showcase that pyspec capability by repeating the # up array... up2 = np.tile(up,(10,1,1)).T up2.shape spec2d10 = spec.TWODimensional_spec(up2,1.,1.) spec2d = spec.TWODimensional_spec(up,1.,1.) fig = plt.figure(figsize=(9,7)) ax = fig.add_subplot(111) cf = ax.contourf(spec2d.kk1,spec2d.kk2,spec2d.spec.mean(axis=-1),np.logspace(-6,6,10),norm=LogNorm(vmin=1.e-6,vmax=1e6)) cb = plt.colorbar(cf) ax.set_xlabel(r'$k_x$') ax.set_ylabel(r'$k_y$') cb.set_label(r'log$_{10}$ E') fig = plt.figure(figsize=(9,7)) ax = fig.add_subplot(111) cf = ax.contourf(spec2d.kk1,spec2d.kk2,spec2d10.spec.mean(axis=-1),np.logspace(-6,6,10),norm=LogNorm(vmin=1.e-6,vmax=1e6)) cb = plt.colorbar(cf) ax.set_xlabel(r'$k_x$') ax.set_ylabel(r'$k_y$') cb.set_label(r'log$_{10}$ E') spec2d.ndim k3 = np.array([.5e-2,.5]) E3 = 1/k3**3/1e5 fig = plt.figure(figsize=(9,7)) ax = fig.add_subplot(111) plt.loglog(spec2d.ki,spec2d10.ispec.mean(axis=-1)) plt.loglog(k3,E3,'k--') plt.text(1.e-2,50,r'$\kappa^{-3}$',fontsize=25) ax.set_xlabel(r"Wavenumber") ax.set_ylabel(r"Spectral density") ki, Ei = spec.avg_per_decade(spec2d.ki,spec2d.ispec,nbins = 10) fig = plt.figure(figsize=(9,7)) ax = fig.add_subplot(111) plt.loglog(spec2d.ki,spec2d.ispec,label='raw') plt.loglog(ki,Ei,label='binned') ax.set_xlabel(r"Wavenumber") ax.set_ylabel(r"Spectral density") plt.legend(loc=3) sn = 5*np.ones(Ei.size) # number of spectral realizations sn[10:16] = 20 sn[16:] = 100 El,Eu = spec.spec_error(Ei, sn, ci=0.95) # calculate lower and upper limit of confidence limit fig = plt.figure(figsize=(9,7)) ax = fig.add_subplot(111) ax.fill_between(ki,El,Eu, color='r', alpha=0.25) plt.loglog(ki,Ei,color='r') ax.set_xlabel(r"Wavenumber") ax.set_ylabel(r"Spectral density") <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load random data with $\kappa^{-3}$ spectrum Step2: Compute and plot the 2D spectrum using $dx = dy = 1$ Step3: Calculating the isotropic spectrum Step4: Averaging with 10 bins decade Step5: Adding error bars
3,935
<ASSISTANT_TASK:> Python Code: %%bash find . -name "*.c" | xargs sed -i bck "/#include<malloc\.h>/d" %%bash cat ./isis/abs/allocate.cbck from astropy import units as u, constants as const class SnickersBar(object): def __init__(self, w, h, l, weight, energy_density=2460 * u.kJ/ (100 * u.g)): self.w = u.Quantity(w, u.cm) self.h = u.Quantity(h, u.cm) self.l = u.Quantity(l, u.cm) self.weight = u.Quantity(weight, u.g) self.energy_density = u.Quantity(energy_density, u.kJ / u.g) def calculate_volume(self): return self.w * self.h * self.l @property def volume(self): return self.w * self.h * self.l my_snickers_bar = SnickersBar(0.5, 0.5, 4, 0.01 * u.kg) my_snickers_bar.w = 1 * u.cm my_snickers_bar.volume %load_ext Cython import numpy as np import numexpr as ne x1, y1 = np.random.random((2, 1000000)) x2, y2 = np.random.random((2, 1000000)) distance = [] def calculate_distances(x1, y1, x2, y2): distances = [] for i in xrange(len(x1)): distances.append(np.sqrt((x1[i] - x2[i])**2 + (y1[i] - y2[i]**2))) return distances def numpy_calculate_distances(x1, y1, x2, y2): return np.sqrt((x1 - x2)**2 + (y1-y2)**2) def ne_calculate_distances(x1, y1, x2, y2): return ne.evaluate('sqrt((x1 - x2)**2 + (y1-y2)**2)') #%timeit calculate_distances(x1, y1, x2, y2) %timeit ne_calculate_distances(x1, y1, x2, y2) %%cython -a import numpy as np cimport numpy as np import cython cdef extern from "math.h": cpdef double sqrt(double x) @cython.boundscheck(False) def cython_calculate_distances(double [:] x1, double [:] y1, double [:] x2, double [:] y2): distances = np.empty(len(x1)) cdef double [:] distances_view = distances cdef int i cdef int len_x1=len(x1) for i in xrange(len_x1): distances_view[i] = sqrt((x1[i] - x2[i])**2 + (y1[i] - y2[i]**2)) return distances <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Quick introduction to chocolate bars and classes Step2: Using cython
3,936
<ASSISTANT_TASK:> Python Code: import os import sys import numpy import itk from itk import TubeTK as ttk from itkwidgets import view PixelType = itk.F Dimension = 3 ImageType = itk.Image[PixelType, Dimension] # Read tre file TubeFileReaderType = itk.SpatialObjectReader[Dimension] tubeFileReader = TubeFileReaderType.New() tubeFileReader.SetFileName("Data/MRI-Normals/Normal071-VascularNetwork.tre") tubeFileReader.Update() tubes = tubeFileReader.GetGroup() # Read template image TemplateImageType = itk.Image[PixelType, Dimension] TemplateImageReaderType = itk.ImageFileReader[TemplateImageType] templateImageReader = TemplateImageReaderType.New() templateImageReader.SetFileName("Data/MRI-Normals/Normal071-MRA.mha") templateImageReader.Update() templateImage = templateImageReader.GetOutput() view(templateImage) TubesToImageFilterType = ttk.ConvertTubesToImage[TemplateImageType] tubesToImageFilter = TubesToImageFilterType.New() tubesToImageFilter.SetUseRadius(True) tubesToImageFilter.SetTemplateImage(templateImageReader.GetOutput()) tubesToImageFilter.SetInput(tubes) tubesToImageFilter.Update() outputImage = tubesToImageFilter.GetOutput() TTKImageMathType = ttk.ImageMath[ImageType,ImageType] imMath = TTKImageMathType.New(Input = outputImage) imMath.AddImages(templateImage, 2048, 1) combinedImage = imMath.GetOutput() view(combinedImage) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load the tubes and a reference image which provides the size, spacing, origin, and orientation for the desired output image. Step2: Visualize the template image, just because it looks cool - the data in the image is actually irrelevant. Step3: Create a binary image that represents the spatial extent of the TubeSpatialObjects in the hierarchy of SpatialObjects in the variable "tubes" that was read-in above. If you only want to visualize centerlines of the tubes, set "UseRadius" to false. Step4: Visualize the results by blending the template and output images. Again, the content of the template image
3,937
<ASSISTANT_TASK:> Python Code: %load_ext autoreload %autoreload 2 from __future__ import print_function import hickle import numpy as np from pathos.multiprocessing import ProcessingPool as Pool import random import cProfile import pstats import StringIO import timeit import psutil import sys import os import timeit import pandas as pd pd.set_option('display.max_columns', None) import matplotlib.pyplot as plt %matplotlib inline import qp from qp.utils import calculate_kl_divergence as make_kld np.random.seed(seed=42) random.seed(a=42) # choose one of these: # dataset_key = 'Euclid'# Melissa Graham's data dataset_key = 'LSST'# Sam Schmidt's data dataname = dataset_key dataset_info = {} dataset_info[dataset_key] = {} if dataset_key == 'Euclid': datafilename = 'bpz_euclid_test_10_2.probs' elif dataset_key == 'LSST': datafilename = 'test_magscat_trainingfile_probs.out' dataset_info[dataset_key]['filename'] = datafilename if dataset_key == 'Euclid': z_low = 0.01 z_high = 3.51 elif dataset_key == 'LSST': z_low = 0.005 z_high = 2.11 dataset_info[dataset_key]['z_lim'] = (z_low, z_high) z_grid = np.arange(z_low, z_high, 0.01, dtype='float') z_range = z_high - z_low delta_z = z_range / len(z_grid) dataset_info[dataset_key]['z_grid'] = z_grid dataset_info[dataset_key]['delta_z'] = delta_z ## Warning: reading in the data is slow for Sam Schmidt's dataset! with open(dataset_info[dataset_key]['filename'], 'rb') as data_file: lines = (line.split(None) for line in data_file) lines.next() pdfs = np.array([[float(line[k]) for k in range(1,len(line))] for line in lines]) # dataset_info[dataset_key]['native_pdfs'] = pdfs print('storage footprint '+str(sys.getsizeof(pdfs))+' bytes') # colors = ['red','green','blue','cyan','magenta','yellow'] # n_plot = len(colors) # # if dataset_key == 'mg': # # indices = [1, 3, 14, 16, 19, 21] # # elif dataset_key == 'ss': # n_gals_tot = len(pdfs) # full_gal_range = range(n_gals_tot) # indices = np.random.choice(full_gal_range, n_plot) # for i in range(n_plot): # plt.plot(dataset_info[dataset_key]['z_grid'], pdfs[indices[i]], # color=colors[i], label=dataset_key+' #'+str(indices[i])) # plt.xlabel(r'$z$', fontsize=16) # plt.ylabel(r'$p(z)$', fontsize=16) # plt.title(dataset_key+' mock catalog') # plt.legend() # plt.savefig('pz_placeholder_'+dataset_key+'.pdf', dpi=250) if dataset_key == 'Euclid': chosen = 5390 elif dataset_key == 'LSST': # chosen = 108019 indices = [ 12543, 52661, 46216, 53296, 95524, 84574 , 2607 ,56017 , 64794, 7600] chosen = indices[9] start_time = timeit.default_timer() G = qp.PDF(gridded=(dataset_info[dataset_key]['z_grid'], pdfs[chosen])) print(timeit.default_timer() - start_time) G.plot() if dataset_key == 'Euclid': nc_needed = 3 elif dataset_key == 'LSST': nc_needed = 5 dataset_info[dataset_key]['N_GMM'] = nc_needed start_time = timeit.default_timer() G.mix_mod_fit(n_components=dataset_info[dataset_key]['N_GMM'], using='gridded', vb=True) time = timeit.default_timer() - start_time print(str(time)+' for GMM fit to gridded') G.plot() high_res = 1000 start_time = timeit.default_timer() G.sample(high_res, vb=False) G.mix_mod_fit(n_components=dataset_info[dataset_key]['N_GMM'], using='samples', vb=True) time = timeit.default_timer() - start_time print(str(time)+' for GMM fit to samples') G.plot() N_f = 7 M = qp.PDF(truth=G.mix_mod, limits=dataset_info[dataset_key]['z_lim']) M.quantize(N=N_f, vb=False) M.histogramize(N=N_f, binrange=dataset_info[dataset_key]['z_lim'], vb=False) M.sample(N=N_f, using='truth', vb=False) 
M.plot(loc=dataset_key+'_example_pz.pdf', vb=True) formats = ['quantiles', 'histogram', 'samples'] parametrizations = {} for f in formats: parametrizations[f] = {} for ff in formats: parametrizations[f][ff] = None parametrizations['quantiles']['quantiles'] = M.quantiles parametrizations['histogram']['histogram'] = M.histogram parametrizations['samples']['samples'] = M.samples dataset_info[dataset_key]['inits'] = parametrizations klds = {} P = qp.PDF(truth=M.truth) for f in formats: Q = qp.PDF(quantiles=dataset_info[dataset_key]['inits'][f]['quantiles'], histogram=dataset_info[dataset_key]['inits'][f]['histogram'], samples=dataset_info[dataset_key]['inits'][f]['samples']) klds[f] = make_kld(P, Q) print(klds) n_gals_tot = len(pdfs) n_gals_use = 100 full_gal_range = range(n_gals_tot) subset = np.random.choice(full_gal_range, n_gals_use) pdfs_use = pdfs[subset] # using the same grid for output as the native format, but doesn't need to be so dataset_info[dataset_key]['in_z_grid'] = dataset_info[dataset_key]['z_grid'] dataset_info[dataset_key]['metric_z_grid'] = dataset_info[dataset_key]['z_grid'] n_floats_use = 10 if dataset_key == 'Euclid': dataset_info[dataset_key]['N_GMM'] = 3 elif dataset_key == 'LSST': dataset_info[dataset_key]['N_GMM'] = 5 fit_components = dataset_info[dataset_key]['N_GMM'] n_moments_use = 3 colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'} def setup_from_grid(in_pdfs, z_grid, N_comps, high_res=1000): #read in the data, happens to be gridded zlim = (min(z_grid), max(z_grid)) N_pdfs = len(in_pdfs) # plot_examples(N_pdfs, z_grid, pdfs) print('making the initial ensemble of '+str(N_pdfs)+' PDFs') E0 = qp.Ensemble(N_pdfs, gridded=(z_grid, in_pdfs), vb=True) print('made the initial ensemble of '+str(N_pdfs)+' PDFs') #fit GMMs to gridded pdfs based on samples (faster than fitting to gridded) print('sampling for the GMM fit') samparr = E0.sample(high_res, vb=False) print('took '+str(high_res)+' samples') print('making a new ensemble from samples') Ei = qp.Ensemble(N_pdfs, samples=samparr, vb=False) print('made a new ensemble from samples') print('fitting the GMM to samples') GMMs = Ei.mix_mod_fit(comps=N_comps, vb=False) print('fit the GMM to samples') #set the GMMS as the truth print('making the final ensemble') Ef = qp.Ensemble(N_pdfs, truth=GMMs, vb=False) print('made the final ensemble') return(Ef) # return def plot_examples(z_grid, pdfs, n_plot=6): N_pdfs =len(pdfs) randos = np.random.choice(range(N_pdfs), n_plot) for i in range(n_plot): plt.plot(z_grid, pdfs[randos[i]], label=dataset_key+r'\#'+str(randos[i])) plt.xlabel(r'$z$', fontsize=16) plt.ylabel(r'$p(z)$', fontsize=16) plt.title(dataset_key+' mock catalog') plt.legend() plt.savefig('pz_placeholder_'+dataset_key+'.png', dpi=250) # pr = cProfile.Profile() # pr.enable() catalog = setup_from_grid(pdfs_use, dataset_info[dataset_key]['in_z_grid'], fit_components) # pr.disable() # s = StringIO.StringIO() # sortby = 'cumtime' # ps = pstats.Stats(pr, stream=s).sort_stats(sortby) # ps.print_stats() # print(s.getvalue()) plot_examples(dataset_info[dataset_key]['in_z_grid'], pdfs_use, n_plot=6) def analyze_individual(E, z_grid, N_floats, N_moments=4): zlim = (min(z_grid), max(z_grid)) z_range = zlim[-1] - zlim[0] delta_z = z_range / len(z_grid) Eq, Eh, Es = E, E, E inits = {} for f in formats: inits[f] = {} for ff in formats: inits[f][ff] = None print('performing quantization') inits['quantiles']['quantiles'] = Eq.quantize(N=N_floats, vb=False) print('performing histogramization') inits['histogram']['histogram'] = 
Eh.histogramize(N=N_floats, binrange=zlim, vb=False) print('performing sampling') inits['samples']['samples'] = Es.sample(samps=N_floats, vb=False) print('making the approximate ensembles') Eo ={} for f in formats: Eo[f] = qp.Ensemble(E.n_pdfs, truth=E.truth, quantiles=inits[f]['quantiles'], histogram=inits[f]['histogram'], samples=inits[f]['samples']) print('made the approximate ensembles') print('calculating the individual metrics') klds = {} metrics = {} moments = {} for key in Eo.keys(): print('starting '+key) klds[key] = Eo[key].kld(using=key, limits=zlim, dx=delta_z) samp_metric = qp.PDF(samples=klds[key]) gmm_metric = samp_metric.mix_mod_fit(n_components=dataset_info[dataset_key]['N_GMM'], using='samples') metrics[key] = qp.PDF(truth=gmm_metric) moments[key] = [] for n in range(N_moments+1): moments[key].append([qp.utils.calculate_moment(metrics[key], n, using=key, limits=zlim, dx=delta_z, vb=False)]) print('finished with '+key) print('calculated the individual metrics') # plot_individual(klds, N_floats) return(Eo, klds, moments) def plot_individual(pz_klds, N_floats): colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'} plot_bins = np.linspace(-3., 3., 20) for key in pz_klds.keys(): plt.hist(np.log(pz_klds[key]), color=colors[key], alpha=0.5, label=key, normed=True, bins=plot_bins) plt.legend() plt.ylabel('frequency') plt.xlabel(r'$\log[KLD]$') plt.title(dataset_key+r' dataset with $N_{f}='+str(N_floats)+r'$') plt.savefig(dataset_key+'_metric_histogram_placeholder.png', dpi=250) # pr = cProfile.Profile() # pr.enable() (ensembles, pz_klds, metric_moments) = analyze_individual(catalog, dataset_info[dataset_key]['metric_z_grid'], n_floats_use, n_moments_use) dataset_info[dataset_key]['pz_klds'] = pz_klds dataset_info[dataset_key]['pz_kld_moments'] = metric_moments plot_individual(pz_klds, n_floats_use) # pr.disable() # s = StringIO.StringIO() # sortby = 'cumtime' # ps = pstats.Stats(pr, stream=s).sort_stats(sortby) # ps.print_stats() # print(s.getvalue()) def analyze_stacked(E0, E, z_grid): zlim = (min(z_grid), max(z_grid)) z_range = zlim[-1] - zlim[0] delta_z = z_range / len(z_grid) parametrizations = E.keys() print('stacking the ensembles') stacked_pdfs = {} for key in formats: stacked_pdfs[key] = qp.PDF(gridded=E[key].stack(z_grid, using=key, vb=False)[key]) stacked_pdfs['truth'] = qp.PDF(gridded=E0.stack(z_grid, using='truth', vb=False)['truth']) print('stacked the ensembles') print('calculating the metrics') klds = {} for key in parametrizations: klds[key] = qp.utils.calculate_kl_divergence(stacked_pdfs['truth'], stacked_pdfs[key], limits=zlim, dx=delta_z) print('calculated the metrics') # plot_estimators(z_grid, stacked_pdfs, klds) return(stacked_pdfs, klds) def plot_estimators(z_grid, stacked_pdfs, klds): colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'} plt.title(r'$\hat{n}(z)$ for '+str(n_floats_use)+' numbers') plt.plot(z_grid, stacked_pdfs['truth'].evaluate(z_grid, vb=False)[1], color='black', lw=4, alpha=0.3, label='truth') for key in formats: plt.plot(z_grid, stacked_pdfs[key].evaluate(z_grid, vb=False)[1], label=key+' KLD='+str(klds[key]), color=colors[key]) plt.xlabel(r'$z$') plt.ylabel(r'$\hat{n}(z)$') plt.legend() plt.title(r'$\hat{n}(z)$ for '+str(n_floats_use)+' numbers') plt.savefig(dataset_key+'_nz_comparison.png', dpi=250) # pr = cProfile.Profile() # pr.enable() (stack_evals, nz_klds) = analyze_stacked(catalog, ensembles, dataset_info[dataset_key]['metric_z_grid']) dataset_info[dataset_key]['nz_ests'] = stack_evals 
dataset_info[dataset_key]['nz_klds'] = nz_klds plot_estimators(dataset_info[dataset_key]['metric_z_grid'], stack_evals, nz_klds) # pr.disable() # s = StringIO.StringIO() # sortby = 'cumtime' # ps = pstats.Stats(pr, stream=s).sort_stats(sortby) # ps.print_stats() # print(s.getvalue()) if os.path.exists('nz_metrics.hkl'): with open('nz_metrics.hkl', 'r') as nz_file: #read in content of list/dict nz_stats = hickle.load(nz_file) else: nz_stats = {} nz_stats['N_f'] = [] if N_f not in nz_stats['N_f']: nz_stats['N_f'].append(N_f) where_N_f = nz_stats['N_f'].index(N_f) if dataset_key not in nz_stats.keys(): nz_stats[dataset_key] = {} for f in parametrizations:#change this name to formats nz_stats[dataset_key][f] = [[]] for f in parametrizations: nz_stats[dataset_key][f][where_N_f].append(dataset_info[dataset_key]['nz_klds'][f]) with open('nz_metrics.hkl', 'w') as nz_file: hickle.dump(nz_stats, nz_file) with open('nz_metrics.hkl', 'r') as nz_file: nz_stats = hickle.load(nz_file) colors = {'quantiles':'b', 'histogram':'r', 'samples':'g'} # need to get some version of this working from nz_klds plt.figure(figsize=(5, 5)) for f in parametrizations.keys(): data_arr = np.swapaxes(np.array(nz_stats[dataset_key][f]), 0, 1)#turn N_f * instantiations into instantiations * N_f n_i = len(data_arr) a = 1./n_i plt.plot([2 * max(nz_stats['N_f']), 2 * max(nz_stats['N_f'])], [1., 10.], color=colors[f], alpha=a, label=f) for i in data_arr: # will be regular plot not scatter with more N_f options plt.plot(nz_stats['N_f'], i[0], color=colors[f], alpha=a) plt.semilogy() plt.semilogx() plt.xlim(min(nz_stats['N_f'])-1, max(nz_stats['N_f'])+1) plt.ylim(1., 10.) plt.xlabel(r'number of parameters') plt.ylabel(r'KLD') plt.legend() plt.title(r'$\hat{n}(z)$ KLD on '+str(n_gals_use)+' from '+dataset_key) plt.savefig(dataset_key+'_nz_metrics_placeholder.png', dpi=250) # won't really know how this looks without more N_f tested if os.path.exists('pz_metrics.hkl'): with open('pz_metrics.hkl', 'r') as pz_file: #read in content of list/dict pz_stats = hickle.load(pz_file) else: pz_stats = {} pz_stats['N_f'] = [] if N_f not in pz_stats['N_f']: pz_stats['N_f'].append(N_f) where_N_f = pz_stats['N_f'].index(N_f) if dataset_key not in pz_stats.keys(): pz_stats[dataset_key] = {} for f in parametrizations:#change this name to formats pz_stats[dataset_key][f] = [] for m in range(n_moments_use + 1): pz_stats[dataset_key][f].append([[]]) if N_f not in pz_stats['N_f']: pz_stats[dataset_key][f][m].append([]) for f in parametrizations: for m in range(n_moments_use + 1): pz_stats[dataset_key][f][m][where_N_f].append(dataset_info[dataset_key]['pz_kld_moments'][f][m]) with open('pz_metrics.hkl', 'w') as pz_file: hickle.dump(pz_stats, pz_file) with open('pz_metrics.hkl', 'r') as pz_file: pz_stats = hickle.load(pz_file) def make_patch_spines_invisible(ax): ax.set_frame_on(True) ax.patch.set_visible(False) for sp in ax.spines.values(): sp.set_visible(False) shapes = ['o','+','x','v','^','<','>'] fig, ax = plt.subplots() fig.subplots_adjust(right=1.) ax_n = ax for key in parametrizations.keys(): ax_n.plot([-1], [0], color=colors[key], label=key) for n in range(1, 4): ax.scatter([-1], [0], color='k', marker=shapes[n-1], label='moment '+str(n)) if n>1: ax_n = ax.twinx() if n>2: ax_n.spines["right"].set_position(("axes", 1. 
+ 0.1 * (n-1))) make_patch_spines_invisible(ax_n) ax_n.spines["right"].set_visible(True) for f in parametrizations.keys(): data_arr = np.swapaxes(np.array(pz_stats[dataset_key][f][n]), 0, 1) n_i = len(data_arr) a = 1./n_i for i in data_arr: ax_n.scatter(pz_stats['N_f'], i, marker=shapes[n-1], color=colors[f], alpha=a) ax_n.set_ylabel('moment '+str(n)) ax.set_xlim(1,1000)#should be N_f range and logged ax.semilogx() ax.set_xlabel('number of parameters') ax.legend() fig.suptitle('KLD moments on '+str(n_gals_use)+' from '+dataset_key) fig.savefig(dataset_key+'_pz_metrics_placeholder.png', dpi=250) ## everything works above here! now it's time to make plots from this output! # # Function to test the experimental qp.Ensemble object! # def analyze():#(pdfs, N_comps, z, N_floats): # #read in the data, happens to be gridded # z_low, z_high = min(z), max(z) # N_pdfs = len(pdfs) # out_E = {} # E0 = qp.Ensemble(N_pdfs, gridded=(z, pdfs), vb=False) # #fit gridded pdfs as GMMs based on samples # samparr = E0.sample(1000, vb=False) # print(np.shape(samparr)) # Ei = qp.Ensemble(N_pdfs, samples=samparr, vb=False) # GMMs = Ei.mix_mod_fit(comps=N_comps, using='samples', vb=False) # # out_E['GMMs'] = [] # # for GMM in GMMs: # # out_E['GMMs'].append(GMM.functions[0].stats()) # #set the GMMS as the truth # Ef = qp.Ensemble(N_pdfs, truth=GMMs, vb=False) # #stack them and save the output # out_E['truth'] = Ef.stack(z, using='mix_mod', vb=False) # # #evaluate as gridded and save the output # # Et = qp.Ensemble(N_pdfs, gridded=Ef.evaluate(z)) # # out_E['gridded'] = Et.stack(z, using='gridded') # #evaluate as quantiles and save the output # Eq = qp.Ensemble(N_pdfs, quantiles=Ef.quantize(N=N_floats), vb=False) # #q_stack = Eq.stack(z, using='quantiles') # out_E['quantiles'] = Eq.stack(z, using='quantiles', vb=False) # # #evaluate as histogram and save the output # # Eh = qp.Ensemble(N_pdfs, histogram=Ef.histogramize(N=N_floats, binrange=(z_low, z_high))) # # #h_stack = Eh.stack(z, using='histogram') # # out_E['histogram'] = Eh.stack(z, using='histogram') # # #evaluate as samples and save the output # # Es = qp.Ensemble(N_pdfs, samples=Ef.sample(samps=N_floats)) # # #s_stack = Es.stack(z, using='samples') # # out_E['samples'] = Es.stack(z, using='samples') # return(out_E)#, KLDs, RMSEs) # print(n_gals_use, n_floats_use, s.getvalue()) # print(results.keys()) # print(results['truth']['mix_mod']) # KLDs, RMSEs = {}, {} # P = qp.PDF(gridded=results['truth']['mix_mod']) # metric_keys = results.keys() # metric_keys.remove('truth') # for est in metric_keys: # Q = qp.PDF(gridded=results[est][est]) # KLDs[est] = qp.utils.calculate_kl_divergence(P, Q, vb=False) # RMSEs[est] = qp.utils.calculate_rmse(P, Q, vb=False) # plt.plot(results[est][est][0], results[est][est][1], label=est) # plt.legend() # print(KLDs, RMSEs) # P = qp.PDF(gridded=stack_ests['truth']) # KLDs, RMSEs = {}, {} # for est in .keys(): # Q = qp.PDF(gridded=stack_ests[est]) # KLDs[est] = qp.utils.calculate_kl_divergence(P, Q, vb=False) # RMSEs[est] = qp.utils.calculate_rmse(P, Q, vb=False) # moments = np.array(results['stats']).T # fit_stats = moments[1] # plt.hist(np.log(fit_stats)) # D = qp.PDF(samples = np.log(fit_stats)) # T = D.mix_mod_fit(n_components=1) # D.plot() # print(np.exp(T.functions[0].stats())) # this ends the test of the experimental qp.Ensemble object # you may now return to your regularly scheduled programming # def analyze_one(index, N_comps, z, N_floats, logfilename='logfile.txt', vb=False): # # Model the input BPZ P(z) as a GMM, approximate 
that GMM in # various ways, and assess the quality of each approximation. # Parameters # ---------- # index : int # ID of galaxy # N_comps : int # Number of components used in GMM # N_floats : int # Number of floats used to parametrize the P(z) # z : float, ndarr # Redshift array for input gridded "truth". Used for # evaluating n(z) too # logfilename: string # where to put logging information # vb : boolean # Verbose output? # Returns # ------- # result : dict # Dictionary containing metric values, n(z) on standard # grid, samples, "true" GMM gridded p(z). # Notes # ----- # In some cases the GMM does not fit well, leading to bad KLD and # RMSE values when it is compared to the truth. # # # # Make z array if we don't already have it: # # if z is None: # # z = np.arange(0.01, 3.51, 0.01, dtype='float') # dz = (max(z) - min(z)) / len(z) # zlimits = [min(z), max(z)] # # Make a dictionary to contain the results: # result = {} # # Make a GMM model of the input BPZ p(z) (which are stored # # in the global 'pdfs' variable: # G = qp.PDF(gridded=(z, pdfs[index]), vb=vb) # # Draw 1000 samples, fit a GMM model to them, and make a true PDF: # G.sample(1000, vb=vb) # GMM = G.mix_mod_fit(n_components=N_comps, vb=vb) # P = qp.PDF(truth=GMM, vb=vb) # # Evaluate the GMM on the z grid, and store in the result dictionary. We'll # # need this to make our "true" n(z) estimator. We don't need to keep the # # z array, as we passed that in. # result['truth'] = P.evaluate(z, using='truth', vb=vb)[1] # # Now approximate P in various ways, and assess: # Q, KLD, RMSE, approximation = {}, {}, {}, {} # Q['quantiles'] = qp.PDF(quantiles=P.quantize(N=N_floats, vb=vb), vb=vb) # Q['histogram'] = qp.PDF(histogram=P.histogramize(N=N_floats, binrange=zlimits, vb=vb), vb=vb) # Q['samples'] = qp.PDF(samples=P.sample(N=N_floats, vb=vb), vb=vb) # for k in Q.keys(): # KLD[k] = qp.calculate_kl_divergence(P, Q[k], limits=zlimits, dx=dz, vb=vb) # RMSE[k] = qp.calculate_rmse(P, Q[k], limits=zlimits, dx=dz, vb=vb) # approximation[k] = Q[k].evaluate(z, using=k, vb=vb)[1] # # Store approximations: # result['KLD'] = KLD # result['RMSE'] = RMSE # result['approximation'] = approximation # result['samples'] = Q['samples'].samples # with open(logfilename, 'a') as logfile: # logfile.write(str((index, timeit.default_timer() - start_time))+'\n') # return result # def one_analysis(N): # all_results[str(N)] = [] # pr = cProfile.Profile() # pr.enable() # # with qp.Ensemble # n_gals_tot = len(pdfs) # full_gal_range = range(n_gals_tot) # subset = np.random.choice(full_gal_range, n_gals) # pdfs_use = pdfs[subset] # all_results[str(N)] = analyze(pdfs_use, nc_needed, z, N) # # # if multiprocessing: # # logfilename = dataname + str(n_gals) + 'multi' + str(N)+'.txt' # # def help_analyze(i): # # return analyze_one(i, nc_needed, z, N, logfilename=logfilename) # # pool = Pool(psutil.cpu_count() - 1) # # results = pool.map(help_analyze, range(n_gals)) # # all_results[str(N)] = results # # # tl;dr Tmax=270s for N_floats=3, 100 galaxies, 3 processors # # # if looping: # # logfilename = dataname + str(n_gals) + 'loop' + str(N)+'.txt' # # for i in range(100): # # all_results[str(N)].append(analyze_one(i, 2, z, N, logfilename=logfilename)) # # if i%10 == 0: print('.', end='') # # # tl;dr Tmax=352s for N_floats=3, 100 galaxies # pr.disable() # s = StringIO.StringIO() # sortby = 'cumtime' # ps = pstats.Stats(pr, stream=s).sort_stats(sortby) # ps.print_stats() # print(N, s.getvalue()) # return # #%%time # float_numbers = [3]#, 10, 30, 100] # n_float_numbers = 
len(float_numbers) # # gal_numbers = [100]#, 1000, 10000] # # n_gal_numbers = len(gal_numbers) # # total_results ={} # # for M in gal_numbers: # # n_gals = M # n_gals = 100 # all_results = {} # for N in float_numbers: # start_time = timeit.default_timer() # one_analysis(N) # # total_results[str(n_gals)] = all_results # with open('all_results.hkl', 'w') as result_file: # hickle.dump(all_results, result_file) # with open('all_results.hkl', 'r') as result_file: # all_results = hickle.load(result_file) # all_results = total_results[str(gal_numbers[0])] # all_KLD, all_RMSE = [], [] # for n in range(n_float_numbers): # KLD, RMSE = {}, {} # for approximation in all_results[str(float_numbers[n])][0]['KLD'].keys(): # x = np.array([]) # for k in range(len(all_results[str(float_numbers[n])])): # x = np.append(x, all_results[str(float_numbers[n])][k]['KLD'][approximation]) # KLD[approximation] = x # x = np.array([]) # for k in range(len(all_results[str(float_numbers[n])])): # x = np.append(x, all_results[str(float_numbers[n])][k]['RMSE'][approximation]) # RMSE[approximation] = x # all_KLD.append(KLD) # all_RMSE.append(RMSE) # colors = {'samples':'green', 'quantiles':'blue', 'histogram':'red'} # plt.figure(figsize=(12, 5 * n_float_numbers)) # i=0 # for n in range(n_float_numbers): # i += 1 # # Lefthand panel: KLD # plt.subplot(n_float_numbers, 2, i) # plt.title('KLD for '+str(float_numbers[n])+' stored numbers') # bins = np.linspace(0.0, 5., 25) # for k in ['samples', 'quantiles', 'histogram']: # plt.hist(all_KLD[n][k], bins, label=k, fc=colors[k], ec=colors[k], alpha=0.3, normed=True) # #plt.semilogx() # plt.xlabel('KL Divergence Metric', fontsize=16) # plt.ylim(0., 5.0) # plt.xlim(0., 5.0) # plt.legend() # i += 1 # # Righthand panel: RMSE # plt.subplot(n_float_numbers, 2, i)#+n_numbers) # plt.title('RMSE for '+str(float_numbers[n])+' stored numbers') # bins = np.linspace(0.0, 5., 25) # for k in ['samples', 'quantiles', 'histogram']: # plt.hist(all_RMSE[n][k], bins, label=k, fc=colors[k], ec=colors[k], alpha=0.3, normed=True) # #plt.semilogx() # plt.xlabel('RMS Error Metric', fontsize=16) # plt.ylim(0., 5.0) # plt.xlim(0., 5.0) # plt.legend(); # plt.savefig('money.png') # plt.figure(figsize=(6, 5 * n_float_numbers)) # all_n = [] # all_x = [] # all_y = [] # for i in range(n_float_numbers): # results = all_results[str(float_numbers[i])] # n = {} # # Pull out all truths and compute the average at each z: # x = np.zeros([len(z), len(results)]) # y = {} # for approx in ['samples', 'quantiles', 'histogram']: # y[approx] = np.zeros([len(z), len(results)]) # for k in range(len(results)): # y[approx][:,k] = results[k]['approximation'][approx] # for k in range(len(results)): # x[:,k] = results[k]['truth'] # # Now do the averaging to make the estimators: # n['truth'] = np.mean(x, axis=1) # n['truth'] /= np.sum(n['truth']) * delta_z # for approx in ['samples', 'quantiles', 'histogram']: # n[approx] = np.mean(y[approx], axis=1) # n[approx] /= np.sum(n[approx]) * delta_z # all_n.append(n) # all_x.append(x) # all_y.append(y) # # Note: this uses the samples' KDE to make the approximation. We could (and # # should!) also try simply concatenating the samples and histogramming them. # # Plot truth and all the approximations. # # The NaNs in the histogram approximation make that unplottable for now. 
# plt.subplot(n_float_numbers, 1, i+1)#+n_numbers) # plt.title(r'$n(z)$ for '+str(float_numbers[i])+' numbers') # plt.plot(z, n['truth'], color='black', lw=4, alpha=0.3, label='truth') # for k in ['samples', 'quantiles', 'histogram']: # plt.plot(z, n[k], label=k, color=colors[k]) # plt.xlabel('redshift z') # plt.ylabel('n(z)') # plt.legend(); # plt.savefig('nz_comparison.png', dpi=300) # all_p = [] # for i in range(n_float_numbers): # n = all_n[i] # p = {} # for k in ['samples', 'quantiles', 'histogram']: # p[k] = qp.PDF(gridded=(z,n[k]), vb=False) # p['truth'] = qp.PDF(gridded=(z,n['truth']), vb=False) # all_p.append(p) # all_KLD_nz, all_RMSE_nz = {}, {} # zlimits, dz = [z_low, z_high], 0.01 # for k in ['samples', 'quantiles', 'histogram']: # p = all_p[i] # KLD_nz, RMSE_nz = [], [] # for i in range(n_float_numbers): # KLD_nz.append(qp.calculate_kl_divergence(all_p[i]['truth'], all_p[i][k], limits=zlimits, dx=dz, vb=False)) # RMSE_nz.append(qp.calculate_rmse(all_p[i]['truth'], all_p[i][k], limits=zlimits, dx=dz, vb=False)) # all_KLD_nz[k] = KLD_nz # all_RMSE_nz[k] = RMSE_nz # plt.figure(figsize=(12, 5)) # both = [plt.subplot(1, 2, i+1) for i in range(2)] # KLD_plot = both[0] # RMSE_plot = both[1] # KLD_plot.set_title(r'KLD for $n(z)$') # RMSE_plot.set_title(r'RMSE for $n(z)$') # KLD_plot.set_xlabel('number of parameters') # RMSE_plot.set_xlabel('number of parameters') # KLD_plot.set_ylabel('KLD') # RMSE_plot.set_ylabel('RMSE') # # KLD_plot.semilogx() # # KLD_plot.semilogy() # # RMSE_plot.semilogx() # # RMSE_plot.semilogy() # for k in ['samples', 'quantiles', 'histogram']: # KLD_plot.plot(float_numbers, all_KLD_nz[k], color=colors[k], label=k) # RMSE_plot.plot(float_numbers, all_RMSE_nz[k], color=colors[k], label=k) # KLD_plot.semilogy() # KLD_plot.semilogx() # RMSE_plot.semilogy() # RMSE_plot.semilogx() # KLD_plot.legend() # RMSE_plot.legend() # plt.savefig('summary.png') # print('KLD metrics for n(z) estimator: ', all_KLD_nz) # print('RMSE metrics for n(z) estimator: ', all_RMSE_nz) <END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set-up, Ingest
Step2: Both datasets are fit with BPZ.
Step3: The data files don't appear to come with information about the native format or metaparameters, but we are told they're evaluations on a regular grid of redshifts with given endpoints and number of parameters.
Step4: Let's read in the catalog data. Note that it has a sizeable footprint even for a "small" number of galaxies.
Step5: Visualizing the BPZ $p(z)$'s
Step6: Note
Step7: qp cannot currently convert gridded PDFs to histograms or quantiles - we need to make a GMM first, and use this to instantiate a qp.PDF object using a qp.composite object based on that GMM as qp.PDF.truth. The number of parameters necessary for a qualitatively good fit depends on the characteristics of the dataset.
Step8: We can fit a GMM directly to the gridded PDF (via an internal interpolation). The direct fit, however, is not guaranteed to converge, particularly if the underlying distribution is not actually well-described by a weighted sum of Gaussians -- this is why storing the GMM parameters instead of a non-parametric format can be dangerous.
Step9: The alternative is to take a large number of samples and fit a GMM to those (via the same internal interpolation). We can check that the fits are very similar. Though it is slower, we will sample before fitting to guarantee convergence.
Step10: The qp.composite object can be used as the qp.PDF.truth to initialize a new qp.PDF object that doesn't have any information about the gridded or sample approximations but has a qualitatively similar shape and is thus "realistically complex" enough to draw conclusions about real data. Now we can approximate it any way we like! Consider this example for $N_f=7$ parameters.
Step11: Quantifying the Accuracy of the Approximation
Step12: Survey Mode
Step13: We'll start by reading in our catalog of gridded PDFs, sampling them, fitting GMMs to the samples, and establishing a new qp.Ensemble object where each member qp.PDF object has qp.PDF.truth$\neq$None.
Step14: Next, we compute the KLD between each approximation and the truth for every member of the ensemble. We make the qp.Ensemble.kld into a qp.PDF object of its own to compare the moments of the KLD distributions for different parametrizations.
Step15: Finally, we calculate metrics on the stacked estimator $\hat{n}(z)$ that is the average of all members of the ensemble.
Step16: We save the data so we can remake the plots later without running everything again.
Step17: We want to plot the KLD on $\hat{n}(z)$ for all formats as $N_{f}$ changes. We want to repeat this for many subsamples of the catalog to establish error bars on the KLD values.
Step18: We want to plot the moments of the KLD distribution for each format as $N_{f}$ changes.
Step19: Okay, now all I have to do is have this loop over both datasets, number of galaxies, and number of floats!
Step20: Let's run a test with 100 galaxies and 10 parameters. This should take about 5 minutes or so.
Step21: Let's show the stacked versions and compute metrics.
Step22: Things are quite broken after this point!
Step23: Let's plot the log standard deviations of the first component of the mixture models.
Step24: Let's check the distribution of standard deviations of the ensemble.
Step26: Now enough of the qp.Ensemble functionality has been implemented to merge into the master branch!
Step27: OK, now let's collate the metrics for the first 100 galaxies over a variable number of parameters, and look at the distribution of metric values. We're using multiprocessing because the for loop is slow; the rate-limiting step is the optimization routine for finding quantiles of a GMM.
Step28: Since the previous step is quite slow (on the order of 5 minutes per test of different numbers of parameters on my laptop), this is a good point to save the results. We can load them from the file later and not remake them if we only want to do the rest of the analysis.
Step29: Now let's plot histograms of the metric values.
Step30: Interestingly, the metrics don't agree, nor is the behavior consistent across different numbers of parameters. However, as the number of parameters increases, the distribution of the metrics converges to lower numbers.
Step31: The "samples" approximation gives the best result for the $n(z)$ estimator even with a small number of samples. However, once the number of parameters increases slightly, the "quantiles" approximation performs similarly. It takes a large number of parameters before the "histogram" approximation approaches the other options. Let's use the qp.PDF object to compare them quantitatively (since $n(z)$ can be normalized to give the global $p(z)$).
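The KLD and RMSE metrics referenced throughout this record reduce to simple computations on a shared redshift grid. Here is a minimal sketch in plain NumPy, deliberately independent of the qp API; the Gaussian "truth" and the 10-bin rebinning are made-up stand-ins chosen only for illustration:

```python
import numpy as np

z = np.linspace(0.01, 3.5, 350)                       # shared redshift grid
dz = z[1] - z[0]
p = np.exp(-0.5 * ((z - 1.0) / 0.2) ** 2)             # toy "true" p(z)
p /= p.sum() * dz                                     # normalize on the grid

# crude 10-bin histogram approximation, broadcast back onto the fine grid
idx = np.clip(np.digitize(z, np.linspace(z[0], z[-1], 11)) - 1, 0, 9)
q = np.array([p[idx == i].mean() for i in range(10)])[idx]
q /= q.sum() * dz

eps = 1e-12                                           # guard against log(0)
kld = np.sum(p * np.log((p + eps) / (q + eps))) * dz  # KL divergence in nats
rmse = np.sqrt(np.mean((p - q) ** 2))
print(kld, rmse)
```

The same grid-based quadrature is what makes the KLD sensitive to where the approximation misplaces probability mass, while the RMSE only sees pointwise differences.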
3,938
<ASSISTANT_TASK:> Python Code: from numpy import array arr = array([1, 2, 3]) print(arr) ten = array(range(10)) matrix = array([[1,2], [3, 4]]) nested_matrix = array([matrix, matrix]) strange_array = array([[1], 2]) print('Range demo:', ten) print('Matrix demo:', matrix) print('Array of NumPy arrays:', nested_matrix) print('Something strange:', strange_array) int_array = array([1., 2.5, -0.7], dtype='int') print('You have {0} array of type {0.dtype}'.format(int_array)) array([[0], 1], dtype='int') arrays = [ array([1, 2, 3]), array(((1, 2), (3, 4.))), array([[0], 1]), array('Hello world') ] for a in arrays: print('{0.dtype}: {0}'.format(a)) LENGTH = 4 a, b = array(range(LENGTH)), array(range(LENGTH, LENGTH*2)) print('Arighmetic') print('{} + {} = {}'.format(a, b, a + b)) print('{} * {} = {}'.format(a, b, a * b)) print('{} ** {} = {}'.format(a, b, a ** b)) print('{} / {} = {}'.format(a, b, a / b)) print('Binary') print('{} ^ {} = {}'.format(a, b, a ^ b)) print('{} | {} = {}'.format(a, b, a | b)) print('{} & {} = {}'.format(a, b, a & b)) arr = array(range(10)) indices_list = [ [1, 5, 8], range(1, 6, 2), array([8, 2, 0, -1]) ] for indices in indices_list: print('Indexed by {:<14}: {}'.format(str(indices), arr[indices])) arr = array(range(5)) print('Items more than 2:', arr > 2) print(arr[arr>2]) a, b = array(range(0, 5)), array(range(5, 10)) print(a[b>7]) matrix = array([range(3*i, 3*(i+1)) for i in range(3)]) print('We have a matrix of shape', matrix.shape) print('Regular Python indexing ', matrix[0][2]) print('Implicit tuple declaration', matrix[0, 2]) print('Explicit tuple declaration', matrix[(0, 2)]) print('All elements of the first column', matrix[:, 0]) print('Get elements of the second column', matrix[:, 1]) print('Pick first and last column', matrix[:, 0:3:2]) print('Get only first row', matrix[0, :]) print('You could do this easier but nevermind', matrix[0]) print('Get first two elements of the third column', matrix[0:2, 2]) a = array(range(5)) print(a) print(a[:]) print(a[...]) array3d = array([[range(3*(i+j), 3*(i+j+1)) for i in range(3)] for j in range(3)]) print('Here is array of shape {0.shape}: {0}'.format(array3d)) print('Item is a matrix of shape {0.shape}: {0}'.format(array3d[0])) print('Item of the matrix is an array of shape {0.shape}: {0}'.format(array3d[0][0])) print('Don`t forget about tuples: {0}'.format(array3d[0, 0])) array3d[:, :, -1] array3d[..., -1] print('First matrix with all elements', array3d[0, ...]) print('First elements of all rows of the second matrix', array3d[1, ..., 0]) <END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You just have to provide the array constructor of the numpy module with an iterable type.
Step2: Types
Step3: Note that the type cast was made automatically
Step4: NumPy assigned the data type automatically, since it was not specified
Step5: Interesting thing
Step6: Indexing
Step7: Boolean array
Step8: This can be read as "Give me the numbers which are greater than two"
Step9: This gives you the elements of array a whose corresponding elements in b are greater than 7
Step10: It was noted that we can use a "tuple of them"
Step11: Ellipsis
Step12: It is useful for n-dimensional arrays when you want to skip multiple dimensions
Step13: Each element of this array is a matrix, each element of which is an array
Step14: If you want to get only the last elements of each row in this huge thing, you can do the following
Step15: You can also avoid these slices and use an ellipsis
Step16: An ellipsis can be placed in the middle or at the end
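To make the ellipsis discussion above concrete, here is a small self-contained check (the 2x3x4 array is an arbitrary example) that `...` expands to the right number of full slices, plus a reminder that boolean indexing returns a copy rather than a view:

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)

# '...' fills in as many ':' as needed, so these index the same elements
print(np.array_equal(a[..., -1], a[:, :, -1]))  # True
print(np.array_equal(a[0, ...], a[0, :, :]))    # True

b = a[a > 10]   # boolean indexing returns a copy...
b[:] = 0
print(a.max())  # ...so the original still contains 23
```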
3,939
<ASSISTANT_TASK:> Python Code: import matplotlib.pyplot as plt import numpy as np import os assert os.environ["COLAB_TPU_ADDR"], "Make sure to select TPU from Edit > Notebook settings > Hardware accelerator" #!pip install -q cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl !pip install -q cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl #!pip install -q lightning-bolts !pip install --quiet torchmetrics lightning-bolts torchvision torch pytorch-lightning from pl_bolts.models.regression import LogisticRegression import pytorch_lightning as pl from pl_bolts.datamodules import MNISTDataModule, FashionMNISTDataModule, CIFAR10DataModule, ImagenetDataModule from sklearn.datasets import load_iris from pl_bolts.datamodules import SklearnDataModule import pytorch_lightning as pl # use any numpy or sklearn dataset X, y = load_iris(return_X_y=True) dm = SklearnDataModule(X, y, batch_size=12) # build model model = LogisticRegression(input_dim=4, num_classes=3) # fit trainer = pl.Trainer(tpu_cores=8) trainer.fit(model, train_dataloader=dm.train_dataloader(), val_dataloaders=dm.val_dataloader()) trainer.test(test_dataloaders=dm.test_dataloader()) # create dataset # dm = MNISTDataModule(num_workers=0, data_dir='data') dm = CIFAR10DataModule(num_workers=0, data_dir="data") dm.prepare_data() # force download now print(dm.size()) print(dm.num_classes) ndims = np.prod(dm.size()) nclasses = dm.num_classes print([ndims, nclasses, ndims * nclasses]) model = LogisticRegression(input_dim=ndims, num_classes=nclasses, learning_rate=0.001) print(model) trainer = pl.Trainer(tpu_cores=8, max_epochs=2) # trainer = pl.Trainer(max_epochs=2) trainer.fit(model, datamodule=dm) trainer.test(model, test_dataloaders=dm.val_dataloader()) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Setup lightning Step2: Iris Step3: CIFAR
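The record above hard-codes `tpu_cores=8`. A hedged sketch of a more portable setup, assuming the same PyTorch Lightning 1.x Trainer API used in this record, would pick the accelerator at runtime:

```python
import torch
import pytorch_lightning as pl

def make_trainer(max_epochs=2):
    # Prefer TPU when torch_xla is importable, then GPU, then plain CPU.
    try:
        import torch_xla  # noqa: F401
        return pl.Trainer(tpu_cores=8, max_epochs=max_epochs)
    except ImportError:
        pass
    if torch.cuda.is_available():
        return pl.Trainer(gpus=1, max_epochs=max_epochs)
    return pl.Trainer(max_epochs=max_epochs)

trainer = make_trainer()
```

This keeps the rest of the notebook unchanged while letting it run outside a Colab TPU runtime.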
3,940
<ASSISTANT_TASK:> Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
from pymks.datasets import make_checkerboard_microstructure
# variable names and the square_size/n_squares values below were stripped from
# this record; they are restored here with illustrative values
X = make_checkerboard_microstructure(square_size=21, n_squares=8)
from pymks.tools import draw_microstructures
draw_microstructures(X)
print X.shape
from pymks.stats import autocorrelate
from pymks import PrimitiveBasis
prim_basis = PrimitiveBasis(n_states=2)
X_ = prim_basis.discretize(X)
X_auto = autocorrelate(X_, periodic_axes=(0, 1))
from pymks.tools import draw_autocorrelations
correlations = [('black', 'black'), ('white', 'white')]
draw_autocorrelations(X_auto[0], autocorrelations=correlations)
center = (X_auto.shape[1] + 1) / 2
print 'Volume fraction of black phase', X_auto[0, center, center, 0]
print 'Volume fraction of white phase', X_auto[0, center, center, 1]
from pymks.stats import crosscorrelate
X_cross = crosscorrelate(X_, periodic_axes=(0, 1))
from pymks.tools import draw_crosscorrelations
correlations = [('black', 'white')]
draw_crosscorrelations(X_cross[0], crosscorrelations=correlations)
print 'Center value', X_cross[0, center, center, 0]
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2-Point Statistics for Checkerboard Microstructure
Step2: Now let's take a look at how the microstructure looks.
Step3: Compute Periodic 2-Point Statistics
Step4: We have now computed the autocorrelations.
Step5: Notice that for this checkerboard microstructure, the autocorrelation for these 2 local states is exactly the same. We have just computed the periodic autocorrelations for a perfectly periodic microstructure with equal volume fractions. In general this is not the case and the autocorrelations will be different, as we will see later in this example.
Step6: We can compute the cross-correlation of the microstructure function using the crosscorrelate function from pymks.stats
Step7: Let's take a look at the cross correlation using draw_crosscorrelations from pymks.tools.
Step8: Notice that the crosscorrelation is the exact opposite of the 2 autocorrelations. The (0, 0) vector has a value of 0. This statistic reflects the probability of 2 phases having the same location. In our microstructure, this probability is zero as we have not allowed the two phases (colored black and white) to co-exist in the same spatial voxel.
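For intuition about what `autocorrelate` computes, a periodic two-point autocorrelation can be written directly with FFTs. This NumPy-only sketch (a made-up random two-phase microstructure, not the PyMKS implementation) reproduces the property used in the steps above, namely that the center value equals the phase's volume fraction:

```python
import numpy as np

rng = np.random.default_rng(0)
micro = (rng.random((21, 21)) > 0.5).astype(float)  # toy indicator of one phase

F = np.fft.fft2(micro)
auto = np.fft.ifft2(F * np.conj(F)).real / micro.size
auto = np.fft.fftshift(auto)                        # zero vector at the center

center = tuple(np.array(auto.shape) // 2)
print(auto[center], micro.mean())                   # both equal the volume fraction
```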
3,941
<ASSISTANT_TASK:> Python Code: # make plots show up in notebook %matplotlib inline import pandas as pd # data package import matplotlib.pyplot as plt # pyplot module from pandas_datareader import data import datetime as dt # package to handle dates codes = ['GDPCA', 'PCECCA'] # real GDP, real consumption (from FRED website) start = dt.datetime(2003, 1, 1) # start date end = dt.datetime(2013, 1, 1) # end date us = data.DataReader(codes, 'fred', start, end) us.columns = ['gdp', 'pce'] #us.set_index (we don't need it now) print(us.head(3)) from pandas_datareader import wb iso_codes = ['BRA', 'CHN', 'FRA', 'IND', 'JPN', 'MEX', 'USA'] var = ['NY.GDP.PCAP.PP.KD'] year = 2013 wbdf = wb.download(indicator=var, country=iso_codes, start=year, end=year) print(wbdf) # Change the index for iso codes wbdf.index = iso_codes # Add country variable country = ['Brazil', 'China', 'France', 'India', 'Japan', 'Mexico', 'United States'] wbdf['country'] = country # Rename the variables wbdf.columns = ['gdppc', 'country'] # set the display precision in terms of decimal places pd.set_option('precision', 2) wbdf['gdppc'] = wbdf['gdppc']/1000 print(wbdf) ff = data.DataReader('F-F_Research_Data_factors', 'famafrench') ff type(ff) ff.keys() ff['DESCR'] ff = ff[1] ff ff.columns = ['xsm', 'smb', 'hml', 'rf'] ff['rm'] = ff['xsm'] + ff['rf'] ff = ff[['rm', 'rf']] # extract rm (market) and rf (riskfree) ff.head(5) # try this with US GDP us.plot() # do GDP alone us['gdp'].plot() us.plot(y="gdp") # bar chart us.plot(kind='bar') # scatter plot # we need to be explicit about the x and y variables: x = 'gdp', y = 'pce' us.plot.scatter('gdp', 'pce') us.plot('gdp', 'pce', kind='scatter') us.plot(kind='area') # fill the area below us.plot(subplots=True) # make separate subplots for the variables in the dataframe us.plot(subplots=True, sharey = True) # make the y axis the same us.plot(figsize = (10, 2)) # first arg: width, second: height (inches) us.plot(ylim = (0, 16000)) # change the range of the y axis # now try a few things with the Fama-French data ff.plot() ff.plot(kind='hist', bins=20, subplots=True) # "smoothed" histogram ff.plot(kind='kde', subplots=True, sharex=True) # smoothed histogram ("kernel density estimate") ff.plot(kind='kde', subplots=True, sharex=True, title='Fama-French returns', grid=True, legend=False) plt.plot(us.index, us['gdp']) # we can do two lines together plt.plot(us.index, us['gdp']) plt.plot(us.index, us['pce']) # we can also add things to plots plt.plot(us.index, us['gdp']) plt.plot(us.index, us['pce']) plt.title('US GDP', fontsize=14, loc='left') # add title plt.ylabel('Billions of 2009 USD') # y axis label plt.xlabel('Year') # y axis label plt.tick_params(labelcolor='red') # change tick labels to red #plt.legend(['GDP', 'Consumption'], loc='best') fig, ax = plt.subplots(2, 1) # create fig and ax objects -- nrows, ncols print('fig is ', type(fig)) print('ax is ', type(ax)) # let's try that again, this time with content fig, axe = plt.subplots(figsize=(8, 4)) # add things to ax us.plot(ax=axe, color = ['red', 'green']) # Fama-French example fig, ax = plt.subplots() ff.plot(ax=ax, kind='line', # line plot color=['blue', 'magenta'], # line color title='Fama-French market and riskfree returns') # Fama-French example fig, ax = plt.subplots(1, 2, figsize=(12, 4)) ff.plot(ax=ax[0], kind='hist', # line plot color=['blue', 'magenta'], # line color alpha=0.65, bins=20, title='Fama-French market and riskfree returns') ff.plot(ax=ax[1], kind='kde', # line plot color=['blue', 'magenta'], # line color 
title='Fama-French market and riskfree returns', alpha=0.65) fig.tight_layout() fig, ax = plt.subplots() us.plot(ax=ax) # Apply axis methods ax.set_title('US GDP and Consumption', fontsize=14, loc='left') ax.set_ylabel('Billions of 2013 USD') ax.legend(['Real GDP', 'Consumption'], loc=0) # more descriptive variable names ax.tick_params(labelcolor='red') # change tick labels to red ax.set_ylim(0) fig, ax = plt.subplots() us.plot(ax=ax, lw=2) ax.set_title('US GDP and Consumption', fontsize=14, loc='left') ax.set_ylabel('Billions of 2013 USD') ax.legend(['Real GDP', 'Consumption'], loc=2) # more descriptive variable names ax.tick_params(labelcolor='green') # change tick labels to green ax.set_ylim(0) fig, ax = plt.subplots(nrows=2, ncols=2, sharex=True) print('Object ax has dimension', ax.shape) fig, ax = plt.subplots(nrows=2, ncols=2, sharex=True, sharey=True) print('Object ax has dimension', ax.shape) ax[0, 1].plot(us.index, us['pce']) ax[1, 0].plot(us.index, us['gdp']) # now add some content fig, ax = plt.subplots(nrows=2, ncols=1, sharex=True, sharey=True) us['gdp'].plot(ax=ax[0], color='green') # first plot us['pce'].plot(ax=ax[1], color='red') # second plot url = 'http://dx.doi.org/10.1787/888932937035' pisa = pd.read_excel(url, skiprows=18, # skip the first 18 rows skipfooter=7, # skip the last 7 parse_cols=[0,1,9,13], # select columns index_col=0, # set index = first column header=[0,1] # set variable names ) pisa = pisa.dropna() # drop blank lines pisa.columns = ['Math', 'Reading', 'Science'] # simplify variable names # bar chart of math scores fig, ax = plt.subplots() pisa['Math'].plot(kind='barh', ax=ax) # make the plot taller fig, ax = plt.subplots(figsize=(4, 13)) # note figsize pisa['Math'].plot(kind='barh', ax=ax) ax.set_title('PISA Math Score', loc='left') fig, ax = plt.subplots() pisa['Math'].plot(ax=ax, kind='barh', figsize=(4,13)) ax.set_title('PISA Math Score', loc='left') us_index = pisa.index.tolist().index('United States') ax.get_children()[us_index].set_color('r') # variable list (GDP, GDP per capita, life expectancy) var = ['NY.GDP.PCAP.PP.KD', 'NY.GDP.MKTP.PP.KD', 'SP.DYN.LE00.IN'] # country list (ISO codes) iso = ['USA', 'FRA', 'JPN', 'CHN', 'IND', 'BRA', 'MEX'] year = 2013 # get data from World Bank df = wb.download(indicator=var, country=iso, start=year, end=year) # massage data df = df.reset_index(level='year', drop=True) df.columns = ['gdppc', 'gdp', 'life'] # rename variables df['pop'] = df['gdp']/df['gdppc'] # population df['gdp'] = df['gdp']/10**12 # convert to trillions df['gdppc'] = df['gdppc']/10**3 # convert to thousands df['order'] = [5, 3, 1, 4, 2, 6, 0] # reorder countries df = df.sort_values(by='order', ascending=False) df # We'll use this same basic graph a few times. 
# Let's make a function so we don't have to repeat the # code to create def gdp_bar(variable="gdp"): fig, ax = plt.subplots() df[variable].plot(ax=ax, kind='barh', alpha=0.5) ax.set_title('Real GDP', loc='left', fontsize=14) ax.set_xlabel('Trillions of US Dollars') ax.set_ylabel('') return fig, ax gdp_bar() # ditto for GDP per capita (per person) fig, ax = gdp_bar("gdppc") ax.set_title('GDP Per Capita', loc='left', fontsize=14) fig, ax = gdp_bar() # Tufte-like axes ax.spines['left'].set_position(('outward', 7)) ax.spines['bottom'].set_position(('outward', 7)) ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.yaxis.set_ticks_position('left') ax.xaxis.set_ticks_position('bottom') # scatterplot of life expectancy vs gdp per capita fig, ax = plt.subplots() ax.scatter(df['gdppc'], df['life'], # x,y variables s=df['pop']/10**6, # size of bubbles alpha=0.5) ax.set_title('Life expectancy vs. GDP per capita', loc='left', fontsize=14) ax.set_xlabel('GDP Per Capita') ax.set_ylabel('Life Expectancy') ax.text(58, 66, 'Bubble size represents population', horizontalalignment='right') # scatterplot of life expectancy vs gdp per capita fig, ax = plt.subplots() ax.scatter(df['gdppc'], df['life'], # x,y variables s=df['pop']/10**6, # size of bubbles alpha=0.5) ax.set_title('Life expectancy vs. GDP per capita', loc='left', fontsize=14) ax.set_xlabel('GDP Per Capita') ax.set_ylabel('Life Expectancy') ax.text(58, 66, 'Bubble size represents population', horizontalalignment='right') for (x, y, country) in zip(df['gdppc'], df['life'], df.index): ax.text(x, y, country) import pandas as pd data = {'Food': ['French Fries', 'Potato Chips', 'Bacon', 'Pizza', 'Chili Dog'], 'Calories per 100g': [607, 542, 533, 296, 260]} cals = pd.DataFrame(data) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Comment. When you run the code cell above, its output appears below it. Step2: World Bank. Step3: Comment. In the previous cell, we used the print() function to produce output. Here we just put the name of the dataframe. The latter displays the dataframe -- and formats it nicely -- if it's the last statement in the cell. Step4: What is this object? Step5: Learn about the structure Step6: Approach 1 Step7: Exercise. Add each of these arguments, one at a time, to us.plot() Step8: Fama-French asset returns Step9: Exercise. What do each of the arguments do in the code below? Step10: Exercise. Try adding the arguments title='Fama-French returns', grid=True, and legend=False. Step11: Approach 2 Step12: Comment. All of these statements must be in the same cell for this to work. Step13: Exercise. What do we have here? What type are fig and ax? Step14: Comment. Both of these statements must be in the same cell. Step15: Quick review of the bidding Step16: (Your results may differ, but we really enjoyed that.) Step17: Multiple subplots Step18: Examples Step19: Comment. Yikes! That's horrible! What can we do about it? Any suggestions? Step20: Comment. What if we wanted to make the US bar red? This is far too complicated, but we used our Google fu and found a solution. Step21: Exercise. Create the same graph for the Reading score. Step22: And just because it's fun, here's an example of Tufte-like axes from Matplotlib examples Step23: Exercise (challenging). Make the ticks point out. Step24: Exercise. Make the bubble a little larger. Step25: Review
3,942
<ASSISTANT_TASK:> Python Code: import numpy as np data = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) weights = np.array([1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5]) quantile = 0.25 ind_sorted = np.argsort(data) sorted_data = data[ind_sorted] sorted_weights = weights[ind_sorted] Sn = np.cumsum(sorted_weights) Pn = (Sn-0.5*sorted_weights)/Sn[-1] np.interp(quantile, Pn, sorted_data) np.quantile(data, 0.25) def weighted_percentile(a, q=np.array([75, 25]), w=None): Calculates percentiles associated with a (possibly weighted) array Parameters ---------- a : array-like The input array from which to calculate percents q : array-like The percentiles to calculate (0.0 - 100.0) w : array-like, optional The weights to assign to values of a. Equal weighting if None is specified Returns ------- values : np.array The values associated with the specified percentiles. # Standardize and sort based on values in a q = np.array(q) / 100.0 if w is None: w = np.ones(a.size) idx = np.argsort(a) a_sort = a[idx] w_sort = w[idx] # Get the cumulative sum of weights ecdf = np.cumsum(w_sort) # Find the percentile index positions associated with the percentiles p = q * (w.sum() - 1) # Find the bounding indices (both low and high) idx_low = np.searchsorted(ecdf, p, side='right') idx_high = np.searchsorted(ecdf, p + 1, side='right') idx_high[idx_high > ecdf.size - 1] = ecdf.size - 1 # Calculate the weights weights_high = p - np.floor(p) weights_low = 1.0 - weights_high # Extract the low/high indexes and multiply by the corresponding weights x1 = np.take(a_sort, idx_low) * weights_low x2 = np.take(a_sort, idx_high) * weights_high # Return the average return np.add(x1, x2) weighted_percentile(data, q=np.array([10, 25, 50, 75, 90, 100]), w=weights) def weighted_percentile2(data, weights, perc): perc : percentile in [0-1]! ix = np.argsort(data) data = data[ix] # sort data weights = weights[ix] # sort weights cdf = (np.cumsum(weights) - 0.5 * weights) / np.sum(weights) # 'like' a CDF function return np.interp(perc, cdf, data) weighted_percentile2(data, weights, 0.1) from statsmodels.stats.weightstats import DescrStatsW wq = DescrStatsW(data=np.arange(0, 101), weights=np.ones(101)* 1.5) wq.quantile(probs=np.arange(0, 1.01, 0.01), return_pandas=False) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step2: Calculating percentiles / quantiles Step3: I also found this post about equal frequency binning in Python useful.
3,943
<ASSISTANT_TASK:> Python Code: import numpy as np import pandas as pd import matplotlib.pyplot as plt from math import log, sqrt from sklearn import linear_model dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':float, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int} sales = pd.read_csv('../datasets/kc_house_data.csv', dtype=dtype_dict) testing = pd.read_csv('../datasets/wk3_kc_house_test_data.csv', dtype=dtype_dict) training = pd.read_csv('../datasets/wk3_kc_house_train_data.csv', dtype=dtype_dict) validation = pd.read_csv('../datasets/wk3_kc_house_valid_data.csv', dtype=dtype_dict) sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt) sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt) sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms'] sales['floors_square'] = sales['floors']*sales['floors'] testing['sqft_living_sqrt'] = testing['sqft_living'].apply(sqrt) testing['sqft_lot_sqrt'] = testing['sqft_lot'].apply(sqrt) testing['bedrooms_square'] = testing['bedrooms']*testing['bedrooms'] testing['floors_square'] = testing['floors']*testing['floors'] training['sqft_living_sqrt'] = training['sqft_living'].apply(sqrt) training['sqft_lot_sqrt'] = training['sqft_lot'].apply(sqrt) training['bedrooms_square'] = training['bedrooms']*training['bedrooms'] training['floors_square'] = training['floors']*training['floors'] validation['sqft_living_sqrt'] = validation['sqft_living'].apply(sqrt) validation['sqft_lot_sqrt'] = validation['sqft_lot'].apply(sqrt) validation['bedrooms_square'] = validation['bedrooms']*validation['bedrooms'] validation['floors_square'] = validation['floors']*validation['floors'] all_features = ['bedrooms', 'bedrooms_square', 'bathrooms', 'sqft_living', 'sqft_living_sqrt', 'sqft_lot', 'sqft_lot_sqrt', 'floors', 'floors_square', 'waterfront', 'view', 'condition', 'grade', 'sqft_above', 'sqft_basement', 'yr_built', 'yr_renovated'] model_all = linear_model.Lasso(alpha=5e2, normalize=True) # set parameters model_all.fit(sales[all_features], sales['price']) # learn weights print (all_features, '\n') print (model_all.coef_) print ('With intercetp, nonzeros: ', np.count_nonzero(model_all.coef_) + np.count_nonzero(model_all.intercept_)) print ('No intercept: ', np.count_nonzero(model_all.coef_)) l1pentalities = np.logspace(1, 7, num=13) l1pentalities import sys minRSS = sys.maxsize bestL1 = 0 for l1p in l1pentalities: model = linear_model.Lasso(alpha=l1p, normalize=True) model.fit(training[all_features], training['price']) # learn weights # Calculate the RSS RSS = ((model.predict(validation[all_features]) - validation.price) ** 2).sum() print ("RSS: ", RSS, " for L1: ", l1p) # Remember the min RSS if RSS < minRSS: minRSS = RSS bestL1 = l1p print ('\nMinimum RSS:', minRSS, ' for L1: ', bestL1) model = linear_model.Lasso(alpha=bestL1, normalize=True) model.fit(training[all_features], training['price']) # learn weights # Calculate the RSS RSS = ((model.predict(testing[all_features]) - testing.price) ** 2).sum() print ("RSS: ", RSS, " for L1: ", bestL1) print (model.coef_) print (model.intercept_) np.count_nonzero(model.coef_) + np.count_nonzero(model.intercept_) max_nonzeros = 7 L1_range = np.logspace(1, 4, num=20) print (L1_range) l1_penalty_min = L1_range[0] l1_penalty_max = L1_range[19] for L1 in L1_range: model = 
linear_model.Lasso(alpha=L1, normalize=True) model.fit(training[all_features], training['price']) # learn weights nonZeroes = np.count_nonzero(model.coef_) + np.count_nonzero(model.intercept_) # The largest l1_penalty that has more non-zeros than ‘max_nonzeros’ if (nonZeroes > max_nonzeros) and (L1 > l1_penalty_min) : l1_penalty_min = L1 # The smallest l1_penalty that has fewer non-zeros than ‘max_nonzeros’ if (nonZeroes < max_nonzeros) and (L1 < l1_penalty_max): l1_penalty_max = L1 print ('l1_penalty_min: ', l1_penalty_min) print ('l1_penalty_max: ', l1_penalty_max) L1_narrow_interval = np.linspace(l1_penalty_min,l1_penalty_max,20) print (L1_narrow_interval) minRSS = sys.maxsize bestL1 = 0 # Find the model with desired number of nonzeroes with the best RSS for L1 in L1_narrow_interval: model = linear_model.Lasso(alpha=L1, normalize=True) model.fit(validation[all_features], validation['price']) # learn weights # Calculate the RSS RSS = ((model.predict(validation[all_features]) - validation.price) ** 2).sum() print ("RSS: ", RSS, " for L1: ", L1) # Remember the min RSS if RSS < minRSS: minRSS = RSS bestL1 = L1 print ('\nMinimum RSS:', minRSS, ' for L1: ', bestL1) model = linear_model.Lasso(alpha=bestL1, normalize=True) model.fit(validation[all_features], validation['price']) # learn weights print (all_features) print () print (model.coef_) print () print (model.intercept_) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load the data Step2: Feature engineering Step3: LASSO Step4: Calculate the RSS for best L1 on test data Step5: 2 Phase LASSO for finding a desired number of nonzero features
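Step 3 mentions equal frequency binning; the interpolation-based weighted percentile above yields the bin edges directly. A small self-contained sketch (toy normal data and unit weights, plain NumPy):

```python
import numpy as np

def weighted_quantile(values, weights, q):
    # q in [0, 1]; piecewise-linear inverse of the weighted empirical CDF
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = (np.cumsum(w) - 0.5 * w) / np.sum(w)
    return np.interp(q, cdf, v)

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
w = np.ones_like(x)

edges = weighted_quantile(x, w, np.linspace(0, 1, 5))  # 4 equal-frequency bins
print(np.histogram(x, bins=edges)[0])                  # roughly 250 per bin
```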
3,944
<ASSISTANT_TASK:> Python Code: import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG ! pip3 install -U google-cloud-storage $USER_FLAG if os.environ["IS_TESTING"]: ! pip3 install --upgrade tensorflow $USER_FLAG import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) PROJECT_ID = "[your-project-id]" # @param {type:"string"} if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]": # Get your GCP project id from gcloud shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT_ID = shell_output[0] print("Project ID:", PROJECT_ID) ! gcloud config set project $PROJECT_ID REGION = "us-central1" # @param {type: "string"} from datetime import datetime TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. import os import sys # If on Google Cloud Notebook, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS '' BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]": BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP ! gsutil mb -l $REGION $BUCKET_NAME ! gsutil ls -al $BUCKET_NAME import google.cloud.aiplatform as aip aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME) IMPORT_FILE = ( "gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv" ) if "IMPORT_FILES" in globals(): FILE = IMPORT_FILES[0] else: FILE = IMPORT_FILE count = ! gsutil cat $FILE | wc -l print("Number of Examples", int(count[0])) print("First 10 rows") ! 
gsutil cat $FILE | head dataset = aip.ImageDataset.create( display_name="Flowers" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE], import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification, ) print(dataset.resource_name) dag = aip.AutoMLImageTrainingJob( display_name="flowers_" + TIMESTAMP, prediction_type="classification", multi_label=False, model_type="CLOUD", base_model=None, ) print(dag) model = dag.run( dataset=dataset, model_display_name="flowers_" + TIMESTAMP, training_fraction_split=0.8, validation_fraction_split=0.1, test_fraction_split=0.1, budget_milli_node_hours=8000, disable_early_stopping=False, ) # Get model resource ID models = aip.Model.list(filter="display_name=flowers_" + TIMESTAMP) # Get a reference to the Model Service client client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"} model_service_client = aip.gapic.ModelServiceClient(client_options=client_options) model_evaluations = model_service_client.list_model_evaluations( parent=models[0].resource_name ) model_evaluation = list(model_evaluations)[0] print(model_evaluation) endpoint = model.deploy() test_item = !gsutil cat $IMPORT_FILE | head -n1 if len(str(test_item[0]).split(",")) == 3: _, test_item, test_label = str(test_item[0]).split(",") else: test_item, test_label = str(test_item[0]).split(",") print(test_item, test_label) import base64 import tensorflow as tf with tf.io.gfile.GFile(test_item, "rb") as f: content = f.read() # The format of each instance should conform to the deployed model's prediction input schema. instances = [{"content": base64.b64encode(content).decode("utf-8")}] prediction = endpoint.predict(instances=instances) print(prediction) endpoint.undeploy_all() delete_all = True if delete_all: # Delete the dataset using the Vertex dataset object try: if "dataset" in globals(): dataset.delete() except Exception as e: print(e) # Delete the model using the Vertex model object try: if "model" in globals(): model.delete() except Exception as e: print(e) # Delete the endpoint using the Vertex endpoint object try: if "endpoint" in globals(): endpoint.delete() except Exception as e: print(e) # Delete the AutoML or Pipeline trainig job try: if "dag" in globals(): dag.delete() except Exception as e: print(e) # Delete the custom trainig job try: if "job" in globals(): job.delete() except Exception as e: print(e) # Delete the batch prediction job using the Vertex batch prediction object try: if "batch_predict_job" in globals(): batch_predict_job.delete() except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object try: if "hpt_job" in globals(): hpt_job.delete() except Exception as e: print(e) if "BUCKET_NAME" in globals(): ! gsutil rm -r $BUCKET_NAME <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Install the latest GA version of google-cloud-storage library as well. Step2: Restart the kernel Step3: Before you begin Step4: Region Step5: Timestamp Step6: Authenticate your Google Cloud account Step7: Create a Cloud Storage bucket Step8: Only if your bucket doesn't already exist Step9: Finally, validate access to your Cloud Storage bucket by examining its contents Step10: Set up variables Step11: Initialize Vertex SDK for Python Step12: Tutorial Step13: Quick peek at your data Step14: Create the Dataset Step15: Create and run training pipeline Step16: Run the training pipeline Step17: Review model evaluation scores Step18: Deploy the model Step19: Send a online prediction request Step20: Make the prediction Step21: Undeploy the model Step22: Cleaning up
3,945
<ASSISTANT_TASK:> Python Code: # Magics first (server issues) %matplotlib inline # Do below if you want interactive matplotlib plot () # %matplotlib notebook # https://ipython.org/ipython-doc/dev/config/extensions/autoreload.html %load_ext autoreload %autoreload 2 # %install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py %load_ext version_information %version_information numpy, scipy, matplotlib, pandas # Standard library import os import sys sys.path.append("../src/") # Third party imports import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns # Local imports from simpleexample import example_func # Customizations sns.set() # matplotlib defaults # Any tweaks that normally go in .matplotlibrc, etc., should explicitly go here plt.rcParams['figure.figsize'] = (12, 12) # Find the notebook the saved figures came from fig_prefix = "../figures/2015-07-16-jw-" example_func() from IPython.display import FileLink FileLink("../deliver/coal_data_cleanup.ipynb") dframe = pd.read_csv("../data/coal_prod_cleaned.csv") plt.scatter(dframe['Year'], dframe['Production_short_tons']) df2 = dframe.groupby('Mine_State').sum() sns.jointplot('Labor_Hours', 'Production_short_tons', data=df2, kind="reg", ) plt.xlabel("Labor Hours Worked") plt.ylabel("Total Amount Produced") plt.tight_layout() # plt.savefig(fig_prefix + "production-vs-hours-worked.png", dpi=350) %load_ext autoreload %autoreload 2 import sys sys.path.append("../src/") from simpleexample import example_func example_func() example_func() <END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Importing cleaned data
Step2: [Dead end] Does year predict production?
Step3: Does hours worked correlate with output?
3,946
<ASSISTANT_TASK:> Python Code: import numpy as np import pylab as pl from IEtools import * %pylab inline filename='C:/econdata/GDP_PAYEMS.csv' output = FREDcsvRead(filename) plotData = output['data'] plotType = pl.plot plotType = pl.semilogy plotScale = 10**6 plotLabel = output['name']+' [$ p.c.]' pl.figure(figsize=(12,6)) plotType(plotData[:,0],plotData[:,1]*plotScale) pl.ylabel(plotLabel) pl.show() temp = dynamic_equilibrium_optimize(output['data'],alphaDelta=0.01,method='brute') transform = log_linear_transform(output['data'], temp) transform[:,1] = np.exp(transform[:,1]) guess = [1.0,-1.0,1977.0,0.1,0.1,2015.0, 0.0] result = dynamic_eq_fit(two_shock, transform, guess) fitData = result['fit'] popt = result['params'] pcov = result['cov'] plotData = transform plotScale = 1 plotLabel = output['name']+' [transformed]' plotData2 = fitData pl.figure(figsize=(12,6)) for index in range(len(result['transitions'])): pl.axvline(x=result['transitions'][index],color='0.5',linewidth=0.5) pl.plot(plotData[:,0],np.log(plotData[:,1]),plotData[:,0],np.log(plotData2)) pl.ylabel(plotLabel) pl.show() print('Dynamic equilibrium growth rate (entropy min) = ',np.round(100*temp,decimals=2)) print('Shock centers = ',np.round(result['transitions'],decimals=1)) print('Shock widths = ',np.round(result['shock_widths'],decimals=2)) print('Shock magnitudes = ',np.round(result['shock_mags'],decimals=2)) guess = [1.0,-1.0,1980.0,0.1,0.1,2010.0,0.04,0.0] result = dynamic_eq_fit(two_shock_eq, output['data'], guess) fitData = result['fit'] popt = result['params'] pcov = result['cov'] growthRate=np.round(popt[-2]*100,decimals=2) growthError=np.round(2*100*np.sqrt(np.diag(pcov))[-2],decimals=2) plotData = output['data'] plotType = pl.plot plotType = pl.semilogy plotScale = 10**6 plotLabel = output['name']+' [$ p.c.]' plotData2 = fitData pl.figure(figsize=(12,6)) for index in range(len(result['transitions'])): pl.axvline(x=result['transitions'][index],color='0.5',linewidth=0.5) plotType(plotData[:,0],plotData[:,1]*plotScale,plotData[:,0],plotData2*plotScale) pl.ylabel(plotLabel) pl.show() print('Dynamic equilibrium growth rate = ',growthRate,'+/-',growthError,'%') print('Shock centers = ',np.round(result['transitions'],decimals=1)) print('Shock widths = ',np.round(result['shock_widths'],decimals=2)) print('Shock magnitudes = ',np.round(result['shock_mags'],decimals=2)) filename='C:/econdata/UNRATE.csv' output0 = FREDcsvRead(filename) output = { 'data': output0['data'][-170:-1], 'name': output0['name'] } plotData = output['data'] plotType = pl.plot plotScale = 1 plotLabel = output['name']+' [$ p.c.]' pl.figure(figsize=(12,6)) plotType(plotData[:,0],plotData[:,1]*plotScale) pl.ylabel(plotLabel) pl.show() temp = dynamic_equilibrium_optimize(output['data'],alphaDelta=0.001,method='brute') transform = log_linear_transform(output['data'], temp) transform[:,1] = np.exp(transform[:,1]) guess = [1.0,-0.1,2008.0,0.1,0.1,2014.0, 0.0] result = dynamic_eq_fit(two_shock, transform, guess) fitData = result['fit'] popt = result['params'] pcov = result['cov'] plotData = transform plotScale = 1 plotLabel = output['name']+' [transformed]' plotData2 = fitData pl.figure(figsize=(12,6)) for index in range(len(result['transitions'])): pl.axvline(x=result['transitions'][index],color='0.5',linewidth=0.5) pl.plot(plotData[:,0],np.log(plotData[:,1]),plotData[:,0],np.log(plotData2)) pl.ylabel(plotLabel) pl.show() print('Dynamic equilibrium growth rate (entropy min) = ',np.round(100*temp,decimals=2)) print('Shock centers = 
',np.round(result['transitions'],decimals=1)) print('Shock widths = ',np.round(result['shock_widths'],decimals=2)) print('Shock magnitudes = ',np.round(result['shock_mags'],decimals=2)) guess = [1.0,-0.1,2008.0,0.1,0.1,2014.0,-0.09,0.0] result = dynamic_eq_fit(two_shock_eq, output['data'], guess) fitData = result['fit'] popt = result['params'] pcov = result['cov'] growthRate=np.round(popt[-2]*100,decimals=2) growthError=np.round(2*100*np.sqrt(np.diag(pcov))[-2],decimals=2) plotData = output['data'] plotType = pl.plot plotScale = 1 plotLabel = output['name']+' [%]' plotData2 = fitData pl.figure(figsize=(12,6)) for index in range(len(result['transitions'])): pl.axvline(x=result['transitions'][index],color='0.5',linewidth=0.5) plotType(plotData[:,0],plotData[:,1]*plotScale,plotData[:,0],plotData2*plotScale) pl.ylabel(plotLabel) pl.show() print('Dynamic equilibrium growth rate = ',growthRate,'+/-',growthError,'%') print('Shock centers = ',np.round(result['transitions'],decimals=1)) print('Shock widths = ',np.round(result['shock_widths'],decimals=2)) print('Shock magnitudes = ',np.round(result['shock_mags'],decimals=2)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Example Step2: Dynamic equilibrium models Step3: This uses the global fit approach. Step4: Example Step5: Dynamic equilibrium models Step6: This uses the global fit approach.
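The `log_linear_transform` imported from IEtools above is not shown in this record. A hedged guess at the operation it performs — removing a constant growth rate alpha from the log of the series, with column 0 holding time and column 1 the value, as in the record's data arrays — might look like:

```python
import numpy as np

def log_linear_transform_sketch(data, alpha):
    # y -> log(y) - alpha * (t - t0): a dynamic-equilibrium detrending
    t, y = data[:, 0], data[:, 1]
    return np.column_stack([t, np.log(y) - alpha * (t - t[0])])
```

Consistent with this reading, the record re-exponentiates the transformed column (`transform[:,1] = np.exp(transform[:,1])`) before fitting the two-shock model.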
3,947
<ASSISTANT_TASK:> Python Code: import pandas as pd import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') %matplotlib inline import statsmodels.formula.api as smf df = pd.read_csv('heights_weights_genders.csv') df.head(3) female_df = df[df['Gender'] == 'Female'] male_df = df[df['Gender'] == 'Male'] female_df.describe() lm = smf.ols(formula="Weight~Height",data=female_df).fit() lm.params female_intercept, female_slope = lm.params female_df.plot(kind="scatter",x="Height",y="Weight") plt.plot(female_df["Height"],female_slope*female_df["Height"]+female_intercept,"-",color="darkgrey") plt.title('Correlation between height and weight for females') plt.xlabel('Height (inches)') plt.ylabel('Weight (lbs)') male_df.describe() lm = smf.ols(formula="Weight~Height",data=male_df).fit() lm.params male_intercept, male_slope = lm.params male_df.plot(kind="scatter",x="Height",y="Weight") plt.plot(male_df["Height"],male_slope*male_df["Height"]+male_intercept,"-",color="darkgrey") plt.title('Correlation between height and weight for males') plt.xlabel('Height (inches)') plt.ylabel('Weight (lbs)') gender = input('Male or female? ') if gender == 'female' or gender == 'Female': height = int(input('Height (in inches): ')) weight = female_slope * height + female_intercept print('If a person is female and ' + str(height) + ' inches tall, they probably weigh ' + str(round(weight,2)) + ' pounds.') elif gender == 'male' or gender == 'Male': height = int(input('Height (in inches): ')) weight = male_slope * height + male_intercept print('If a person is male and ' + str(height) + ' inches tall, they probably weigh ' + str(round(weight,2)) + ' pounds.') else: print("Input error.") gender = input('Male or female? ') if gender == 'female' or gender == 'Female': height = int(input('Height (in inches): ')) weight = 5.994047 * height - 246.013266 print('If a person is female and ' + str(height) + ' inches tall, they probably weigh ' + str(round(weight,2)) + ' pounds.') elif gender == 'male' or gender == 'Male': height = int(input('Height (in inches): ')) weight = 5.961774 * height - 224.498841 print('If a person is male and ' + str(height) + ' inches tall, they probably weigh ' + str(round(weight,2)) + ' pounds.') else: print("Input error.") <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: For Female Step2: For Male Step3: Function
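Step 3's if/else ladder repeats the same arithmetic per gender. A hedged refactoring sketch, with slope/intercept values copied from the hard-coded second cell of the record:

```python
# (slope, intercept) per gender, as printed by the fits above
params = {
    'female': (5.994047, -246.013266),
    'male': (5.961774, -224.498841),
}

def predict_weight(gender, height_in):
    slope, intercept = params[gender.lower()]
    return slope * height_in + intercept

print(round(predict_weight('Female', 65), 2))  # 143.6
```

A dictionary lookup also makes the input handling uniform: an unknown gender raises a KeyError instead of silently falling through to the "Input error" branch.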
3,948
<ASSISTANT_TASK:> Python Code:
# Now, to confirm, let us run a few simple tests.
import pandas as pd
from matplotlib import pyplot as plt
from matplotlib import cm as cm
import numpy as np
%matplotlib inline
train=pd.read_csv('../train.csv')
train.describe()
# first take a look at the whole picture of time data:
train['time'].plot(kind='hist', bins=500)
# Hypothesis 1, time in sec:
hourly=(train['time'].divide(3600)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
# Hypothesis 2, time in min:
hourly=(train['time'].divide(60)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
# random control 1:
hourly=(train['time'].divide(1485)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
# random control 2:
hourly=(train['time'].divide(18)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
# random control 3:
hourly=(train['time'].divide(83)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
# random control 4:
hourly=(train['time'].divide(634)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
# random control 5:
hourly=(train['time'].divide(20059)).apply(int)
pd.Series([len(hourly[(hourly%168)==i]) for i in xrange(168)]).plot()
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Nice little peaks on top of the picture show a periodic component of ~10000, very close to the number of minutes in a week.
Step2: This has no 24-hour cycle for sure.
Step3: There might be a 24-hour cycle component, but it is not clear.
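The hypothesis cells above differ only in the divisor. A hedged sketch of one helper that folds the timestamps onto a 168-hour week for any candidate time unit (assuming the same `train` DataFrame; the peakedness score is just the coefficient of variation of the weekly profile):

```python
import numpy as np

def weekly_profile(times, unit_seconds):
    # Treat each raw time unit as `unit_seconds` seconds, convert to hours,
    # then fold onto the 168-hour week and count events per hour-of-week.
    hours = (times * unit_seconds // 3600).astype(int) % 168
    return np.bincount(hours, minlength=168)

for unit_seconds in (1, 60, 3600):  # second / minute / hour hypotheses
    prof = weekly_profile(train['time'].values, unit_seconds)
    print(unit_seconds, prof.std() / prof.mean())  # higher = stronger weekly cycle
```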
3,949
<ASSISTANT_TASK:> Python Code: %matplotlib inline %config InlineBackend.figure_format = 'svg' import matplotlib as mpl mpl.rcParams['font.size'] = 8 figsize =(8,4) mpl.rcParams['figure.figsize'] = figsize import matplotlib.pyplot as plt import numpy as np from ipywidgets import interact from ipywidgets import widgets, FloatSlider from utils import riemann_tools from exact_solvers import traffic_variable_speed from clawpack import pyclaw from clawpack import riemann rho = np.linspace(0,1) f_l = rho*(1-rho); f_r = 2*rho*(1-rho) plt.plot(rho,f_l,'-c',rho,f_r,'-g'); plt.xlim(0,1); plt.ylim(0,0.6); plt.xlabel(r'$\rho$'); plt.ylabel(r'$f(\rho)$') plt.legend(['$f_l$','$f_r$']); f_l = rho*(1-rho); f_r = 2*rho*(1-rho) plt.plot(rho[26:],f_l[26:],'-c',rho[:26],f_r[:26],'-g'); plt.xlim(0,1); plt.ylim(0,0.6); plt.plot(rho[:26],f_l[:26],'--k',rho[26:],f_r[26:],'--k',alpha=0.5); plt.xlim(0,1); plt.ylim(0,0.6); plt.xlabel(r'$\rho$'); plt.ylabel(r'$f(\rho)$') plt.legend(['$f_l$','$f_r$']); rho_l = 0.6; rho_r = 0.6 v_l = 1.0; v_r = 0.8 traffic_variable_speed.phase_plane_plot(rho_l,rho_r, v_l,v_r,connect=False) rho_l = 0.6; rho_r = 0.6 v_l = 1.0; v_r = 0.8 traffic_variable_speed.phase_plane_plot(rho_l,rho_r, v_l,v_r,connect=False) def c(rho, xi, v): return v*(1.-2*rho) def make_plot_function(rho_l,rho_r,v_l,v_r,connect=True): states, speeds, reval, wave_types = \ traffic_variable_speed.exact_riemann_solution(rho_l,rho_r, v_l,v_r) def plot_function(t): ax = riemann_tools.plot_riemann(states,speeds, reval,wave_types,t=t,t_pointer=0, extra_axes=True,variable_names=['Density']); riemann_tools.plot_characteristics(reval,c,(v_l,v_r),ax[0], extra_lines=[[[0,0],[0,1]]]) traffic_variable_speed.phase_plane_plot(rho_l,rho_r,v_l,v_r, axes=ax[2],connect=connect) plt.show() return plot_function def plot_riemann_traffic_vc(rho_l,rho_r,v_l,v_r,connect=True): plot_function = make_plot_function(rho_l,rho_r,v_l,v_r,connect) interact(plot_function, t=widgets.FloatSlider(value=0.5,min=0,max=.9)) plot_riemann_traffic_vc(rho_l,rho_r,v_l,v_r); rho_l = 0.9; rho_r = 0.6 v_l = 1.0; v_r = 0.8 traffic_variable_speed.phase_plane_plot(rho_l,rho_r, v_l,v_r,connect=False) plot_riemann_traffic_vc(rho_l,rho_r,v_l,v_r); rho_l = 0.9; rho_r = 0.2 v_l = 1.0; v_r = 0.8 traffic_variable_speed.phase_plane_plot(rho_l,rho_r, v_l,v_r,connect=False) plot_riemann_traffic_vc(rho_l,rho_r,v_l,v_r); rho_l = 0.9; rho_r = 0.2 v_l = 1.0; v_r = 1.2 plot_riemann_traffic_vc(rho_l,rho_r,v_l,v_r); rho_l = 0.7; rho_r = 0.6 v_l = 1.0; v_r = 2. 
traffic_variable_speed.phase_plane_plot(rho_l,rho_r, v_l,v_r,connect=False) plot_riemann_traffic_vc(rho_l,rho_r,v_l,v_r); states = np.array([[rho_l,0.88,rho_r]]) speeds = np.array([0,0.5]) def reval(xi): rho = np.ones((1,len(xi))) return rho wave_types = ['contact','shock'] fig, ax = plt.subplots(1,2,figsize=figsize) traffic_variable_speed.phase_plane_plot(rho_l,rho_r,v_l,v_r,states, speeds,reval,wave_types,axes=ax[1],show=False) riemann_tools.plot_waves(states,speeds,reval,wave_types,ax=ax[0], t_pointer=False,t=0) riemann_tools.plot_characteristics(reval,c,(v_l,v_r),ax[0]) rho_l = 0.7; rho_r = 0.1 v_l = 1.0; v_r = 0.4 traffic_variable_speed.phase_plane_plot(rho_l,rho_r, v_l,v_r,connect=False) plot_riemann_traffic_vc(rho_l,rho_r,v_l,v_r); def plot_all(rho_l,rho_r,v_l,v_r,connect=False): states, speeds, reval, wave_types = \ traffic_variable_speed.exact_riemann_solution(rho_l,rho_r,v_l,v_r) ax = riemann_tools.plot_riemann(states,speeds,reval,wave_types, t=0.5,extra_axes=True,variable_names=[r'$\rho$']); riemann_tools.plot_characteristics(reval,c,(v_l,v_r),ax[0], extra_lines=[[[0,0],[0,1]]]) ax[1].set_ylim(0,1) traffic_variable_speed.phase_plane_plot(rho_l,rho_r,v_l,v_r, axes=ax[2],connect=connect) plt.show() interact(plot_all, rho_l=FloatSlider(min=0.,max=1.,step=0.01,value=0.4, description=r'$\rho_l$'), rho_r=FloatSlider(min=0.,max=1.,step=0.01,value=0.7, description=r'$\rho_r$'), v_l=FloatSlider(min=0.1,max=2.,value=1.), v_r=FloatSlider(min=0.1,max=2.,value=0.4), connect=widgets.Checkbox(description='Connect states') ); def test_solver(rho_l, rho_r, v_l, v_r, riemann_solver): solver = pyclaw.ClawSolver1D(riemann_solver) solver.bc_lower[0] = pyclaw.BC.extrap solver.bc_upper[0] = pyclaw.BC.extrap solver.aux_bc_lower[0] = pyclaw.BC.extrap solver.aux_bc_upper[0] = pyclaw.BC.extrap x = pyclaw.Dimension(-1.0,1.0,50,name='x') domain = pyclaw.Domain(x) num_aux = 1 state = pyclaw.State(domain,solver.num_eqn,num_aux) grid = state.grid xc=grid.p_centers[0] state.q[0,:] = rho_l*(xc<0) + rho_r*(xc>=0.) state.aux[0,:] = v_l*(xc<0) + v_r*(xc>=0.) 
# Speed limit claw = pyclaw.Controller() claw.tfinal = 1.0 claw.solution = pyclaw.Solution(state,domain) claw.solver = solver claw.keep_copy = True claw.verbosity = 0 claw.run() return xc, claw.frames rho_l = 0.7; rho_r = 0.5 v_l = 1.0; v_r = 1.5 t = 0.5 states, speeds, reval, wave_types = \ traffic_variable_speed.exact_riemann_solution(rho_l,rho_r, v_l,v_r) x, frames = test_solver(rho_l,rho_r,v_l,v_r,riemann.traffic_vc_1D) def plot_frame(t): ax = riemann_tools.plot_riemann(states, speeds, reval, wave_types, t, layout='horizontal', extra_axes=True); riemann_tools.plot_characteristics(reval,c,(v_l,v_r),ax[0], extra_lines=[[[0,0],[0,1]]]) rho = frames[int(t*10)].q[0,:] ax[1].plot(x,rho,'-sg') ax[1].set_xlim(-0.7, 0.7) ax[1].legend(['Exact','Approx'],loc='upper right',fontsize=10); traffic_variable_speed.phase_plane_plot(rho_l,rho_r, v_l,v_r,axes=ax[2]) plt.show() interact(plot_frame, t=widgets.FloatSlider(min=0,max=1,step=0.1,value=0.5)); rho_l = 0.7; rho_r = 0.2 v_l = 1.0; v_r = 1.0 t = 0.5 states, speeds, reval, wave_types = \ traffic_variable_speed.exact_riemann_solution(rho_l,rho_r, v_l,v_r) x, frames = test_solver(rho_l,rho_r,v_l,v_r, riemann.traffic_vc_1D) x ,fframes = test_solver(rho_l,rho_r,v_l,v_r, riemann.traffic_vc_fwave_1D) def plot_frame(t): rho = frames[int(t*10)].q[0,:] rho2 = fframes[int(t*10)].q[0,:] ax = riemann_tools.plot_riemann(states, speeds, reval, wave_types, t, layout='horizontal', extra_axes=True); riemann_tools.plot_characteristics(reval,c,(v_l,v_r),ax[0]) ax[1].plot(x,rho,'-sg') ax[1].plot(x,rho2,'-or') ax[1].legend(['Exact','Approx','$f$-wave'],loc='upper right', fontsize=10); ax[1].set_xlim(-0.2,0.2) traffic_variable_speed.phase_plane_plot(rho_l,rho_r,v_l,v_r, axes=ax[2]) plt.show() interact(plot_frame, t=widgets.FloatSlider(min=0,max=1,step=0.1,value=0.5)); <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Admissible regions Step2: Shocks and rarefactions Step3: The stationary wave Step4: From the rules determined above, we know that we can connect $\rho_r$ to an intermediate state only by the stationary jump or by a shock. If it is connected by a shock, the intermediate state must lie on the red part of the $f_r$ curve. But then the stationary jump would need to either cross the midline (not allowed) or connect to the inadmissible left portion of the $f_l$ curve (also not allowed). Hence $\rho_r$ must be connected by the stationary wave to the admissible (red) portion of the $f_l$ curve. It is then clear that $\rho_l$ must be connected to the intermediate state by a shock. Step5: The $\rho-f$ plane is shown in the rightmost plot above. The Riemann solution is indicated with a thick line; as usual we indicate shocks in red and rarefactions in blue; the stationary jump at $x=0$ is plotted in black. Notice that the jump at the interface corresponds to a horizontal line, since the flux is the same on both sides of the interface. Notice also that the jump does not cross the dashed line at $\rho=1/2$. The solution shown satisfies all of the admissibility criteria laid out above; indeed, it is the only solution that does so. Step6: How can we connect these states? By the same reasoning used in the previous problem, $\rho_r$ must be connected to the $f_l$ curve by the stationary wave. Then it must be that $\rho_l$ is connected to the intermediate state by a rarefaction. Step7: Again, observe how the waves involved satisfy all of the required conditions. No other set of waves does so. Step8: Exactly how will we connect these states? In order to obtain an admissible stationary jump, there is only one possibility Step9: Because we have a stationary jump in $\rho$ in order to maintain flux continuity at the interface, it is impossible that the characteristic velocity approach zero on both sides of the stationary jump. Thus the transonic rarefaction is broken into two rarefactions, with an intermediate constant state lying either just to the left or just to the right of $x=0$. Where this state lies depends on whether the speed limit increases or decreases at the interface; in the example above this "shoulder" appears on the right. Next is an example in which it appears on the left. Step10: Case 4 Step11: Now $\rho_r$ cannot be connected to anything via a rarefaction, and cannot be connected to a state on the $f_l$ curve by a stationary jump (i.e., a horizontal line). The only possibility then is to connect $\rho_r$ to the admissible (left) part of the $f_r$ curve, evidently by a shock. But then how can we connect the intermediate state back to $\rho_l$, which lies to the left of the midline? This is only possible by connecting from the $f_r$ curve to the $f_l$ curve at the sonic point $\rho=1/2$. Since this connection corresponds to the stationary jump and must be a horizontal line, this condition determines the intermediate state on the $f_r$ curve. Then $\rho_l$ must be connected to the sonic point by a rarefaction. Step12: Physically, this corresponds to the situation in which the downstream flux is so high that the influx of cars from upstream cannot possibly keep up. The density just downstream of the jump must be lower than that just upstream. 
Thus we see a left-going rarefaction as the upstream congestion decreases since downstream traffic flux is greater; a stationary jump at the interface, where cars instantaneously accelerate due to the higher speed limit; and a right-going shock carrying a drop in density (from right to left) as the road clears out since there is less traffic arriving from upstream. Step13: As we can see, the shock is unphysical. The characteristics to the left of the shock are emerging from it, rather than impinging on it; this violates the entropy condition. It's also easy to see why we can't replace that shock with a rarefaction Step14: Through similar reasoning, it is evident that these states must be connected through a right-going rarefaction and a left-going shock. Step15: Physically in this case the flux upstream of the jump is so great that the right section of road (with lower speed limit) cannot let it all through fast enough. Since there is then a high density of cars at $x=0$, we might expect a rarefaction as they spread out into the uncongested section of road. That is indeed what happens. Step16: Approximate solvers Step17: $f$-wave solvers
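As a quick illustration of the entropy argument above (a sketch, not part of the notebook's code), the Lax admissibility check for a shock within one road segment can be written directly, assuming the LWR flux f(rho) = v*rho*(1 - rho) used in the plots:
def flux(rho, v):
    # LWR flux with speed limit v
    return v * rho * (1.0 - rho)

def char_speed(rho, v):
    # characteristic speed f'(rho) = v*(1 - 2*rho)
    return v * (1.0 - 2.0 * rho)

def shock_speed(rho_l, rho_r, v):
    # Rankine-Hugoniot jump condition (assumes rho_l != rho_r)
    return (flux(rho_r, v) - flux(rho_l, v)) / (rho_r - rho_l)

def satisfies_lax(rho_l, rho_r, v):
    # characteristics must impinge on the shock from both sides
    s = shock_speed(rho_l, rho_r, v)
    return char_speed(rho_l, v) >= s >= char_speed(rho_r, v)
The unphysical shock discussed above is exactly one for which this check fails.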
3,950
<ASSISTANT_TASK:> Python Code: #Advanced: Using other libs... # To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import numpy.random as rnd import os # to make this notebook's output stable across runs rnd.seed(42) # To plot pretty figures %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.rcParams['axes.labelsize'] = 14 plt.rcParams['xtick.labelsize'] = 12 plt.rcParams['ytick.labelsize'] = 12 # Where to save the figures PROJECT_ROOT_DIR = "." CHAPTER_ID = "decision_trees" def image_path(fig_id): return os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id) def save_fig(fig_id, tight_layout=True): print("Saving figure", fig_id) if tight_layout: plt.tight_layout() plt.savefig(image_path(fig_id) + ".png", format='png', dpi=300) from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier, export_graphviz iris = load_iris() X = iris.data[:, 2:] # petal length and width y = iris.target tree_clf = DecisionTreeClassifier(max_depth=2, random_state=42) tree_clf.fit(X, y) export_graphviz( tree_clf, out_file=image_path("iris_tree.dot"), feature_names=iris.feature_names[2:], class_names=iris.target_names, rounded=True, filled=True ) from matplotlib.colors import ListedColormap def plot_decision_boundary(clf, X, y, axes=[0, 7.5, 0, 3], iris=True, legend=False, plot_training=True): x1s = np.linspace(axes[0], axes[1], 100) x2s = np.linspace(axes[2], axes[3], 100) x1, x2 = np.meshgrid(x1s, x2s) X_new = np.c_[x1.ravel(), x2.ravel()] y_pred = clf.predict(X_new).reshape(x1.shape) custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0']) plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap, linewidth=10) if not iris: custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50']) plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8) if plot_training: plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa") plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor") plt.plot(X[:, 0][y==2], X[:, 1][y==2], "g^", label="Iris-Virginica") plt.axis(axes) if iris: plt.xlabel("Petal length", fontsize=14) plt.ylabel("Petal width", fontsize=14) else: plt.xlabel(r"$x_1$", fontsize=18) plt.ylabel(r"$x_2$", fontsize=18, rotation=0) if legend: plt.legend(loc="lower right", fontsize=14) plt.figure(figsize=(8, 4)) plot_decision_boundary(tree_clf, X, y) plt.plot([2.45, 2.45], [0, 3], "k-", linewidth=2) plt.plot([2.45, 7.5], [1.75, 1.75], "k--", linewidth=2) plt.plot([4.95, 4.95], [0, 1.75], "k:", linewidth=2) plt.plot([4.85, 4.85], [1.75, 3], "k:", linewidth=2) plt.text(1.40, 1.0, "Depth=0", fontsize=15) plt.text(3.2, 1.80, "Depth=1", fontsize=13) plt.text(4.05, 0.5, "(Depth=2)", fontsize=11) save_fig("decision_tree_decision_boundaries_plot") plt.show() tree_clf.predict_proba([[5, 1.5]]) tree_clf.predict([[5, 1.5]]) X[(X[:, 1]==X[:, 1][y==1].max()) & (y==1)] # widest Iris-Versicolor flower not_widest_versicolor = (X[:, 1]!=1.8) | (y==2) X_tweaked = X[not_widest_versicolor] y_tweaked = y[not_widest_versicolor] tree_clf_tweaked = DecisionTreeClassifier(max_depth=2, random_state=40) tree_clf_tweaked.fit(X_tweaked, y_tweaked) plt.figure(figsize=(8, 4)) plot_decision_boundary(tree_clf_tweaked, X_tweaked, y_tweaked, legend=False) plt.plot([0, 7.5], [0.8, 0.8], "k-", linewidth=2) plt.plot([0, 7.5], [1.75, 1.75], "k--", linewidth=2) plt.text(1.0, 0.9, "Depth=0", fontsize=15) plt.text(1.0, 1.80, "Depth=1", fontsize=13) save_fig("decision_tree_instability_plot") 
plt.show() from sklearn.datasets import make_moons Xm, ym = make_moons(n_samples=100, noise=0.25, random_state=53) deep_tree_clf1 = DecisionTreeClassifier(random_state=42) deep_tree_clf2 = DecisionTreeClassifier(min_samples_leaf=4, random_state=42) deep_tree_clf1.fit(Xm, ym) deep_tree_clf2.fit(Xm, ym) plt.figure(figsize=(11, 4)) plt.subplot(121) plot_decision_boundary(deep_tree_clf1, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False) plt.title("No restrictions", fontsize=16) plt.subplot(122) plot_decision_boundary(deep_tree_clf2, Xm, ym, axes=[-1.5, 2.5, -1, 1.5], iris=False) plt.title("min_samples_leaf = {}".format(deep_tree_clf2.min_samples_leaf), fontsize=14) save_fig("min_samples_leaf_plot") plt.show() angle = np.pi / 180 * 20 rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]]) Xr = X.dot(rotation_matrix) tree_clf_r = DecisionTreeClassifier(random_state=42) tree_clf_r.fit(Xr, y) plt.figure(figsize=(8, 3)) plot_decision_boundary(tree_clf_r, Xr, y, axes=[0.5, 7.5, -1.0, 1], iris=False) plt.show() rnd.seed(6) Xs = rnd.rand(100, 2) - 0.5 ys = (Xs[:, 0] > 0).astype(np.float32) * 2 angle = np.pi / 4 rotation_matrix = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]]) Xsr = Xs.dot(rotation_matrix) tree_clf_s = DecisionTreeClassifier(random_state=42) tree_clf_s.fit(Xs, ys) tree_clf_sr = DecisionTreeClassifier(random_state=42) tree_clf_sr.fit(Xsr, ys) plt.figure(figsize=(11, 4)) plt.subplot(121) plot_decision_boundary(tree_clf_s, Xs, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False) plt.subplot(122) plot_decision_boundary(tree_clf_sr, Xsr, ys, axes=[-0.7, 0.7, -0.7, 0.7], iris=False) save_fig("sensitivity_to_rotation_plot") plt.show() from sklearn.tree import DecisionTreeRegressor # Quadratic training set + noise rnd.seed(42) m = 200 X = rnd.rand(m, 1) y = 4 * (X - 0.5) ** 2 y = y + rnd.randn(m, 1) / 10 tree_reg1 = DecisionTreeRegressor(random_state=42, max_depth=2) tree_reg2 = DecisionTreeRegressor(random_state=42, max_depth=3) tree_reg1.fit(X, y) tree_reg2.fit(X, y) def plot_regression_predictions(tree_reg, X, y, axes=[0, 1, -0.2, 1], ylabel="$y$"): x1 = np.linspace(axes[0], axes[1], 500).reshape(-1, 1) y_pred = tree_reg.predict(x1) plt.axis(axes) plt.xlabel("$x_1$", fontsize=18) if ylabel: plt.ylabel(ylabel, fontsize=18, rotation=0) plt.plot(X, y, "b.") plt.plot(x1, y_pred, "r.-", linewidth=2, label=r"$\hat{y}$") plt.figure(figsize=(11, 4)) plt.subplot(121) plot_regression_predictions(tree_reg1, X, y) for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")): plt.plot([split, split], [-0.2, 1], style, linewidth=2) plt.text(0.21, 0.65, "Depth=0", fontsize=15) plt.text(0.01, 0.2, "Depth=1", fontsize=13) plt.text(0.65, 0.8, "Depth=1", fontsize=13) plt.legend(loc="upper center", fontsize=18) plt.title("max_depth=2", fontsize=14) plt.subplot(122) plot_regression_predictions(tree_reg2, X, y, ylabel=None) for split, style in ((0.1973, "k-"), (0.0917, "k--"), (0.7718, "k--")): plt.plot([split, split], [-0.2, 1], style, linewidth=2) for split in (0.0458, 0.1298, 0.2873, 0.9040): plt.plot([split, split], [-0.2, 1], "k:", linewidth=1) plt.text(0.3, 0.5, "Depth=2", fontsize=13) plt.title("max_depth=3", fontsize=14) save_fig("tree_regression_plot") plt.show() export_graphviz( tree_reg1, out_file=image_path("regression_tree.dot"), feature_names=["x1"], rounded=True, filled=True ) tree_reg1 = DecisionTreeRegressor(random_state=42) tree_reg2 = DecisionTreeRegressor(random_state=42, min_samples_leaf=10) tree_reg1.fit(X, y) 
tree_reg2.fit(X, y) x1 = np.linspace(0, 1, 500).reshape(-1, 1) y_pred1 = tree_reg1.predict(x1) y_pred2 = tree_reg2.predict(x1) plt.figure(figsize=(11, 4)) plt.subplot(121) plt.plot(X, y, "b.") plt.plot(x1, y_pred1, "r.-", linewidth=2, label=r"$\hat{y}$") plt.axis([0, 1, -0.2, 1.1]) plt.xlabel("$x_1$", fontsize=18) plt.ylabel("$y$", fontsize=18, rotation=0) plt.legend(loc="upper center", fontsize=18) plt.title("No restrictions", fontsize=14) plt.subplot(122) plt.plot(X, y, "b.") plt.plot(x1, y_pred2, "r.-", linewidth=2, label=r"$\hat{y}$") plt.axis([0, 1, -0.2, 1.1]) plt.xlabel("$x_1$", fontsize=18) plt.title("min_samples_leaf={}".format(tree_reg2.min_samples_leaf), fontsize=14) save_fig("tree_regression_regularization_plot") plt.show() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Training and visualizing Step2: Predicting classes and class probabilities Step3: Sensitivity to training set details Step4: Regression trees
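For reference, the split criterion behind these trees can be computed by hand; the sketch below (illustrative, not from the notebook) shows Gini impurity, scikit-learn's default criterion for DecisionTreeClassifier:
import numpy as np

def gini(labels):
    # Gini impurity: 1 - sum_k p_k^2 over the class proportions in a node
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

gini([0, 0, 1, 2])  # 0.625 for this mixed node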
3,951
<ASSISTANT_TASK:>
Python Code:
from pyspark.sql import SQLContext
from pyspark.sql.types import *
from datetime import datetime as _datetime, timedelta, date
import boto3
import botocore
import csv
import os.path
bucket = "telemetry-parquet"
prefix = "main_summary/v4"
%time dataset = sqlContext.read.load("s3://{}/{}".format(bucket, prefix), "parquet")
sc.defaultParallelism
dataset.printSchema()
def fmt(the_date, date_format="%Y%m%d"):
    return _datetime.strftime(the_date, date_format)
# Our calculations look for activity date reported within
# a certain time window. If that window has passed, we do
# not need to re-compute data for that period.
def should_be_updated(record, target_col="day", generated_col="generated_on", date_format="%Y%m%d"):
    target = _datetime.strptime(record[target_col], date_format)
    generated = _datetime.strptime(record[generated_col], date_format)
    # Don't regenerate data that was already updated today.
    today = fmt(_datetime.utcnow(), date_format)
    if record[generated_col] >= today:
        return False
    diff = generated - target
    return diff.days <= 10
from moztelemetry.standards import filter_date_range, count_distinct_clientids
# Similar to the version in standards.py, but uses subsession_start_date
# instead of activityTimestamp
def dau(dataframe, target_day, future_days=10, date_format="%Y%m%d"):
    """Compute Daily Active Users (DAU) from the Executive Summary dataset.

    See https://bugzilla.mozilla.org/show_bug.cgi?id=1240849
    """
    target_day_date = _datetime.strptime(target_day, date_format)
    min_activity = _datetime.strftime(target_day_date, "%Y-%m-%d")
    max_activity = _datetime.strftime(target_day_date + timedelta(1), "%Y-%m-%d")
    act_col = dataframe.subsession_start_date
    min_submission = target_day
    max_submission_date = target_day_date + timedelta(future_days)
    max_submission = _datetime.strftime(max_submission_date, date_format)
    sub_col = dataframe.submission_date_s3
    filtered = filter_date_range(dataframe, act_col, min_activity, max_activity,
                                 sub_col, min_submission, max_submission)
    return count_distinct_clientids(filtered)
# Similar to the version in standards.py, but uses subsession_start_date
# instead of activityTimestamp
def mau(dataframe, target_day, past_days=28, future_days=10, date_format="%Y%m%d"):
    """Compute Monthly Active Users (MAU) from the Executive Summary dataset.

    See https://bugzilla.mozilla.org/show_bug.cgi?id=1240849
    """
    target_day_date = _datetime.strptime(target_day, date_format)
    # Compute activity over `past_days` days leading up to target_day
    min_activity_date = target_day_date - timedelta(past_days)
    min_activity = _datetime.strftime(min_activity_date, "%Y-%m-%d")
    max_activity = _datetime.strftime(target_day_date + timedelta(1), "%Y-%m-%d")
    act_col = dataframe.subsession_start_date
    min_submission = _datetime.strftime(min_activity_date, date_format)
    max_submission_date = target_day_date + timedelta(future_days)
    max_submission = _datetime.strftime(max_submission_date, date_format)
    sub_col = dataframe.submission_date_s3
    filtered = filter_date_range(dataframe, act_col, min_activity, max_activity,
                                 sub_col, min_submission, max_submission)
    return count_distinct_clientids(filtered)
# Identify all missing days, or days that have not yet passed
# the "still reporting in" threshold (as of 2016-03-17, that is
# defined as 10 days).
def update_engagement_csv(dataset, old_filename, new_filename, cutoff_days=30, date_format="%Y%m%d"): cutoff_date = _datetime.utcnow() - timedelta(cutoff_days) cutoff = fmt(cutoff_date, date_format) print "Cutoff date: {}".format(cutoff) fields = ["day", "dau", "mau", "generated_on"] should_write_header = True potential_updates = {} # Carry over rows we won't touch if os.path.exists(old_filename): with open(old_filename) as csv_old: reader = csv.DictReader(csv_old) with open(new_filename, "w") as csv_new: writer = csv.DictWriter(csv_new, fields) writer.writeheader() should_write_header = False for row in reader: if row['day'] < cutoff: writer.writerow(row) else: potential_updates[row['day']] = row with open(new_filename, "a") as csv_new: writer = csv.DictWriter(csv_new, fields) if should_write_header: writer.writeheader() for i in range(cutoff_days, 0, -1): target_day = fmt(_datetime.utcnow() - timedelta(i), date_format) if target_day in potential_updates and not should_be_updated(potential_updates[target_day]): # It's fine as-is. writer.writerow(potential_updates[target_day]) else: # Update it. print "We should update data for {}".format(target_day) record = {"day": target_day, "generated_on": fmt(_datetime.utcnow(), date_format)} print "Starting dau {} at {}".format(target_day, _datetime.utcnow()) record["dau"] = dau(dataset, target_day) print "Finished dau {} at {}".format(target_day, _datetime.utcnow()) print "Starting mau {} at {}".format(target_day, _datetime.utcnow()) record["mau"] = mau(dataset, target_day) print "Finished mau {} at {}".format(target_day, _datetime.utcnow()) writer.writerow(record) from boto3.s3.transfer import S3Transfer data_bucket = "net-mozaws-prod-us-west-2-pipeline-analysis" engagement_basename = "engagement_ratio.csv" new_engagement_basename = "engagement_ratio.{}.csv".format(_datetime.strftime(_datetime.utcnow(), "%Y%m%d")) s3path = "mreid/maudau" engagement_key = "{}/{}".format(s3path, engagement_basename) client = boto3.client('s3', 'us-west-2') transfer = S3Transfer(client) try: transfer.download_file(data_bucket, engagement_key, engagement_basename) except botocore.exceptions.ClientError as e: # If the file wasn't there, that's ok. Otherwise, abort! if e.response['Error']['Code'] != "404": raise e else: print "Did not find an existing file at '{}'".format(engagement_key) # reorganize dataset dataset = dataset.select(dataset.client_id.alias('clientId'), 'subsession_start_date', 'submission_date_s3') update_engagement_csv(dataset, engagement_basename, new_engagement_basename) ## Upload the updated csv file to S3 # Update the day-specific file: new_s3_name = "{}/{}".format(s3path, new_engagement_basename) transfer.upload_file(new_engagement_basename, data_bucket, new_s3_name) # Update the "main" file transfer.upload_file(new_engagement_basename, data_bucket, engagement_key) # Update the dashboard file dash_bucket = "net-mozaws-prod-metrics-data" dash_s3_name = "firefox-dashboard/{}".format(engagement_basename) transfer.upload_file(new_engagement_basename, dash_bucket, dash_s3_name, extra_args={'ACL': 'bucket-owner-full-control'}) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: How many cores are we running on? Step2: And what do the underlying records look like? Step5: We want to incrementally update the data, re-computing any values that are missing or for which data is still arriving. Define that logic here. Step6: Fetch existing data from S3 Step7: Update data on S3
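The dashboard that consumes this CSV divides the two series; a hypothetical helper (names are illustrative, not part of the job above) makes the ratio explicit:
def engagement_ratio(dau_value, mau_value):
    # DAU/MAU "stickiness"; guard against an empty month
    return float(dau_value) / mau_value if mau_value else 0.0

engagement_ratio(250000, 1000000)  # 0.25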
3,952
<ASSISTANT_TASK:> Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. !pip install tensorflow_graphics import matplotlib.pyplot as plt import tensorflow as tf from tensorflow_graphics.math.interpolation import bspline from tensorflow_graphics.math.interpolation import slerp tf.compat.v1.enable_eager_execution() #@title Slerp - Vectors will be normalized first { vertical-output: true, run: "auto"} vector_1_x = -0.56 #@param { type: "slider", min: -1.0, max: 1.0, step: 0.01} vector_1_y = -0.39 #@param { type: "slider", min: -1.0, max: 1.0, step: 0.01} vector_2_x = 0.47 #@param { type: "slider", min: -1.0, max: 1.0, step: 0.01} vector_2_y = 0.74 #@param { type: "slider", min: -1.0, max: 1.0, step: 0.01} percent = 0.7 #@param { type: "slider", min: 0.0, max: 1.0, step: 0.01} vector_1 = tf.constant((vector_1_x, vector_1_y), dtype=tf.float32) vector_2 = tf.constant((vector_2_x, vector_2_y), dtype=tf.float32) vector_1 = tf.nn.l2_normalize(vector_1) vector_2 = tf.nn.l2_normalize(vector_2) vector_3 = slerp.interpolate( vector_1, vector_2, percent, method=slerp.InterpolationType.VECTOR) v1 = vector_1.numpy() v2 = vector_2.numpy() v3 = vector_3.numpy() plt.figure(figsize=(10, 10)) circle = plt.Circle((0, 0), 1.0, color='g', fill=False) ax = plt.gca() ax.add_artist(circle) plt.arrow( 0.0, 0.0, v1[0], v1[1], width=0.001, color='k', length_includes_head=True) plt.arrow( 0.0, 0.0, v2[0], v2[1], width=0.001, color='b', length_includes_head=True) plt.arrow( 0.0, 0.0, v3[0], v3[1], width=0.001, color='r', length_includes_head=True) plt.axis((-1.1, 1.1, -1.1, 1.1)) plt.show() #@title B-Spline Interpolation { vertical-output: true, run: "auto"} num_knots = 5 cyclical = True #@param { type: "boolean" } degree = 3 #@param { type: "slider", min: 1, max: 4, step: 1} knot_1_x = -2.5 #@param { type: "slider", min: -3.0, max: 3.0, step: 0.5} knot_1_y = -1 #@param { type: "slider", min: -3.0, max: 3.0, step: 0.5} knot_2_x = -1.5 #@param { type: "slider", min: -3.0, max: 3.0, step: 0.5} knot_2_y = 2 #@param { type: "slider", min: -3.0, max: 3.0, step: 0.5} knot_3_x = 0 #@param { type: "slider", min: -3.0, max: 3.0, step: 0.5} knot_3_y = -3 #@param { type: "slider", min: -3.0, max: 3.0, step: 0.5} knot_4_x = 1.5 #@param { type: "slider", min: -3.0, max: 3.0, step: 0.5} knot_4_y = 3 #@param { type: "slider", min: -3.0, max: 3.0, step: 0.5} knot_5_x = 3 #@param { type: "slider", min: -3.0, max: 3.0, step: 0.5} knot_5_y = 0 #@param { type: "slider", min: -3.0, max: 3.0, step: 0.5} max_pos = num_knots if cyclical else num_knots - degree knots = tf.constant(((knot_1_x, knot_2_x, knot_3_x, knot_4_x, knot_5_x), (knot_1_y, knot_2_y, knot_3_y, knot_4_y, knot_5_y))) positions = tf.expand_dims( tf.range(start=0.0, limit=max_pos, delta=0.01, dtype=knots.dtype), axis=-1) spline = bspline.interpolate(knots, positions, degree, cyclical) spline = tf.squeeze(spline, axis=1) plt.figure(figsize=(10, 10)) plt.plot(spline[:, 0], spline[:, 1], 'r') plt.plot(knots[0, :], knots[1, :], 'b.') plt.axis((-3.5, 3.5, -3.5, 3.5)) plt.show() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Interpolation Step2: Now that Tensorflow Graphics is installed, let's import everything needed to run the demo contained in this notebook. Step3: Slerp interpolation Step4: B-Spline interpolation
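The slerp call above implements the standard closed form; a self-contained NumPy reference (an illustrative re-derivation, not the TensorFlow Graphics source) is:
import numpy as np

def slerp_reference(v0, v1, t):
    # assumes v0 and v1 are unit vectors
    omega = np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return v0  # nearly parallel: slerp degenerates to the endpoint
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / so

slerp_reference(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5)  # ~(0.707, 0.707)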
3,953
<ASSISTANT_TASK:> Python Code: # Authors: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk> # Martin Luessi <mluessi@nmr.mgh.harvard.edu> # Alexandre Gramfort <alexandre.gramfort@inria.fr> # Nicolas P. Rougier (graph code borrowed from his matplotlib gallery) # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne.datasets import sample from mne.minimum_norm import (read_inverse_operator, make_inverse_resolution_matrix, get_point_spread) from mne.viz import circular_layout, plot_connectivity_circle print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-fixed-inv.fif' forward = mne.read_forward_solution(fname_fwd) # Convert forward solution to fixed source orientations mne.convert_forward_solution( forward, surf_ori=True, force_fixed=True, copy=False) inverse_operator = read_inverse_operator(fname_inv) # Compute resolution matrices for MNE rm_mne = make_inverse_resolution_matrix(forward, inverse_operator, method='MNE', lambda2=1. / 3.**2) src = inverse_operator['src'] del forward, inverse_operator # save memory labels = mne.read_labels_from_annot('sample', parc='aparc', subjects_dir=subjects_dir) n_labels = len(labels) label_colors = [label.color for label in labels] # First, we reorder the labels based on their location in the left hemi label_names = [label.name for label in labels] lh_labels = [name for name in label_names if name.endswith('lh')] # Get the y-location of the label label_ypos = list() for name in lh_labels: idx = label_names.index(name) ypos = np.mean(labels[idx].pos[:, 1]) label_ypos.append(ypos) # Reorder the labels based on their location lh_labels = [label for (yp, label) in sorted(zip(label_ypos, lh_labels))] # For the right hemi rh_labels = [label[:-2] + 'rh' for label in lh_labels] # Compute first PCA component across PSFs within labels. # Note the differences in explained variance, probably due to different # spatial extents of labels. n_comp = 5 stcs_psf_mne, pca_vars_mne = get_point_spread( rm_mne, src, labels, mode='pca', n_comp=n_comp, norm=None, return_pca_vars=True) n_verts = rm_mne.shape[0] del rm_mne with np.printoptions(precision=1): for [name, var] in zip(label_names, pca_vars_mne): print(f'{name}: {var.sum():.1f}% {var}') # get PSFs from Source Estimate objects into matrix psfs_mat = np.zeros([n_labels, n_verts]) # Leakage matrix for MNE, get first principal component per label for [i, s] in enumerate(stcs_psf_mne): psfs_mat[i, :] = s.data[:, 0] # Compute label-to-label leakage as Pearson correlation of PSFs # Sign of correlation is arbitrary, so take absolute values leakage_mne = np.abs(np.corrcoef(psfs_mat)) # Save the plot order and create a circular layout node_order = lh_labels[::-1] + rh_labels # mirror label order across hemis node_angles = circular_layout(label_names, node_order, start_pos=90, group_boundaries=[0, len(label_names) / 2]) # Plot the graph using node colors from the FreeSurfer parcellation. We only # show the 200 strongest connections. 
fig = plt.figure(num=None, figsize=(8, 8), facecolor='black') plot_connectivity_circle(leakage_mne, label_names, n_lines=200, node_angles=node_angles, node_colors=label_colors, title='MNE Leakage', fig=fig) # left and right lateral occipital idx = [22, 23] stc_lh = stcs_psf_mne[idx[0]] stc_rh = stcs_psf_mne[idx[1]] # Maximum for scaling across plots max_val = np.max([stc_lh.data, stc_rh.data]) brain_lh = stc_lh.plot(subjects_dir=subjects_dir, subject='sample', hemi='both', views='caudal', clim=dict(kind='value', pos_lims=(0, max_val / 2., max_val))) brain_lh.add_text(0.1, 0.9, label_names[idx[0]], 'title', font_size=16) brain_rh = stc_rh.plot(subjects_dir=subjects_dir, subject='sample', hemi='both', views='caudal', clim=dict(kind='value', pos_lims=(0, max_val / 2., max_val))) brain_rh.add_text(0.1, 0.9, label_names[idx[1]], 'title', font_size=16) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load forward solution and inverse operator Step2: Read and organise labels for cortical parcellation Step3: Compute point-spread function summaries (PCA) for all labels Step4: We can show the explained variances of principal components per label. Note Step5: The output shows the summed variance explained by the first five principal Step6: Most leakage occurs for neighbouring regions, but also for deeper regions Step7: Point-spread function for the lateral occipital label in the left hemisphere Step8: and in the right hemisphere.
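The point-spread functions summarised above are columns of a resolution matrix R = M G (inverse operator times gain matrix); a toy NumPy illustration of that structure (random matrices, not real MEG data) is:
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((10, 30))    # toy gain/leadfield: sensors x sources
M = np.linalg.pinv(G)                # toy minimum-norm-style inverse
R = M @ G                            # resolution matrix: sources x sources
psf_5 = R[:, 5]                      # PSF of source 5: the estimate a unit point source produces
leakage = np.abs(np.corrcoef(R.T))   # same construction as the label-level leakage above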
3,954
<ASSISTANT_TASK:> Python Code: import os # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # Google Cloud Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_GOOGLE_CLOUD_NOTEBOOK: USER_FLAG = "--user" %%writefile requirements.txt joblib~=1.0 numpy~=1.20 google-cloud-storage>=1.26.0,<2.0.0dev #new files for this example pystan==2.19.1.1 pandas==1.0.5 plotly==4.8.2 # Required in Docker serving container %pip install {USER_FLAG} --upgrade pip %pip install {USER_FLAG} -r requirements.txt # fbprophet requires a second install %pip install {USER_FLAG} fbprophet==0.6 # For local FastAPI development and running %pip install -U {USER_FLAG} "uvicorn[standard]>=0.12.0,<0.14.0" fastapi~=0.63 # Vertex SDK for Python %pip install -U --user google-cloud-aiplatform # Automatically restart kernel after installs import os if not os.getenv("IS_TESTING"): # Automatically restart kernel after installs import IPython app = IPython.Application.instance() app.kernel.do_shutdown(True) # Get your Google Cloud project ID from gcloud shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null try: PROJECT_ID = shell_output[0] except IndexError: PROJECT_ID = None print("Project ID:", PROJECT_ID) if PROJECT_ID == "" or PROJECT_ID is None: PROJECT_ID = "[your-project-id]" # @param {type:"string"} import os import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # If on Google Cloud Notebooks, then don't execute this code if not IS_GOOGLE_CLOUD_NOTEBOOK: if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS 'path/to/your/credentials.json' BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"} REGION = "[your-region]" # @param {type:"string"} ! gsutil mb -l $REGION $BUCKET_NAME ! gsutil ls -al $BUCKET_NAME %mkdir -p app MODEL_ARTIFACT_DIR = "custom-fb-container-prediction-model" # @param {type:"string"} REPOSITORY = "custom-fb-container-prediction" # @param {type:"string"} IMAGE = "custom-fb-fastapi-server" # @param {type:"string"} MODEL_NAME = "fb_custom_container" # @param {type:"string"} MODEL_DISPLAY_NAME = "FB Prophet Forecast" # @param {type:"string"} import datetime import os # BQ libraries import google.auth from fbprophet import Prophet from google.cloud import bigquery, bigquery_storage TODAY = datetime.date.today() credentials, your_project_id = google.auth.default( scopes=["https://www.googleapis.com/auth/cloud-platform"] ) bqclient = bigquery.Client( credentials=credentials, project=your_project_id, ) bqstorageclient = bigquery_storage.BigQueryReadClient(credentials=credentials) import joblib def train(category="1012100"): # Download query results. 
    # change to Data, Adj_Close
    query_string = """
SELECT date as Date, upper(category_name) as category_name, category, sum(sale_dollars) as sales
FROM `bigquery-public-data.iowa_liquor_sales.sales`
where category = '{}'
and date >= '2020-01-01'
group by Date, category_name, category
""".format(category)
    data = (
        bqclient.query(query_string)
        .result()
        .to_dataframe(bqstorage_client=bqstorageclient)
    )
    data.head()
    data.plot(title=f"{category} Daily Sales", x="Date", y="sales")
    # Copy results into a dataframe with two columns ds=Date, y=value
    df_forecast = data.copy()
    df_forecast.reset_index(inplace=True)
    df_forecast["ds"] = df_forecast["Date"]
    df_forecast["y"] = df_forecast["sales"]  # Add underscore as BQ changes this field name
    df_forecast = df_forecast[["ds", "y"]]
    df_forecast
    # Train the Prophet model
    model = Prophet()
    model.fit(df_forecast)
    # Save the model locally
    joblib.dump(model, "{}.sav".format(category))

def train_w_regressor(category="1012100"):
    # Download query results.
    # change to Data, Adj_Close
    query_string = """
SELECT date as Date, upper(category_name) as category_name, category, sum(sale_dollars) as sales
FROM `bigquery-public-data.iowa_liquor_sales.sales`
where category = '{}'
and date >= '2020-01-01'
group by Date, category_name, category
""".format(category)
    query_string_regressor = """
WITH history as (
SELECT wx.date as ds, AVG(wx.value/10) AS temp
FROM `bigquery-public-data.ghcn_d.ghcnd_stations` AS stn
JOIN `bigquery-public-data.ghcn_d.ghcnd_2021` AS wx
ON wx.id = stn.id
WHERE stn.state = 'IA'
AND wx.element = 'TMIN'
AND wx.qflag IS NULL
GROUP by wx.date
union all
SELECT wx.date as ds, AVG(wx.value/10) AS temp
FROM `bigquery-public-data.ghcn_d.ghcnd_stations` AS stn
JOIN `bigquery-public-data.ghcn_d.ghcnd_2020` AS wx
ON wx.id = stn.id
WHERE stn.state = 'IA'
AND wx.element = 'TMIN'
AND wx.qflag IS NULL
GROUP by wx.date
)
,next_100_days as (
SELECT DATE_ADD(max(stn.date) over (order by 1), INTERVAL row_number() over (order by stn.id) DAY) as ds, null as temp
FROM `bigquery-public-data.ghcn_d.ghcnd_2021` AS stn
LIMIT 100
)
,combined_data as (
select * from history
union all
select * from next_100_days
)
select c.ds
,case when temp is null then AVG(c.temp) OVER (ORDER BY c.ds ASC ROWS 100 PRECEDING) else temp end as temp
from combined_data c;
"""
    data = (
        bqclient.query(query_string)
        .result()
        .to_dataframe(bqstorage_client=bqstorageclient)
    )
    data.head()
    data.plot(title=f"{category} Daily Sales", x="Date", y="sales")
    # Copy results into a dataframe with two columns ds=Date, y=value
    df_forecast = data.copy()
    df_forecast.reset_index(inplace=True)
    df_forecast["ds"] = df_forecast["Date"]
    df_forecast["y"] = df_forecast["sales"]  # Add underscore as BQ changes this field name
    df_forecast = df_forecast[["ds", "y"]]
    df_forecast
    # Return the weather data from BQ
    data_regressor = (
        bqclient.query(query_string_regressor)
        .result()
        .to_dataframe(bqstorage_client=bqstorageclient)
    )
    data_regressor.head()
    df_regressor = data_regressor.copy()
    # Deal with any NaN values using fillna
    df_regressor_out = df_regressor.fillna(method="pad")
    df_regressor_out.to_csv(f"{category}_weather.csv", index=False)
    # Add a temp column to the df_forecast dataframe, and populate it from the df_regressor dataframe
    df_forecast["temp"] = df_forecast.ds.map(
        df_regressor.set_index("ds")["temp"].to_dict()
    )
    df_forecast.head()
    df_forecast.fillna(method="pad")  # Address NaN values of temp
    # Train the Prophet model with the regressor
    model = Prophet()
    model.add_regressor("temp")
    model.fit(df_forecast)
    # Save the model locally, with _weather appended to the file name
    joblib.dump(model, f"{category}_weather.sav")
%cd app/
# Choose a few product categories to build forecasts for from the Iowa Liquor sales data in BQ: bigquery-public-data:iowa_liquor_sales.sales
# 1012100 CANADIAN WHISKIES
# 1031100 AMERICAN VODKAS
train("1012100")
train("1031100")
%cd ..
%cd app/
# Choose a few product categories to build forecasts for from the Iowa Liquor sales data in BQ: bigquery-public-data:iowa_liquor_sales.sales
# 1012100 CANADIAN WHISKIES
# 1031100 AMERICAN VODKAS
train_w_regressor("1012100")
train_w_regressor("1031100")
%cd ..
%cd app
!gsutil cp *.sav *.csv {BUCKET_NAME}/{MODEL_ARTIFACT_DIR}/
%cd ..
os.environ["AIP_STORAGE_URI"] = f"{BUCKET_NAME}/{MODEL_ARTIFACT_DIR}"
os.environ["AIP_HTTP_PORT"] = "8080"
os.environ["AIP_HEALTH_ROUTE"] = "/health"
os.environ["AIP_PREDICT_ROUTE"] = "/predict"
print(os.environ["AIP_STORAGE_URI"])
print(os.environ["AIP_HTTP_PORT"])
print(os.environ["AIP_HEALTH_ROUTE"])
print(os.environ["AIP_PREDICT_ROUTE"])
os.getcwd()
%cd app
import datetime
import pandas as pd
from google.cloud import storage
gcs_client = storage.Client()
# Set a sample category and number of days to test with
category = "1012100"
days = 7
# Test the loading of the model from GCS
fname = f"{category}.sav"
with open(f"{fname}", "wb") as model_g:
    gcs_client.download_blob_to_file(
        f"{os.environ['AIP_STORAGE_URI']}/{fname}", model_g
    )
model = joblib.load(f"{category}.sav")
# Create a dataframe that ranges from 2020 to Today + the number of days set above
TODAY = datetime.date.today()
future = TODAY + datetime.timedelta(days=days)
dates = pd.date_range(
    start="2020-01-01",
    end=future.strftime("%m/%d/%Y"),
)
df = pd.DataFrame({"ds": dates})
# Run a prediction for these dates and save it to a forecast dataframe
forecast = model.predict(df)
model.plot(forecast).savefig(f"{category}_plot.png")
model.plot_components(forecast).savefig(f"{category}_plot_components.png")
forecast.tail(days).to_dict("records")
%cd ..
%cd app
gcs_client = storage.Client()
# Set a sample category and number of days to test with
category = "1012100_weather"
days = 7
# Test the loading of the model from GCS
fname = f"{category}.sav"
with open(f"{fname}", "wb") as model_g:
    gcs_client.download_blob_to_file(
        f"{os.environ['AIP_STORAGE_URI']}/{fname}", model_g
    )
model = joblib.load(f"{category}.sav")
# Test the loading of the regressor from GCS
fname_csv = f"{category}.csv"
with open(f"{fname_csv}", "wb") as model_csv:
    gcs_client.download_blob_to_file(
        f"{os.environ['AIP_STORAGE_URI']}/{fname_csv}", model_csv
    )
df_regressor = pd.read_csv(fname_csv)
# Return the data from the regressor that ranges from 2020 to Today + the number of days set above
TODAY = datetime.date.today()
future = TODAY + datetime.timedelta(days=days)
start = "2020-01-01"
after_start_date = df_regressor["ds"] >= start
before_end_date = df_regressor["ds"] <= future.strftime("%m/%d/%Y")
between_two_dates = after_start_date & before_end_date
df_final = df_regressor.loc[between_two_dates]
df_final.tail()
# Run a prediction for these dates and the regressor
forecast = model.predict(df_final)
model.plot(forecast).savefig(f"{category}_plot.png")
model.plot_components(forecast).savefig(f"{category}_plot_components.png")
%cd ..
%%writefile app/main.py
from fastapi import FastAPI, HTTPException, Request
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse
import joblib
import json
import numpy as np
import os
import datetime
import pandas as pd
import errno
from google.cloud import storage

app = FastAPI()
gcs_client = storage.Client()

@app.exception_handler(Exception)
async def validation_exception_handler(request, err):
    base_error_message = f"Failed to execute: {request.method}: {request.url}"
    # Change here to LOGGER
    return JSONResponse(status_code=400, content={"message": f"{base_error_message}. Detail: {err}"})

def download_model(category="1012100", regressor=""):
    # defaults to "1012100"
    if not category:
        raise HTTPException(status_code=400, detail=f"category not found. category={category}")
    fname = f"{category}{regressor}.sav"
    with open(f"{fname}", 'wb') as model_g:
        gcs_client.download_blob_to_file(
            f"{os.environ['AIP_STORAGE_URI']}/{fname}", model_g
        )
    model = joblib.load(fname)
    return model

def predict_in(model, days=7, regressor="", category="1012100"):
    TODAY = datetime.date.today()
    future = TODAY + datetime.timedelta(days=days)
    dates = pd.date_range(start="2020-01-01", end=future.strftime("%m/%d/%Y"))
    df = pd.DataFrame({"ds": dates})
    if regressor:
        fname_csv = f"{category}{regressor}.csv"
        with open(f"{fname_csv}", 'wb') as model_csv:
            gcs_client.download_blob_to_file(
                f"{os.environ['AIP_STORAGE_URI']}/{fname_csv}", model_csv
            )
        df_regressor = pd.read_csv(fname_csv)
        start = "2020-01-01"
        after_start_date = df_regressor["ds"] >= start
        before_end_date = df_regressor["ds"] <= future.strftime("%m/%d/%Y")
        between_two_dates = after_start_date & before_end_date
        df_final = df_regressor.loc[between_two_dates]
        df_final.tail()
        forecast = model.predict(df_final)
    else:
        forecast = model.predict(df)
    model.plot(forecast).savefig("model_plot.png")
    model.plot_components(forecast).savefig("model_plot_components.png")
    return forecast.tail(days).to_dict("records")

def convert(prediction_list):
    output = {}
    for data in prediction_list:
        date = data["ds"].strftime("%m/%d/%Y")
        output[date] = data["trend"]
    return output

@app.get(os.environ['AIP_HEALTH_ROUTE'], status_code=200)
def health():
    return {}

@app.post(os.environ['AIP_PREDICT_ROUTE'])
async def predict(request: Request):
    body = await request.json()
    try:
        if type(body) is list:
            body = body[0]
    except Exception:
        pass  # do nothing
    print(body)
    instances = body["instances"]
    try:
        if type(instances) is list:
            instances = instances[0]
    except Exception:
        pass  # do nothing
    print(instances)
    category = instances["category"]
    days = instances["days"]
    regressor = instances["regressor"]
    try:
        if type(category) is list:
            category = category[0]
    except Exception:
        pass  # do nothing
    print(category)
    try:
        if type(days) is list:
            days = days[0]
    except Exception:
        pass  # do nothing
    print(days)
    try:
        if type(regressor) is list:
            regressor = regressor[0]
    except Exception:
        pass  # do nothing
    print(regressor)
    model_download = download_model(category, regressor)
    prediction_list = predict_in(model_download, days, regressor, category)
    if not prediction_list:
        raise HTTPException(status_code=400, detail="Model not found.")
    prediction_output = convert(prediction_list)
    final_output = [(k, v) for k, v in prediction_output.items()]
    return {"predictions": final_output}
%%writefile app/prestart.sh
export PORT=$AIP_HTTP_PORT
%%writefile instances.json
{"instances": [{"category":["1012100"], "days": [30], "regressor": ["_weather"]}]}
%%writefile Dockerfile
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY ./app /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt RUN pip install fbprophet==0.6 !gcloud auth configure-docker {REGION}-docker.pkg.dev --quiet !sudo docker build \ --tag="{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}" \ . !sudo docker stop local-ts !sudo docker rm local-ts !sudo docker run -d -p 80:8080 \ --name=local-ts \ -e AIP_HTTP_PORT=8080 \ -e AIP_HEALTH_ROUTE=/health \ -e AIP_PREDICT_ROUTE=/predict \ -e AIP_STORAGE_URI={BUCKET_NAME}/{MODEL_ARTIFACT_DIR} \ "{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}" !curl localhost/health !curl -X POST \ -d @instances.json \ -H "Content-Type: application/json; charset=utf-8" \ localhost/predict %%writefile instances.json {"instances": [{"category":["1012100"], "days": [30], "regressor": [""]}]} !curl -X POST \ -d @instances.json \ -H "Content-Type: application/json; charset=utf-8" \ localhost/predict !sudo docker stop local-ts !gcloud services enable artifactregistry.googleapis.com !gcloud beta artifacts repositories create {REPOSITORY} \ --repository-format=docker \ --location=$REGION !docker push {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} from google.cloud import aiplatform aiplatform.init(project=PROJECT_ID, location=REGION) model = aiplatform.Model.upload( display_name=MODEL_DISPLAY_NAME, artifact_uri=f"{BUCKET_NAME}/{MODEL_ARTIFACT_DIR}", serving_container_image_uri=f"{REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE}", ) endpoint = model.deploy(machine_type="n1-standard-4") endpoint.predict(instances=[{"category": ["1012100"], "days": [30], "regressor": [""]}]) endpoint.predict( instances=[{"category": ["1012100"], "days": [30], "regressor": ["_weather"]}] ) ENDPOINT_ID = endpoint.name ! curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ -d @instances.json \ https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}:predict !gcloud beta ai endpoints predict $ENDPOINT_ID \ --region=$REGION \ --json-request=instances.json # Undeploy model and delete endpoint endpoint.delete(force=True) # Delete the model resource model.delete() # Delete the container image from Artifact Registry !gcloud artifacts docker images delete \ --quiet \ --delete-tags \ {REGION}-docker.pkg.dev/{PROJECT_ID}/{REPOSITORY}/{IMAGE} <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Restart the kernel Step2: Before you begin Step3: Otherwise, set your project ID here. Step4: Authenticate your Google Cloud account Step5: Create a Cloud Storage bucket Step6: Only if your bucket doesn't already exist Step7: Finally, validate access to your Cloud Storage bucket by examining its contents Step8: Create a directory for all your model artifacts Step9: Configure model, repository, and artifact names Step10: MODEL_ARTIFACT_DIR - Folder directory path to your model artifacts within a Cloud Storage bucket, for example Step12: Train and store the model locally Step15: Train additional models that use the weather as a regressor Step16: Train a model for two sample categories (Canadian Whiskies and American Vodkas) Step17: Train a model for the same sample categories (Canadian Whiskies and American Vodkas) Step18: Upload model artifacts and custom code to Cloud Storage Step19: Local model testing Step20: Test model with regressor Step21: Build a FastAPI server Step22: Add pre-start script Step23: Store test instances to use later Step24: Build and push container to Artifact Registry Step25: Build the image and tag the Artifact Registry path that you will push to. Step26: Run and test the container locally (optional) Step27: Test without regressor Step28: Push the container to artifact registry Step29: Run this once to create the artifact repository Step30: Deploy to Vertex AI Step31: Deploy the model endpoint Step32: Send predictions Step33: Using REST Step34: Using gcloud CLI Step35: Cleaning up
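Since the FastAPI server above unwraps nested lists in the request body, a small hypothetical helper (not part of the tutorial) makes the expected payload shape explicit:
def make_instances(category, days, regressor=""):
    # matches the body parsed by the /predict route above
    return {"instances": [{"category": [category], "days": [days], "regressor": [regressor]}]}

make_instances("1031100", 14, "_weather")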
3,955
<ASSISTANT_TASK:>
Python Code:
def sum_if_3(k, m):
    if (m % 3 == 0) or (str(m).endswith('3')):
        return k + m
    else:
        return k
sum_if_3(5, 18)
sum_if_3(4, 7)
if 'bc' in 'abcde':
    print("'bc' is a substring of 'abcde'.")
def sum_if_3s(k, m):
    if (m % 3 == 0) or ('3' in str(m)):
        return k + m
    else:
        return k
sum_if_3s(2, 31)
sum_if_3s(3, 15)
sum_if_3s(13, 28)
num1 = 5
num2 = 10
if num1 < num2:
    print("num1 is less than num2.")
else:
    if num1 == num2:
        print("num1 is equal to num2.")
    else:
        print("num1 is greater than num2.")
num1 = 5
num2 = 10
if num1 < num2:
    print("num1 is less than num2.")
elif num1 == num2:
    print("num1 is equal to num2.")
else:
    print("num1 is greater than num2.")
number = 43
divisor = 7
answer = 0
# While loop
while number > 0:
    number = number - divisor
    # If the result is not negative, increase the subtraction count by one.
    if number > 0:
        answer += 1
# Now just print answer.
print('The quotient is', answer)
def gcd(a, b):
    if a < b:
        # In this case, swap the values of a and b.
        a, b = b, a
    while b != 0:
        a, b = b, a % b
    return a
gcd(6, 8)
gcd(14, 21)
for char in "adam":
    print(char)
a_word = 'aardvarks'
new_word = ''
for char in a_word:
    if char == 'a':
        new_word = new_word + 'A'
    else:
        new_word = new_word + char
print(new_word)
a_word = ' n o r t h w e s t e r n'
temp_word = ''
for char in a_word:
    if char != ' ':
        temp_word = temp_word + char
new_word = temp_word.title()
print(new_word)
a_range = range(10)
print(a_range)
type(a_range)
a_range_1 = range(3, 10)
a_range_1
a_range_2 = range(3, 10, 2)
a_range_2
for i in range(6):
    print(i, "squared is", i ** 2)
for i in range(0, 6, 2):
    print(i, "squared is", i ** 2)
for i in range(5):
    print("Printing this five times.")
a_word = 'hamster'
for i in range(7):
    print(a_word[i])
a_word = 'hamster'
for i in a_word:
    print(i)
for i in range(8):
    # Note: 'hamster' has only 7 characters, so this raises an IndexError.
    print(a_word[i])
def sum_of_3s(n):
    total = 0
    for i in range(1, n + 1):
        if i % 3 == 0:
            total = total + i
        elif '3' in str(i):
            total = total + i
    return total
sum_of_3s(10)
sum_of_3s(15)
def gcd(a, b):
    if a < b:
        a, b = b, a
    while b != 0:
        a, b = b, a % b
    return a
gcd(10, 25)
gcd(124, 36)
def lcm(a, b):
    g = gcd(a, b)
    c = a // g  # integer division keeps the result an integer
    return c * b
lcm(10, 25)
lcm(124, 36)
song = "When you are smiling, the whole world smiles with you"
count_a = 0
for char in song:
    if char == 'a':
        count_a += 1
print(count_a)
count_w = 0
for char in song.lower():
    if char == 'w':
        count_w += 1
print(count_w)
new_song = ''
for char in song.lower():
    if char not in 'aeiou, ':
        new_song += char
print(new_song)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Example
Step3: Sample answer
Step4: Nested conditionals and generalized conditionals
Step5: Example using multiple conditionals (chained if statements)
Step6: Note
Step7: When writing a 'while' loop, the most important thing is to write the code so that the condition eventually becomes false and the loop stops running.
Step8: Note
Step9: for loops
Step10: Exercise
Step11: Exercise
Step12: The range() function and the for statement
Step13: The return value of the range() function is a list-like sequence (a dedicated range object in Python 3).
Step14: The range() function accepts up to three arguments; their roles are the same as those of the three arguments used in slicing.
Step15: The range function is useful in for statements.
Step16: The range function can also be used simply as a counter.
Step17: Combining the range() function with string indexing does the same thing as iterating over a string directly with a for loop.
Step18: Note
Step19: Now we can solve the problem below.
Step20: Exercise
Step21: Exercise
Step22: Exercise
Step23: (1) Write code that counts how many times 'a' appears in the string above.
Step24: (2) Write code that counts how many times 'w' appears in the string above, ignoring case.
Step25: (3) Using the following string, write code that produces whnyrsmlngthwhlwrldsmlswthyu.
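As a compact alternative to the loop-based sample answer above (an optional rewrite, not part of the lecture), the same sum can be expressed with a generator expression:
def sum_of_3s_alt(n):
    # sum over 1..n of numbers divisible by 3 or containing the digit 3
    return sum(i for i in range(1, n + 1) if i % 3 == 0 or '3' in str(i))

sum_of_3s_alt(10)  # 18, same as sum_of_3s(10)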
3,956
<ASSISTANT_TASK:> Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np from qutip import * compl_circ = np.array([[(x + 1j*y) if x**2 + y**2 <= 1 else 0j for x in np.arange(-1,1,0.005)] for y in np.arange(-1,1,0.005)]) fig = plt.figure(figsize=(6, 3)) for i, theme in enumerate(['light', 'dark']): ax = plt.subplot(1, 2, i + 1) ax.set_xlabel('x', fontsize=14) ax.set_ylabel('y', fontsize=14) ax.imshow(complex_array_to_rgb(compl_circ, rmax=1, theme=theme), extent=(-1,1,-1,1)) plt.tight_layout() singlet = (ket('01') - ket('10')).unit() separable = (ket('01') - ket('00')).unit() plot_schmidt(singlet, figsize=(2,2)); plot_schmidt(separable, figsize=(2,2)); plot_schmidt(1j * tensor([singlet, singlet]), figsize=(2,2)); plot_schmidt(1j * tensor([singlet, singlet]).permute([1,2,3,0]), figsize=(2,2)); plot_schmidt(1j * tensor([singlet, singlet]), splitting=1, labels_iteration=(1,3), figsize=(4,2)); fig = plt.figure(figsize=(8, 4)) for i in [1, 2]: ax = plt.subplot(1, 2, i) plot_qubism(0 * ket('0000'), legend_iteration=i, grid_iteration=i, fig=fig, ax=ax) state = ket('0010') + 0.5 * ket('1111') + 0.5j * ket('0101') - 1j * ket('1101') \ - 0.2 * ket('0110') plot_qubism(state, figsize=(4,4)); plot_qubism(state, legend_iteration=2, figsize=(4,4)); plot_qubism(state, legend_iteration=2, theme='dark', figsize=(4,4)); fig = plt.figure(figsize=(15, 3)) for k in range(1,6): ax = plt.subplot(1, 5, k) plot_qubism(tensor([singlet]*k), fig=fig, ax=ax) def spinchainize(op, n, bc='periodic'): if isinstance(op, list): return sum([spinchainize(each, n, bc=bc) for each in op]) k = len(op.dims[0]) d = op.dims[0][0] expanded = tensor([op] + [qeye(d)]*(n - k)) if bc == 'periodic': shifts = n elif bc == 'open': shifts = n - k + 1 shifteds = [expanded.permute([(i + j) % n for i in range(n)]) for j in range(shifts)] return sum(shifteds) def gs_of(ham): gval, gstate = ham.groundstate() return gstate heis = sum([tensor([pauli]*2) for pauli in [sigmax(), sigmay(), sigmaz()]]) heis2 = sum([tensor([pauli, qeye(2), pauli]) for pauli in [sigmax(), sigmay(), sigmaz()]]) N = 10 Js = [0., 0.5, 1.] fig = plt.figure(figsize=(2*len(Js), 4.4)) for b in [0, 1]: for k, J in enumerate(Js): ax = plt.subplot(2, len(Js), b*len(Js) + k + 1) if b == 0: spinchain = spinchainize([heis, J*heis2], N, bc='periodic') elif b ==1: spinchain = spinchainize([heis, J*heis2], N, bc='open') plot_qubism(gs_of(spinchain), ax=ax) if k == 0: if b == 0: ax.set_ylabel("periodic BC", fontsize=16) else: ax.set_ylabel("open BC", fontsize=16) if b == 1: ax.set_xlabel("$J={0:.1f}$".format(J), fontsize=16) plt.tight_layout() ss = sum([tensor([jmat(1, s)]*2) for s in ['x', 'y', 'z']]) H = spinchainize([ss, (1./3.) 
* ss**2], n=6, bc='periodic') plot_qubism(gs_of(H), figsize=(4,4)); fig = plt.figure(figsize=(10, 5)) for i in [1, 2]: ax = plt.subplot(1, 2, i) plot_qubism(0 * ket('0000', dim=3), legend_iteration=i, grid_iteration=i, fig=fig, ax=ax) fig = plt.figure(figsize=(8, 4)) for i in [1, 2]: ax = plt.subplot(1, 2, i) plot_qubism(0 * ket('0000'), how='pairs_skewed', legend_iteration=i, grid_iteration=i, fig=fig, ax=ax) fig = plt.figure(figsize=(8, 4)) for i in [1, 2]: ax = plt.subplot(1, 2, i) plot_qubism(0 * ket('0000'), how='before_after', legend_iteration=i, grid_iteration=i, fig=fig, ax=ax) heis = sum([tensor([pauli]*2) for pauli in [sigmax(), sigmay(), sigmaz()]]) N = 10 gs = gs_of(spinchainize(heis, N, bc='periodic')) fig = plt.figure(figsize=(12, 4)) for i, how in enumerate(['schmidt_plot', 'pairs', 'pairs_skewed', 'before_after']): ax = plt.subplot(1, 4, i + 1) if how == 'schmidt_plot': plot_schmidt(gs, fig=fig, ax=ax) else: plot_qubism(gs, how=how, fig=fig, ax=ax) ax.set_title(how) plt.tight_layout() product_1 = ket('0000') product_2 = tensor([(ket('0') + ket('1')).unit()]*4) w = (ket('0001') + ket('0010') + ket('0100') + ket('1000')).unit() dicke_2of4 = (ket('0011') + ket('0101') + ket('0110') + ket('1001') + ket('1010') + ket('1100')).unit() ghz = (ket('0000') + ket('1111')).unit() states = ['product_1', 'product_2', 'w', 'dicke_2of4', 'ghz'] fig = plt.figure(figsize=(2 * len(states), 2)) for i, state_str in enumerate(states): ax = plt.subplot(1, len(states), i + 1) plot_qubism(eval(state_str), fig=fig, ax=ax) ax.set_title(state_str) plt.tight_layout() def product_state(theta, phi=0, n=1): single = Qobj([[np.cos(theta/2.)], [np.sin(theta/2.) * np.exp(1j*phi)]]) return tensor([single]*n) thetas = 0.5 * np.pi * np.array([0., 0.5, 0.75, 1.]) phis = np.pi * np.array([0., 0.1, 0.2, 0.3]) fig, axes2d = plt.subplots(nrows=len(phis), ncols=len(thetas), figsize=(6,6)) for i, row in enumerate(axes2d): for j, cell in enumerate(row): plot_qubism(product_state(thetas[j], phi=phis[i], n=8), grid_iteration=1, ax=cell) if i == len(axes2d) - 1: cell.set_xlabel("$\\theta={0:s}\pi$".format(["0", "(1/4)", "(3/8)", "(1/2)"][j]), fontsize=16) if j == 0: cell.set_ylabel("$\\varphi={0:.1f}\pi$".format(phis[i] / np.pi), fontsize=16) plt.tight_layout() from qutip.ipynbtools import version_table version_table() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Colors Step2: Schmidt plot Step3: As we see, for separable state the plot is a product of x and y coordinates, while for the singlet state - is is not. Step4: As we see, we have a product, as the state is a product state with the respect to the splitting of first 2 vs last 2 particles. Step5: So we see that it is entangled. Step6: Qubism plot Step7: That is, all amplitudes for states starting with Step8: Or if we want to make sure how did we map amplitudes to particular regions in the plot Step9: Or how about making it dark? (E.g. to fit out slides with black background). Step10: The most important property of Qubism is the recursive structure. So that we can add more particles seamlessly. Step11: OK, but once we can type the wavefunction by hand, plots offer little added value. Step12: For example, let us consider Hamiltonian for $N$ particles, of the following form (a generalization of the Majumdar-Ghosh model) Step13: We are not restricted to qubits. We can have it for other dimensions, e.g. qutrits. Step14: Qubism for qutrits works similarly as for qubits Step15: Just in this case we interpret Step16: The one above emphasis ferromagnetic (put on the left) vs antiferromagnetic (put on the right) states. Step17: It is very similar to the Schmidt plot (for the default splitting), with the only difference being ordering of the y axis (particle order is reversed). All entanglement properties are the same. Step18: Seeing entanglement Step19: Then entanglement (or exactly Step20: In each plot squares are the same, up to a factor (which is visualized as intensity and hue).
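The recursive layout described above can be sketched directly; the mapping below illustrates the idea only — the exact corner convention is an assumption, not QuTiP's implementation:
def qubism_cell(bits):
    # consume the bit string two qubits at a time, halving the cell each level
    x = y = 0.0
    size = 1.0
    for row_bit, col_bit in zip(bits[0::2], bits[1::2]):
        size /= 2.0
        y += size * int(row_bit)
        x += size * int(col_bit)
    return x, y, size  # assumed lower-left-style offsets and final cell size

qubism_cell('0010')  # the cell holding the '0010' amplitude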
3,957
<ASSISTANT_TASK:> Python Code: # for fast array manipulation import numpy as np # for plotting import matplotlib.pyplot as plt # for numerical ODE integration from scipy.integrate import odeint # for nonlinear equations from scipy.optimize import fsolve # to display plots in-line %matplotlib inline def sigmoid(x, a, thr): return 1 / (1 + np.exp(-a * (x - thr))) # couplings c1 = 16 c2 = 12 c3 = 15 c4 = 3 # refractory periods rE = 1 rI = 1 # external inputs P = 1. Q = 1 # nonlinear functions def Se(x): aE = 1.3 thrE = 4 return sigmoid(x, thrE, aE) - sigmoid(0, thrE, aE) def Si(x): aI = 2 thrI = 3.7 return sigmoid(x, thrI, aI) - sigmoid(0, thrI, aI) # this function returns the right hand side of the Wilson-Cowan equation # (both, in a 2-vector) def WilsonCowan(y, t): E = y[0] I = y[1] y1 = -E + (1 - rE * E) * Se(c1 * E - c2 * I + P) y2 = -I + (1 - rI * I) * Si(c3 * E - c4 * I + Q) return [y1, y2] # minimum and maximum E and I values we want displayed in the graph minval = -.1 maxval = .6 resolution = 50 # State variables x1 = np.linspace(minval, maxval, resolution) x2 = np.linspace(minval, maxval, resolution) # Create a grid for evaluation of the vector field x1, x2 = np.meshgrid(x1, x2) # Evaluate the slopes X1, X2 = WilsonCowan([x1, x2], 0) # Compute the magnitude vector M = np.hypot(X1, X2) # Normalize the slopes vectors (for the field plot) #X1, X2 = X1/M, X2/M fixed_p = [] y1 = x1.ravel() y2 = x2.ravel() for i in range(resolution**2): # find a zero sol, infodict, ier, mesg = fsolve(WilsonCowan, [y1[i], y2[i]], args=(0), full_output=1) if ier == 1: # I exclude the cases where fsolve didn't converge fixed_p.append(sol) fixed_p = np.array(fixed_p).T # simulation duration and step size time = np.linspace(0, 100, 2000) # starting point, hopefully inside the basin of attraction of our attractor E0, I0 = 0.39, 0.49 # try changing this # find the solution with scint.odeint odesol = odeint(WilsonCowan, [E0, I0], time) # separate the two solutions exc_timeseries, inh_timeseries = odesol.T # plotting the vector field in the state space (E, I) plt.figure(figsize=(10, 10)) plt.quiver(x2, x1, X2, X1, pivot='mid', alpha=.5) plt.xlim([minval, maxval]) plt.ylim([minval, maxval]) plt.xlabel(r'$I$', fontsize=16) # yes, you can use Latex code! plt.ylabel(r'$E$', fontsize=16) plt.grid() # plot the solution in the state space plt.plot(inh_timeseries, exc_timeseries, '.-'); # plot the starting point plt.scatter(I0, E0, marker='*', s=300, label="Starting point") plt.legend(loc="upper left") # plot the fixed points we identified plt.scatter(fixed_p[1], fixed_p[0], marker='o', s=50, label="Stationary points") # plot the solution in time plt.figure(figsize=(10.3,3)) plt.ylabel(r'$E, I$') plt.xlabel(r'$t$') plt.plot(time, exc_timeseries, '.-', label="excitatory"); plt.plot(time, inh_timeseries, '.-', label="inhibitory"); plt.legend(); <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Parameter definitions Step2: For a stable limit cycle and a stable fixed point Step3: More preparation Step4: Solving and plotting Step5: Numerical ODE integration Step6: Plotting
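The stationary points found with fsolve above can also be classified by linearising the system. The following is a sketch only (jacobian and is_stable are hypothetical helper names), reusing the WilsonCowan right-hand side defined in the code above: a fixed point is linearly stable when every Jacobian eigenvalue has negative real part.

import numpy as np

def jacobian(f, y, t=0.0, eps=1e-6):
    # Finite-difference Jacobian of the ODE right-hand side at point y.
    y = np.asarray(y, dtype=float)
    f0 = np.asarray(f(y, t))
    J = np.zeros((y.size, y.size))
    for i in range(y.size):
        dy = y.copy()
        dy[i] += eps
        J[:, i] = (np.asarray(f(dy, t)) - f0) / eps
    return J

def is_stable(f, y_fixed):
    # Stable iff all eigenvalues of the Jacobian lie in the left half-plane.
    return bool(np.all(np.linalg.eigvals(jacobian(f, y_fixed)).real < 0))

# e.g. for fp in fixed_p.T: print(fp, is_stable(WilsonCowan, fp))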
3,958
<ASSISTANT_TASK:> Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt import datetime from qutip import Qobj, identity, sigmax, sigmaz, tensor import random import qutip.logging_utils as logging logger = logging.get_logger() #Set this to None or logging.WARN for 'quiet' execution log_level = logging.INFO #QuTiP control modules import qutip.control.pulseoptim as cpo example_name = '2qubitInteract' random.seed(20) alpha = [random.random(),random.random()] beta = [random.random(),random.random()] Sx = sigmax() Sz = sigmaz() H_d = (alpha[0]*tensor(Sx,identity(2)) + alpha[1]*tensor(identity(2),Sx) + beta[0]*tensor(Sz,identity(2)) + beta[1]*tensor(identity(2),Sz)) H_c = [tensor(Sz,Sz)] # Number of ctrls n_ctrls = len(H_c) q1_0 = q2_0 = Qobj([[1], [0]]) q1_targ = q2_targ = Qobj([[0], [1]]) psi_0 = tensor(q1_0, q2_0) psi_targ = tensor(q1_targ, q2_targ) # Number of time slots n_ts = 100 # Time allowed for the evolution evo_time = 18 # Fidelity error target fid_err_targ = 1e-3 # Maximum iterations for the optimisation algorithm max_iter = 500 # Maximum (elapsed) time allowed in seconds max_wall_time = 120 # pulse type alternatives: RND|ZERO|LIN|SINE|SQUARE|SAW|TRIANGLE| p_type = 'DEF' #Set to None to suppress output files f_ext = "{}_n_ts{}_ptype{}.txt".format(example_name, n_ts, p_type) result = cpo.opt_pulse_crab_unitary(H_d, H_c, psi_0, psi_targ, n_ts, evo_time, fid_err_targ=fid_err_targ, max_iter=max_iter, max_wall_time=max_wall_time, init_coeff_scaling=5.0, num_coeffs=5, method_params={'xtol':1e-3}, guess_pulse_type=None, guess_pulse_action='modulate', out_file_ext=f_ext, log_level=log_level, gen_stats=True) result.stats.report() print("Final evolution\n{}\n".format(result.evo_full_final)) print("********* Summary *****************") print("Final fidelity error {}".format(result.fid_err)) print("Final gradient normal {}".format(result.grad_norm_final)) print("Terminated due to {}".format(result.termination_reason)) print("Number of iterations {}".format(result.num_iter)) print("Completed in {} HH:MM:SS.US".format( datetime.timedelta(seconds=result.wall_time))) fig1 = plt.figure() ax1 = fig1.add_subplot(2, 1, 1) ax1.set_title("Initial Control amps") ax1.set_ylabel("Control amplitude") ax1.step(result.time, np.hstack((result.initial_amps[:, 0], result.initial_amps[-1, 0])), where='post') ax2 = fig1.add_subplot(2, 1, 2) ax2.set_title("Optimised Control Amplitudes") ax2.set_xlabel("Time") ax2.set_ylabel("Control amplitude") ax2.step(result.time, np.hstack((result.final_amps[:, 0], result.final_amps[-1, 0])), where='post') plt.show() from qutip.ipynbtools import version_table version_table() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Defining the physics Step2: Defining the time evolution parameters Step3: Set the conditions which will cause the pulse optimisation to terminate Step4: Set the initial pulse type Step5: Give an extension for output files Step6: Run the optimisation Step7: Report the results Step8: Plot the initial and final amplitudes Step9: Versions
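For readers who want the success metric spelled out, here is a hedged numpy sketch of one common definition of the fidelity error targeted above; the optimiser's internal fid_err may be normalised differently, so treat this as illustrative only.

import numpy as np

def state_fid_err(psi_target, psi_final):
    # 1 - |<target|final>|^2 for normalised state vectors.
    return 1.0 - np.abs(np.vdot(psi_target, psi_final)) ** 2

up = np.array([1, 0], dtype=complex)    # |0>
down = np.array([0, 1], dtype=complex)  # |1>
print(state_fid_err(np.kron(down, down), np.kron(down, down)))  # 0.0, perfect transfer
print(state_fid_err(np.kron(down, down), np.kron(up, up)))      # 1.0, orthogonal states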
3,959
<ASSISTANT_TASK:> Python Code: # If we're running on Colab, install empiricaldist # https://pypi.org/project/empiricaldist/ import sys IN_COLAB = 'google.colab' in sys.modules if IN_COLAB: !pip install empiricaldist # Get utils.py import os if not os.path.exists('utils.py'): !wget https://github.com/AllenDowney/ThinkBayes2/raw/master/code/soln/utils.py import numpy as np import pandas as pd import matplotlib.pyplot as plt from empiricaldist import Pmf from utils import decorate, savefig ps = np.linspace(0, 1, 101) prior_p1 = Pmf(1.0, ps) prior_p1.index.name = 'p1' prior_p2 = Pmf(1.0, ps) prior_p2.index.name = 'p2' from utils import make_joint joint = make_joint(prior_p2, prior_p1) prior_pmf = Pmf(joint.stack()) prior_pmf.head() def update(prior, data): n1, n2, outcome = data likelihood = prior.copy() for p1, p2 in prior.index: like = p1**n1 * p2**n2 if outcome == 'guilty': likelihood.loc[p1, p2] = like elif outcome == 'hung': likelihood.loc[p1, p2] = 1-like else: raise ValueError() posterior = prior * likelihood posterior.normalize() return posterior from utils import pmf_marginal def plot_marginals(posterior): marginal0 = pmf_marginal(posterior, 0) marginal0.plot(label='white') marginal1 = pmf_marginal(posterior, 1) marginal1.plot(label='black') decorate(xlabel='Probability of voting to convict', ylabel='PDF', title='Marginal posterior distributions') data1 = 12, 0, 'guilty' posterior1 = update(prior_pmf, data1) plot_marginals(posterior1) data2 = 11, 1, 'guilty' posterior2 = update(posterior1, data2) plot_marginals(posterior2) data3 = 11, 1, 'guilty' posterior3 = update(posterior2, data3) plot_marginals(posterior3) data4 = 7, 5, 'hung' posterior4 = update(posterior3, data4) plot_marginals(posterior4) data5 = 9, 3, 'hung' posterior5 = update(posterior4, data5) plot_marginals(posterior5) data6 = 11, 1, 'guilty' posterior6 = update(posterior5, data6) plot_marginals(posterior6) marginal_p1 = pmf_marginal(posterior6, 0) marginal_p2 = pmf_marginal(posterior6, 1) marginal_p1.mean(), marginal_p2.mean(), sample = posterior6.sample(1000) np.mean([p1 > p2 for p1, p2 in sample]) def prob_guilty(sample, n1, n2): ps = [p1**n1 * p2**n2 for p1, p2 in sample] return Pmf.from_seq(ps) pmf = prob_guilty(sample, 7, 5) pmf.mean(), pmf.credible_interval(0.9) pmf = prob_guilty(sample, 6, 6) pmf.mean(), pmf.credible_interval(0.9) pmf_seq = [] n2s = range(0, 13) for n2 in n2s: n1 = 12 - n2 pmf = prob_guilty(sample, n1, n2) pmf_seq.append(pmf) means = [pmf.mean() for pmf in pmf_seq] lows = [pmf.quantile(0.05) for pmf in pmf_seq] highs = [pmf.quantile(0.95) for pmf in pmf_seq] means plt.plot(n2s, means) plt.fill_between(n2s, lows, highs, color='C0', alpha=0.1) decorate(xlabel='Number of black jurors', ylabel='Probability of a guilty verdict', title='Probability of a guilty verdict vs jury composition', ylim=[0, 1]) from scipy.stats import binom k1 = 12 + 11 + 11 + 11 like1 = binom(k1, ps).pmf(k1) prior_p1 = Pmf(like1, ps) k2 = 0 + 1 + 1 + 1 like2 = binom(k2, ps).pmf(k2) prior_p2 = Pmf(like2, ps) prior_p1.plot() prior_p2.plot() prior = Pmf(make_joint(prior_p2, prior_p1).stack()) posterior = update(prior, data4) posterior = update(posterior, data5) plot_marginals(posterior) marginal_p1 = pmf_marginal(posterior, 0) marginal_p2 = pmf_marginal(posterior, 1) marginal_p1.mean(), marginal_p2.mean(), binom.pmf(4, 6, 0.15) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: On September 5, 2020, prosecutors in Mississippi dropped charges against Curtis Flowers, freeing him after 23 years of incarceration. Step2: To prepare for the updates, I'll form a joint distribution of the two probabilities. Step3: Here's how we compute the update. Step4: I'll use the following function to plot the marginal posterior distributions after each update. Step5: Here's the update for the first trial. Step6: Since there were no black jurors for the first trial, we learn nothing about their probability of conviction, so the posterior distribution is the same as the prior. Step7: And the third. Step8: Since the first three verdicts were guilty, we infer that all 36 jurors voted to convict, so the estimated probabilities for both groups are high. Step9: The effect of the fifth trial is similar; it decreases the estimates for both pools, but the effect on the estimate for black jurors is greater. Step10: Finally, here are the posterior distributions after all six trials. Step11: The posterior distributions for the two pools are substantially different. Here are the posterior means. Step12: Based on the outcomes of all six trials, we estimate that the probability is 98% that a white juror would vote to convict, and the probability is 68% that a black juror would vote to convict. Step13: Here's the probability that white jurors were more likely to convict. Step14: The following function takes this sample and a hypothetical composition and returns the posterior predictive distribution for the probability of conviction. Step15: According to Wikipedia Step16: And with 6 white and 6 black jurors. Step17: With a jury that represents the population of Montgomery County, the probability Flowers would be convicted is 14-15%. Step18: Double Check Step19: We can use the posteriors from those updates as priors and update them based on the two trials that resulted in a hung jury. Step20: The posterior marginals look the same. Step21: And yield the same posterior means. Step22: Here's the probability that a fair jury would convict four times out of six.
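The core of the update above in plain numpy, for readers skimming the code: under the model, a unanimous guilty verdict with n1 white and n2 black jurors has probability p1**n1 * p2**n2, and a hung jury the complement (grid_update is an illustrative helper name, not from the notebook).

import numpy as np

ps = np.linspace(0, 1, 101)
P1, P2 = np.meshgrid(ps, ps, indexing='ij')  # white / black conviction probs
prior = np.ones_like(P1)

def grid_update(prior, n1, n2, outcome):
    like = P1 ** n1 * P2 ** n2      # all 12 jurors vote to convict
    if outcome == 'hung':
        like = 1 - like
    post = prior * like
    return post / post.sum()

post = grid_update(prior, 11, 1, 'guilty')
print(ps[np.argmax(post.sum(axis=1))])  # posterior mode of the p1 marginal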
3,960
<ASSISTANT_TASK:> Python Code: import os import sys # Modify the path sys.path.append("..") import pandas as pd import yellowbrick as yb import matplotlib.pyplot as plt data = pd.read_csv("data/No-show-Issue-Comma-300k.csv") data.head() data.columns = ['Age','Gender','Appointment Registration','Appointment Date', 'Day Of Week','Status','Diabetes','Alcoholism','Hypertension','Handicap', 'Smoker','Scholarship','Tuberculosis','SMS Reminder','Awaiting Time'] data.describe() features = ['Age','Gender','Appointment Registration','Appointment Date', 'Day Of Week','Diabetes','Alcoholism','Hypertension','Handicap', 'Smoker','Scholarship','Tuberculosis','SMS Reminder','Awaiting Time'] numerical_features = data.describe().columns.values # Feature Analysis Imports # NOTE that all these are available for import from the `yellowbrick.features` module from yellowbrick.features.rankd import Rank2D from yellowbrick.features.radviz import RadViz from yellowbrick.features.pcoords import ParallelCoordinates # To help interpret the column features being described in the visualization pd.DataFrame(numerical_features) # For this visualizer numerical features are required X = data[numerical_features].as_matrix() y = data.Status.as_matrix() # Instantiate the visualizer with the Covariance ranking algorithm visualizer = Rank2D(features=numerical_features, algorithm='covariance') visualizer.fit(X, y) # Fit the data to the visualizer visualizer.transform(X) # Transform the data visualizer.show() # Draw/show the data # Instantiate the visualizer with the Pearson ranking algorithm visualizer = Rank2D(features=numerical_features, algorithm='pearson') visualizer.fit(X, y) # Fit the data to the visualizer visualizer.transform(X) # Transform the data visualizer.show() # Draw/show the data #Need to specify the classes of interest classes = data.Status.unique().tolist() # For this visualizer numerical features are required X = data[numerical_features].as_matrix() # Additional step here of converting categorical data 0's and 1's y = data.Status.replace(classes,[0,1]).as_matrix() # Instantiate the visualizer visualizer = RadViz(classes=classes, features=numerical_features) visualizer.fit(X, y) # Fit the data to the visualizer visualizer.transform(X) # Transform the data visualizer.show() # Draw/show the data # Instantiate the visualizer visualizer = ParallelCoordinates(classes=classes, features=numerical_features) visualizer.fit(X, y) # Fit the data to the visualizer visualizer.transform(X) # Transform the data visualizer.show() # Draw/show the data # Classifier Evaluation Imports from sklearn.naive_bayes import GaussianNB from sklearn.linear_model import LogisticRegression from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from yellowbrick.classifier import ClassificationReport, ROCAUC, ClassBalance # Create the train and test data X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) # Instantiate the classification model and visualizer bayes = GaussianNB() visualizer = ClassificationReport(bayes, classes=classes) visualizer.fit(X_train, y_train) # Fit the training data to the visualizer visualizer.score(X_test, y_test) # Evaluate the model on the test data g = visualizer.show() # Draw/show the data # Instantiate the classification model and visualizer logistic = LogisticRegression() visualizer = ROCAUC(logistic) visualizer.fit(X_train, y_train) # Fit the training data to the visualizer visualizer.score(X_test, y_test) # Evaluate the model on the test data g = visualizer.show() # Draw/show the data # Instantiate the classification model and visualizer forest = RandomForestClassifier() visualizer = ClassBalance(forest, classes=classes) visualizer.fit(X_train, y_train) # Fit the training data to the visualizer visualizer.score(X_test, y_test) # Evaluate the model on the test data g = visualizer.show() # Draw/show the data <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load Medical Appointment Data Step2: Feature Analysis Step3: Rank2D Step4: Diagnostic Interpretation from Rank2D(Covariance) Step5: Diagnostic Interpretation from Rank2D(Pearson) Step6: For regression, the RadViz visualizer should use a color sequence to display the target information, as opposed to discrete colors. Step7: Classifier Evaluation Step8: Classification Report Step9: ROCAUC Step10: ClassBalance
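The Rank2D(pearson) view in the code above is a pairwise correlation grid; assuming the numeric feature matrix X built earlier, the numbers behind the picture can be reproduced with numpy alone (the variable names below are illustrative):

import numpy as np

# Pearson feature-by-feature grid, as visualised by Rank2D(algorithm='pearson').
corr = np.corrcoef(X, rowvar=False)
# Strongest off-diagonal pair by absolute value (upper triangle only).
i, j = np.unravel_index(np.argmax(np.abs(np.triu(corr, k=1))), corr.shape)
print(i, j, corr[i, j])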
3,961
<ASSISTANT_TASK:> Python Code: import os import mne sample_data_folder = mne.datasets.sample.data_path() sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample', 'sample_audvis_filt-0-40_raw.fif') raw = mne.io.read_raw_fif(sample_data_raw_file) print(raw.info) info = mne.io.read_info(sample_data_raw_file) print(info) print(info.keys()) print() # insert a blank line print(info['ch_names']) print(info['chs'][0].keys()) print(mne.pick_channels(info['ch_names'], include=['MEG 0312', 'EEG 005'])) print(mne.pick_channels(info['ch_names'], include=[], exclude=['MEG 0312', 'EEG 005'])) print(mne.pick_types(info, meg=False, eeg=True, exclude=[])) print(mne.pick_channels_regexp(info['ch_names'], '^E.G')) print(mne.channel_type(info, 25)) picks = (25, 76, 77, 319) print([mne.channel_type(info, x) for x in picks]) print(raw.get_channel_types(picks=picks)) ch_idx_by_type = mne.channel_indices_by_type(info) print(ch_idx_by_type.keys()) print(ch_idx_by_type['eog']) print(info['nchan']) eeg_indices = mne.pick_types(info, meg=False, eeg=True) print(mne.pick_info(info, eeg_indices)['nchan']) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: As seen in the introductory tutorial <tut-overview>, when a Step2: However, it is not strictly necessary to load the Step3: As you can see, the Step4: Most of the fields contain Step5: Obtaining subsets of channels Step6: Step7: Note that the meg and fnirs parameters of Step8: Step9: To obtain several channel types at once, you could embed Step10: Alternatively, you can get the indices of all channels of all channel types Step11: Dropping channels from an Info object
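Combining two of the calls already shown above, a short sketch of how Step11's channel dropping works in practice; it assumes the info object loaded earlier in the code.

import mne

# Restrict a copy of the measurement info to EEG channels only.
eeg_picks = mne.pick_types(info, meg=False, eeg=True)
eeg_info = mne.pick_info(info, eeg_picks)
print(eeg_info['nchan'])         # number of EEG channels kept
print(eeg_info['ch_names'][:5])  # first few surviving channel names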
3,962
<ASSISTANT_TASK:> Python Code: rs = np.random.RandomState(seed=10) ns = 100 nf = 10 x, y_orig, coef = make_regression(n_samples=ns, n_features=nf, coef=True, noise=0.0, random_state=rs) x = pd.DataFrame(x) y = pd.Series(y_orig) n_quantiles = 3 # two-thirds of the data is truncated quantile = 100/float(n_quantiles) lower = np.percentile(y, quantile) upper = np.percentile(y, (n_quantiles - 1) * quantile) left = y < lower right = y > upper cens = pd.Series(np.zeros((ns,))) cens[left] = -1 cens[right] = 1 y = y.clip(upper=upper, lower=lower) hist = plt.hist(y) tr = TobitModel() result = tr.fit(x, y, cens, verbose=False) fig, ax = plt.subplots() ind = np.arange(len(coef)) width = 0.25 rects1 = ax.bar(ind, coef, width, color='g', label='True') rects2 = ax.bar(ind + width, tr.coef_, width, color='r', label='Tobit') rects3 = ax.bar(ind + (2 * width), tr.ols_coef_, width, color='b', label='OLS') plt.ylabel("Coefficient") plt.xlabel("Index of regressor") plt.title("Tobit vs. OLS on censored data") leg = plt.legend(loc=(0.22, 0.65)) data_file = 'tobit_data.txt' df = pd.read_table(data_file, sep=' ') df.loc[df.gender=='male', 'gender'] = 1 df.loc[df.gender=='female', 'gender'] = 0 df.loc[df.children=='yes', 'children'] = 1 df.loc[df.children=='no', 'children'] = 0 df = df.astype(float) df.head() y = df.affairs x = df.drop(['affairs', 'gender', 'education', 'children'], axis=1) cens = pd.Series(np.zeros((len(y),))) cens[y==0] = -1 cens.value_counts() tr = TobitModel() tr = tr.fit(x, y, cens, verbose=False) tr.coef_ <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Note that the truncation values do not have to be the same for e.g. all left-censored observations, or all right-censored observations, as in this example. However, the model does assume that the errors will be normally-distributed.
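For reference, the likelihood a tobit fit maximises, sketched with scipy.stats.norm under the same cens coding used above (-1 left-censored, 0 exact, 1 right-censored); tobit_loglik is an illustrative helper, not part of the TobitModel class used in the code.

import numpy as np
from scipy.stats import norm

def tobit_loglik(params, x, y, cens):
    # params = regression coefficients followed by log(sigma).
    beta, log_sigma = np.asarray(params[:-1]), params[-1]
    sigma = np.exp(log_sigma)
    z = (np.asarray(y) - np.asarray(x) @ beta) / sigma
    ll = np.where(cens == 0, norm.logpdf(z) - np.log(sigma), 0.0)  # exact observations
    ll = np.where(cens == -1, norm.logcdf(z), ll)                  # left-censored: P(y* <= bound)
    ll = np.where(cens == 1, norm.logsf(z), ll)                    # right-censored: P(y* > bound)
    return ll.sum()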
3,963
<ASSISTANT_TASK:> Python Code: from collatex import * from lxml import etree import json,re class WitnessSet: def __init__(self,witnessList): self.witnessList = witnessList def generate_json_input(self): json_input = {} witnesses = [] json_input['witnesses'] = witnesses for witness in self.witnessList: line = Line(witness) witnessData = {} witnessData['id'] = line.siglum() witnessTokens = {} witnessData['tokens'] = line.tokens() witnesses.append(witnessData) return json_input class Line: addWMilestones = etree.XML(''' <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output method="xml" indent="no" encoding="UTF-8" omit-xml-declaration="yes"/> <xsl:template match="*|@*"> <xsl:copy> <xsl:apply-templates select="node() | @*"/> </xsl:copy> </xsl:template> <xsl:template match="/*"> <xsl:copy> <xsl:apply-templates select="@*"/> <!-- insert a <w/> milestone before the first word --> <w/> <xsl:apply-templates/> </xsl:copy> </xsl:template> <!-- convert <add>, <sic>, and <crease> to milestones (and leave them that way) CUSTOMIZE HERE: add other elements that may span multiple word tokens --> <xsl:template match="add | sic | crease "> <xsl:element name="{name()}"> <xsl:attribute name="n">start</xsl:attribute> </xsl:element> <xsl:apply-templates/> <xsl:element name="{name()}"> <xsl:attribute name="n">end</xsl:attribute> </xsl:element> </xsl:template> <xsl:template match="note"/> <xsl:template match="text()"> <xsl:call-template name="whiteSpace"> <xsl:with-param name="input" select="translate(.,'&#x0a;',' ')"/> </xsl:call-template> </xsl:template> <xsl:template name="whiteSpace"> <xsl:param name="input"/> <xsl:choose> <xsl:when test="not(contains($input, ' '))"> <xsl:value-of select="$input"/> </xsl:when> <xsl:when test="starts-with($input,' ')"> <xsl:call-template name="whiteSpace"> <xsl:with-param name="input" select="substring($input,2)"/> </xsl:call-template> </xsl:when> <xsl:otherwise> <xsl:value-of select="substring-before($input, ' ')"/> <w/> <xsl:call-template name="whiteSpace"> <xsl:with-param name="input" select="substring-after($input,' ')"/> </xsl:call-template> </xsl:otherwise> </xsl:choose> </xsl:template> </xsl:stylesheet> ''') transformAddW = etree.XSLT(addWMilestones) xsltWrapW = etree.XML(''' <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:output method="xml" indent="no" omit-xml-declaration="yes"/> <xsl:template match="/*"> <xsl:copy> <xsl:apply-templates select="w"/> </xsl:copy> </xsl:template> <xsl:template match="w"> <!-- faking <xsl:for-each-group> as well as the "<<" and except" operators --> <xsl:variable name="tooFar" select="following-sibling::w[1] | following-sibling::w[1]/following::node()"/> <w> <xsl:copy-of select="following-sibling::node()[count(. | $tooFar) != count($tooFar)]"/> </w> </xsl:template> </xsl:stylesheet> ''') transformWrapW = etree.XSLT(xsltWrapW) def __init__(self,line): self.line = line def siglum(self): return str(etree.XML(self.line).xpath('/l/@wit')[0]) def tokens(self): return [Word(token).createToken() for token in Line.transformWrapW(Line.transformAddW(etree.XML(self.line))).xpath('//w')] class Word: unwrapRegex = re.compile('<w>(.*)</w>') stripTagsRegex = re.compile('<.*?>') def __init__(self,word): self.word = word def unwrap(self): return Word.unwrapRegex.match(etree.tostring(self.word,encoding='unicode')).group(1) def normalize(self): return Word.stripTagsRegex.sub('',self.unwrap().lower()) def createToken(self): token = {} token['t'] = self.unwrap() token['n'] = self.normalize() return token A = '''<l wit='A'><abbrev>Et</abbrev>cil i partent seulement</l>''' B = '''<l wit='B'><abbrev>Et</abbrev>cil i p<abbrev>er</abbrev>dent ausem<abbrev>en</abbrev>t</l>''' C = '''<l wit='C'><abbrev>Et</abbrev>cil i p<abbrev>ar</abbrev>tent seulema<abbrev>n</abbrev>t</l>''' D = '''<l wit='D'>E cil i partent sulement</l>''' witnessSet = WitnessSet([A,B,C,D]) json_input = witnessSet.generate_json_input() print(json_input) collationText = collate(json_input,output='table',layout='vertical') print(collationText) collationJSON = collate(json_input,output='json') print(collationJSON) collationHTML2 = collate(json_input,output='html2') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The WitnessSet class represents all of the witnesses being collated. The generate_json_input() method returns a JSON object that is suitable for input into CollateX. Step3: The Line class contains methods applied to individual lines (note that each witness in this part of the tutorial consists of only a single line). The XSLT stylesheets and the functions to use them have been moved into the Line class, since they apply to individual lines. The siglum() method returns the manuscript identifier and the tokens() method returns a list of JSON objects, one for each word token. Step4: The Word class contains methods that apply to individual words. unwrap() and normalize() are private; they are used by createToken() to return a JSON object with the "t" and "n" properties for a word token. Step9: Create XML data and assign to a witnessSet variable Step10: Generate JSON from the data and examine it Step11: Collate and output the results as a plain-text alignment table, as JSON, and as colored HTML
3,964
<ASSISTANT_TASK:> Python Code: import matplotlib.pyplot as plt %matplotlib inline import numpy as np import tensorflow as tf from gpflow.likelihoods import MultiClass from gpflow.kernels import RBF, White from gpflow.models.svgp import SVGP from gpflow.training import AdamOptimizer from scipy.stats import mode from scipy.cluster.vq import kmeans2 from doubly_stochastic_dgp.dgp import DGP import time def get_mnist_data(data_path='/data'): from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets(data_path+'/MNIST_data/', one_hot=False) X, Y = mnist.train.next_batch(mnist.train.num_examples) Xval, Yval = mnist.validation.next_batch(mnist.validation.num_examples) Xtest, Ytest = mnist.test.next_batch(mnist.test.num_examples) Y, Yval, Ytest = [np.array(y, dtype=float)[:, None] for y in [Y, Yval, Ytest]] X = np.concatenate([X, Xval], 0) Y = np.concatenate([Y, Yval], 0) return X.astype(float), Y.astype(float), Xtest.astype(float), Ytest.astype(float) X, Y, Xs, Ys = get_mnist_data() M = 100 Z = kmeans2(X, M, minit='points')[0] m_sgp = SVGP(X, Y, RBF(784, lengthscales=2., variance=2.), MultiClass(10), Z=Z, num_latent=10, minibatch_size=1000, whiten=True) def make_dgp(L): kernels = [RBF(784, lengthscales=2., variance=2.)] for l in range(L-1): kernels.append(RBF(30, lengthscales=2., variance=2.)) model = DGP(X, Y, Z, kernels, MultiClass(10), minibatch_size=1000, num_outputs=10) # start things deterministic for layer in model.layers[:-1]: layer.q_sqrt = layer.q_sqrt.value * 1e-5 return model m_dgp2 = make_dgp(2) m_dgp3 = make_dgp(3) def assess_model_sgp(model, X_batch, Y_batch): m, v = model.predict_y(X_batch) l = model.predict_density(X_batch, Y_batch) a = (np.argmax(m, 1).reshape(Y_batch.shape).astype(int)==Y_batch.astype(int)) return l, a S = 100 def assess_model_dgp(model, X_batch, Y_batch): m, v = model.predict_y(X_batch, S) l = model.predict_density(X_batch, Y_batch, S) a = (mode(np.argmax(m, 2), 0)[0].reshape(Y_batch.shape).astype(int)==Y_batch.astype(int)) return l, a def batch_assess(model, assess_model, X, Y): n_batches = max(int(len(X)/1000), 1) lik, acc = [], [] for X_batch, Y_batch in zip(np.split(X, n_batches), np.split(Y, n_batches)): l, a = assess_model(model, X_batch, Y_batch) lik.append(l) acc.append(a) lik = np.concatenate(lik, 0) acc = np.array(np.concatenate(acc, 0), dtype=float) return np.average(lik), np.average(acc) iterations = 20000 AdamOptimizer(0.01).minimize(m_sgp, maxiter=iterations) l, a = batch_assess(m_sgp, assess_model_sgp, Xs, Ys) print('sgp test lik: {:.4f}, test acc {:.4f}'.format(l, a)) AdamOptimizer(0.01).minimize(m_dgp2, maxiter=iterations) l, a = batch_assess(m_dgp2, assess_model_dgp, Xs, Ys) print('dgp2 test lik: {:.4f}, test acc {:.4f}'.format(l, a)) AdamOptimizer(0.01).minimize(m_dgp3, maxiter=iterations) l, a = batch_assess(m_dgp3, assess_model_dgp, Xs, Ys) print('dgp3 test lik: {:.4f}, test acc {:.4f}'.format(l, a)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: We'll use 100 inducing points Step2: We'll compare three models Step3: For the SGP model we'll calculate accuracy by simply taking the max mean prediction Step4: For the DGP models we have stochastic predictions. We need a single prediction for each datum, so to do this we take $S$ samples for the one-hot predictions ($(S, N, 10)$ matrices for mean and var), then we take the max over the class means (to give a $(S, N)$ matrix), and finally we take the modal class over the samples (to give a vector of length $N$) Step5: We need batch predictions (we might run out of memory otherwise) Step6: Now we're ready to go Step7: Using more inducing points improves things, but at the expense of very slow computation (500 inducing points takes about a day) Step8: And the three layer
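Step4's sample-mode aggregation in isolation, as a small self-contained sketch on random numbers with the same array shapes as in the notebook:

import numpy as np
from scipy.stats import mode

S, N = 100, 8                      # posterior samples, data points
m = np.random.rand(S, N, 10)       # sampled class-mean predictions
per_sample = np.argmax(m, axis=2)  # best class per sample: shape (S, N)
labels = mode(per_sample, axis=0)[0].ravel()  # modal class per datum: shape (N,)
print(labels)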
3,965
<ASSISTANT_TASK:> Python Code: from polyglotdb import CorpusContext import polyglotdb.io as pgio corpus_root = '/mnt/e/Data/pg_tutorial' parser = pgio.inspect_mfa(corpus_root) parser.call_back = print # To show progress output with CorpusContext('pg_tutorial') as c: c.load(parser, corpus_root) with CorpusContext('pg_tutorial') as c: c.reset() with CorpusContext('pg_tutorial') as c: print('Speakers:', c.speakers) print('Discourses:', c.discourses) q = c.query_lexicon(c.lexicon_phone) q = q.order_by(c.lexicon_phone.label) q = q.columns(c.lexicon_phone.label.column_name('phone')) results = q.all() print(results) from polyglotdb.query.base.func import Count, Average with CorpusContext('pg_tutorial') as c: q = c.query_graph(c.phone).group_by(c.phone.label.column_name('phone')) results = q.aggregate(Count().column_name('count'), Average(c.phone.duration).column_name('average_duration')) for r in results: print('The phone {} had {} occurrences and an average duration of {}.'.format(r['phone'], r['count'], r['average_duration'])) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: The import statements get the necessary classes and functions for importing, namely the CorpusContext class and Step2: Important Step3: Warning Step4: A more interesting summary query is perhaps looking at the count and average duration of different phone types across the corpus, via
3,966
<ASSISTANT_TASK:> Python Code: import os, re # Provide the directory where files are located directory = '/home/roli/FORESTs_BHAVYA/Combined_Libraries/ITS/' #directory = '/home/roli/FORESTs_BHAVYA/Combined_Libraries/16S/' # Provide a list of all the FeatureTables you will merge # Produced by QIIME2 in STEP 7 (i.e. DADA2 Denoising/Merging/FeatureTable) qva_files = ['ITS.table.Honnedaga.qza','ITS.table.Woods.qza'] #qva_files = ['SSU.table.Honnedaga.qza','SSU.table.Woods.qza'] # Provide a list of all the Representative Sequences you will merge # Also produced from STEP 7 (i.e. DADA2 Denoising/Merging/FeatureTable) seq_files = ['ITS.rep.seqs.Honnedaga.qza','ITS.rep.seqs.Woods.qza'] #seq_files = ['SSU.rep.seqs.Honnedaga.qza','SSU.rep.seqs.Woods.qza'] # Provide a concatenated metadatafile with matching column order metadata = 'ITS.metadata.combined.tsv' ## Enter Minimum Support for Keeping QIIME Classification # Note: Classifications that do not meet this criteria will simply be retained, but labeled 'putative' min_support = 0.8 domain = 'fungi' # 'bacteria' | 'fungi' %mkdir $directory/output for n in range(0, len(qva_files), 1): if n == 0: os.system(' '.join([ "qiime feature-table merge", "--i-table1 "+directory+"/"+qva_files[n], "--i-table2 "+directory+"/"+qva_files[n+1], "--o-merged-table "+directory+"/output/merged.table.final.qza" ])) os.system(' '.join([ "qiime feature-table merge-seq-data", "--i-data1 "+directory+"/"+seq_files[n], "--i-data2 "+directory+"/"+seq_files[n+1], "--o-merged-data "+directory+"/output/merged.rep.seqs.final.qza" ])) if n > 0 and n + 1 < len(qva_files): os.system(' '.join([ "qiime feature-table merge", "--i-table1 "+directory+"/output/merged.table.final.qza", "--i-table2 "+directory+"/"+qva_files[n+1], "--o-merged-table "+directory+"/output/merged.table.final.qza" ])) os.system(' '.join([ "qiime feature-table merge-seq-data", "--i-data1 "+directory+"/output/merged.rep.seqs.final.qza", "--i-data2 "+directory+"/"+seq_files[n+1], "--o-merged-data "+directory+"/output/merged.rep.seqs.final.qza" ])) os.system(' '.join([ "qiime feature-table summarize", "--i-table "+directory+"/output/merged.table.final.qza", "--o-visualization "+directory+"/output/merged.table.final.qzv", "--m-sample-metadata-file "+directory+metadata ])) os.system(' '.join([ "qiime feature-table tabulate-seqs", "--i-data "+directory+"/output/merged.rep.seqs.final.qza", "--o-visualization "+directory+"/output/merged.rep.seqs.final.qzv" ])) if domain != "fungi": # Generate Alignment with MAFFT os.system(' '.join([ "qiime alignment mafft", "--i-sequences "+directory+"/output/merged.rep.seqs.final.qza", "--o-alignment "+directory+"/output/merged.rep.seqs.aligned.qza" ])) # Mask Hypervariable parts of Alignment os.system(' '.join([ "qiime alignment mask", "--i-alignment "+directory+"/output/merged.rep.seqs.aligned.qza", "--o-masked-alignment "+directory+"/output/merged.rep.seqs.aligned.masked.qza" ])) # Generate Tree with FastTree os.system(' '.join([ "qiime phylogeny fasttree", "--i-alignment "+directory+"/output/merged.rep.seqs.aligned.masked.qza", "--o-tree "+directory+"/output/merged.rep.seqs.tree.unrooted.qza" ])) # Root Tree os.system(' '.join([ "qiime phylogeny midpoint-root", "--i-tree "+directory+"/output/merged.rep.seqs.tree.unrooted.qza", "--o-rooted-tree "+directory+"/output/merged.rep.seqs.tree.final.qza" ])) # Classify if domain == 'bacteria': os.system(' '.join([ "qiime feature-classifier classify-sklearn", "--i-classifier /home/db/GreenGenes/qiime2_13.8.99_515.806_nb.classifier.qza", "--i-reads "+directory+"/output/merged.rep.seqs.final.qza", "--o-classification "+directory+"/output/merged.taxonomy.final.qza" ])) if domain == 'fungi': os.system(' '.join([ "qiime feature-classifier classify-sklearn", "--i-classifier /home/db/UNITE/qiime2_unite_ver7.99_20.11.2016_classifier.qza", "--i-reads "+directory+"/output/merged.rep.seqs.final.qza", "--o-classification "+directory+"/output/merged.taxonomy.final.qza" ])) # Output Summary os.system(' '.join([ "qiime metadata tabulate", "--m-input-file "+directory+"/output/merged.taxonomy.final.qza", "--o-visualization "+directory+"/output/merged.taxonomy.final.summary.qzv" ])) ## Make Function to Re-Format Taxonomy File to Contain Full Column Information # and factor in the certainty of the taxonomic assignment def format_taxonomy(tax_file, min_support): rank_dict = {'k__':"Domain",'k__':"Domain",} output = open(re.sub(".tsv",".fixed.tsv",tax_file), "w") output.write("\t".join(["Domain","Phylum","Class","Order","Family","Genus","Species"])+"\n") with open(tax_file, "r") as f: next(f) #skip header for line in f: line = line.strip() line = line.split("\t") read_id = line[0] tax_string = line[1] # Annotate those strings which do not meet minimum support if float(line[2]) < float(min_support): tax_string = re.sub("__","__putative ",tax_string) # Manage cases where taxonomic string is empty tax_string = re.sub("k__;|p__;|c__;|o__;|f__;|g__;","unclassified;",tax_string) tax_string = re.sub("s__","unclassified",tax_string) #note: this makes all species unclassified. # Remove All Underscore Garbage (gimmie aesthetics) tax_string = re.sub("k__|p__|c__|o__|f__|g__|s__","",tax_string) # Add in columns containing unclassified taxonomic information # Predicated on maximum 7 ranks (Domain -> Species) full_rank = tax_string.split(";") last_classified = full_rank[len(full_rank)-1] for n in range(len(full_rank), 7, 1): full_rank.append("unclassified "+last_classified) output.write(read_id+"\t"+'\t'.join(full_rank)+"\n") return() ##################### ## Export from QIIME2 ## Final Output Names fasta_file = directory+"/output/merged.rep.seqs.final.fasta" tree_file = directory+"/output/merged.tree.final.nwk" tax_file = directory+"/output/merged.taxonomy.final.tsv" count_table = directory+"/output/merged.counts.final.biom" # Export Classifications os.system(' '.join([ "qiime tools export", directory+"/output/merged.taxonomy.final.qza", "--output-dir "+directory+"/output/" ])) # Reformat Classifications to meet phyloseq format format_taxonomy(directory+"/output/taxonomy.tsv", min_support) # Export SV Table os.system(' '.join([ "qiime tools export", directory+"/output/merged.table.final.qza", "--output-dir "+directory+"/output/" ])) # Export SV Sequences os.system(' '.join([ "qiime tools export", directory+"/output/merged.rep.seqs.final.qza", "--output-dir "+directory+"/output/" ])) if domain == "bacteria": # Export Tree os.system(' '.join([ "qiime tools export", directory+"/output/merged.rep.seqs.tree.final.qza", "--output-dir "+directory+"/output/" ])) %mv $directory/output/tree.nwk $tree_file # Rename Exported Files %mv $directory/output/dna-sequences.fasta $fasta_file %mv $directory/output/feature-table.biom $count_table %mv $directory/output/taxonomy.fixed.tsv $tax_file ## Setup R-Magic for Jupyter Notebooks import rpy2 %load_ext rpy2.ipython def fix_biom_conversion(file): with open(file, 'r') as fin: data = fin.read().splitlines(True) with open(file, 'w') as fout: fout.writelines(data[1:]) import pandas as pd %R library(phyloseq) %R library(ape) #### IMPORT DATA to R ## For '.tsv' files, use Pandas to create a dataframe and then pipe that to R ## For '.biom' files, first convert using 'biom convert' on the command-line ## Had problems importing the count table with pandas, opted for using read.table in R # Import Taxonomy File tax_file = pd.read_csv(directory+"/output/merged.taxonomy.final.tsv", sep="\t") %R -i tax_file %R tax_file <- tax_file[sort(row.names(tax_file)),] #read names must match the count_table # Import Sample Data sample_file = pd.read_csv(directory+metadata, sep="\t") %R -i sample_file %R rownames(sample_file) = sample_file$X.SampleID %R sample_file$X.SampleID <- NULL %R sample_file$LinkerPrimerSequence <- NULL ## Clean-up some other stuff # Import Count Data os.system(' '.join([ "biom convert", "-i", directory+"/output/merged.counts.final.biom", "-o", directory+"/output/merged.counts.final.tsv", "--to-tsv" ])) # The biom converter adds a stupid line that messes with the table formatting fix_biom_conversion(directory+"/output/merged.counts.final.tsv") # Finally import count_table = pd.read_csv(directory+"/output/merged.counts.final.tsv", sep="\t") %R -i count_table %R rownames(count_table) = count_table$X.OTU.ID %R count_table$X.OTU.ID <- NULL %R count_table <- count_table[sort(row.names(count_table)),] #read names must match the tax_table # Convert to Phyloseq Objects %R p_counts = otu_table(count_table, taxa_are_rows = TRUE) %R p_samples = sample_data(sample_file) %R p_tax = tax_table(tax_file) %R taxa_names(p_tax) <- rownames(tax_file) # phyloseq throws out rownames %R colnames(p_tax) <- colnames(tax_file) # phyloseq throws out colnames # Merge Phyloseq Objects %R p = phyloseq(p_counts, p_tax) if domain == "bacteria": # Import Phylogenetic Tree tree_file = directory+"/output/merged.tree.final.nwk" %R -i tree_file %R p_tree <- read.tree(tree_file) # Combine All Objects into One Phyloseq %R p_final <- merge_phyloseq(p, p_samples, p_tree) else: # Combine All Objects into One Phyloseq %R p_final <- merge_phyloseq(p, p_samples) # Save Phyloseq Object as '.rds' output = directory+"/output/p_merged.final.rds" %R -i output %R saveRDS(p_final, file = output) # Confirm Output %R print(p_final) # Remove Files if domain == "bacteria": %rm $directory/output/*tree.unrooted.qza %rm $directory/output/*aligned.masked.qza %rm $directory/output/*.biom %rm $directory/output/taxonomy.tsv # Separate Final Files %mkdir $directory/final/ %mv $directory/output/*.final.rds $directory/final/ %mv $directory/output/*.taxonomy.final.tsv $directory/final/ %mv $directory/output/*.counts.final.tsv $directory/final/ %mv $directory/output/*.final.fasta $directory/final/ %cp $directory$metadata $directory/final/ # Gzip and Move Intermediate Files !pigz -p 10 $directory/output/*.qza !pigz -p 10 $directory/output/*.qzv %mv $directory/output/ $directory/intermediate_files print("Your sequences have been successfully saved to directories 'final' and 'intermediate_files'") <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Step 2 Step2: Step 3 Step3: Step 4 Step4: Step 5 Step5: Step 6 Step6: Step 7 Step7: Step 8
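The rank-padding idea inside format_taxonomy above, distilled into a standalone sketch (pad_ranks is an illustrative name, not from the pipeline) so the 7-column invariant is easy to test:

def pad_ranks(tax_string, depth=7):
    # Pad a partial 'Domain;Phylum;...' string with 'unclassified <last rank>'
    # entries until it has exactly `depth` columns.
    ranks = [r for r in tax_string.split(';') if r]
    last = ranks[-1] if ranks else 'unclassified'
    return ranks + ['unclassified ' + last] * (depth - len(ranks))

print(pad_ranks('Fungi;Ascomycota;Sordariomycetes'))
# ['Fungi', 'Ascomycota', 'Sordariomycetes', 'unclassified Sordariomycetes', ...] -- 7 items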
3,967
<ASSISTANT_TASK:> Python Code: from poppy.creatures import PoppyErgo ergo = PoppyErgo() from poppy.creatures import PoppyErgoJr ergo = PoppyErgoJr() ergo ergo.m2 ergo.m2.present_position ergo.m2.present_temperature for m in ergo.motors: print('Motor "{}" current position = {}'.format(m.name, m.present_position)) ergo.m3.compliant ergo.m6.compliant = False ergo.m6.goal_position = 0. for m in ergo.motors: m.compliant = False # Goes to the position 0 in 2s m.goto_position(0, 2) # You can also change the maximum speed of the motors # Warning! Goto position also changes the maximum speed. for m in ergo.motors: m.moving_speed = 50 import time ergo.m4.goal_position = 30 time.sleep(1.) ergo.m4.goal_position = -30 ergo.m4.goal_position = 30 ergo.m5.goal_position = 20 ergo.m6.goal_position = -20 time.sleep(1.) ergo.m4.goal_position = -30 ergo.m5.goal_position = -20 ergo.m6.goal_position = 20 def dance(): ergo.m4.goal_position = 30 ergo.m5.goal_position = 20 ergo.m6.goal_position = -20 time.sleep(1.) ergo.m4.goal_position = -30 ergo.m5.goal_position = -20 ergo.m6.goal_position = 20 time.sleep(1.) dance() for _ in range(4): dance() def dance2(): ergo.goto_position({'m4': 30, 'm5': 20, 'm6': -20}, 1., wait=True) ergo.goto_position({'m4': -30, 'm5': -20, 'm6': 20}, 1., wait=True) for _ in range(4): dance2() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: For a PoppyErgoJr Step2: Get robot current status Step3: Turn on/off the compliancy of a motor Step4: Go to the zero position Step5: Make a simple dance movement Step6: On multiple motors Step7: Wrap it inside a function for convenience Step8: Using goto position instead
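A hedged extension of the dance idea above: a smooth sinusoidal sweep using only the goal_position attribute already shown. The angle range and timing are illustrative and not verified on hardware.

import time
import numpy as np

# Sweep motor m4 through one smooth sine period in small steps.
for t in np.linspace(0, 2 * np.pi, 50):
    ergo.m4.goal_position = 30 * np.sin(t)  # stays within the +/-30 range used above
    time.sleep(0.05)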
3,968
<ASSISTANT_TASK:> Python Code: ## House keeping tasks %reset -f # standard imports import urllib2 import os import json import StringIO import pickle # dataframe and numerical import pandas as pd import numpy as np # plotting import matplotlib.pyplot as plt %matplotlib inline #scipy from scipy import stats from scipy.special import erf from scipy import sqrt from IPython.display import HTML def addToggle(): return '''<script> code_show=true; function code_toggle() { if (code_show){ $('div.input').hide(); } else { $('div.input').show(); } code_show = !code_show } $( document ).ready(code_toggle); </script> The raw code for this IPython notebook is by default hidden for easier reading. To toggle on/off the raw code, click <a href="javascript:code_toggle()">here</a>.''' HTML(addToggle()) ## variables for this project samples_in="../data/galectin-3/galectin-3_5.0_human.json" results_dir = "../results/galectin-3/" dataframe_out=results_dir+"dataframes_galectin.pkl" # Check whether or not the dataframes exist subdir="./" dataframefile=dataframe_out if not os.path.isfile(dataframefile): print "calling the notebook that loads the data" %run download_cfg_for_sna.ipynb with open(os.path.join(subdir, dataframefile)) as f: dataframes = pickle.load(f) # peek at the data in frame 0 frame=dataframes[0]["dataframe"] frame.head() # recalculate CV # rank the glycans by RFU STDEV="StDev" RFU="Average RFU" frame["CV"]=100*frame[STDEV]/frame[RFU] maxrfu=frame[RFU].max() frame["rank"]=100*frame[RFU]/maxrfu frame.head() # choose data to work with # ignore the 0.5 ug, use 2,5,10 () frames=[dataframes[1]["dataframe"],dataframes[2]["dataframe"], dataframes[3]["dataframe"]] sample_keys=[dataframes[1]["sample"].encode('utf-8'),dataframes[2]["sample"].encode('utf-8'), dataframes[3]["sample"].encode('utf-8')] # recalculate CV and rank the glycans by RFU for frame in frames: frame["%CV"]=frame["% CV"] frame["CV"]=100*frame[STDEV]/frame[RFU] maxrfu=frame[RFU].max() frame["rank"]=100*frame[RFU]/maxrfu # peek at all frames result = pd.concat(frames, keys=sample_keys) result # calculate rank, %CV for all frames, z-score, p-value for all frames and sort by average rank Structure="Structure on Masterlist" for aframe in frames: aframe["CV"]=100*aframe[STDEV]/aframe[RFU] maxrfu=aframe[RFU].max() aframe["rank"]=100*aframe[RFU]/maxrfu aframe["z-score"]=stats.zscore(aframe[RFU]) aframe["p-value"]=1- 0.5 * (1 + erf(aframe["z-score"] / sqrt(2))) #. merge_frames df_final = reduce(lambda left,right: pd.merge(left,right,on=[Structure,'Chart Number']), frames) df_final frames[2]["CV"], sample_keys[2] #. calculate the average rank df_final["avg_rank"]=df_final.filter(regex=("rank.*")).sum(axis=1)/df_final.filter(regex=("rank.*")).shape[1] # http://stackoverflow.com/questions/30808430/how-to-select-columns-from-dataframe-by-regex #. calculate the summed RFU df_final["summed_RFU"]=df_final.filter(regex=("RFU.*")).sum(axis=1) #. calculate the z-score and p-value for the summed RFU df_final.head() df_final["summed_RFU_z-score"]=stats.zscore(df_final["summed_RFU"]) df_final["summed_RFU_p-value"]=1- 0.5 * (1 + erf(df_final["summed_RFU_z-score"] / sqrt(2))) df_final.sort_values("avg_rank",ascending=False) #frames_RFU_sum["p-value_them"]=1- 0.5 * (1 + erf(frames_RFU_sum["stats_z-score"] / sqrt(2))) #. extract the high binders. p-value < 0.15 df_final_high_binders = df_final[df_final["summed_RFU_p-value"] <0.15] df_final_high_binders.sort_values("avg_rank",ascending=False) #print df_final_high_binders.shape high_binders= set(df_final_high_binders["Chart Number"]) high_binders df_final[df_final["Chart Number"]==582] #. lets pull out any %CV column of df_final and ensure CV <20 #.. remember there are negative CV in this sample, exclude these df_cv_20=df_final.filter(regex=("%CV.*")) <=20 df_cv_0=df_final.filter(regex=("%CV.*")) >0 df_cv_0_20=(df_cv_0 & df_cv_20) #print df_cv_0_20.head(340) andmask=df_cv_0_20["%CV_x"]&df_cv_0_20["%CV_y"]&df_cv_0_20["%CV"] ormask=df_cv_0_20["%CV_x"]|df_cv_0_20["%CV_y"]|df_cv_0_20["%CV"] ormask1=df_cv_0_20["%CV_x"] ormask2=df_cv_0_20["%CV_y"] ormask3=df_cv_0_20["%CV"] #mask glycan_ids_cv_20=df_final["Chart Number"][andmask] print len(glycan_ids_cv_20) df_final_high_binders.sort_values("avg_rank",ascending=False)["Chart Number"] sample_keys # but note the way I made the frame means that rank_x is 2mg, rank_y is 5mg and rank is 10mg glycan_ids_cv_20_1= df_final[ormask1].sort_values("rank_x",ascending=False) print len(glycan_ids_cv_20_1) glycan_ids_cv_20_2= df_final[ormask2].sort_values("rank_y",ascending=False) print len(glycan_ids_cv_20_2) glycan_ids_cv_20_3= df_final[ormask3].sort_values("rank",ascending=False) print len(glycan_ids_cv_20_3) # create dictionary to store results results = {} def highbinders(dataframe, pvalue="p-value", rank="rank",pvalue_cutoff=0.15, rank_cutoff=75): '''A function which filters the input dataframe by pvalue and rank; returns the filtered dataframe and a list of glycan chart numbers for the current array''' dataframe_p=dataframe[dataframe[pvalue]<pvalue_cutoff] dataframe_p_r = dataframe_p[dataframe_p[rank]>rank_cutoff] dataframe_p_r.sort_values(rank,ascending=False) return dataframe_p_r,set(dataframe_p_r["Chart Number"]) #. extract the high binders. p-value < 0.15 rank> 75 average_high_df, average_highbind_list = highbinders(df_final,pvalue="summed_RFU_p-value", rank="avg_rank") average_highbind_list, len(average_highbind_list) results["average"]=average_highbind_list # rank filter 75 rf=75 #. extract the high binders for 2mg . p-value < 0.15 rank> 75 twomggly, twomg_highbind_list = highbinders(glycan_ids_cv_20_1,pvalue="p-value_x",rank="rank_x") results["twomg_filter"+str(rf)]=set(twomg_highbind_list) #. extract the high binders for 5mg . p-value < 0.15 rank> 75 fivemggly, fivemg_highbind_list = highbinders(glycan_ids_cv_20_2,pvalue="p-value_y",rank="rank_y") results["fivemg_filter"+str(rf)]=set(fivemg_highbind_list) #. extract the high binders for 10mg . p-value < 0.15 rank> 75 tenmggly, tenmg_highbind_list = highbinders(glycan_ids_cv_20_3,pvalue="p-value",rank="rank") results["tenmg_filter"+str(rf)]=set(tenmg_highbind_list) # rank filter 50 rf=50 #. extract the high binders for 2mg . p-value < 0.15 rank> 50 twomggly, twomg_highbind_list = highbinders(glycan_ids_cv_20_1,pvalue="p-value_x",rank="rank_x",rank_cutoff=rf) results["twomg_filter"+str(rf)]=set(twomg_highbind_list) #. extract the high binders for 5mg . p-value < 0.15 rank> 50 fivemggly, fivemg_highbind_list = highbinders(glycan_ids_cv_20_2,pvalue="p-value_y",rank="rank_y",rank_cutoff=rf) results["fivemg_filter"+str(rf)]=set(fivemg_highbind_list) #. extract the high binders for 10mg . p-value < 0.15 rank> 50 tenmggly, tenmg_highbind_list = highbinders(glycan_ids_cv_20_3,pvalue="p-value",rank="rank",rank_cutoff=rf) results["tenmg_filter"+str(rf)]=set(tenmg_highbind_list) # top 10 without filtering results["twomg_topten_nofilter"]=set(df_final.sort_values("rank_x",ascending=False)[0:10]["Chart Number"]) results["fivemg_topten_nofilter"]=set(df_final.sort_values("rank_y",ascending=False)[0:10]["Chart Number"]) results["tenmg_topten_nofilter"]=set(df_final.sort_values("rank",ascending=False)[0:10]["Chart Number"]) results #a={"average":average_highbind_list, "twomg":set(twomg_highbind_list),"fivemg":set(fivemg_highbind_list),"tenmg":set(tenmg_highbind_list)} #a # see http://pandas.pydata.org/pandas-docs/stable/options.html for pandas options pd.set_option('display.max_columns',1000) pd.set_option('max_columns', 100) df_final[df_final["Chart Number"]==340] # make various views of frame 0 based on the %CV df_cv_50 = frame[frame.CV <50] df_cv_30 = frame[frame.CV <30] df_cv_20 = frame[frame.CV <20] df_cv_20_0 = df_cv_20[df_cv_20.CV>0] # plot rank v %CV # plot comparison of different %CV subsets plt.figure() df_cv_20["CV"].plot(legend=True, title='%CV<=20%') df_cv_20[STDEV].plot(secondary_y=True, style='g', legend=True) plt.figure() df_cv_20_0["CV"].plot(legend=True, title='0<%CV<=20%') df_cv_20_0[STDEV].plot(secondary_y=True, style='g', legend=True) plt.figure() df_cv_30["CV"].plot(legend=True, title='%CV<=30%') df_cv_30[STDEV].plot(secondary_y=True, style='g', legend=True) plt.figure() df_cv_50["CV"].plot(legend=True, title='%CV<=50%') df_cv_50[STDEV].plot(secondary_y=True, style='g', legend=True) # use 0<cv<20 and order by rank pd.set_option('max_rows', 300) df_cv_20_0.sort_values("rank",ascending=False) plt.figure() df_cv_20[RFU].plot(legend=True, title='%CV<=20%') df_cv_20[STDEV].plot(secondary_y=True, style='g', legend=True) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: import all required dependencies Step2: Load data from pickle Step3: RFU, zscore and p-value Step4: extract only the high binders Step5: What about the % CV? Step7: extract highbinders and other sets for MCAW analysis Step8: A consideration of the glycans by %CV for the first frame
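The z-score to one-sided p-value step used repeatedly above, isolated into a small sketch with toy RFU values (upper_tail_p is an illustrative helper name):

import numpy as np
from scipy.special import erf

def upper_tail_p(values):
    # One-sided p-value as in the notebook: P(Z > z) for each value's z-score.
    z = (values - np.mean(values)) / np.std(values)
    return 1 - 0.5 * (1 + erf(z / np.sqrt(2)))

rfu = np.array([10., 12., 11., 250., 9.])
print(upper_tail_p(rfu))  # only the 250 RFU glycan gets a small p-value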
3,969
<ASSISTANT_TASK:> Python Code: from jupyterthemes import get_themes from jupyterthemes.stylefx import set_nb_theme themes = get_themes() set_nb_theme(themes[3]) # 1. magic for inline plot # 2. magic to print version # 3. magic so that the notebook will reload external python modules # 4. magic to enable retina (high resolution) plots # https://gist.github.com/minrk/3301035 %matplotlib inline %load_ext watermark %load_ext autoreload %autoreload 2 %config InlineBackend.figure_format='retina' import os import json import time import numpy as np import pandas as pd %watermark -a 'Ethen' -d -t -v -p numpy,pandas,pyarrow,sklearn data_dir = 'cleaned_data' path_train = os.path.join(data_dir, 'train_clean.parquet') path_test = os.path.join(data_dir, 'test_clean.parquet') engine = 'pyarrow' df_train = pd.read_parquet(path_train, engine) df_test = pd.read_parquet(path_test, engine) print('train dimension: ', df_train.shape) print('test dimension: ', df_test.shape) df_train.head() config_path = os.path.join('config', 'gbt_training_template.json') with open(config_path) as f: config_file = json.load(f) config_file # extract settings from the configuration file into local variables columns = config_file['columns'] num_cols = columns['num_cols_pattern'] cat_cols = columns['cat_cols_pattern'] id_cols = columns['id_cols'] label_col = columns['label_col'] weights_col = columns['weights_col'] model_task = config_file['model_task'] model_type = config_file['model_type'] model_parameters = config_file['model_parameters'][model_type] model_hyper_parameters = config_file['model_hyper_parameters'][model_type] model_fit_parameters = config_file['model_fit_parameters'][model_type] search_parameters = config_file['search_parameters'] df_train = df_train[df_train[label_col] != 0].reset_index(drop=True) mask = df_train['Date'] == df_train['Date'].iloc[len(df_test)] val_index = df_train.loc[mask, 'Date'].index.max() val_index val_fold = np.full(df_train.shape[0], fill_value=-1) val_fold[:(val_index + 1)] = 0 val_fold df_train['val_fold'] = val_fold df_train[(val_index - 2):(val_index + 2)] # the model id is used as the indicator when saving the model model_id = 'gbt' input_cols = num_cols + cat_cols df_train = df_train[input_cols + [label_col]] # we will perform the modeling at the log-scale df_train[label_col] = np.log(df_train[label_col]) df_test = df_test[input_cols + id_cols] print('train dimension: ', df_train.shape) print('test dimension: ', df_test.shape) df_train.head() for cat_col in cat_cols: df_train[cat_col] = df_train[cat_col].astype('category') df_test[cat_col] = df_test[cat_col].astype('category') df_train.head() from gbt_module.model import GBTPipeline model = GBTPipeline(input_cols, cat_cols, label_col, weights_col, model_task, model_id, model_type, model_parameters, model_hyper_parameters, search_parameters) model start = time.time() model.fit(df_train, val_fold, model_fit_parameters) elapsed = time.time() - start print('elapsed minutes: ', elapsed / 60) pd.DataFrame(model.model_tuned_.cv_results_) # we logged our label, remember to exponentiate it back to the original scale prediction_test = model.predict(df_test[input_cols]) df_test[label_col] = np.exp(prediction_test) submission_cols = id_cols + [label_col] df_test[submission_cols] = df_test[submission_cols].astype('int') submission_dir = 'submission' if not os.path.isdir(submission_dir): os.makedirs(submission_dir, exist_ok=True) submission_file = 'rossmann_submission_{}.csv'.format(model_id) submission_path = os.path.join(submission_dir, submission_file) df_test[submission_cols].to_csv(submission_path, index=False) df_test[submission_cols].head() model.get_feature_importance() model_checkpoint = os.path.join('models', model_id + '.pkl') model.save(model_checkpoint) loaded_model = GBTPipeline.load(model_checkpoint) # print the cv_results_ again to ensure the checkpointing works pd.DataFrame(loaded_model.model_tuned_.cv_results_) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Rossmann GBT Modeling Step2: We've pulled most of our configurable parameters outside into a json configuration file. In the ideal scenario, we can move all of our code into a python script and only change the configuration file to experiment with different types of settings to see which one leads to the best overall performance. Step3: Here, we will remove all records where the store had zero sales / was closed (feel free to experiment with not excluding the zero sales records and see if it improves performance) Step4: The validation fold we're creating is used for sklearn's PredefinedSplit, where we set the index to 0 for all samples that are part of the validation set, and to -1 for all other samples. Step5: Here, we assign the validation fold back to the original dataframe to illustrate the point; this is technically not required for the rest of the pipeline. Notice in the dataframe that we've printed out that the last record's date, 2015-06-18, is different from the rest, and that record's val_fold takes on a value of -1. This means that all records including/after the date 2015-06-19 will become our validation set. Step6: We proceed to extracting the necessary columns, both numerical and categorical, that we'll use for modeling. Step7: Model Training
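Step4's PredefinedSplit convention, shown on a toy array so the -1/0 coding is concrete (the sizes here are illustrative):

import numpy as np
from sklearn.model_selection import PredefinedSplit

val_fold = np.full(10, -1)
val_fold[:4] = 0                      # first four rows form the validation set
ps = PredefinedSplit(test_fold=val_fold)
for train_idx, val_idx in ps.split():
    print(train_idx, val_idx)         # one split: rows 4-9 train, rows 0-3 validate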
3,970
<ASSISTANT_TASK:> Python Code: document = 'Lorem Ipsum dolor sit amet' # shingle and discard the last 5 as these are just the last n<5 characters from the document shingles = [document[i:i+5] for i in range(len(document))][:-5] shingles other_document = 'Lorem Ipsum dolor sit amet is how dummy text starts' # shingle and discard the last 5 as these are just the last n<5 characters from the document other_shingles = [other_document[i:i+5] for i in range(len(other_document))][:-5] # Jaccard distance is the size of set intersection divided by the size of set union len(set(shingles) & set(other_shingles)) / len(set(shingles) | set(other_shingles)) from lsh import minhash for _ in range(5): hasher = minhash.MinHasher(seeds=100, char_ngram=5) fingerprint0 = hasher.fingerprint('Lorem Ipsum dolor sit amet'.encode('utf8')) fingerprint1 = hasher.fingerprint('Lorem Ipsum dolor sit amet is how dummy text starts'.encode('utf8')) print(sum(fingerprint0[i] in fingerprint1 for i in range(hasher.num_seeds)) / hasher.num_seeds) for _ in range(5): hasher = minhash.MinHasher(seeds=1000, char_ngram=5) fingerprint0 = hasher.fingerprint('Lorem Ipsum dolor sit amet'.encode('utf8')) fingerprint1 = hasher.fingerprint('Lorem Ipsum dolor sit amet is how dummy text starts'.encode('utf8')) print(sum(fingerprint0[i] in fingerprint1 for i in range(hasher.num_seeds)) / hasher.num_seeds) %matplotlib inline import numpy as np import pandas as pd import seaborn as sns from matplotlib import pyplot as plt ix = pd.IndexSlice df = pd.DataFrame(data=[(2, 50), (50, 2), (10, 10), (5, 20), (20, 5)], columns=['pieces', 'size']) df['hashes'] = df['pieces'] * df['size'] for pr in np.linspace(0, 1, 200): df[pr] = 1 - (1 - pr**df['size']) ** df['pieces'] df = pd.pivot_table(df, index=['hashes', 'pieces', 'size']) ax = df.T.plot(figsize=(10, 7), title='Probability of LSH finding a candidate pair'); plt.ylabel('p(candidate | Jaccad)'); plt.xlabel('Jaccard similarity'); plt.legend(list(df.loc[ix[100]].index), bbox_to_anchor=(1., 1, 1., 0), loc='upper left', fontsize=12, ncol=1, borderaxespad=0., title='Each line shows the\nfingerprint chopped\ninto (pieces, size)\n'); !wc -l ../data/rcv1/headline.text.txt !head -1 ../data/rcv1/headline.text.txt import itertools from lsh import cache, minhash # https://github.com/mattilyra/lsh # a pure python shingling function that will be used in comparing # LSH to true Jaccard similarities def shingles(text, char_ngram=5): return set(text[head:head + char_ngram] for head in range(0, len(text) - char_ngram)) def jaccard(set_a, set_b): intersection = set_a & set_b union = set_a | set_b return len(intersection) / len(union) def candidate_duplicates(document_feed, char_ngram=5, seeds=100, bands=5, hashbytes=4): char_ngram = 5 sims = [] hasher = minhash.MinHasher(seeds=seeds, char_ngram=char_ngram, hashbytes=hashbytes) if seeds % bands != 0: raise ValueError('Seeds has to be a multiple of bands. 
{} % {} != 0'.format(seeds, bands)) lshcache = cache.Cache(num_bands=bands, hasher=hasher) for line in document_feed: line = line.decode('utf8') docid, headline_text = line.split('\t', 1) fingerprint = hasher.fingerprint(headline_text.encode('utf8')) # in addition to storing the fingerpring store the line # number and document ID to help analysis later on lshcache.add_fingerprint(fingerprint, doc_id=docid) candidate_pairs = set() for b in lshcache.bins: for bucket_id in b: if len(b[bucket_id]) > 1: pairs_ = set(itertools.combinations(b[bucket_id], r=2)) candidate_pairs.update(pairs_) return candidate_pairs hasher = minhash.MinHasher(seeds=100, char_ngram=5, hashbytes=4) lshcache = cache.Cache(bands=10, hasher=hasher) # read in the data file and add the first 100 documents to the LSH cache with open('/usr/local/scratch/data/rcv1/headline.text.txt', 'rb') as fh: feed = itertools.islice(fh, 100) for line in feed: docid, articletext = line.decode('utf8').split('\t', 1) lshcache.add_fingerprint(hasher.fingerprint(line), docid) # for every bucket in the LSH cache get the candidate duplicates candidate_pairs = set() for b in lshcache.bins: for bucket_id in b: if len(b[bucket_id]) > 1: # if the bucket contains more than a single document pairs_ = set(itertools.combinations(b[bucket_id], r=2)) candidate_pairs.update(pairs_) candidate_pairs num_candidates = [] bands = [2, 5, 10, 20] for num_bands in bands: with open('/usr/local/scratch/data/rcv1/headline.text.txt', 'rb') as fh: feed = itertools.islice(fh, 1000) candidates = candidate_duplicates(feed, char_ngram=5, seeds=100, bands=num_bands, hashbytes=4) num_candidates.append(len(candidates)) fig, ax = plt.subplots(figsize=(8, 6)) plt.bar(bands, num_candidates, align='center'); plt.title('Number of candidate duplicate pairs found by LSH using 100 minhash fingerprint.'); plt.xlabel('Number of bands'); plt.ylabel('Number of candidate duplicates'); plt.xticks(bands, bands); def candidate_duplicates(document_feed, char_ngram=5, seeds=100, bands=5, hashbytes=4): char_ngram = 5 sims = [] hasher = minhash.MinHasher(seeds=seeds, char_ngram=char_ngram, hashbytes=hashbytes) if seeds % bands != 0: raise ValueError('Seeds has to be a multiple of bands. 
{} % {} != 0'.format(seeds, bands)) lshcache = cache.Cache(num_bands=bands, hasher=hasher) for i_line, line in enumerate(document_feed): line = line.decode('utf8') docid, headline_text = line.split('\t', 1) fingerprint = hasher.fingerprint(headline_text.encode('utf8')) # in addition to storing the fingerpring store the line # number and document ID to help analysis later on lshcache.add_fingerprint(fingerprint, doc_id=(i_line, docid)) candidate_pairs = set() for b in lshcache.bins: for bucket_id in b: if len(b[bucket_id]) > 1: pairs_ = set(itertools.combinations(b[bucket_id], r=2)) candidate_pairs.update(pairs_) return candidate_pairs lines = [] with open('/usr/local/scratch/data/rcv1/headline.text.txt', 'rb') as fh: # read the first 1000 lines into memory so we can compare them for line in itertools.islice(fh, 1000): lines.append(line.decode('utf8')) # reset file pointer and do LSH fh.seek(0) feed = itertools.islice(fh, 1000) candidates = candidate_duplicates(feed, char_ngram=5, seeds=100, bands=20, hashbytes=4) # go over all the generated candidates comparing their similarities similarities = [] for ((line_a, docid_a), (line_b, docid_b)) in candidates: doc_a, doc_b = lines[line_a], lines[line_b] shingles_a = shingles(lines[line_a]) shingles_b = shingles(lines[line_b]) jaccard_sim = jaccard(shingles_a, shingles_b) fingerprint_a = set(hasher.fingerprint(doc_a.encode('utf8'))) fingerprint_b = set(hasher.fingerprint(doc_b.encode('utf8'))) minhash_sim = len(fingerprint_a & fingerprint_b) / len(fingerprint_a | fingerprint_b) similarities.append((docid_a, docid_b, jaccard_sim, minhash_sim)) import random print('There are {} candidate duplicates in total'.format(len(candidates))) random.sample(similarities, k=15) sims_all = np.zeros((1000, 1000), dtype=np.float64) for i, line in enumerate(lines): for j in range(i+1, len(lines)): shingles_a = shingles(lines[i]) shingles_b = shingles(lines[j]) jaccard_sim = jaccard(shingles_a, shingles_b) # similarities are symmetric so we only care about the # upper diagonal here and leave (j, i) to be 0 sims_all[i, j] = jaccard_sim # turn the candidates into a dictionary so we have easy access to # candidates pairs that were found candidates_dict = {(line_a, line_b): (docid_a, docid_b) for ((line_a, docid_a), (line_b, docid_b)) in candidates} found = 0 for i in range(len(lines)): for j in range(i+1, len(lines)): if sims_all[i, j] >= .9: # documents i and j have an actual Jaccard similarity >= 90% found += ((i, j) in candidates_dict or (j, i) in candidates_dict) print('Out of {} pairs with similarity >= 90% {} were found, that\'s {:.1%}'.format((sims_all >= .9).sum(), found, found / (sims_all >= .9).sum())) # preprocess RCV1 to be contained in a single file import glob, zipfile, re import xml.etree.ElementTree as ET files = glob.glob('../data/rcv1/xml/*.zip') with open('../data/rcv1/headline.text.txt', 'wb') as out: for f in files: zf = zipfile.ZipFile(f) for zi in zf.namelist(): fh = zf.open(zi, 'r') root = ET.fromstring(fh.read().decode('latin-1')) itemid = root.attrib['itemid'] headline = root.find('./headline').text text = ' '.join(root.find('./text').itertext()) text = re.sub('\s+', ' ', text) out.write(('{}\t{} {}\n'.format(itemid, headline, text)).encode('utf8')) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: As we can see these two documents are not very similar, at least in terms of their character 5-gram shingle Jaccard similarity. That aside, the problem with these shingles is that they do not allow us to compute the similarities of large numbers of documents very easily; we have to do an all-pairs comparison. To get around that we can use locality sensitive hashing, but before LSH we'll turn the documents into a more manageable and uniform representation. Step2: Increasing the length of the fingerprint from $k=100$ to $k=1000$ reduces the variance between random initialisations of the minhasher. Step3: Increasing the fingerprint length however comes at the cost of increased memory usage and more time spent computing the minhashes. For a collection of documents we are still left with comparing all pairs; when that collection grows larger this becomes a very real problem. Cue LSH. Step4: <a id='bands_rows'>How setting LSH parameters affects finding similar documents</a> Step5: The figure shows the probability that LSH with minhash will "find" a pair of similar documents (y-axis) given the Jaccard similarity (x-axis) of those documents for different settings for LSH. Each of the five lines corresponds to a different setting; the number of hashes is always 100, so we're just changing the number of pieces to chop each fingerprint into (and the size of those pieces, although that becomes determined by setting the number of hashes). Step6: Some duplicate items are present in the corpus, so let's see what happens when we apply LSH to it. First a helper function that takes a file pointer and some parameters for minhash and LSH and then finds duplicates. Step7: Then run through some data, adding documents to the LSH cache Step8: candidate_pairs now contains a bunch of document IDs that may be duplicates of each other Step9: Now let's run LSH on a few different parameter settings and see what the results look like. To save some time I'm only using the first 1000 documents. Step10: So the more promiscuous version (20 bands per fingerprint) finds many more candidate pairs than the conservative 2-band model. The first implication of this difference is that it leads to you having to do more comparisons to find the real duplicates. Let's see what that looks like in practice. Step11: So LSH with 20 bands indeed finds a lot of candidate duplicates (111 out of 1000), some of which - for instance (3256, 3186) above - are not all that similar. Let's see how many LSH missed given some similarity threshold. Step12: That seems pretty well in line with the <a href="#bands_rows">figure</a> showing how setting bands and rows affects the probability of finding similar documents. So we're doing quite well in terms of the true positives; what about the false positives? 27 pairs of documents from the ones found were true positives, so the rest are false positives. Since LSH found 110 document pairs in total, $110-27 = 83$ pairs were incorrect; that's 83 pairs that were checked in vain, in comparison to the 499000 pairs we would have had to go through for an all-pairs comparison.
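The s-curve plotted in Step5 follows directly from the banding argument; here is a short, self-contained sketch of that probability, with illustrative band/row settings (b bands of r rows each, b*r = 100 hashes).

def candidate_probability(s, bands, rows):
    # one band of `rows` minhashes collides with probability s**rows;
    # a pair becomes a candidate if at least one of the `bands` bands collides
    return 1 - (1 - s**rows) ** bands

for s in (0.5, 0.7, 0.9):
    print(s, candidate_probability(s, bands=20, rows=5))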
3,971
<ASSISTANT_TASK:> Python Code: %matplotlib inline from __future__ import print_function import numpy as np import statsmodels.api as sm import pandas as pd from statsmodels.tsa.arima_process import arma_generate_sample np.random.seed(12345) arparams = np.array([.75, -.25]) maparams = np.array([.65, .35]) arparams = np.r_[1, -arparams] maparams = np.r_[1, maparams] nobs = 250 y = arma_generate_sample(arparams, maparams, nobs) dates = sm.tsa.datetools.dates_from_range('1980m1', length=nobs) y = pd.TimeSeries(y, index=dates) arma_mod = sm.tsa.ARMA(y, order=(2,2)) arma_res = arma_mod.fit(trend='nc', disp=-1) print(arma_res.summary()) y.tail() import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=(10,8)) fig = arma_res.plot_predict(start='1999m6', end='2001m5', ax=ax) legend = ax.legend(loc='upper left') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Generate some data from an ARMA process Step2: The conventions of the arma_generate_sample function require that we specify a 1 for the zero-lag of the AR and MA parameters and that the AR parameters be negated. Step3: Now, optionally, we can add some dates information. For this example, we'll use a pandas time series.
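As a rough plain-numpy illustration of what arma_generate_sample does (ignoring statsmodels' exact initialization and burn-in handling), the ARMA(2,2) recursion can be simulated directly with the same parameter values as above:

import numpy as np

def simulate_arma22(ar, ma, nobs, burnin=100, seed=0):
    # ar, ma in plain form: y_t = ar[0]*y_{t-1} + ar[1]*y_{t-2}
    #                             + e_t + ma[0]*e_{t-1} + ma[1]*e_{t-2}
    rng = np.random.RandomState(seed)
    e = rng.standard_normal(nobs + burnin)
    y = np.zeros(nobs + burnin)
    for t in range(2, nobs + burnin):
        y[t] = (ar[0]*y[t-1] + ar[1]*y[t-2]
                + e[t] + ma[0]*e[t-1] + ma[1]*e[t-2])
    return y[burnin:]  # drop the burn-in so the start-up transient is gone

y_sim = simulate_arma22([.75, -.25], [.65, .35], nobs=250)
print(y_sim[:5])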
3,972
<ASSISTANT_TASK:> Python Code: from rmgpy.chemkin import * from rmgpy.tools.canteraModel import * from IPython.display import display, Image speciesList, reactionList = loadChemkinFile('data/minimal_model/chem_annotated.inp', 'data/minimal_model/species_dictionary.txt', 'data/minimal_model/tran.dat') # Find the species: ethane speciesDict = getRMGSpeciesFromSMILES(['CC'], speciesList) ethane = speciesDict['CC'] reactorType = 'IdealGasReactor' molFracList=[{ethane: 1}] Tlist = ([1300,1500,2000],'K') Plist = ([1],'atm') reactionTime = (0.5, 'ms') # Create cantera object, loading in the species and reactions job = Cantera(speciesList=speciesList, reactionList=reactionList, outputDirectory='temp') # The cantera file must be created from an associated chemkin file # We can either load the Model from the initialized set of rmg species and reactions job.loadModel() # Or load it from a chemkin file by uncommenting the following line: #job.loadChemkinModel('data/minimal_model/chem_annotated.inp',transportFile='data/minimal_model/tran.dat') # Generate the conditions based on the settings we declared earlier job.generateConditions('IdealGasReactor', reactionTime, molFracList, Tlist, Plist) # Simulate and plot alldata = job.simulate() job.plot(alldata) # We can view the cantera model Solution's species and reactions ctSpecies = job.model.species() ctReactions = job.model.reactions() # We can view a cantera species or reaction object from this ct_ethane = ctSpecies[4] ct_rxn = ctReactions[0] print ct_ethane print ct_rxn # We can also do things like modifying the cantera species thermo and reaction kinetics through modifying the # RMG objects first, then using the `modifyReactionKinetics` or `modifySpeciesThermo` functions # Alter the RMG objects in place, lets pick ethane and the first reaction rmg_ethane = speciesDict['CC'] rmg_ethane.thermo.changeBaseEnthalpy(2*4184) # Change base enthalpy by 2 kcal/mol rmg_rxn = reactionList[0] rmg_rxn.kinetics.changeRate(4) # Change A factor by multiplying by a factor of 4 # Take a look at the state of the cantera model before and after print 'Cantera Model: Before' ctSpecies = job.model.species() ctReactions = job.model.reactions() print 'Ethane Thermo = {} kcal/mol'.format(ctSpecies[4].thermo.h(300)/1000/4184) print 'Reaction 1 Kinetics = {}'.format(ctReactions[0].rate) # Now use the altered RMG objects to modify the kinetics and thermo job.modifyReactionKinetics(0, rmg_rxn) job.modifySpeciesThermo(4, rmg_ethane) # If we modify thermo, the cantera model must be refreshed. If only kinetics are modified, this does not need to be done. job.refreshModel() print '' print 'Cantera Model: After' ctSpecies = job.model.species() ctReactions = job.model.reactions() print 'Ethane Thermo = {} kcal/mol'.format(ctSpecies[4].thermo.h(300)/1000/4184) print 'Reaction 1 Kinetics = {}'.format(ctReactions[0].rate) # Show the plots in the ipython notebook for i, condition in enumerate(job.conditions): print 'Condition {0}'.format(i+1) display(Image(filename="temp/{0}_mole_fractions.png".format(i+1))) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Load the species and reactions from the RMG-generated chemkin file chem_annotated.inp and the species_dictionary.txt file found in your chemkin folder after running a job. Step2: Set a few conditions for how to react the system
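The kinetics edit above (rmg_rxn.kinetics.changeRate(4), i.e. multiplying the A factor by 4) scales the rate coefficient uniformly at every temperature. A generic numpy sketch of that effect on a modified Arrhenius rate k(T) = A * T**n * exp(-Ea/(R*T)); the parameter values are made up, not taken from the RMG model:

import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def rate(T, A, n, Ea):
    return A * T**n * np.exp(-Ea / (R * T))

T = 1000.0                     # K
A, n, Ea = 1.0e13, 0.0, 150e3  # illustrative Arrhenius parameters
print(rate(T, A, n, Ea))       # original rate coefficient
print(rate(T, 4 * A, n, Ea))   # exactly 4x larger after scaling A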
3,973
<ASSISTANT_TASK:> Python Code: import numpy as np import numpy.random as rand import matplotlib.pyplot as plt %matplotlib inline mu1, mu2, mu3 = [15,20], [24,25], [38,40] cov = [[10, 0], [0, 10]] n_samples = 5000 data1 = rand.multivariate_normal(mu1, cov, n_samples) data2 = rand.multivariate_normal(mu2, cov, n_samples) data3 = rand.multivariate_normal(mu3, cov, n_samples) data = np.vstack((data1, data2, data3)) plt.axis('equal') plt.plot(data1[:,0], data1[:,1], '^b', label='Class_1') plt.plot(data2[:,0], data2[:,1], 'sr', label='Class_2') plt.plot(data3[:,0], data3[:,1], 'ok', label='Class_3') plt.title('Original samples') plt.legend(loc='best') mu1 = np.mean(data1, 0) mu2 = np.mean(data2, 0) mu3 = np.mean(data3, 0) mu = np.mean(data, 0) s1 = np.outer(mu1-mu, mu1-mu)*data1.shape[0] s2 = np.outer(mu2-mu, mu2-mu)*data2.shape[0] s3 = np.outer(mu3-mu, mu3-mu)*data3.shape[0] S_b = s1 + s2 + s3 def compute_within_scatter_matrix(data, mu): '''Compute the within-class scatter matrix for a given class :param data: a numpy matrix of (n_samples, n_sample_dimensions) :param mu: a list of n_sample_dimensions''' matrix = np.zeros((data.shape[1], data.shape[1])) spread = data - mu for s in range(spread.shape[0]): matrix += np.outer(spread[s,:], spread[s,:]) return matrix s1 = compute_within_scatter_matrix(data1, mu1) s2 = compute_within_scatter_matrix(data2, mu2) s3 = compute_within_scatter_matrix(data3, mu3) S_w = s1 + s2 + s3 eig_vals, eig_vecs = np.linalg.eig(np.linalg.inv(S_w).dot(S_b)) for eig_idx, eig_val in enumerate(eig_vals): print('Eigvector #{}: {} (Eigvalue:{:.3f})'.format(eig_idx, eig_vecs[:, eig_idx], eig_val)) S = np.linalg.inv(S_w).dot(S_b) for eig_idx, eig_val in enumerate(eig_vals): eig_vec = eig_vecs[:, eig_idx] np.testing.assert_array_almost_equal(S.dot(eig_vec), eig_val*eig_vec, decimal=6, err_msg='', verbose=True) eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))] eig_pairs = sorted( eig_pairs, key=lambda x:x[0], reverse=True) eigv_sum = sum(eig_vals) for eig_val, eig_vec in eig_pairs: print('Eigvector: {} (Eigvalue:\t{:.3f},\t{:.2%} variance explained)'.format(eig_vec, eig_val, (eig_val/eigv_sum))) W = eig_pairs[0][1] print('Matrix W:\n', W.real) X1_fda = W.dot(data1.T) X2_fda = W.dot(data2.T) X3_fda = W.dot(data3.T) slope = W[1]/W[0] Y1_fda = slope * X1_fda Y2_fda = slope * X2_fda Y3_fda = slope * X3_fda plt.axis('equal') plt.plot(X1_fda, Y1_fda, '^b', label='Class_1') plt.plot(X2_fda, Y2_fda, 'sr', label='Class_2') plt.plot(X3_fda, Y3_fda, 'ok', label='Class_3') plt.title('Projected samples') plt.legend(loc='best') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Compute mean values Step2: Compute the between-class scatter matrix $S_b$ Step4: Compute the within-class scatter matrix $S_w$ Step5: Solve the generalized eigenvalue problem for the matrix $S_{w}^{-1}S_{b}$ Step6: Double-check the computed eigenvectors and eigenvalues Step7: Sort the eigenvectors by decreasing eigenvalues Step8: If we take a look at the eigenvalues, we can already see that the second eigenvalue is much smaller than the first one. Step9: Transforming the samples onto the new space Step10: Now the transformed samples are scalar values. They are essentially the projection of the original data samples onto the selected eigenvector, which corresponds to a straight line. To better visualize the projection, we visualize the transformed samples on the straight line in the original 2-dimensional space.
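The per-sample loop in compute_within_scatter_matrix can be replaced by a single matrix product, since the sum of outer products of the centered rows equals spread.T @ spread; a small sketch with illustrative data:

import numpy as np

rng = np.random.RandomState(0)
X_class = rng.multivariate_normal([15, 20], [[10, 0], [0, 10]], size=100)
mu = X_class.mean(axis=0)

spread = X_class - mu
S_w_class = spread.T @ spread  # identical to summing np.outer over the rows
print(S_w_class)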
3,974
<ASSISTANT_TASK:> Python Code: import numpy as np import pandas as pd import graphviz from sklearn.linear_model import LinearRegression, LogisticRegression import lingam from lingam.utils import make_dot, remove_effect, predict_adaptive_lasso from IPython.display import display_svg, SVG print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__]) # Output setting np.set_printoptions(precision=3, suppress=True) np.random.seed(0) # Dataset to be analyzed input_file = 'data1.csv' # Indicate which variable belong to which group set1_labels = ['x1_1', 'x1_2'] set2_labels = ['x2_1', 'x2_2', 'x2_3', 'x2_4', 'x2_5'] set3_labels = ['x3_1', 'x3_2', 'x3_3'] labels = set1_labels + set2_labels + set3_labels X = pd.read_csv(input_file) # Check if the third group includes a binary data contain_bin_var_in_set3 = False for set3_label in set3_labels: if len(np.unique(X[set3_label])) == 2: X[set3_label] = X[set3_label].astype(int) contain_bin_var_in_set3 = True if contain_bin_var_in_set3: print('\x1b[31mThe third group includes a binary variable.\x1b[0m') X.head() # Obtain the column numbers of the variables of the first group set1_indices = [X.columns.get_loc(label) for label in set1_labels] # Compute the residual when each variable of the second group on all the variables of the first group X_removed_set1 = remove_effect(X, set1_indices) # Create the residual dataset for the second group by the residuals computed just above set2_indices = [X.columns.get_loc(label) for label in set2_labels] X2_resid = X_removed_set1[:, set2_indices] set2_model = lingam.DirectLiNGAM() set2_model.fit(X2_resid) make_dot(set2_model.adjacency_matrix_, labels=set2_labels) if contain_bin_var_in_set3: print('\x1b[31mSkip because the third group includes a binary variable\x1b[0m') else: # Obtain column numbers of the variables of the first and second groups set12_indices = [X.columns.get_loc(label) for label in set1_labels+set2_labels] # Regress each variable of the third group on all the variables of the first and second groups X_removed_set12 = remove_effect(X, set12_indices) # Create the residual dataset for the third group by the residuals computed just above set3_indices = [X.columns.get_loc(label) for label in set3_labels] X3_resid = X_removed_set12[:, set3_indices] if contain_bin_var_in_set3: print('\x1b[31mSkip because the third group includes a binary variableスキップします\x1b[0m') else: set3_model = lingam.DirectLiNGAM() set3_model.fit(X3_resid) g = make_dot(set3_model.adjacency_matrix_, labels=set3_labels) display_svg(SVG(g._repr_svg_())) if contain_bin_var_in_set3: print('\x1b[31mSkip because the third group includes a binary variable\x1b[0m') else: # Adjucency matrix for the all the variables adj_matrix = np.zeros([X.shape[1], X.shape[1]]) # Update the adjuncy matrix using the causal graph estimated for the second group set2_start_pos = len(set1_labels) set2_end_pos = set2_start_pos + len(set2_labels) adj_matrix[set2_start_pos:set2_end_pos, set2_start_pos:set2_end_pos] = set2_model.adjacency_matrix_ # Update the adjuncy matrix using the causal graph estimated for the third group set3_start_pos = len(set1_labels) + len(set2_labels) set3_end_pos = set3_start_pos + len(set3_labels) adj_matrix[set3_start_pos:set3_end_pos, set3_start_pos:set3_end_pos] = set3_model.adjacency_matrix_ # Compute the connection strengths from each variable of the second group to that of the third for i, idx in enumerate(set3_indices): # Obtain parents of each variable of the third group set3_parents = 
np.where(np.abs(set3_model.adjacency_matrix_[i]) > 0)[0] set3_parents = [X.columns.get_loc(set3_labels[idx]) for idx in set3_parents] # Create the set of explanatory variables predictors = [] predictors.extend(set2_indices) # All the variables of the second group predictors.extend(set3_parents) # Parents in the third group # Pruning coefs = predict_adaptive_lasso(X_removed_set1, predictors, idx) adj_matrix[idx, set2_start_pos:set2_end_pos] = coefs[:len(set2_indices)] # Remove a part of the adjacency matrix corresponding the variables of the first group adj_matrix_set23 = adj_matrix[set2_start_pos:, set2_start_pos:] g = make_dot(adj_matrix_set23, labels=set2_labels+set3_labels) display_svg(SVG(g._repr_svg_())) # Compute causal effects from each variable of the second group to that of the third for set3_label in set3_labels: for set2_label in set2_labels: # Create the variable index var2_index = X.columns.get_loc(set2_label) var3_index = X.columns.get_loc(set3_label) # Obtain parents of each variable of the second group parents = np.where(np.abs(set2_model.adjacency_matrix_[set2_labels.index(set2_label)]) > 0)[0] parents = [X.columns.get_loc(set2_labels[idx]) for idx in parents] # Create the set of explanatory variables predictors = [var2_index] predictors.extend(parents) predictors.extend(set1_indices) # If all the variables of the third group are continuous, peform linear regression # If they are binary, perform logistic regression if len(np.unique(X[set3_label])) != 2: lr = LinearRegression() lr.fit(X.iloc[:, predictors], X.iloc[:, var3_index]) effect = lr.coef_[0] else: lr = LogisticRegression(solver='liblinear') lr.fit(X.iloc[:, predictors], X.iloc[:, var3_index]) X_intervened = X.copy() X_intervened.iloc[:, var2_index] = X.iloc[:, var2_index].mean() # do(x=E(x)) p1 = lr.predict_proba(X_intervened.iloc[:, predictors]) X_intervened.iloc[:, var2_index] = X.iloc[:, var2_index].mean() + 1 # do(x=E(x)+1) p2 = lr.predict_proba(X_intervened.iloc[:, predictors]) effect = p2[:, 1].mean() - p1[:, 1].mean() # The difference btw the two averages print(f'{set2_label} ---> {set3_label} : {effect:.3f}') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Indicate which variables belong to which group Step2: Load data Step3: Compute the residual when regressing each variable of the second group on all the variables of the first group Step4: Perform LiNGAM on the residual dataset for the second group and draw the estimated causal graph Step5: Regress each variable of the third group on all the variables of the first and second groups Step6: Perform LiNGAM on the residual dataset for the third group and draw the estimated causal graph Step7: Combine the causal graphs for the second and third groups and draw the result Step8: Compute causal effects from each variable of the second group to each variable of the third
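For Step3, remove_effect presumably regresses each remaining variable on the conditioning variables and keeps the residuals; a sketch under that assumption (not lingam's actual implementation), using ordinary least squares:

import numpy as np
from sklearn.linear_model import LinearRegression

def residualize(X, cond_cols):
    # regress every non-conditioning column on X[:, cond_cols], keep residuals
    X = np.asarray(X, dtype=float)
    out = X.copy()
    rest = [j for j in range(X.shape[1]) if j not in cond_cols]
    for j in rest:
        lr = LinearRegression().fit(X[:, cond_cols], X[:, j])
        out[:, j] = X[:, j] - lr.predict(X[:, cond_cols])
    return out

rng = np.random.RandomState(0)
X_demo = rng.randn(200, 4)  # illustrative data
print(residualize(X_demo, cond_cols=[0, 1])[:2])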
3,975
<ASSISTANT_TASK:> Python Code: # directory where you want the spacer blasting to be done ## CHANGE THIS! workDir = "/home/nyoungb2/t/CLdb_Ecoli/DR_consensus/" import os from IPython.display import FileLinks %load_ext rpy2.ipython if not os.path.isdir(workDir): os.makedirs(workDir) # checking that CLdb is in $PATH & ~/.CLdb config file is set up !CLdb --config-params !CLdb -- loadDRConsensus !CLdb -- DRconsensus2fasta -h # writing out the consensus sequences !cd $workDir; \ CLdb -- DRconsensus2fasta > DR_consensus.fna # checking output !cd $workDir; \ head -n 6 DR_consensus.fna !cd $workDir; \ mafft --adjustdirection DR_consensus.fna > DR_consensus_aln.fna !cd $workDir; \ echo "#-------#"; \ head -n 6 DR_consensus_aln.fna %%R library(ape) %%R -i workDir inFile = file.path(workDir, 'DR_consensus_aln.fna') seqs = read.dna(inFile, format='fasta') seqs.dist = dist.dna(seqs) plot(hclust(seqs.dist), main='Hierarchical clustering dendrogram') plot(nj(seqs.dist), main='Neighbor-joining tree') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Init Step2: Creating DR consensus seqs & loading into CLdb Step3: That's it! Now, the CLdb.sqlite file contains the DR consensus sequences for each CRISPR locus Step4: Sequence naming is 'locus_ID'|'subtype' Step5: Tree inference
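The hierarchical clustering done above in R (dist.dna + hclust) could be approximated in Python; a rough sketch using a Hamming-style distance over equal-length aligned sequences with scipy. The toy sequences are illustrative, not actual CLdb consensus output.

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

seqs = ['GTTTTAGAGCTATGCT', 'GTTTTAGAGCTAGGCT', 'GTTGTAGCTCTATGCT']
arr = np.array([[ord(c) for c in s] for s in seqs])
dists = pdist(arr, metric=lambda a, b: np.mean(a != b))  # fraction of mismatches
Z = linkage(dists, method='average')
print(Z)
# dendrogram(Z) would draw the tree, analogous to plot(hclust(...)) in R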
3,976
<ASSISTANT_TASK:> Python Code: import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set(color_codes=True) np.random.seed(69572) %matplotlib inline %load_ext writeandexecute # plt.figure(figsize=(120,10)) small = (4,3) mid = (10, 8) large = (12, 8) from __future__ import absolute_import from IPython.core.getipython import get_ipython from IPython.core.magic import (Magics, magics_class, cell_magic) import sys from StringIO import StringIO from markdown import markdown from IPython.core.display import HTML @magics_class class MarkdownMagics(Magics): @cell_magic def asmarkdown(self, line, cell): buffer = StringIO() stdout = sys.stdout sys.stdout = buffer try: exec(cell, locals(), self.shell.user_ns) except: sys.stdout = stdout raise sys.stdout = stdout return HTML("<p>{}</p>".format(markdown(buffer.getvalue(), extensions=['markdown.extensions.extra']))) return buffer.getvalue() + 'test' get_ipython().register_magics(MarkdownMagics) def raw_markup_value_counts(dataframe, max_print_value_counts=30, show_plots=False): ''' prints value counts of each feature in data frame ''' mydf = pd.DataFrame.copy(dataframe) i = 0 raw_markup_data = [] pp = raw_markup_data.append pp('''|Col ID|Col Name|UniqCount|Col Values|UniqValCount|''') pp('''|------|--------|---------|----------|------------|''') for col in mydf.dtypes.index: i += 1 sam = mydf[col] tmp = len(sam.value_counts()) if tmp < max_print_value_counts: flag = True for key, val in dict(sam.value_counts()).iteritems(): if flag: pp('|%i|%s|%i|%s|%s|' % ( i, col, len(sam.value_counts()), key, val)) flag = False else: pp('||-|-|%s|%s|' % (key, val)) if show_plots: plt.figure(i) ax = sam.value_counts().plot(kind='barh', figsize=(12, 5)) _ = plt.title(col.upper()) _ = plt.xlabel('counts') else: pp('|%i|%s|%i|||' % (i, col, len(sam.value_counts()))) return raw_markup_data from __future__ import division import itertools from sklearn.metrics import confusion_matrix def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): '''This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`.''' plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, cm[i, j], horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') def confusion_maxtrix_stuff(y_test, y_pred, class_names): ''' Example >>> confusion_maxtrix_stuff(y_test, y_pred, class_names=RAW_y.status_group.value_counts().keys() ): ''' # Compute confusion matrix cnf_matrix = confusion_matrix(y_test, y_pred) np.set_printoptions(precision=2) # Plot non-normalized confusion matrix plt.figure(figsize=(8,8)) plot_confusion_matrix(cnf_matrix, classes=class_names, title='Confusion matrix, without normalization') # Plot normalized confusion matrix plt.figure(figsize=(8,8)) plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True, title='Normalized confusion matrix') plt.show() RAW_X = pd.read_csv('traning_set_values.csv', index_col='id') RAW_y = pd.read_csv('training_set_labels.csv', index_col='id') RAW_TEST_X = pd.read_csv('test_set_values.csv', index_col='id') from datetime import datetime strptime = datetime.strptime DATE_FORMAT = "%Y-%m-%d" REFERENCE_DATE_POINT = strptime('2000-01-01', DATE_FORMAT) # Reducing geo location precision to 11 meters LONG_LAT_PRECISION = 0.001 def sam_datetime_to_number(x): return (strptime(str(x), DATE_FORMAT) - REFERENCE_DATE_POINT).days # Transforming Date to Int. if RAW_X.date_recorded.dtype == 'O': RAW_X.date_recorded = RAW_X.date_recorded.map(sam_datetime_to_number) RAW_TEST_X.date_recorded = RAW_TEST_X.date_recorded.map(sam_datetime_to_number) # Filling Missing/OUTLIAR Values _ = np.mean(RAW_X[u'latitude'][RAW_X.latitude < -1.0].values) if not RAW_X.loc[RAW_X.latitude >= -1.0, u'latitude'].empty: RAW_X.loc[RAW_X.latitude >= -1.0, u'latitude'] = _ RAW_TEST_X.loc[RAW_TEST_X.latitude >= -1.0, u'latitude'] = _ # Filling Missing/OUTLIAR Values _ = np.mean(RAW_X[u'longitude'][RAW_X[u'longitude'] > 1.0].values) if not RAW_X.loc[RAW_X[u'longitude'] <= 1.0, u'longitude'].empty: RAW_X.loc[RAW_X[u'longitude'] <= 1.0, u'longitude'] = _ RAW_TEST_X.loc[RAW_TEST_X[u'longitude'] <= 1.0, u'longitude'] = _ # Reducing Precision of Lat. 
if RAW_X.longitude.mean() < 50: RAW_X.longitude = RAW_X.longitude // LONG_LAT_PRECISION RAW_X.latitude = RAW_X.latitude // LONG_LAT_PRECISION RAW_TEST_X.longitude = RAW_TEST_X.longitude // LONG_LAT_PRECISION RAW_TEST_X.latitude = RAW_TEST_X.latitude // LONG_LAT_PRECISION # Filling Missing/OUTLIAR Values if RAW_X.public_meeting.dtype != 'bool': RAW_X.public_meeting = RAW_X.public_meeting == True RAW_TEST_X.public_meeting = RAW_TEST_X.public_meeting == True if RAW_X.permit.dtype != 'bool': RAW_X.permit = RAW_X.permit == True RAW_TEST_X.permit = RAW_TEST_X.permit == True if list(RAW_TEST_X.dtypes[RAW_TEST_X.dtypes != RAW_X.dtypes]): raise Exception('RAW_X.dtypes and RAW_TEST_X.dtypes are not in Sync') def text_transformation(name): if name: name = name.lower().strip() name = ''.join([i if 96 < ord(i) < 128 else ' ' for i in name]) if 'and' in name: name = name.replace('and', ' ') if '/' in name: name = name.replace('/', ' ') while ' ' in name: name = name.replace(' ', ' ') return name.strip() return for col in RAW_X.dtypes[RAW_X.dtypes == object].index: aa = len(RAW_X[col].unique()) RAW_X[col] = RAW_X[col].fillna('').apply(text_transformation) RAW_TEST_X[col] = RAW_TEST_X[col].fillna('').apply(text_transformation) bb = len(RAW_X[col].unique()) if aa != bb: print col, aa, bb from collections import defaultdict from sklearn import preprocessing # http://stackoverflow.com/questions/24458645/label-encoding-across-multiple-columns-in-scikit-learn d = defaultdict(preprocessing.LabelEncoder) # Labels Fit sam = pd.concat([RAW_X, RAW_TEST_X]).apply(lambda x: d[x.name].fit(x)) # Labels Transform - Training Data X = RAW_X.apply(lambda x: d[x.name].transform(x)) TEST_X = RAW_TEST_X.apply(lambda x: d[x.name].transform(x)) le = preprocessing.LabelEncoder().fit(RAW_y[u'status_group']) y = le.transform(RAW_y[u'status_group']) # g = sns.PairGrid(X[:1000]) # g.map(plt.scatter); from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 test = SelectKBest(score_func=chi2, k=30) fit = test.fit(X, y) cols_names = RAW_X.columns np.set_printoptions(precision=2) print(fit.scores_), len(fit.scores_) col_importances = list(zip(fit.scores_, cols_names)) col_importances.sort(reverse=True) selected_cols = [_[-1] for _ in col_importances[:30] ] features = pd.DataFrame(fit.transform(X)) features.columns = selected_cols print len(X.columns), features.shape, len(y) X = pd.DataFrame(fit.transform(X)) TEST_X = pd.DataFrame(fit.transform(TEST_X)) X.columns = selected_cols TEST_X.columns = selected_cols from sklearn.decomposition import PCA # feature extraction pca = PCA(n_components=18) fit = pca.fit(X) plt.scatter (range(len(fit.explained_variance_ratio_)), fit.explained_variance_ratio_.cumsum()) X = pca.transform(X) TEST_X = pca.transform(TEST_X) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42, stratify=y) # X_train, X_test, y_train, y_test = train_test_split(features, y, test_size=0.25, random_state=42, stratify=y) from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=100, class_weight="balanced_subsample", n_jobs=-1) # class_weight="balanced_subsample"/"balanced" # criterion="gini"/"entropy" clf clf = clf.fit(X_train, y_train) pred = clf.predict_proba(X_test) clf.score(X_test, y_test) # 0.79303132333435367 # 0.80252525252525253 # 0.80303030303030298 # 0.80345117845117842 # 0.79814814814814816 # (n_estimators=100, class_weight="balanced_subsample", n_jobs=-1) 
0.80782828282828278 # (n_estimators=100, class_weight="balanced_subsample", n_jobs=-1) 0.81186868686868685 clf? plt.figure(figsize=(12, 3)) # making importance relative a, b = min(clf.feature_importances_), max(clf.feature_importances_) cols_imp = (clf.feature_importances_ - a) /b _ = plt.scatter(range(30), cols_imp) _ = plt.plot((0, 29), (0.05,0.05), '-r') _ = plt.xlabel('Columns') _ = plt.ylabel('Relative Col Importance') from sklearn import metrics print map(lambda x: len(x), [X_test, y_test]) clf.score(X_test, y_test) # 0.79303132333435367 # 0.80252525252525253 # 0.80303030303030298 # 0.80345117845117842 print .79303132333435367 - 0.80345117845117842 print .8285 - 0.80345117845117842, .8285 - .79303132333435367 test_ids = RAW_TEST_X.index predictions = clf.predict(TEST_X) print (predictions.shape) predictions_labels = le.inverse_transform(predictions) # sub = pd.DataFrame(predictions, columns=list(le.classes_)) sub = pd.DataFrame(predictions_labels, columns=['status_group']) sub.head() sub.insert(0, 'id', test_ids) sub.reset_index() sub.to_csv('submit.csv', index = False) sub.head() X.shape <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Custom Functions Step2: DataFrame Value Counts Step4: Confusion Matrix Step5: Import & Explore Data Step6: Pre Processing Step7: Text Data Transformations Step8: Cols vs Uniq distribution Step9: Univariate Analysis Step10: PCA Step11: Test-Train Split Step12: Model Training Step13: Scoring Step14: XGBOOST
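As a standalone illustration of the row normalization performed inside plot_confusion_matrix above, each row of counts is divided by its class total (the counts below are made up):

import numpy as np

cm = np.array([[50, 10, 5],
               [ 8, 40, 2],
               [ 4,  6, 30]], dtype=float)  # illustrative confusion counts
cm_norm = cm / cm.sum(axis=1)[:, np.newaxis]
print(np.round(cm_norm, 2))  # each row now sums to 1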
3,977
<ASSISTANT_TASK:> Python Code: from sympy import * init_printing(use_unicode=True) a0 = Symbol('a_0') e0 = Symbol('e_0') e = Symbol('e') a = Symbol('a') integrando = Rational(12,19)*((1+Rational(73,24)*e**2+Rational(37,96)*e**4) /(e*(1-e**2)*(1+Rational(121,304)*e**2))) integrando Integral = integrate(integrando,(e,e0,e)) Integral a = a0*exp(Integral) a af = Function('a') solucion = dsolve(Derivative(af(e),e)-af(e)*integrando,af(e)) solucion %matplotlib inline import numpy as np from scipy.integrate import odeint import matplotlib.pyplot as plt from __future__ import division def dotx(x,t): a = x[0] e = x[1] return [-(16/(5*a**3))*(1+(73/24)*e**2+(37/96)*e**4)/((1-e**2)**(7/2)), -(76/(15*a**4))*e*(1+(121/304)*e**2)/((1-e**2)**(5/2))] T0_d = 0.322997448911 # periodo inicial, en días e0 = 0.6171334 # excentricidad inicial M_c = 1.3886 # masa de la compañera, en masas solares M_p = 1.4398 # masa del pulsar, en masas solares c = 299792458 # rapidez de la luz, en metros por segundo MGcm3 = 4.925490947E-6 # MG/c^3, en segundos m_sol = MGcm3*c # parametro de masa del Sol m=GM/c^2, en metros M = M_c+M_p # masa total, en masas solares mu = (M_c*M_p)/M # masa reducida, en masas solares R_ast = m_sol*(4*mu*M**2)**(1/3) # R_\ast en metros T0_s = T0_d*86400 # periodo inicial, en segundos print('R_ast = '+str(R_ast)+' [m]') print('T0 = '+str(T0_s) + ' [s]') def a(T_s): return (m_sol*M*(c*T_s/(2*np.pi))**2)**(1/3) def T(a_m): return (2*np.pi/c)*(a_m**3/(M*m_sol))**(1/2) a0_m = a(T0_s) # a inicial, en metros at0 = a0_m/R_ast # a tilde inicial print('a0 = '+str(a0_m)+' m') print('at0 = '+str(at0)) def solucion(x0,tt_int): print 'Se resuelve con at0 = %2.f y e0 = %2.f'%(x0[0],x0[1]) sol = odeint(dotx,x0,tt_int) at_todos = sol[:,0] # verifica si at llega a 2. En caso positivo corta el arreglo de soluciones restriccion = np.where(at_todos<2)[0] if len(restriccion) is not 0: pos_ttmax = restriccion[0] # determina el tiempo en el que at=2 print('Acortando intervalo a tt_max = '+str(tt_int[pos_ttmax])) else: pos_ttmax = len(tt_int) tt = tt_int[:pos_ttmax] t_a = tt*R_ast/c/31557600 # el tiempo, en años at = sol[:pos_ttmax,0] e = sol[:pos_ttmax,1] a_m = at*R_ast # solución de a, en metros T_s = T(a_m) # solución de T, en segundos return tt,t_a,at,e,a_m,T_s tt_int_max = 10**22 # tiempo adimensional máximo de integración. Con este valor se llega hasta a=2 tt_int = np.linspace(0,tt_int_max,100000) # tiempos en los que se integrará el sistema print('tt_int_max = '+str(tt_int_max)) x0 = [at0,e0] # valores iniciales tt,t_a,at,e,a_m,T_s = solucion(x0,tt_int) # calcula y asigna valores de la solución fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(tt,at) eje.set_xlabel(r'$\tilde{t}$',fontsize=15) eje.set_ylabel(r'$\tilde{a}$',fontsize=15) plt.grid() fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(t_a,a_m) eje.set_title(u'Evolución del semieje mayor',fontsize=15) eje.set_xlabel(r'$t\ (a\~nos)$',fontsize=15) eje.set_ylabel(r'$a (m)$',fontsize=15) plt.grid() fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(t_a,e) eje.set_title(u'Evolución de la excentricidad',fontsize=15) eje.set_xlabel(r'$t\ (a\~nos)$',fontsize=15) eje.set_ylabel(r'$e$',fontsize=15) plt.grid() T_h = T_s/3600. 
# periodo orbital, en horas plt.figure(figsize=(5,5)) plt.plot(t_a,T_h) plt.title(u'Evolución del Periodo orbital',fontsize=15) plt.xlabel(u'$t$ (años)',fontsize=15) plt.ylabel(r'$T$ (horas)',fontsize=15) plt.grid() def g(e): return e**(12/19)*(1+121*e**2/304)**(870/2299)/(1-e**2) fig,eje= plt.subplots(1,1,figsize=(5,5)) ee = np.linspace(0,1,100) eje.plot(ee,g(ee)) eje.set_yscale('log') eje.set_title(r'$g$ versus $e$',fontsize=15) eje.set_xlabel(r'$e$',fontsize=15) eje.set_ylabel(r'$g$',fontsize=15) #plt.legend(loc='best') plt.grid() fig,eje= plt.subplots(1,1,figsize=(5,5)) eje.plot(e,at*R_ast, label=u'sol. numérica') ee = np.linspace(min(e),e0,10) a_an = a0_m*g(ee)/g(e0) eje.plot(ee,a_an,'o',label=u'sol. analítica') eje.set_title(u'Semieje mayor v/s excentricidad',fontsize=14) eje.set_xlabel(r'$e$',fontsize=15) eje.set_ylabel(r'$a$',fontsize=15) plt.legend(loc='best') plt.grid() t_max_a = 30 # tiempo de integración, en años tt_int_max = 31557600*c*t_max_a/R_ast #tiempo adimensional máximo de integración tt_int = np.linspace(0,tt_int_max,100000) print('tt_int_max = '+str(tt_int_max)) tt,t_a,at,e,a_m,T_s = solucion(x0,tt_int) fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(t_a,at/at0-1) eje.set_title(u'Evolución del semieje mayor',fontsize=15) eje.set_xlabel(r'$t\ (a\~nos)$',fontsize=15) eje.set_ylabel(r'$(a-a_0)/a_0$',fontsize=15) plt.grid() fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(t_a,e-e0) eje.set_title(u'Evolución de la excentricidad',fontsize=15) eje.set_xlabel(r'$t\ (a\~nos)$',fontsize=15) eje.set_ylabel(r'$e$',fontsize=15) plt.grid() T_h = T_s/3600. # periodo orbital, en horas fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(t_a,T_h-T_h[0]) eje.set_title(u'Evolución del periodo orbiral',fontsize=15) eje.set_xlabel(u'$t$ (años)',fontsize=15) eje.set_ylabel(r'$T-T_0$ (horas)',fontsize=15) plt.grid() dota = dotx(x0,0)[0] dotT = (3/2)*(c/R_ast)*(T0_s/at0)*dota print('dT/dt= ' + str(dotT)) data = np.genfromtxt('data-HT.txt') t_exp = data[:,0]-data[0,0] Delta_t_exp = data[:,1] fig,eje = plt.subplots(1,1,figsize=(5,5)) n = np.arange(40000) t_n = (n*T0_s+dotT*T0_s*n*(n-1)/2.)/31557600. # tiempo, en años Delta_t_n = dotT*T0_s*n*(n-1)/2 #retraso acumulado, en segundos eje.plot(t_n,Delta_t_n, label='RG') eje.hlines(0,0,40, color='red',label='Newtoniano') eje.set_xlabel(u'Tiempo (años)') eje.set_ylabel(r'Retraso acumulado (s)') eje.set_xlim(0,35) eje.set_ylim(-45,1) plt.plot(t_exp,Delta_t_exp,'o',label='Datos') eje.legend(loc=3) plt.grid() <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: and now we exponentiate Step2: If we define Step3: The constant $C_1$ is determined by the initial condition $a(e_0)=a_0$ which, after being substituted back into the solution, yields the same expression found by the other method. Step4: We will use the following nondimensionalization of the variables Step5: We use the data for the Hulse-Taylor pulsar, following Weisberg, Nice and Taylor (2010) (http Step6: We compute some other parameters that will be useful Step7: We also define a pair of functions relating the orbital period $T$ (in seconds) to the semi-major axis $a$ (of the relative coordinate, in meters), and vice versa Step8: Since we will solve the system of equations twice here (with different initial conditions), we define a function that returns all the solutions Step9: First integration Step10: As we can see, given the initial conditions, a dimensionless time of order $10^{21}$ is required for the system to collapse (assuming the model remains valid even at small distances, which in reality is not true). Step11: We thus see that the collapse time is of order $10^8$ years. Step12: Finally, we plot the evolution of the system's orbital period Step13: Plotting the dependence of $a$ on $e$ Step14: We now plot $a$ as a function of $e$, for both the analytical and numerical solutions Step15: As we can see, over the observation time of the binary pulsar, roughly 30 years, the decay of both $a$ and $e$ is in practice at a constant rate (a straight line in the plots as a function of time). Step16: This behavior implies that over this interval of roughly 30 years the values of $\dot{T}$, $\dot{a}$ and $\dot{e}$ can be considered constant. With the value of $\dot{T}$ we can model the accumulated delay in the orbital motion of the system. If $\dot{T}=$ const., then the time elapsed until the $n$-th revolution is completed is determined by the relations Step17: This value agrees with the one reported by Weisberg, Nice and Taylor (2010) (http
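The accumulated-delay relation from Step16 is simple to evaluate on its own; a self-contained sketch, where T0 matches the initial period used above and Tdot is only an order-of-magnitude placeholder rather than the value computed in the code:

import numpy as np

T0 = 0.322997448911 * 86400  # initial orbital period, seconds
Tdot = -2.4e-12              # illustrative dimensionless period derivative

n = np.arange(40000)
delta_t = Tdot * T0 * n * (n - 1) / 2  # cumulative delay after n orbits, seconds
print(delta_t[-1])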
3,978
<ASSISTANT_TASK:> Python Code: foo = dict() # Create an empty dict foo['bar'] = 1 # Store an integer under the key 'bar' print(foo['bar']) # Retrieve the integer stored in 'bar' import scipy as sp import numpy as np import openpnm as op %config InlineBackend.figure_formats = ['svg'] np.random.seed(10) # Instantiate an empty network object with 10 pores and 10 throats net = op.network.GenericNetwork(Np=10, Nt=10) # Assign an Np-long array of ones net['pore.foo'] = np.ones([net.Np, ]) # Assign an Np-long array of increasing ints net['pore.bar'] = range(0, net.Np) # The Python range iterator is converted to a proper Numpy array print(type(net['pore.bar'])) net['pore.foo'][4] = 44.0 # Overwrite values in the array print(net['pore.foo'][4]) # Retrieve values from the array print(net['pore.foo'][2:6]) # Extract a slice of the array print(net['pore.foo'][[2, 4, 6]]) # Extract specific locations net['throat.foo'] = 2 # Assign a scalar print(len(net['throat.foo'])) # The scalar values is converted to an Nt-long array print(net['throat.foo'][4]) # The scalar value was placed into all locations import openpnm as op %config InlineBackend.figure_formats = ['svg'] import scipy as sp pn = op.network.Cubic(shape=[4, 3, 1], spacing=0.0001) print('The total number of pores on the network is:', pn.num_pores()) print('A short-cut to the total number of pores is:', pn.Np) print('The total number of throats on the network is:', pn.num_throats()) print('A short-cut to the total number of throats is:', pn.Nt) print('A list of all calculated properties is availble with:\n', pn.props()) print(pn.pores('left')) print(pn['pore.coords'][pn.pores('left')]) print(pn.labels()) geom = op.geometry.GenericGeometry(network=pn, pores=pn.Ps, throats=pn.Ts) geom['pore.diameter'] = np.random.rand(pn.Np)*0.0001 # Units of meters P12 = pn['throat.conns'] # An Nt x 2 list of pores on the end of each throat D12 = geom['pore.diameter'][P12] # An Nt x 2 list of pore diameters Dt = np.amin(D12, axis=1) # An Nt x 1 list of the smaller pore from each pair geom['throat.diameter'] = Dt Rp = geom['pore.diameter']/2 geom['pore.volume'] = (4/3)*3.14159*(Rp)**3 C2C = 0.0001 # The center-to-center distance between pores Rp12 = Rp[pn['throat.conns']] geom['throat.length'] = C2C - np.sum(Rp12, axis=1) Rt = geom['throat.diameter']/2 Lt = geom['throat.length'] geom['throat.volume'] = 3.14159*(Rt)**2*Lt import openpnm.models.geometry as gmods geom.add_model(propname='throat.hydraulic_size_factors', model=gmods.hydraulic_size_factors.spheres_and_cylinders, pore_diameter="pore.diameter", throat_diameter="throat.diameter") water = op.phases.GenericPhase(network=pn) water['pore.temperature'] = 298.0 water['pore.viscosity'] = 0.001 phys_water = op.physics.GenericPhysics(network=pn, phase=water, geometry=geom) R = geom['throat.diameter']/2 L = geom['throat.length'] mu_w = 0.001 phys_water['throat.hydraulic_conductance'] = 3.14159*R**4/(8*mu_w*L) alg = op.algorithms.StokesFlow(network=pn, phase=water) BC1_pores = pn.pores('front') alg.set_value_BC(values=202650, pores=BC1_pores) BC2_pores = pn.pores('back') alg.set_value_BC(values=101325, pores=BC2_pores) alg.run() Q = alg.rate(pores=pn.pores('front')) A = 0.0001*3*1 # Cross-sectional area for flow L = 0.0001*4 # Length of flow path del_P = 101325 # Specified pressure gradient K = Q*mu_w*L/(A*del_P) print(K) water.update(alg.results()) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: A detailed tutorial on dictionaries can be found here. The dict does not offer much functionality aside from basic storage of arbitrary objects, and it is meant to be extended. OpenPNM extends the dict to have functionality specifically suited for dealing with OpenPNM data. Step2: Generate a Cubic Network Step3: Next, generate a Network by choosing the Cubic class, then create an instance with the desired parameters Step4: The Network object stored in pn contains pores at the correct spatial positions and connections between the pores according to the cubic topology. Step5: Accessing Pores and Throats via Labels Step6: The ability to retrieve pore indices is handy for querying pore properties, such as retrieving the pore coordinates of all pores on the 'left' face Step7: A list of all labels currently assigned to the network can be obtained with Step8: Create a Geometry Object and Assign Geometric Properties to Pores and Throats Step9: This statement contains three arguments Step10: We usually want the throat diameters to always be smaller than the two pores they connect to maintain physical consistency. This requires understanding a little bit about how OpenPNM stores network topology. Consider the following Step11: Let's dissect the above lines. Step12: The length of each throat is the center-to-center distance between pores, minus the radius of each of the two neighboring pores. Step13: The volume of each throat is found assuming a cylinder Step14: The basic geometrical properties of the network are now defined. The Geometry class possesses a method called plot_histograms that produces a plot of the most pertinent geometrical properties. The following figure doesn't look very good since the network in this example has only 12 pores, but the utility of the plot for quick inspection is apparent. Step15: Create a Phase Object Step16: Some notes on this line Step17: The above lines utilize the fact that OpenPNM converts scalars to full-length arrays, essentially setting the temperature in each pore to 298.0 K. Step18: As with all objects, the Network must be specified Step19: The viscosity of the Phases was only defined in the pores; however, the hydraulic conductance must be calculated for each throat. There are several options, but to keep this tutorial simple we'll create a scalar value Step20: Numpy arrays support vectorization, so since both L and R are arrays of Nt-length, their multiplication in this way results in another array that is also Nt-long. Step21: Like all the above objects, Algorithms must be assigned to a Network via the network argument. Step22: To actually run the algorithm use the run method Step23: This builds the coefficient matrix from the existing values of hydraulic conductance, and inverts the matrix to solve for pressure in each pore, and stores the results within the Algorithm's dictionary under 'pore.pressure'. Step24: The results ('pore.pressure') are held within the alg object and must be explicitly returned to the Phase object by the user if they wish to use these values in a subsequent calculation. The point of this data containment is to prevent unintentional overwriting of data. Each algorithm has a method called results which returns a dictionary of the pertinent simulation results, which can be added to the phase of interest using the update method.
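The hydraulic conductance assigned in the code above is the Hagen-Poiseuille result g = pi*R**4/(8*mu*L) per throat; a single-throat sketch with illustrative dimensions:

import numpy as np

mu = 0.001   # Pa*s, water viscosity as above
R = 25e-6    # m, throat radius (illustrative)
L = 50e-6    # m, throat length (illustrative)

g = np.pi * R**4 / (8 * mu * L)
print(g)     # volumetric flow per unit pressure drop, m^3/(Pa*s)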
3,979
<ASSISTANT_TASK:> Python Code: from frozen_lake import FrozenLakeEnv env = FrozenLakeEnv() print(env.__doc__) # Some basic imports and setup import numpy as np, numpy.random as nr, gym np.set_printoptions(precision=3) def begin_grading(): print("\x1b[43m") def end_grading(): print("\x1b[0m") # Seed RNGs so you get the same printouts as me env.seed(0); from gym.spaces import prng; prng.seed(10) # Generate the episode env.reset() for t in range(100): env.render() a = env.action_space.sample() ob, rew, done, _ = env.step(a) if done: break assert done env.render(); class MDP(object): def __init__(self, P, nS, nA, desc=None): self.P = P # state transition and reward probabilities, explained below self.nS = nS # number of states self.nA = nA # number of actions self.desc = desc # 2D array specifying what each grid cell means (used for plotting) mdp = MDP( {s : {a : [tup[:3] for tup in tups] for (a, tups) in a2d.items()} for (s, a2d) in env.P.items()}, env.nS, env.nA, env.desc) print("mdp.P is a two-level dict where the first key is the state and the second key is the action.") print("The 2D grid cells are associated with indices [0, 1, 2, ..., 15] from left to right and top to down, as in") print(np.arange(16).reshape(4,4)) print("mdp.P[state][action] is a list of tuples (probability, nextstate, reward).\n") print("For example, state 0 is the initial state, and the transition information for s=0, a=0 is \nP[0][0] =", mdp.P[0][0], "\n") print("As another example, state 5 corresponds to a hole in the ice, which transitions to itself with probability 1 and reward 0.") print("P[5][0] =", mdp.P[5][0], '\n') def value_iteration(mdp, gamma, nIt): Inputs: mdp: MDP gamma: discount factor nIt: number of iterations, corresponding to n above Outputs: (value_functions, policies) len(value_functions) == nIt+1 and len(policies) == n print("Iteration | max|V-Vprev| | # chg actions | V[0]") print("----------+--------------+---------------+---------") Vs = [np.zeros(mdp.nS)] # list of value functions contains the initial value function V^{(0)}, which is zero pis = [] for it in range(nIt): oldpi = pis[-1] if len(pis) > 0 else None # \pi^{(it)} = Greedy[V^{(it-1)}]. 
Just used for printout Vprev = Vs[-1] # V^{(it)} # YOUR CODE HERE # Your code should define the following two variables # pi: greedy policy for Vprev, # corresponding to the math above: \pi^{(it)} = Greedy[V^{(it)}] # numpy array of ints # V: bellman backup on Vprev # corresponding to the math above: V^{(it+1)} = T[V^{(it)}] # numpy array of floats max_diff = np.abs(V - Vprev).max() nChgActions="N/A" if oldpi is None else (pi != oldpi).sum() print("%4i | %6.5f | %4s | %5.3f"%(it, max_diff, nChgActions, V[0])) Vs.append(V) pis.append(pi) return Vs, pis GAMMA=0.95 # we'll be using this same value in subsequent problems begin_grading() Vs_VI, pis_VI = value_iteration(mdp, gamma=GAMMA, nIt=20) end_grading() import matplotlib.pyplot as plt %matplotlib inline for (V, pi) in zip(Vs_VI[:10], pis_VI[:10]): plt.figure(figsize=(3,3)) plt.imshow(V.reshape(4,4), cmap='gray', interpolation='none', clim=(0,1)) ax = plt.gca() ax.set_xticks(np.arange(4)-.5) ax.set_yticks(np.arange(4)-.5) ax.set_xticklabels([]) ax.set_yticklabels([]) Y, X = np.mgrid[0:4, 0:4] a2uv = {0: (-1, 0), 1:(0, -1), 2:(1,0), 3:(-1, 0)} Pi = pi.reshape(4,4) for y in range(4): for x in range(4): a = Pi[y, x] u, v = a2uv[a] plt.arrow(x, y,u*.3, -v*.3, color='m', head_width=0.1, head_length=0.1) plt.text(x, y, str(env.desc[y,x].item().decode()), color='g', size=12, verticalalignment='center', horizontalalignment='center', fontweight='bold') plt.grid(color='b', lw=2, ls='-') plt.figure() plt.plot(Vs_VI) plt.title("Values of different states"); chg_iter = 50 # YOUR CODE HERE # Your code will need to define an MDP (mymdp) # like the frozen lake MDP defined above begin_grading() Vs, pis = value_iteration(mymdp, gamma=GAMMA, nIt=chg_iter+1) end_grading() def compute_vpi(pi, mdp, gamma): # YOUR CODE HERE return V begin_grading() print(compute_vpi(np.ones(16), mdp, gamma=GAMMA)) end_grading() Vpi=compute_vpi(pis_VI[15], mdp, gamma=GAMMA) V_vi = Vs_VI[15] print("From compute_vpi", Vpi) print("From value iteration", V_vi) print("Difference", Vpi - V_vi) def compute_qpi(vpi, mdp, gamma): # YOUR CODE HERE return Qpi begin_grading() Qpi = compute_qpi(np.arange(mdp.nS), mdp, gamma=0.95) print("Qpi:\n", Qpi) end_grading() def policy_iteration(mdp, gamma, nIt): Vs = [] pis = [] pi_prev = np.zeros(mdp.nS,dtype='int') pis.append(pi_prev) print("Iteration | # chg actions | V[0]") print("----------+---------------+---------") for it in range(nIt): vpi = compute_vpi(pi_prev, mdp, gamma) qpi = compute_qpi(vpi, mdp, gamma) pi = qpi.argmax(axis=1) print("%4i | %6i | %6.5f"%(it, (pi != pi_prev).sum(), vpi[0])) Vs.append(vpi) pis.append(pi) pi_prev = pi return Vs, pis Vs_PI, pis_PI = policy_iteration(mdp, gamma=0.95, nIt=20) plt.plot(Vs_PI); for s in range(5): plt.figure() plt.plot(np.array(Vs_VI)[:,s]) plt.plot(np.array(Vs_PI)[:,s]) plt.ylabel("value of state %i"%s) plt.xlabel("iteration") plt.legend(["value iteration", "policy iteration"], loc='best') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Let's look at what a random episode looks like. Step2: In the episode above, the agent falls into a hole after two timesteps. Also note the stochasticity--on the first step, the DOWN action is selected, but the agent moves to the right. Step4: Part 1 Step5: Below, we've illustrated the progress of value iteration. Your optimal actions are shown by arrows. Step6: Problem 2 Step7: Problem 3 Step8: Now let's compute the value of an arbitrarily-chosen policy. Step9: As a sanity check, if we run compute_vpi on the solution from our previous value iteration run, we should get approximately (but not exactly) the same values produced by value iteration. Step10: Problem 3b Step11: Now we're ready to run policy iteration! Step12: Now we can compare the convergence of value iteration and policy iteration on several states.
3,980
<ASSISTANT_TASK:> Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline import os os.chdir('..') import sys sys.path.insert(0, './python') import caffe import os import h5py import shutil import tempfile import sklearn import sklearn.datasets import sklearn.linear_model import pandas as pd X, y = sklearn.datasets.make_classification( n_samples=10000, n_features=4, n_redundant=0, n_informative=2, n_clusters_per_class=2, hypercube=False, random_state=0 ) # Split into train and test X, Xt, y, yt = sklearn.cross_validation.train_test_split(X, y) # Visualize sample of the data ind = np.random.permutation(X.shape[0])[:1000] df = pd.DataFrame(X[ind]) _ = pd.scatter_matrix(df, figsize=(9, 9), diagonal='kde', marker='o', s=40, alpha=.4, c=y[ind]) %%timeit # Train and test the scikit-learn SGD logistic regression. clf = sklearn.linear_model.SGDClassifier( loss='log', n_iter=1000, penalty='l2', alpha=1e-3, class_weight='auto') clf.fit(X, y) yt_pred = clf.predict(Xt) print('Accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(yt, yt_pred))) # Write out the data to HDF5 files in a temp directory. # This file is assumed to be caffe_root/examples/hdf5_classification.ipynb dirname = os.path.abspath('./examples/hdf5_classification/data') if not os.path.exists(dirname): os.makedirs(dirname) train_filename = os.path.join(dirname, 'train.h5') test_filename = os.path.join(dirname, 'test.h5') # HDF5DataLayer source should be a file containing a list of HDF5 filenames. # To show this off, we'll list the same data file twice. with h5py.File(train_filename, 'w') as f: f['data'] = X f['label'] = y.astype(np.float32) with open(os.path.join(dirname, 'train.txt'), 'w') as f: f.write(train_filename + '\n') f.write(train_filename + '\n') # HDF5 is pretty efficient, but can be further compressed. 
comp_kwargs = {'compression': 'gzip', 'compression_opts': 1} with h5py.File(test_filename, 'w') as f: f.create_dataset('data', data=Xt, **comp_kwargs) f.create_dataset('label', data=yt.astype(np.float32), **comp_kwargs) with open(os.path.join(dirname, 'test.txt'), 'w') as f: f.write(test_filename + '\n') from caffe import layers as L from caffe import params as P def logreg(hdf5, batch_size): # logistic regression: data, matrix multiplication, and 2-class softmax loss n = caffe.NetSpec() n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2) n.ip1 = L.InnerProduct(n.data, num_output=2, weight_filler=dict(type='xavier')) n.accuracy = L.Accuracy(n.ip1, n.label) n.loss = L.SoftmaxWithLoss(n.ip1, n.label) return n.to_proto() with open('examples/hdf5_classification/logreg_auto_train.prototxt', 'w') as f: f.write(str(logreg('examples/hdf5_classification/data/train.txt', 10))) with open('examples/hdf5_classification/logreg_auto_test.prototxt', 'w') as f: f.write(str(logreg('examples/hdf5_classification/data/test.txt', 10))) %%timeit caffe.set_mode_cpu() solver = caffe.get_solver('examples/hdf5_classification/solver.prototxt') solver.solve() accuracy = 0 batch_size = solver.test_nets[0].blobs['data'].num test_iters = int(len(Xt) / batch_size) for i in range(test_iters): solver.test_nets[0].forward() accuracy += solver.test_nets[0].blobs['accuracy'].data accuracy /= test_iters print("Accuracy: {:.3f}".format(accuracy)) !./build/tools/caffe train -solver examples/hdf5_classification/solver.prototxt from caffe import layers as L from caffe import params as P def nonlinear_net(hdf5, batch_size): # one small nonlinearity, one leap for model kind n = caffe.NetSpec() n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2) # define a hidden layer of dimension 40 n.ip1 = L.InnerProduct(n.data, num_output=40, weight_filler=dict(type='xavier')) # transform the output through the ReLU (rectified linear) non-linearity n.relu1 = L.ReLU(n.ip1, in_place=True) # score the (now non-linear) features n.ip2 = L.InnerProduct(n.ip1, num_output=2, weight_filler=dict(type='xavier')) # same accuracy and loss as before n.accuracy = L.Accuracy(n.ip2, n.label) n.loss = L.SoftmaxWithLoss(n.ip2, n.label) return n.to_proto() with open('examples/hdf5_classification/nonlinear_auto_train.prototxt', 'w') as f: f.write(str(nonlinear_net('examples/hdf5_classification/data/train.txt', 10))) with open('examples/hdf5_classification/nonlinear_auto_test.prototxt', 'w') as f: f.write(str(nonlinear_net('examples/hdf5_classification/data/test.txt', 10))) %%timeit caffe.set_mode_cpu() solver = caffe.get_solver('examples/hdf5_classification/nonlinear_solver.prototxt') solver.solve() accuracy = 0 batch_size = solver.test_nets[0].blobs['data'].num test_iters = int(len(Xt) / batch_size) for i in range(test_iters): solver.test_nets[0].forward() accuracy += solver.test_nets[0].blobs['accuracy'].data accuracy /= test_iters print("Accuracy: {:.3f}".format(accuracy)) !./build/tools/caffe train -solver examples/hdf5_classification/nonlinear_solver.prototxt # Clean up (comment this out if you want to examine the hdf5_classification/data directory). shutil.rmtree(dirname) <END_TASK>
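A quick sanity check that can be run before pointing Caffe at the files (a sketch; it assumes only the train_filename written above): read the HDF5 file back and confirm the array shapes match the split.
import h5py
with h5py.File(train_filename, 'r') as f:
    print('data:', f['data'].shape, 'label:', f['label'].shape)
    # expect the same shapes as X and y from the train/test split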
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Synthesize a dataset of 10,000 4-vectors for binary classification with 2 informative features and 2 noise features. Step2: Learn and evaluate scikit-learn's logistic regression with stochastic gradient descent (SGD) training. Time and check the classifier's accuracy. Step3: Save the dataset to HDF5 for loading in Caffe. Step4: Let's define logistic regression in Caffe through Python net specification. This is a quick and natural way to define nets that sidesteps manually editing the protobuf model. Step5: Time to learn and evaluate our Caffeinated logistic regression in Python. Step6: Do the same through the command line interface for detailed output on the model and solving. Step7: If you look at output or the logreg_auto_train.prototxt, you'll see that the model is simple logistic regression. Step8: Do the same through the command line interface for detailed output on the model and solving.
3,981
<ASSISTANT_TASK:> Python Code: import ROOT from ROOT import TMVA import os ROOT.TMVA.Tools.Instance() TMVA.PyMethodBase.PyInitialize() ## For PYMVA methods TMVA.PyMethodBase.PyInitialize(); outputFile = ROOT.TFile.Open("CNN_ClassificationOutput.root", "RECREATE") factory = ROOT.TMVA.Factory("TMVA_CNN_Classification", outputFile, "!V:ROC:!Silent:Color:!DrawProgressBar:AnalysisType=Classification" ) inputFileName = "images_data.root" inputFile = ROOT.TFile.Open( inputFileName ) # retrieve input trees signalTree = inputFile.Get("sig_tree") backgroundTree = inputFile.Get("bkg_tree") signalTree.Print() loader = ROOT.TMVA.DataLoader("dataset") ### global event weights per tree (see below for setting event-wise weights) signalWeight = 1.0 backgroundWeight = 1.0 ### You can add an arbitrary number of signal or background trees loader.AddSignalTree ( signalTree, signalWeight ) loader.AddBackgroundTree( backgroundTree, backgroundWeight ) imgSize = 8 * 8; for i in range(0,imgSize): varName = "var"+str(i) loader.AddVariable(varName,'F'); ## Apply additional cuts on the signal and background samples (can be different) mycuts = ROOT.TCut("") ## for example: TCut mycuts = "abs(var1)<0.5 && abs(var2-0.5)<1"; mycutb = ROOT.TCut("") ## for example: TCut mycutb = "abs(var1)<0.5"; loader.PrepareTrainingAndTestTree( mycuts, mycutb, "nTrain_Signal=5000:nTrain_Background=5000:SplitMode=Random:" "NormMode=NumEvents:!V" ) inputLayoutString = "InputLayout=1|1|64"; batchLayoutString= "BatchLayout=1|32|64"; layoutString = ("Layout=DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|64|TANH,DENSE|1|LINEAR") training1 = "Optimizer=ADAM,LearningRate=1e-3,Momentum=0.,Regularization=None,WeightDecay=1e-4," training1 += "DropConfig=0.+0.+0.+0.,MaxEpochs=30,ConvergenceSteps=10,BatchSize=32,TestRepetitions=1" trainingStrategyString = "TrainingStrategy=" + training1 dnnOptions = "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=G:WeightInitialization=XAVIER::Architecture=CPU" dnnOptions += ":" + inputLayoutString dnnOptions += ":" + batchLayoutString dnnOptions += ":" + layoutString dnnOptions += ":" + trainingStrategyString #we can now book the method factory.BookMethod(loader, ROOT.TMVA.Types.kDL, "DL_DENSE", dnnOptions) #input layout inputLayoutString = "InputLayout=1|8|8" ## Batch Layout batchLayoutString = "BatchLayout=128|1|64" layoutString = ("Layout=CONV|10|3|3|1|1|1|1|RELU,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1," "RESHAPE|FLAT,DENSE|64|TANH,DENSE|1|LINEAR") ##Training strategies. training1 = ("LearningRate=1e-3,Momentum=0.9,Repetitions=1," "ConvergenceSteps=10,BatchSize=128,TestRepetitions=1," "MaxEpochs=20,WeightDecay=1e-4,Regularization=None," "Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0") trainingStrategyString = "TrainingStrategy=" + training1 ## General Options. 
cnnOptions = ("!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:"
              "WeightInitialization=XAVIERUNIFORM")
cnnOptions += ":" + inputLayoutString
cnnOptions += ":" + batchLayoutString
cnnOptions += ":" + layoutString
cnnOptions += ":" + trainingStrategyString
cnnOptions += ":Architecture=CPU"
## book the CNN
factory.BookMethod(loader, ROOT.TMVA.Types.kDL, "DL_CNN", cnnOptions)
## to use the tensorflow backend
import os
##os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.initializers import TruncatedNormal
from tensorflow.keras.layers import Input, Dense, Dropout, Flatten, Conv2D, MaxPooling2D, Reshape, BatchNormalization
model = Sequential()
model.add(Reshape((8, 8, 1), input_shape=(64,)))
model.add(Conv2D(10, kernel_size=(3, 3), kernel_initializer='TruncatedNormal', activation='relu', padding='same'))
model.add(Conv2D(10, kernel_size=(3, 3), kernel_initializer='TruncatedNormal', activation='relu', padding='same'))
# stride for maxpool is equal to pool size
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='tanh'))
#model.add(Dropout(0.2))
model.add(Dense(2, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=Adam(lr=0.001), metrics=['accuracy'])
model.save('model_cnn.h5')
model.summary()
factory.BookMethod(loader, ROOT.TMVA.Types.kPyKeras, "PyKeras",
                   "H:!V:VarTransform=None:FilenameModel=model_cnn.h5:"
                   "FilenameTrainedModel=trained_model_cnn.h5:NumEpochs=20:BatchSize=128")
factory.TrainAllMethods()
factory.TestAllMethods()
factory.EvaluateAllMethods()
%jsroot on
c1 = factory.GetROCCurve(loader)
c1.Draw()
## close the output file to save its contents
outputFile.Close() <END_TASK>
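To compare the booked methods numerically as well as visually, one option is to print each method's ROC integral (run this before outputFile.Close(); the method names match the BookMethod calls above):
for name in ["DL_DENSE", "DL_CNN", "PyKeras"]:
    print(name, "ROC integral:", factory.GetROCIntegral(loader, name))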
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Declare DataLoader(s) Step2: Setup Dataset(s) Step3: Booking Methods Step4: Book Convolutional Neural Network in TMVA Step5: Book Convolutional Neural Network in Keras using a generated model Step6: Train Methods Step7: Test and Evaluate Methods Step8: Plot ROC Curve
3,982
<ASSISTANT_TASK:> Python Code: markers = ['Synap','Synap','VGlut1','VGlut1','VGlut2','Vglut3', 'psd','glur2','nmdar1','nr2b','gad','VGAT', 'PV','Gephyr','GABAR1','GABABR','CR1','5HT1A', 'NOS','TH','VACht','Synapo','tubuli','DAPI'] synapType = ['synap','synap','ex.pre','ex.pre','ex.pre','in.pre', 'ex.post','ex.post','ex.post','ex.post','in.pre','in.pre', 'in.pre','in.post','in.post','in.post','other','other', 'other','other','other','other','none','none'] # filter away synapses so those with high tubulin and dapi expression will be pruned away sub_integratedData = integratedData[(synapsin1_sample != 0) & (synapsin2_sample != 0)] avginttub = np.mean(sub_integratedData[:,22]) intfilteredtub = [synapse for synapse, value in enumerate(sub_integratedData[:,22]) if value < avginttub] avgintdap = np.mean(sub_integratedData[:,23]) intfiltereddap = [synapse for synapse, value in enumerate(sub_integratedData[:,23]) if value < avgintdap] FilteredInd = np.array(list(set(intfilteredtub).intersection(intfiltereddap))) filteredInt = integratedData[FilteredInd] from sklearn import decomposition pca = decomposition.PCA(n_components=3) pca.fit(filteredInt) filtered_trans = pca.transform(filteredInt) from sklearn.cluster import KMeans KM = KMeans(init='k-means++', n_clusters=2) KM.fit(filtered_trans) k_means_labels = KM.labels_ k_means_cluster_centers = KM.cluster_centers_ fig = plt.figure(figsize=(8, 3)) fig.subplots_adjust(left=0.02, right=0.98, bottom=0.05, top=0.9) colors = ['#FF0000', '#6600FF'] ax = fig.add_subplot(1, 1, 1) for k, col in zip(range(2), colors): my_members = k_means_labels == k cluster_center = k_means_cluster_centers[k] ax.plot(filtered_trans[my_members, 0], filtered_trans[my_members, 1], 'w', markerfacecolor=col, marker='.') ax.set_title('KMeans') other = filteredInt[:, 15:21] pca = decomposition.PCA(n_components=3) pca.fit(other) filtered_trans = pca.transform(other) from sklearn.cluster import KMeans KM = KMeans(init='k-means++', n_clusters=2) KM.fit(other) k_means_labels = KM.labels_ k_means_cluster_centers = KM.cluster_centers_ fig = plt.figure(figsize=(8, 3)) fig.subplots_adjust(left=0.02, right=0.98, bottom=0.05, top=0.9) colors = ['#FF0000', '#6600FF'] ax = fig.add_subplot(1, 1, 1) for k, col in zip(range(2), colors): my_members = k_means_labels == k cluster_center = k_means_cluster_centers[k] ax.plot(other[my_members, 0], other[my_members, 1], 'w', markerfacecolor=col, marker='.') ax.set_title('Clustering based on the other synapses markers') <END_TASK>
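One way to put a number on the clustering (a sketch using scikit-learn's silhouette score, where values nearer 1 mean tighter, better-separated clusters; it reuses other and k_means_labels from the last cell):
from sklearn.metrics import silhouette_score
print('silhouette (other markers):', silhouette_score(other, k_means_labels))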
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We then proceeded to try to separate the resulting filtered synapses into two groups through a naive k-means method, after transforming the synapse vectors to a lower dimension.
Step2: The above example uses all the features; let's remove dapi, tubuli, synapsins, and all the excitatory/inhibitory synapse markers and repeat the clustering procedure.
3,983
<ASSISTANT_TASK:> Python Code: # If you'd like to download it through the command line... !curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz # And then extract it through the command line... !tar -zxf convote_v1.1.tar.gz # glob finds files matching a certain filename pattern import glob # Give me all the text files paths = glob.glob('convote_v1.1/data_stage_one/development_set/*') paths[:5] len(paths) speeches = [] for path in paths: with open(path) as speech_file: speech = { 'pathname': path, 'filename': path.split('/')[-1], 'content': speech_file.read() } speeches.append(speech) speeches_df = pd.DataFrame(speeches) speeches_df.head() speeches_df['content'].head(5) count_vectorizer = CountVectorizer(stop_words='english') X=count_vectorizer.fit_transform(speeches_df['content']) len(count_vectorizer.get_feature_names()) tokens_df=pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names()) tokens_df count_vectorizer = CountVectorizer(stop_words='english', max_features=100) X=count_vectorizer.fit_transform(speeches_df['content']) len(count_vectorizer.get_feature_names()) tokens_df=pd.DataFrame(X.toarray(), columns=count_vectorizer.get_feature_names()) tokens_df # 702 rows means 702 speeches, since each speech is a single string len(tokens_df) # if the speech doesnt contain a chairman, the column entry will be 0. so, 250 no-chairmain speeches. granted, # we have no idea if they stared the speech with chairman or just mentioned him somewhere len(tokens_df[tokens_df['chairman']==0]) # 76 times no mr or chairman. which means they must call the chairman just 'chairman' a lot. rude! len(tokens_df[(tokens_df['mr']==0) & (tokens_df['chairman']==0)]) # so speech index 375 tokens_df[tokens_df['thank']==tokens_df['thank'].max()].index # lets look at the speech speeches_df['content'][375] # wow that was long. but its the most thankful one, so whatever. # i sorted by china here, on a lark. lets see if it holds true if i sort by trade tokens_df[(tokens_df['china']>0) & (tokens_df['trade']>0)].sort_values(by='china', ascending=False)[['china', 'trade']].head(3) tokens_df[(tokens_df['china']>0) & (tokens_df['trade']>0)].sort_values(by='trade', ascending=False)[['china', 'trade']].head(3) # kind of! at any rate, speech 294 seems to be the most china and trade related. lets look at it! speeches_df['content'][294] # thats another super long speech, but it does seem to mostly be about trade and china. l2_vectorizer = TfidfVectorizer(stop_words='english', use_idf=True) X = l2_vectorizer.fit_transform(speeches_df['content']) tfidf_tokens_df = pd.DataFrame(X.toarray(), columns=l2_vectorizer.get_feature_names()) china_trade_df=pd.DataFrame([tfidf_tokens_df['china'], tfidf_tokens_df['trade'], tfidf_tokens_df['china'] + tfidf_tokens_df['trade']], index=["china", "trade", "china + trade"]).T china_trade_df[china_trade_df.any(axis=1)].sort_values(by='china + trade', ascending=False).head(3) # wow, that comes up with a totally different list of speeches. lets look at speech 636 speeches_df['content'][636] # that one is very short by comparison and this time, its really only about china and trade # index 0 is the first speech, which was the first one imported. paths[0] # Pass that into 'cat' using { } which lets you put variables in shell commands # that way you can pass the path to cat !cat {paths[0]} # i guess i probably should have read ahead to this part. 
oh well, dumping the index of the speeches_df # was still mostly readable election_chaos_df=pd.DataFrame([tfidf_tokens_df['election'], tfidf_tokens_df['chaos'], tfidf_tokens_df['election'] + tfidf_tokens_df['chaos']], index=["election", "chaos", "election + chaos"]).T election_chaos_df[election_chaos_df.any(axis=1)].sort_values(by='chaos', ascending=False).head(10) # i did the sort that way because i guess they dont talk about chaos much. lets look at that speech !cat {paths[257]} # thats pretty weak. i tried to come up with something spicier but i cant tell what years these are from. # how about this? clinton_welfare_df=pd.DataFrame([tfidf_tokens_df['clinton'], tfidf_tokens_df['welfare'], tfidf_tokens_df['clinton'] + tfidf_tokens_df['welfare']], index=["clinton", "welfare", "clinton + welfare"]).T clinton_welfare_df[clinton_welfare_df.any(axis=1)].sort_values(by='clinton + welfare', ascending=False).head(10) !cat {paths[346]} # still pretty dull. oh well. from sklearn.cluster import KMeans count_vectorizer = CountVectorizer(stop_words='english') X=count_vectorizer.fit_transform(speeches_df['content']) number_of_clusters = 8 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = count_vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) tf_vectorizer = TfidfVectorizer(stop_words='english', use_idf=False) X = tf_vectorizer.fit_transform(speeches_df['content']) number_of_clusters = 8 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = tf_vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) tfidf_vectorizer = TfidfVectorizer(stop_words='english', use_idf=True) X = tfidf_vectorizer.fit_transform(speeches_df['content']) number_of_clusters = 8 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = tfidf_vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) # well, it seems to be between results including wild horses and those including frivolous lawsuits. 
# i prefer wild horses but the tfidf is probably the best representation of the actual document paths = glob.glob('hp/hp/*') paths[:5] hp_fics = [] for path in paths: with open(path) as hp_file: hp_fic = { 'pathname': path, 'filename': path.split('/')[-1], 'content': hp_file.read() } hp_fics.append(hp_fic) hp_fics_df = pd.DataFrame(hp_fics) hp_fics_df.head() tfidf_vectorizer = TfidfVectorizer(stop_words='english', use_idf=True) X = tfidf_vectorizer.fit_transform(hp_fics_df['content']) number_of_clusters = 2 km = KMeans(n_clusters=number_of_clusters) km.fit(X) print("Top terms per cluster:") order_centroids = km.cluster_centers_.argsort()[:, ::-1] terms = tfidf_vectorizer.get_feature_names() for i in range(number_of_clusters): top_ten_words = [terms[ind] for ind in order_centroids[i, :5]] print("Cluster {}: {}".format(i, ' '.join(top_ten_words))) # i would say people are either writing about harry and hermione or lily and james # ive never read harry potter, but i think that means either the harry generation or their parents generation? # seems legit <END_TASK>
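An illustrative follow-up (assuming the tfidf_vectorizer and X fitted on the fan fiction above): list the highest-weighted terms in a single document to see what drives its cluster assignment.
import numpy as np
terms = np.array(tfidf_vectorizer.get_feature_names())
weights = X[0].toarray().ravel()             # tf-idf weights for the first fic
print(terms[weights.argsort()[::-1][:10]])   # its ten most distinctive terms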
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You can explore the files if you'd like, but we're going to get the ones from convote_v1.1/data_stage_one/development_set/. It's a bunch of text files.
Step2: So great, we have 702 of them. Now let's import them.
Step3: In class we had the texts variable. For the homework you can just do speeches_df['content'] to get the same sort of list of stuff.
Step4: Doing our analysis
Step5: Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.
Step6: Now let's push all of that into a dataframe with nicely named columns.
Step7: Everyone seems to start their speeches with "mr chairman" - how many speeches are there total, how many don't mention "chairman", and how many mention neither "mr" nor "chairman"?
Step8: What is the index of the speech which is the most thankful, a.k.a. includes the word 'thank' the most times?
Step9: If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectorizer?
Step10: Now what if I'm using a TfidfVectorizer?
Step11: What's the content of the speeches? Here's a way to get them.
Step12: Now search for something else! Another two terms that might show up, like elections and chaos? Whatever you think might be interesting.
Step13: Enough of this garbage, let's cluster.
Step14: Which one do you think works the best?
Step15: Harry Potter time
3,984
<ASSISTANT_TASK:> Python Code: from pyspark.sql import Row Person = Row("name", "age") print Person ali = Person("Alice", 11) print ali help(Row) import re import datetime from pyspark.sql import Row month_map = {'Jan': 1, 'Feb': 2, 'Mar':3, 'Apr':4, 'May':5, 'Jun':6, 'Jul':7, 'Aug':8, 'Sep': 9, 'Oct':10, 'Nov': 11, 'Dec': 12} def parse_apache_time(s): Convert Apache time format into a Python datetime object Args: s (str): date and time in Apache time format Returns: datetime: datetime object (ignore timezone for now) # 01/Aug/1995:00:00:01 return datetime.datetime(int(s[7:11]), month_map[s[3:6]], int(s[0:2]), int(s[12:14]), int(s[15:17]), int(s[18:20])) def parseApacheLogLine(logline): Parse a line in the Apache Common Log format Args: logline (str): a line of text in the Apache Common Log format Returns: tuple: either a dictionary containing the parts of the Apache Access Log and 1, or the original invalid log line and 0 match = re.search(APACHE_ACCESS_LOG_PATTERN, logline) if match is None: return (logline, 0) size_field = match.group(9) if size_field == '-': size = long(0) else: size = long(match.group(9)) return (Row( host = match.group(1), client_identd = match.group(2), user_id = match.group(3), date_time = parse_apache_time(match.group(4)), method = match.group(5), endpoint = match.group(6), protocol = match.group(7), response_code = int(match.group(8)), content_size = size ), 1) # A regular expression pattern to extract fields from the log line APACHE_ACCESS_LOG_PATTERN = '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)" (\d{3}) (\S+)' import sys import os from test_helper import Test baseDir = os.path.join('data') inputPath = os.path.join('cs100', 'lab2', 'apache.access.log.PROJECT') logFile = os.path.join(baseDir, inputPath) print baseDir print inputPath print logFile import sys import os from test_helper import Test baseDir = os.path.join('data') inputPath = os.path.join('cs100', 'lab2', 'apache.access.log.PROJECT') logFile = os.path.join(baseDir, inputPath) def parseLogs(): Read and parse log file parsed_logs = (sc .textFile(logFile) .map(parseApacheLogLine) .cache()) access_logs = (parsed_logs .filter(lambda s: s[1] == 1) .map(lambda s: s[0]) .cache()) failed_logs = (parsed_logs .filter(lambda s: s[1] == 0) .map(lambda s: s[0])) failed_logs_count = failed_logs.count() if failed_logs_count > 0: print 'Number of invalid logline: %d' % failed_logs.count() for line in failed_logs.take(20): print 'Invalid logline: %s' % line print 'Read %d lines, successfully parsed %d lines, failed to parse %d lines' % (parsed_logs.count(), access_logs.count(), failed_logs.count()) return parsed_logs, access_logs, failed_logs parsed_logs, access_logs, failed_logs = parseLogs() # TODO: Replace <FILL IN> with appropriate code #This was originally '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)" (\d{3}) (\S+)' APACHE_ACCESS_LOG_PATTERN = '^(\S+) (\S+) (\S+) \[([\w:/]+\s[+\-]\d{4})\] "(\S+) (\S+)\s*(\S*)\s*" (\d{3}) (\S+)' parsed_logs, access_logs, failed_logs = parseLogs() # TEST Data cleaning (1c) Test.assertEquals(failed_logs.count(), 0, 'incorrect failed_logs.count()') Test.assertEquals(parsed_logs.count(), 1043177 , 'incorrect parsed_logs.count()') Test.assertEquals(access_logs.count(), parsed_logs.count(), 'incorrect access_logs.count()') print failed_logs.take(1) print parsed_logs.take(1) print access_logs.take(1) print "access_logs.count():", access_logs.count() print "access_logs.take(1):", access_logs.take(1) print "access_logs.take(1)[0]:", 
access_logs.take(1)[0] print "type(access_logs.take(1)[0]):", type(access_logs.take(1)[0]) a = Row(name = "Zhang", age = 19) print type(a) print a print a.name print type(a.name) a = sc.parallelize([1,2,3,4]) print a.collect() # Calculate statistics based on the content size. content_sizes = access_logs.map(lambda log: log.content_size).cache() print 'Content Size Avg: %i, Min: %i, Max: %s' % ( content_sizes.reduce(lambda a, b : a + b) / content_sizes.count(), content_sizes.min(), content_sizes.max()) # Response Code to Count responseCodeToCount = (access_logs .map(lambda log: (log.response_code, 1)) .reduceByKey(lambda a, b : a + b) .cache()) responseCodeToCountList = responseCodeToCount.take(100) print 'Found %d response codes' % len(responseCodeToCountList) print 'Response Code Counts: %s' % responseCodeToCountList assert len(responseCodeToCountList) == 7 assert sorted(responseCodeToCountList) == [(200, 940847), (302, 16244), (304, 79824), (403, 58), (404, 6185), (500, 2), (501, 17)] labels = responseCodeToCount.map(lambda (x, y): x).collect() print labels count = access_logs.count() fracs = responseCodeToCount.map(lambda (x, y): (float(y) / count)).collect() print fracs import matplotlib.pyplot as plt def pie_pct_format(value): Determine the appropriate format string for the pie chart percentage label Args: value: value of the pie slice Returns: str: formated string label; if the slice is too small to fit, returns an empty string for label return '' if value < 7 else '%.0f%%' % value fig = plt.figure(figsize=(4.5, 4.5), facecolor='white', edgecolor='white') colors = ['yellowgreen', 'lightskyblue', 'gold', 'purple', 'lightcoral', 'yellow', 'black'] explode = (0.05, 0.05, 0.1, 0, 0, 0, 0) patches, texts, autotexts = plt.pie(fracs, labels=labels, colors=colors, explode=explode, autopct=pie_pct_format, shadow=False, startangle=125) for text, autotext in zip(texts, autotexts): if autotext.get_text() == '': text.set_text('') # If the slice is small to fit, don't show a text label plt.legend(labels, loc=(0.80, -0.1), shadow=True) pass # Any hosts that has accessed the server more than 10 times. 
hostCountPairTuple = access_logs.map(lambda log: (log.host, 1)) hostSum = hostCountPairTuple.reduceByKey(lambda a, b : a + b) hostMoreThan10 = hostSum.filter(lambda s: s[1] > 10) hostsPick20 = (hostMoreThan10 .map(lambda s: s[0]) .take(20)) print 'Any 20 hosts that have accessed more then 10 times: %s' % hostsPick20 # An example: [u'204.120.34.185', u'204.243.249.9', u'slip1-32.acs.ohio-state.edu', u'lapdog-14.baylor.edu', u'199.77.67.3', u'gs1.cs.ttu.edu', u'haskell.limbex.com', u'alfred.uib.no', u'146.129.66.31', u'manaus.bologna.maraut.it', u'dialup98-110.swipnet.se', u'slip-ppp02.feldspar.com', u'ad03-053.compuserve.com', u'srawlin.opsys.nwa.com', u'199.202.200.52', u'ix-den7-23.ix.netcom.com', u'151.99.247.114', u'w20-575-104.mit.edu', u'205.25.227.20', u'ns.rmc.com'] endpoints = (access_logs .map(lambda log: (log.endpoint, 1)) .reduceByKey(lambda a, b : a + b) .cache()) ends = endpoints.map(lambda (x, y): x).collect() counts = endpoints.map(lambda (x, y): y).collect() fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.axis([0, len(ends), 0, max(counts)]) plt.grid(b=True, which='major', axis='y') plt.xlabel('Endpoints') plt.ylabel('Number of Hits') plt.plot(counts) pass # Top Endpoints endpointCounts = (access_logs .map(lambda log: (log.endpoint, 1)) .reduceByKey(lambda a, b : a + b)) topEndpoints = endpointCounts.takeOrdered(10, lambda s: -1 * s[1]) print 'Top Ten Endpoints: %s' % topEndpoints assert topEndpoints == [(u'/images/NASA-logosmall.gif', 59737), (u'/images/KSC-logosmall.gif', 50452), (u'/images/MOSAIC-logosmall.gif', 43890), (u'/images/USA-logosmall.gif', 43664), (u'/images/WORLD-logosmall.gif', 43277), (u'/images/ksclogo-medium.gif', 41336), (u'/ksc.html', 28582), (u'/history/apollo/images/apollo-logo1.gif', 26778), (u'/images/launch-logo.gif', 24755), (u'/', 20292)], 'incorrect Top Ten Endpoints' # TODO: Replace <FILL IN> with appropriate code # HINT: Each of these <FILL IN> below could be completed with a single transformation or action. # You are welcome to structure your solution in a different way, so long as # you ensure the variables used in the next Test section are defined (ie. endpointSum, topTenErrURLs). not200 = access_logs.filter(lambda log : log.response_code != 200) endpointCountPairTuple = not200.map(lambda log: (log.endpoint, 1)) endpointSum = endpointCountPairTuple.reduceByKey(lambda a, b : a + b).cache() topTenErrURLs = endpointSum.takeOrdered(10, lambda s: -1 * s[1]) print 'Top Ten failed URLs: %s' % topTenErrURLs # TEST Top ten error endpoints (3a) Test.assertEquals(endpointSum.count(), 7689, 'incorrect count for endpointSum') Test.assertEquals(topTenErrURLs, [(u'/images/NASA-logosmall.gif', 8761), (u'/images/KSC-logosmall.gif', 7236), (u'/images/MOSAIC-logosmall.gif', 5197), (u'/images/USA-logosmall.gif', 5157), (u'/images/WORLD-logosmall.gif', 5020), (u'/images/ksclogo-medium.gif', 4728), (u'/history/apollo/images/apollo-logo1.gif', 2907), (u'/images/launch-logo.gif', 2811), (u'/', 2199), (u'/images/ksclogosmall.gif', 1622)], 'incorrect Top Ten failed URLs (topTenErrURLs)') print access_logs.take(1)[0] # TODO: Replace <FILL IN> with appropriate code # HINT: Do you recall the tips from (3a)? Each of these <FILL IN> could be an transformation or action. 
hosts = access_logs.map(lambda log: (log.host , 1)) uniqueHosts = hosts.reduceByKey(lambda a, b: a + b).cache() uniqueHostCount = uniqueHosts.count() print 'Unique hosts: %d' % uniqueHostCount # TEST Number of unique hosts (3b) Test.assertEquals(uniqueHostCount, 54507, 'incorrect uniqueHostCount') aa = datetime.datetime(2015,8,28,19,50,6,) print aa print print aa.day # TODO: Replace <FILL IN> with appropriate code # (log.date_time.date().day,log.host),1)中(log.date_time.date().day, log.host)作为key # 1作为value dayToHostPairTuple = access_logs.map(lambda log : ((log.date_time.date().day, log.host), 1) ) # (log.date_time.date().day, log.host) dayGroupedHosts = dayToHostPairTuple.reduceByKey(lambda a, b: a + b).map(lambda a : a[0]).cache() # (log.date_time.date().day, 1) dayHostCount = dayGroupedHosts.map(lambda a: (a[0], 1)) dailyHosts = dayHostCount.reduceByKey(lambda a, b : a + b).sortByKey().cache() dailyHostsList = dailyHosts.take(30) print 'Unique hosts per day: %s' % dailyHostsList # TEST Number of unique daily hosts (3c) Test.assertEquals(dailyHosts.count(), 21, 'incorrect dailyHosts.count()') Test.assertEquals(dailyHostsList, [(1, 2582), (3, 3222), (4, 4190), (5, 2502), (6, 2537), (7, 4106), (8, 4406), (9, 4317), (10, 4523), (11, 4346), (12, 2864), (13, 2650), (14, 4454), (15, 4214), (16, 4340), (17, 4385), (18, 4168), (19, 2550), (20, 2560), (21, 4134), (22, 4456)], 'incorrect dailyHostsList') Test.assertTrue(dailyHosts.is_cached, 'incorrect dailyHosts.is_cached') print "dailyHosts.count():", dailyHosts.count() print "dailyHosts.take(1):", dailyHosts.take(1) print "dailyHosts.collect():", dailyHosts.collect() # TODO: Replace <FILL IN> with appropriate code daysWithHosts = dailyHosts.map(lambda a : a[0]).collect() hosts = dailyHosts.map(lambda a : a[1]).collect() print daysWithHosts print hosts a = [10, 17, 12, 14] print a a.remove(17) print a # TEST Visualizing unique daily hosts (3d) test_days = range(1, 23) test_days.remove(2) Test.assertEquals(daysWithHosts, test_days, 'incorrect days') Test.assertEquals(hosts, [2582, 3222, 4190, 2502, 2537, 4106, 4406, 4317, 4523, 4346, 2864, 2650, 4454, 4214, 4340, 4385, 4168, 2550, 2560, 4134, 4456], 'incorrect hosts') fig = plt.figure(figsize=(8,4.5), facecolor='white', edgecolor='white') plt.axis([min(daysWithHosts), max(daysWithHosts), 0, max(hosts)+500]) plt.grid(b=True, which='major', axis='y') plt.xlabel('Day') plt.ylabel('Hosts') plt.plot(daysWithHosts, hosts) pass a = datetime.datetime(1,2,3,4,5,6) print "a:", a aa = datetime.datetime(1,2,3,4,5,6).date() print "aa:", aa aaa = datetime.datetime(1,2,3,4,5,6).date().day print "aaa:", aaa # TODO: Replace <FILL IN> with appropriate code print "access_logs.take(3):", access_logs.take(3) print # ( (log.date_time.date().day, log.host), 1 ) dayAndHostTuple = access_logs.map(lambda log : ((log.date_time.date().day, log.host), 1) ) print "dayAndHostTuple.take(3):", dayAndHostTuple.take(3) print # (log.date_time.date().day, (1, (log.date_time.date().day, log.host)) ) groupedByDay = dayAndHostTuple.reduceByKey(lambda a, b : a + b ).map(lambda a :(a[0][0],(1,a[1]))) sortedByDay = groupedByDay.reduceByKey(lambda a, b :(a[0] + b[0] , a[1]+b[1])).sortByKey() print "sortedByDay.take(2):", sortedByDay.take(2) print avgDailyReqPerHost = sortedByDay.map(lambda a :(a[0], a[1][1]/a[1][0])).cache() avgDailyReqPerHostList = avgDailyReqPerHost.take(30) print 'Average number of daily requests per Hosts is %s' % avgDailyReqPerHostList # TEST Average number of daily requests per hosts (3e) 
Test.assertEquals(avgDailyReqPerHostList, [(1, 13), (3, 12), (4, 14), (5, 12), (6, 12), (7, 13), (8, 13), (9, 14), (10, 13), (11, 14), (12, 13), (13, 13), (14, 13), (15, 13), (16, 13), (17, 13), (18, 13), (19, 12), (20, 12), (21, 13), (22, 12)], 'incorrect avgDailyReqPerHostList') Test.assertTrue(avgDailyReqPerHost.is_cached, 'incorrect avgDailyReqPerHost.is_cache') # TODO: Replace <FILL IN> with appropriate code print 'Average number of daily requests per Hosts is %s' % avgDailyReqPerHostList daysWithAvg = avgDailyReqPerHost.map(lambda a : a[0]).collect() avgs = avgDailyReqPerHost.map(lambda a : a[1]).collect() # TEST Average Daily Requests per Unique Host (3f) Test.assertEquals(daysWithAvg, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], 'incorrect days') Test.assertEquals(avgs, [13, 12, 14, 12, 12, 13, 13, 14, 13, 14, 13, 13, 13, 13, 13, 13, 13, 12, 12, 13, 12], 'incorrect avgs') fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.axis([0, max(daysWithAvg), 0, max(avgs)+2]) plt.grid(b=True, which='major', axis='y') plt.xlabel('Day') plt.ylabel('Average') plt.plot(daysWithAvg, avgs) pass print access_logs.take(1) # TODO: Replace <FILL IN> with appropriate code badRecords = (access_logs .filter(lambda log : log.response_code == 404) ).cache() print 'Found %d 404 URLs' % badRecords.count() # TEST Counting 404 (4a) Test.assertEquals(badRecords.count(), 6185, 'incorrect badRecords.count()') Test.assertTrue(badRecords.is_cached, 'incorrect badRecords.is_cached') print badRecords.take(1) # TODO: Replace <FILL IN> with appropriate code badEndpoints = badRecords.map(lambda log : (log.endpoint, 1)) badUniqueEndpoints = badEndpoints.reduceByKey(lambda a , b : a + b).map(lambda a : a[0]) print "badUniqueEndpoints.count():", badUniqueEndpoints.count() print badUniqueEndpointsPick40 = badUniqueEndpoints.take(40) print '404 URLS: %s' % badUniqueEndpointsPick40 # TEST Listing 404 records (4b) badUniqueEndpointsSet40 = set(badUniqueEndpointsPick40) Test.assertEquals(len(badUniqueEndpointsSet40), 40, 'badUniqueEndpointsPick40 not distinct') print badRecords.take(1) # TODO: Replace <FILL IN> with appropriate code badEndpointsCountPairTuple = badRecords.map(lambda log : (log.endpoint, 1)) badEndpointsSum = badEndpointsCountPairTuple.reduceByKey(lambda a , b: a + b).cache() print "badEndP=pointsSum.count():", badEndpointsSum.count() print badEndpointsTop20 = badEndpointsSum.takeOrdered(20, lambda (a, b): -b) print 'Top Twenty 404 URLs: %s' % badEndpointsTop20 # TEST Top twenty 404 URLs (4c) Test.assertEquals(badEndpointsTop20, [(u'/pub/winvn/readme.txt', 633), (u'/pub/winvn/release.txt', 494), (u'/shuttle/missions/STS-69/mission-STS-69.html', 431), (u'/images/nasa-logo.gif', 319), (u'/elv/DELTA/uncons.htm', 178), (u'/shuttle/missions/sts-68/ksc-upclose.gif', 156), (u'/history/apollo/sa-1/sa-1-patch-small.gif', 146), (u'/images/crawlerway-logo.gif', 120), (u'/://spacelink.msfc.nasa.gov', 117), (u'/history/apollo/pad-abort-test-1/pad-abort-test-1-patch-small.gif', 100), (u'/history/apollo/a-001/a-001-patch-small.gif', 97), (u'/images/Nasa-logo.gif', 85), (u'/shuttle/resources/orbiters/atlantis.gif', 64), (u'/history/apollo/images/little-joe.jpg', 62), (u'/images/lf-logo.gif', 59), (u'/shuttle/resources/orbiters/discovery.gif', 56), (u'/shuttle/resources/orbiters/challenger.gif', 54), (u'/robots.txt', 53), (u'/elv/new01.gif>', 43), (u'/history/apollo/pad-abort-test-2/pad-abort-test-2-patch-small.gif', 38)], 'incorrect badEndpointsTop20') print 
"badRecords.take(1):", badRecords.take(1) # TODO: Replace <FILL IN> with appropriate code errHostsCountPairTuple = badRecords.map(lambda log : (log.host ,1)) errHostsSum = errHostsCountPairTuple.reduceByKey(lambda a, b: a + b).cache() errHostsTop25 = errHostsSum.takeOrdered(25, key = lambda (a , b): -b) print 'Top 25 hosts that generated errors: %s' % errHostsTop25 # TEST Top twenty-five 404 response code hosts (4d) Test.assertEquals(len(errHostsTop25), 25, 'length of errHostsTop25 is not 25') Test.assertEquals(len(set(errHostsTop25) - set([(u'maz3.maz.net', 39), (u'piweba3y.prodigy.com', 39), (u'gate.barr.com', 38), (u'm38-370-9.mit.edu', 37), (u'ts8-1.westwood.ts.ucla.edu', 37), (u'nexus.mlckew.edu.au', 37), (u'204.62.245.32', 33), (u'163.206.104.34', 27), (u'spica.sci.isas.ac.jp', 27), (u'www-d4.proxy.aol.com', 26), (u'www-c4.proxy.aol.com', 25), (u'203.13.168.24', 25), (u'203.13.168.17', 25), (u'internet-gw.watson.ibm.com', 24), (u'scooter.pa-x.dec.com', 23), (u'crl5.crl.com', 23), (u'piweba5y.prodigy.com', 23), (u'onramp2-9.onr.com', 22), (u'slip145-189.ut.nl.ibm.net', 22), (u'198.40.25.102.sap2.artic.edu', 21), (u'gn2.getnet.com', 20), (u'msp1-16.nas.mr.net', 20), (u'isou24.vilspa.esa.es', 19), (u'dial055.mbnet.mb.ca', 19), (u'tigger.nashscene.com', 19)])), 0, 'incorrect errHostsTop25') # TODO: Replace <FILL IN> with appropriate code errDateCountPairTuple = badRecords.map(lambda log : (log.date_time.date().day, 1)) errDateSum = errDateCountPairTuple.reduceByKey(lambda a , b: a + b).cache() errDateSorted = errDateSum.sortByKey().cache() errByDate = errDateSorted.take(30) print '404 Errors by day: %s' % errByDate # TEST 404 response codes per day (4e) Test.assertEquals(errByDate, [(1, 243), (3, 303), (4, 346), (5, 234), (6, 372), (7, 532), (8, 381), (9, 279), (10, 314), (11, 263), (12, 195), (13, 216), (14, 287), (15, 326), (16, 258), (17, 269), (18, 255), (19, 207), (20, 312), (21, 305), (22, 288)], 'incorrect errByDate') Test.assertTrue(errDateSorted.is_cached, 'incorrect errDateSorted.is_cached') print "errDateSorted.take(1):", errDateSorted.take(1) print print "errDateSorted.count():", errDateSorted.count() print print "errDateSorted.collect():", errDateSorted.collect() # TODO: Replace <FILL IN> with appropriate code daysWithErrors404 = errDateSorted.map(lambda a : a[0]).collect() print "daysWithErrors404:", daysWithErrors404 print errors404ByDay = errDateSorted.map(lambda a : a[1]).collect() print "errors404ByDay:", errors404ByDay # TEST Visualizing the 404 Response Codes by Day (4f) Test.assertEquals(daysWithErrors404, [1, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22], 'incorrect daysWithErrors404') Test.assertEquals(errors404ByDay, [243, 303, 346, 234, 372, 532, 381, 279, 314, 263, 195, 216, 287, 326, 258, 269, 255, 207, 312, 305, 288], 'incorrect errors404ByDay') fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.axis([0, max(daysWithErrors404), 0, max(errors404ByDay)]) plt.grid(b=True, which='major', axis='y') plt.xlabel('Day') plt.ylabel('404 Errors') plt.plot(daysWithErrors404, errors404ByDay) pass print errDateSorted.collect() # TODO: Replace <FILL IN> with appropriate code topErrDate = errDateSorted.takeOrdered(5 , lambda (a, b): -b) print 'Top Five dates for 404 requests: %s' % topErrDate # TEST Five dates for 404 requests (4g) Test.assertEquals(topErrDate, [(7, 532), (8, 381), (6, 372), (4, 346), (15, 326)], 'incorrect topErrDate') print "badRecords.count():", badRecords.count() print print "badRecords.take(1):", 
badRecords.take(1) # TODO: Replace <FILL IN> with appropriate code hourCountPairTuple = badRecords.map(lambda a : (a.date_time.hour , 1)) hourRecordsSum = hourCountPairTuple.reduceByKey(lambda a , b: a + b) hourRecordsSorted = hourRecordsSum.sortByKey().cache() errHourList = hourRecordsSorted.take(24) print 'Top hours for 404 requests: %s' % errHourList # TEST Hourly 404 response codes (4h) Test.assertEquals(errHourList, [(0, 175), (1, 171), (2, 422), (3, 272), (4, 102), (5, 95), (6, 93), (7, 122), (8, 199), (9, 185), (10, 329), (11, 263), (12, 438), (13, 397), (14, 318), (15, 347), (16, 373), (17, 330), (18, 268), (19, 269), (20, 270), (21, 241), (22, 234), (23, 272)], 'incorrect errHourList') Test.assertTrue(hourRecordsSorted.is_cached, 'incorrect hourRecordsSorted.is_cached') # TODO: Replace <FILL IN> with appropriate code hoursWithErrors404 = hourRecordsSorted.map(lambda a : a[0]).collect() print "hoursWithErrors404:", hoursWithErrors404 print errors404ByHours = hourRecordsSorted.map(lambda a : a[1]).collect() print "errors404ByHours:", errors404ByHours # TEST Visualizing the 404 Response Codes by Hour (4i) Test.assertEquals(hoursWithErrors404, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], 'incorrect hoursWithErrors404') Test.assertEquals(errors404ByHours, [175, 171, 422, 272, 102, 95, 93, 122, 199, 185, 329, 263, 438, 397, 318, 347, 373, 330, 268, 269, 270, 241, 234, 272], 'incorrect errors404ByHours') fig = plt.figure(figsize=(8,4.2), facecolor='white', edgecolor='white') plt.axis([0, max(hoursWithErrors404), 0, max(errors404ByHours)]) plt.grid(b=True, which='major', axis='y') plt.xlabel('Hour') plt.ylabel('404 Errors') plt.plot(hoursWithErrors404, errors404ByHours) pass <END_TASK>
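The exercises above keep repeating one idiom: map each log line to (key, 1), reduceByKey, then sort. A small helper capturing that pattern (a sketch; the name is illustrative, not part of the lab):
def count_by(rdd, key_fn):
    return (rdd.map(lambda log: (key_fn(log), 1))
               .reduceByKey(lambda a, b: a + b)
               .sortByKey())
# e.g. count_by(badRecords, lambda log: log.date_time.hour) reproduces hourRecordsSorted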
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step2: version 1.0.0 Step4: (1b) Configuration and Initial RDD Creation Step5: (1c) Data Cleaning Step6: Part 2 Step7: (2b) Example Step9: (2c) Example Step10: (2d) Example Step11: (2e) Example Step12: (2f) Example Step13: Part 3 Step14: (3b) Exercise Step15: (3c) Exercise Step16: (3d) Exercise Step17: (3e) Exercise Step18: (3f) Exercise Step19: Part 4 Step20: (4b) Exercise Step21: (4c) Exercise Step22: (4d) Exercise Step23: (4e) Exercise Step24: (4f) Exercise Step25: (4g) Exercise Step26: (4h) Exercise Step27: (4i) Exercise
3,985
<ASSISTANT_TASK:> Python Code: import pkg_resources raw_data = pd.read_csv(pkg_resources.resource_stream('deepforest', 'data/train.csv')) clean_data = raw_data.drop(["Cabin", "Name", "PassengerId", "Ticket"], axis=1) clean_data = pd.get_dummies(clean_data).fillna(-1) train, test = train_test_split(clean_data) def split_x_y(dataframe, target): return dataframe.drop(target, axis=1), dataframe[target] X_train, y_train = split_x_y(train, "Survived") X_test, y_test = split_x_y(test, "Survived") rf = RandomForestClassifier(n_estimators=100, n_jobs=-1) rf.fit(X_train, y_train) y_pred = rf.predict_proba(X_test) auc = roc_auc_score(y_true=y_test, y_score=y_pred[:, 1]) auc from sklearn.model_selection import StratifiedKFold rf1 = RandomForestClassifier(n_estimators=100, n_jobs=-1, max_depth=4) rf2 = RandomForestClassifier(n_estimators=100, n_jobs=-1, max_depth=10) rf1.fit(X_train, y_train) y_pred_train_1 = rf1.predict_proba(X_train) y_pred_test_1 = rf1.predict_proba(X_test) y_pred_train_1 = pd.DataFrame(y_pred_train_1, columns=["rf1_0", "rf1_1"], index=X_train.index) y_pred_test_1 = pd.DataFrame(y_pred_test_1, columns=["rf1_0", "rf1_1"], index=X_test.index) rf2.fit(X_train, y_train) y_pred_train_2 = rf2.predict_proba(X_train) y_pred_test_2 = rf2.predict_proba(X_test) y_pred_train_2 = pd.DataFrame(y_pred_train_2, columns=["rf2_0", "rf2_1"], index=X_train.index) y_pred_test_2 = pd.DataFrame(y_pred_test_2, columns=["rf2_0", "rf2_1"], index=X_test.index) hidden_train_1 = pd.concat([X_train, y_pred_train_1, y_pred_train_2], axis=1) hidden_test_1 = pd.concat([X_test, y_pred_test_1, y_pred_test_2], axis=1) hidden_train_1.head() rf3 = RandomForestClassifier(n_estimators=300, n_jobs=-1) rf3.fit(hidden_train_1, y_train) y_pred3 = rf3.predict_proba(hidden_test_1) roc_auc_score(y_test, y_pred3[:, 1]) from deepforest.layer import Layer, InputLayer input_layer = InputLayer(RandomForestClassifier(n_estimators=100, n_jobs=-1, max_depth=4), RandomForestClassifier(n_estimators=100, n_jobs=-1, max_depth=10)) hidden_layer = Layer(input_layer, RandomForestClassifier(n_estimators=50, n_jobs=-1, max_depth=4), RandomForestClassifier(n_estimators=50, n_jobs=-1, max_depth=10)) hidden_layer.fit(X_train, y_train) pd.DataFrame(hidden_layer.predict(X_test), index=X_test.index) def random_forest_generator(): for i in range(2, 15, 2): yield RandomForestClassifier(n_estimators=100, n_jobs=-1, min_samples_leaf=5, max_depth=i) for i in range(2, 15, 2): yield RandomForestClassifier(n_estimators=100, n_jobs=-1, max_features=1, min_samples_leaf=5, max_depth=i) def paper_like_generator(): for i in range(2): yield RandomForestClassifier(n_estimators=1000, n_jobs=-1, min_samples_leaf=10) for i in range(2): yield RandomForestClassifier(n_estimators=1000, n_jobs=-1, max_features=1, min_samples_leaf=10) def build_input_layer(): return InputLayer(*paper_like_generator()) def build_hidden_layer(layer): return Layer(layer, *paper_like_generator()) def build_output_layer(layer): return Layer(layer, RandomForestClassifier(n_estimators=500, n_jobs=-1, min_samples_leaf=5, max_depth=10)) input_l = build_input_layer() hidden_1 = build_hidden_layer(input_l) hidden_2 = build_hidden_layer(hidden_1) hidden_3 = build_hidden_layer(hidden_2) hidden_4 = build_hidden_layer(hidden_3) output_l = build_output_layer(hidden_4) output_l.fit(X_train, y_train) y_pred = output_l.predict(X_test) y_pred roc_auc_score(y_test, y_pred[:, 1]) <END_TASK>
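The hand-wired chain of layers above can be collapsed into a loop; this sketch reuses the same builder functions, with the cascade depth as an illustrative parameter:
def build_cascade(n_hidden=4):
    layer = build_input_layer()
    for _ in range(n_hidden):
        layer = build_hidden_layer(layer)
    return build_output_layer(layer)
output_layer = build_cascade()  # same topology as input_l -> hidden_1..hidden_4 -> output_l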
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Baseline model
Step2: Deep Forest
Step3: This is not very handy at all. We already see a lot of code duplication, and one may feel there should be a way to abstract a lot of the logic happening here in a way that is more flexible and powerful than all this boilerplate code.
Step4: Going Further
3,986
<ASSISTANT_TASK:> Python Code: #####Test Machine Learning Algorithms ip = 'Insert IP here' conn = MongoClient(ip, 27017) conn.database_names() db = conn.get_database('cleaned_data') reviews = db.get_collection('restaurant_reviews') useridlist =[] for user in users_dict.keys(): useridlist.append(user) print(useridlist[1]) def make_user_df(user_specific_reviews): #Input: #user_specific_reviews: A list of reviews for a specific user #Output: A dataframe with the columns (user_reviews, user_ratings, biz_ids) user_reviews = [] user_ratings = [] business_ids = [] for review in user_specific_reviews: user_reviews.append(review['text']) user_ratings.append(review['stars']) business_ids.append(review['business_id']) ###WE SHOULD MAKE THE OUR OWN PUNCTUATION RULES #https://www.tutorialspoint.com/python/string_translate.htm #I'm gonna have to go and figure out what this does -ed #user_reviews = [review.encode('utf-8').translate(None, string.punctuation) for review in user_reviews] user_df = pd.DataFrame({'review_text': user_reviews, 'rating': user_ratings, 'biz_id': business_ids}) return user_df #test to make users_dict,make_user_df works user_specific_reviews = users_dict[useridlist[0]] x= make_user_df(user_specific_reviews) x.head() business_ids = list(set(user['biz_id'])) restreview = {} for i in range(0, len(business_ids)): rlist = [] for obj in reviews.find({'business_id':business_ids[i]}): rlist.append(obj) restreview[business_ids[i]] = rlist restaurant_df = yml.make_biz_df(user, restreview) #Create a training and test sample from the user reviewed restaurants split_samp = .30 random_int = random.randint(1, len(business_ids)-1) len_random = int(len(business_ids) * split_samp) test_set = business_ids[random_int:random_int+len_random] training_set = business_ids[0:random_int]+business_ids[random_int+len_random:len(business_ids)] train_reviews, train_ratings = [], [] #Create a list of training reviews and training ratings for rest_id in training_set: train_reviews.extend(list(user_df[user_df['biz_id'] == rest_id]['review_text'])) train_ratings.extend(list(user_df[user_df['biz_id'] == rest_id]['rating'])) #Transform the star labels into a binary class problem, 0 if rating is < 4 else 1 train_labels = [1 if x >=4 else 0 for x in train_ratings] #this is just for my understand of how the model is working under the hood def fit_lsi(train_reviews): #Input: train_reviews is a list of reviews that will be used to train the LSI feature transformer #Output: A trained LSI model and the transformed training reviews texts = [[word for word in review.lower().split() if (word not in stop_words)] for review in train_reviews] dictionary = corpora.Dictionary(texts) corpus = [dictionary.doc2bow(text) for text in texts] numpy_matrix = matutils.corpus2dense(corpus, num_terms=10000) singular_values = np.linalg.svd(numpy_matrix, full_matrices=False, compute_uv=False) mean_sv = sum(list(singular_values))/len(singular_values) topics = int(mean_sv) tfidf = models.TfidfModel(corpus) corpus_tfidf = tfidf[corpus] lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=topics) return lsi, topics, dictionary #Fit LSI model and return number of LSI topics lsi, topics, dictionary = yml.fit_lsi(train_reviews) #Make a FeatureUnion object with the desired features then fit to train reviews comb_features = yml.make_featureunion() comb_features.fit(train_reviews) train_features = comb_features.transform(train_reviews) train_lsi = yml.get_lsi_features(train_reviews, lsi, topics, dictionary) train_features = 
sparse.hstack((train_features, train_lsi))
train_features = train_features.todense()
#fit each model in turn
model_runs = [(True, False, False, False, False), (False, True, False, False, False),
              (False, False, True, False, False), (False, False, False, True, False),
              (False, False, False, False, True)]
test_results = {}
for i in tqdm.tqdm(range(0, len(model_runs))):
    clf = yml.fit_model(train_features, train_labels, svm_clf = model_runs[i][0],
                        RandomForest = model_runs[i][1], nb = model_runs[i][2])
    threshold = 0.7
    error = yml.test_user_set(test_set, clf, restaurant_df, user_df, comb_features,
                              threshold, lsi, topics, dictionary)
    test_results[clf] = error
#Get top predictions
for key in test_results.keys():
    results = test_results[key]  # fixed: previously this always indexed the first key
    log_loss = yml.get_log_loss()
    print(log_loss)  # fixed: Python 3 print call <END_TASK>
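For illustration only (the review text is made up, and this assumes the fitted clf exposes predict_proba): a single unseen review can be pushed through the same feature pipeline and scored.
new_review = ["The tasting menu was superb and the service was friendly."]  # hypothetical input
feats = comb_features.transform(new_review)
lsi_feats = yml.get_lsi_features(new_review, lsi, topics, dictionary)
feats = sparse.hstack((feats, lsi_feats)).todense()
print(clf.predict_proba(feats))  # estimated probability the user would rate it >= 4 stars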
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.Create a user dataframe with the following columns
Step2: 2.Create a list of unique business IDs for that user
Step3: 3.Connect to the MongoDB server and pull all of the reviews for the restaurants that the user has reviewed
Step4: 4.Create a restaurant dataframe with the following columns
Step5: 5.Do a 70/30 training/test split, randomizing over the set of the user's reviewed restaurants
Step6: 6.Train the LSI model on the set of training reviews, get the number of topics used in fitting
Step7: 7.Set up the FeatureUnion with the desired features, then fit according to the train reviews and transform the train reviews
3,987
<ASSISTANT_TASK:> Python Code: # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import imageio import matplotlib.pyplot as plt import numpy as np import os import sys import tarfile from IPython.display import display, Image from sklearn.linear_model import LogisticRegression from six.moves.urllib.request import urlretrieve from six.moves import cPickle as pickle # Config the matplotlib backend as plotting inline in IPython %matplotlib inline url = 'https://commondatastorage.googleapis.com/books1000/' last_percent_reported = None data_root = '.' # Change me to store data elsewhere def download_progress_hook(count, blockSize, totalSize): A hook to report the progress of a download. This is mostly intended for users with slow internet connections. Reports every 5% change in download progress. global last_percent_reported percent = int(count * blockSize * 100 / totalSize) if last_percent_reported != percent: if percent % 5 == 0: sys.stdout.write("%s%%" % percent) sys.stdout.flush() else: sys.stdout.write(".") sys.stdout.flush() last_percent_reported = percent def maybe_download(filename, expected_bytes, force=False): Download a file if not present, and make sure it's the right size. dest_filename = os.path.join(data_root, filename) if force or not os.path.exists(dest_filename): print('Attempting to download:', filename) filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook) print('\nDownload Complete!') statinfo = os.stat(dest_filename) if statinfo.st_size == expected_bytes: print('Found and verified', dest_filename) else: raise Exception( 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?') return dest_filename train_filename = maybe_download('notMNIST_large.tar.gz', 247336696) test_filename = maybe_download('notMNIST_small.tar.gz', 8458043) num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz if os.path.isdir(root) and not force: # You may override by setting force=True. print('%s already present - Skipping extraction of %s.' % (root, filename)) else: print('Extracting data for %s. This may take a while. Please wait.' % root) tar = tarfile.open(filename) sys.stdout.flush() tar.extractall(data_root) tar.close() data_folders = [ os.path.join(root, d) for d in sorted(os.listdir(root)) if os.path.isdir(os.path.join(root, d))] if len(data_folders) != num_classes: raise Exception( 'Expected %d folders, one per class. Found %d instead.' % ( num_classes, len(data_folders))) print(data_folders) return data_folders train_folders = maybe_extract(train_filename) test_folders = maybe_extract(test_filename) image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): Load the data for a single letter label. 
image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image_files), image_size, image_size), dtype=np.float32) print(folder) num_images = 0 for image in image_files: image_file = os.path.join(folder, image) try: image_data = (imageio.imread(image_file).astype(float) - pixel_depth / 2) / pixel_depth if image_data.shape != (image_size, image_size): raise Exception('Unexpected image shape: %s' % str(image_data.shape)) dataset[num_images, :, :] = image_data num_images = num_images + 1 except (IOError, ValueError) as e: print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.') dataset = dataset[0:num_images, :, :] if num_images < min_num_images: raise Exception('Many fewer images than expected: %d < %d' % (num_images, min_num_images)) print('Full dataset tensor:', dataset.shape) print('Mean:', np.mean(dataset)) print('Standard deviation:', np.std(dataset)) return dataset def maybe_pickle(data_folders, min_num_images_per_class, force=False): dataset_names = [] for folder in data_folders: set_filename = folder + '.pickle' dataset_names.append(set_filename) if os.path.exists(set_filename) and not force: # You may override by setting force=True. print('%s already present - Skipping pickling.' % set_filename) else: print('Pickling %s.' % set_filename) dataset = load_letter(folder, min_num_images_per_class) try: with open(set_filename, 'wb') as f: pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL) except Exception as e: print('Unable to save data to', set_filename, ':', e) return dataset_names train_datasets = maybe_pickle(train_folders, 45000) test_datasets = maybe_pickle(test_folders, 1800) def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): num_classes = len(pickle_files) valid_dataset, valid_labels = make_arrays(valid_size, image_size) train_dataset, train_labels = make_arrays(train_size, image_size) vsize_per_class = valid_size // num_classes tsize_per_class = train_size // num_classes start_v, start_t = 0, 0 end_v, end_t = vsize_per_class, tsize_per_class end_l = vsize_per_class+tsize_per_class for label, pickle_file in enumerate(pickle_files): try: with open(pickle_file, 'rb') as f: letter_set = pickle.load(f) # let's shuffle the letters to have random validation and training set np.random.shuffle(letter_set) if valid_dataset is not None: valid_letter = letter_set[:vsize_per_class, :, :] valid_dataset[start_v:end_v, :, :] = valid_letter valid_labels[start_v:end_v] = label start_v += vsize_per_class end_v += vsize_per_class train_letter = letter_set[vsize_per_class:end_l, :, :] train_dataset[start_t:end_t, :, :] = train_letter train_labels[start_t:end_t] = label start_t += tsize_per_class end_t += tsize_per_class except Exception as e: print('Unable to process data from', pickle_file, ':', e) raise return valid_dataset, valid_labels, train_dataset, train_labels train_size = 200000 valid_size = 10000 test_size = 10000 valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets( train_datasets, train_size, valid_size) _, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size) print('Training:', train_dataset.shape, train_labels.shape) print('Validation:', valid_dataset.shape, valid_labels.shape) print('Testing:', test_dataset.shape, test_labels.shape) def randomize(dataset, labels): permutation = 
np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return shuffled_dataset, shuffled_labels train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels = randomize(test_dataset, test_labels) valid_dataset, valid_labels = randomize(valid_dataset, valid_labels) pickle_file = os.path.join(data_root, 'notMNIST.pickle') try: f = open(pickle_file, 'wb') save = { 'train_dataset': train_dataset, 'train_labels': train_labels, 'valid_dataset': valid_dataset, 'valid_labels': valid_labels, 'test_dataset': test_dataset, 'test_labels': test_labels, } pickle.dump(save, f, pickle.HIGHEST_PROTOCOL) f.close() except Exception as e: print('Unable to save data to', pickle_file, ':', e) raise statinfo = os.stat(pickle_file) print('Compressed pickle size:', statinfo.st_size) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labeled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine. Step4: Extract the dataset from the compressed .tar.gz file. Step6: Problem 1 Step7: Problem 2 Step8: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match. Step9: Problem 4
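One quick sanity check in the spirit of the Problem prompts above is to confirm the pickled classes are roughly balanced; this small sketch (not part of the assignment code) reloads each per-class pickle produced by maybe_pickle() and counts images:

import pickle

def class_sizes(pickle_files):
    # Map each per-class pickle file to the number of images it holds.
    sizes = {}
    for fname in pickle_files:
        with open(fname, 'rb') as f:
            sizes[fname] = pickle.load(f).shape[0]
    return sizes

# e.g. print(class_sizes(train_datasets))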
3,988
<ASSISTANT_TASK:> Python Code: # Import modules that contain functions we need import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt # Read in data that will be used for the calculations. # Using pandas read_csv method, we can create a data frame data = pd.read_csv("https://github.com/adamlamee/CODINGinK12-data/raw/master/stars.csv") # We wish to look at the first 3 rows of our data set data.head(3) fig = plt.figure(figsize=(15, 6)) plt.scatter(data.ra,data.dec, s=0.01) plt.xlim(24, 0) plt.title("All the Stars in the Catalogue") plt.xlabel('Right Ascension (Hours)') plt.ylabel('Declination (Degrees)') from IPython.display import Image from IPython.core.display import HTML Image(url= 'http://www.hpcf.upr.edu/~abel/phl/nearby_stars_with_exoplanets.png') # format the points on the graph transparency = 1 size = 1 # draws a scatter plot fig = plt.figure(figsize=(20, 4.5)) plt.scatter(data.temp, data.lum, s=size, edgecolors='none', alpha=transparency) plt.xlim(2000,15000) plt.ylim(0,1000) plt.title("Does hotter mean brighter?") plt.ylabel("Luminosity") plt.xlabel("Temperature (K)") Image(url= 'http://hmxearthscience.com/Galaxies%20and%20Stars/HR%20Lab%202.jpg') # These are the abbreviations for all the constellations data.sort_values('con').con.unique() # This shows just one constellation data_con = data.query('con == "Vir"') # This plots where the brightest 10 stars are in the sky data_con = data_con.sort_values('mag').head(10) plt.scatter(data_con.ra,data_con.dec) plt.gca().invert_xaxis() plt.title("A constellation in the sky") plt.xlabel('Right Ascension (Hours)') plt.ylabel('Declination (Degrees)') # format the points on the graph transparency = 0.2 size = 1 # draws a scatter plot fig = plt.figure(figsize=(6, 4.5)) plt.scatter(data.temp, data.absmag, s=size, edgecolors='none', alpha=transparency) plt.scatter(data_con.temp, data_con.absmag, color='red', edgecolors='none') plt.xlim(17000,2000) plt.ylim(18,-18) plt.title("Types of stars in a constellation") plt.ylabel("Absolute Magnitude") plt.xlabel("Temperature (K)") <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: PART 1 Step2: Run the code below to create a chart containing all of the stars with exoplanets within 100 light years! Step4: Run the code below to display a chart of star color and temperatures. Step6: Choose a constellation from the list above and insert its 3-letter code below in the " ", for example "Vir".
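Note the catalogue stores right ascension in hours while many star charts use degrees; since 24 hours span 360 degrees, the conversion is a factor of 15. A hypothetical helper (not in the notebook) makes this explicit:

def ra_hours_to_degrees(ra_hours):
    # 24 hours of right ascension = 360 degrees, so 1 hour = 15 degrees
    return ra_hours * 15.0

print(ra_hours_to_degrees(13.4))  # about 201 degrees, near Virgo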
3,989
<ASSISTANT_TASK:> Python Code: import pandas as pd from collections import Counter seqs = ['TCGGGGGTTTTT', 'CCGGTGACTTAC', 'ACGGGGATTTTC', 'TTGGGGACTTTT', 'AAGGGGACTTCC', 'TTGGGGACTTCC', 'TCGGGGATTCAT', 'TCGGGGATTCCT', 'TAGGGGACCTAC', 'TCGGGTATAACC'] data = {} for i, seq in enumerate(seqs): data[i+1] = list(seq) df = pd.DataFrame(data).T df freq_df = pd.DataFrame() nucs = list('ACGT') num_seqs = len(seqs) counts_df = pd.DataFrame(0, index=nucs, columns=list(range(num_seqs))) profile_df = pd.DataFrame(0., index=nucs, columns=list(range(num_seqs))) simple_consensus = [] for col in df.columns: bases = df[col].values.tolist() base_count = Counter(bases) most_common = max(set(bases), key=bases.count) simple_consensus.append(most_common) freq_df[col] = list(map(lambda b: b if b == most_common else b.lower(), bases)) for nuc in nucs: counts_df.at[nuc, col] = base_count[nuc] profile_df.at[nuc, col] = base_count[nuc] / len(seqs) print('Consensus') print(''.join(simple_consensus)) print('Frequency') freq_df print('Counts') counts_df print('Profile') profile_df scores = [] for col in freq_df: bases = freq_df[col].values.tolist() lower = list(filter(lambda b: b.islower(), bases)) scores.append(len(lower)) print(scores) print(sum(scores)) consensus = [] for col in profile_df.columns: profile = profile_df[col] max_bases = profile[profile >= .4].index.tolist() consensus.append('/'.join(max_bases)) print('Consensus') print(' '.join(consensus)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Now we have a DataFrame where we can get all the bases in each position to find the base(s) occurring most frequently. We'll create a second DataFrame where the less frequent bases are put into lowercase. We'll also create a profile that gives us the frequency as a percentage. Note that we can create a DataFrame of zeros easily, but we need the percentage profile to be initialized with "0." to indicate that we will store floats, not integers. Step2: For the "real" consensus, we'll take into account all bases occurring at a frequency of 40% or greater.
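The per-column counting above can also be compressed with collections.Counter; this standalone sketch reuses three of the sequences and prints the simple majority consensus:

from collections import Counter

seqs = ['TCGGGGGTTTTT', 'CCGGTGACTTAC', 'ACGGGGATTTTC']
# zip(*seqs) yields one tuple per column; most_common(1) gives the majority base
consensus = ''.join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))
print(consensus)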
3,990
<ASSISTANT_TASK:> Python Code: # Imports added for completeness; load_orbit_file() and mjd2unixtimestamp() are assumed defined elsewhere in the notebook. import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as mdates from astropy.time import Time utc = 0 sma = 1 ecc = 2 inc = 3 raan = 4 aop = 5 ma = 6 ta = 7 #fig1 = plt.figure(figsize = [15,8], facecolor='w') fig_peri = plt.figure(figsize = [15,8], facecolor='w') fig_peri_deorbit = plt.figure(figsize = [15,8], facecolor='w') fig_apo = plt.figure(figsize = [15,8], facecolor='w') fig3 = plt.figure(figsize = [15,8], facecolor='w') fig4 = plt.figure(figsize = [15,8], facecolor='w') fig4_rap = plt.figure(figsize = [15,8], facecolor='w') fig5 = plt.figure(figsize = [15,8], facecolor='w') fig6 = plt.figure(figsize = [15,8], facecolor='w') #sub1 = fig1.add_subplot(111) sub_peri = fig_peri.add_subplot(111) sub_peri_deorbit = fig_peri_deorbit.add_subplot(111) sub_apo = fig_apo.add_subplot(111) sub3 = fig3.add_subplot(111) sub4 = fig4.add_subplot(111) sub4_rap = fig4_rap.add_subplot(111) sub5 = fig5.add_subplot(111) sub6 = fig6.add_subplot(111) subs = [sub_peri, sub_peri_deorbit, sub_apo, sub3, sub4, sub4_rap, sub5, sub6] for file in ['orbit_deorbit.txt', 'orbit_deorbit2.txt', 'orbit_deorbit3.txt']: orbit = load_orbit_file(file) t = Time(mjd2unixtimestamp(orbit[:,utc]), format='unix') #sub1.plot(t.datetime, orbit[:,sma]) sub_peri.plot(t.datetime, orbit[:,sma]*(1-orbit[:,ecc])) deorbit_sel = (mjd2unixtimestamp(orbit[:,utc]) >= 1564012800) & (mjd2unixtimestamp(orbit[:,utc]) <= 1564963200) if np.any(deorbit_sel): sub_peri_deorbit.plot(t[deorbit_sel].datetime, orbit[deorbit_sel,sma]*(1-orbit[deorbit_sel,ecc])) sub_apo.plot(t.datetime, orbit[:,sma]*(1+orbit[:,ecc])) sub3.plot(t.datetime, orbit[:,ecc]) sub4.plot(t.datetime, orbit[:,aop]) sub4_rap.plot(t.datetime, np.fmod(orbit[:,aop] + orbit[:,raan],360)) sub5.plot(t.datetime, orbit[:,inc]) sub6.plot(t.datetime, orbit[:,raan]) sub_peri.axhline(y = 1737, color='red') sub_peri_deorbit.axhline(y = 1737, color='red') month_locator = mdates.MonthLocator() day_locator = mdates.DayLocator() for sub in subs: sub.set_xlabel('Time') sub.xaxis.set_major_locator(month_locator) sub.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m')) sub.xaxis.set_tick_params(rotation=45) sub_peri_deorbit.xaxis.set_major_locator(day_locator) sub_peri_deorbit.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d')) #sub1.set_ylabel('SMA (km)') sub_peri.set_ylabel('Periapsis radius (km)') sub_peri_deorbit.set_ylabel('Periapsis radius (km)') sub_apo.set_ylabel('Apoapsis radius (km)') sub3.set_ylabel('ECC') sub4.set_ylabel('AOP (deg)') sub4_rap.set_ylabel('RAOP (deg)') sub5.set_ylabel('INC (deg)') sub6.set_ylabel('RAAN (deg)') #sub1.set_title('Semi-major axis') sub_peri.set_title('Periapsis radius') sub_peri_deorbit.set_title('Periapsis radius') sub_apo.set_title('Apoapsis radius') sub3.set_title('Eccentricity') sub4.set_title('Argument of periapsis') sub4_rap.set_title('Right ascension of periapsis') sub5.set_title('Inclination') sub6.set_title('Right ascension of ascending node') for sub in subs: sub.legend(['Before periapsis lowering', 'After periapsis lowering', 'Latest ephemeris']) sub_peri.legend(['Before periapsis lowering', 'After periapsis lowering', 'Latest ephemeris', 'Lunar radius']); sub_peri_deorbit.legend(['Before periapsis lowering', 'After periapsis lowering', 'Latest ephemeris', 'Lunar radius']); <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Plot the orbital parameters which vary significantly between different tracking files.
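The plots above lean on a mjd2unixtimestamp() helper that is not shown; a plausible implementation is sketched below, using the fact that the unix epoch (1970-01-01 00:00:00 UTC) falls on MJD 40587. It works elementwise on numpy arrays like orbit[:,utc]:

def mjd2unixtimestamp(mjd):
    # 86400 seconds per day, offset so that MJD 40587 maps to unix time 0
    return (mjd - 40587.0) * 86400.0

print(mjd2unixtimestamp(58600.0))  # 1556323200.0, i.e. 2019-04-27 UTC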
3,991
<ASSISTANT_TASK:> Python Code: # Always run this first # NOTE: Do not define new basic variables in this notebook; # define them in Variables_Q.ipynb. Use this notebook # to define new expressions built from those variables. from __future__ import division # This needs to be here, even though it's in Variables_Q.ipynb import sys sys.path.insert(0, '..') # Look for modules in directory above this one import pickle execfile('../Utilities/ExecNotebook.ipy') try: UseQuaternions VariablesNotebook = 'Variables_Q.ipynb' if UseQuaternions else 'Variables_Q.ipynb' except: UseQuaternions = False VariablesNotebook = 'Variables_Q.ipynb' execnotebook(VariablesNotebook) from Utilities import CodeOutput if 'PNOrbitalEvolutionOrder' not in globals(): PNOrbitalEvolutionOrder = frac(7,2) execnotebook('BindingEnergy.ipynb') execnotebook('EnergyAbsorption.ipynb') execnotebook('Precession.ipynb') # Read in the high-order series expansion of a ratio of polynomials p_Ratio = pickle.load(file('PolynomialRatios/PolynomialRatioSeries_Order{0}.dat'.format(2*PNOrbitalEvolutionOrder+1))) p_Ratio = p_Ratio.removeO().subs('PolynomialVariable',v) # Evaluate the flux, energy, and derivative of energy FluxTerms = [Flux_NoSpin, Flux_Spin] BindingEnergyTerms = [BindingEnergy_NoSpin, BindingEnergy_Spin] for Term in FluxTerms: PNVariables.update(Term) for Term in BindingEnergyTerms: PNVariables.update(Term) Flux = FluxExpression(FluxTerms, PNOrbitalEvolutionOrder) Energy = BindingEnergyExpression(BindingEnergyTerms, PNOrbitalEvolutionOrder) dEdv = BindingEnergyDerivativeExpression(BindingEnergyTerms, PNOrbitalEvolutionOrder) # Evaluate the energy absorption by the BHs, and make substitutions so that the Horner form is nice AbsorptionTerms = [AlviTerms] for Term in AbsorptionTerms: PNVariables.update(Term) Absorption = AbsorptionExpression(AbsorptionTerms, PNOrbitalEvolutionOrder) # Treat remaining log(v) terms as constants, for Taylor expansions and efficient numerical evaluation Flux = Flux.subs(log(v), logv) Energy = Energy.subs(log(v), logv) dEdv = dEdv.subs(log(v), logv) Absorption = Absorption.subs(log(v), logv) # Get the series expansions for the numerators and denominators FluxSeries = series(- (Flux + Absorption)/Fcal_coeff, x=v, x0=0, n=2*PNOrbitalEvolutionOrder+1).removeO() dEdvSeries = series(dEdv/(-nu*v/2), x=v, x0=0, n=2*PNOrbitalEvolutionOrder+1).removeO() # TaylorT4 T4Expressions = PNCollection() NumTerms = {'Num{0}'.format(n): FluxSeries.coeff(v,n=n) for n in range(2*PNOrbitalEvolutionOrder+1)} DenTerms = {'Den{0}'.format(n): dEdvSeries.coeff(v,n=n) for n in range(2*PNOrbitalEvolutionOrder+1)} T4Expressions.AddDerivedConstant('dvdt_T4', (Fcal_coeff/(-nu*v/2))*\ horner(sum([v**n*horner(N(p_Ratio.coeff(v,n=n).subs(dict(NumTerms.items() + DenTerms.items())))) for n in range(2*PNOrbitalEvolutionOrder+1)]))) # TaylorT5 T5Expressions = PNCollection() NumTerms = {'Num{0}'.format(n): dEdvSeries.coeff(v,n=n) for n in range(2*PNOrbitalEvolutionOrder+1)} DenTerms = {'Den{0}'.format(n): FluxSeries.coeff(v,n=n) for n in range(2*PNOrbitalEvolutionOrder+1)} T5Expressions.AddDerivedConstant('dtdv', ((-nu*v/2)/Fcal_coeff)*\ horner(sum([v**n*horner(N(p_Ratio.coeff(v,n=n).subs(dict(NumTerms.items() + DenTerms.items())))) for n in range(2*PNOrbitalEvolutionOrder+1)]))) T5Expressions.AddDerivedConstant('dvdt_T5', 1.0/dtdv) # TaylorT1 just gets some substitutions for efficiency T1Expressions = PNCollection() T1Expressions.AddDerivedConstant('Flux', Flux.subs(Pow(nu,3), nu__3).subs(Pow(nu,2), nu__2)) 
T1Expressions.AddDerivedConstant('dEdv', dEdv.subs(Pow(nu,3), nu__3).subs(Pow(nu,2), nu__2)) T1Expressions.AddDerivedConstant('Absorption', Absorption.subs(Pow(nu,3), nu__3).subs(Pow(nu,2), nu__2)) T1Expressions.AddDerivedConstant('dvdt_T1', - (Flux + Absorption) / dEdv) PrecessionVelocities = PNCollection() PrecessionVelocities.AddDerivedVariable('OmegaVec_chiVec_1', Precession_chiVec1Expression(PNOrbitalEvolutionOrder), datatype=ellHat.datatype) PrecessionVelocities.AddDerivedVariable('OmegaVec_chiVec_2', Precession_chiVec2Expression(PNOrbitalEvolutionOrder), datatype=ellHat.datatype) PrecessionVelocities.AddDerivedVariable(('OmegaVec' if UseQuaternions else 'OmegaVec_ellHat'), Precession_ellHatExpression(PNOrbitalEvolutionOrder)*nHat + ((v**3/M)*ellHat if UseQuaternions else 0), datatype=nHat.datatype) CodeConstructor = CodeOutput.CodeConstructor(PNVariables, T1Expressions) for Terms in BindingEnergyTerms+FluxTerms+[AlviTerms]+[Precession_ellHat, Precession_chiVec1, Precession_chiVec2]: CodeConstructor.AddDependencies(Terms) CodeConstructor.AddDependencies(PrecessionVelocities) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Here, we set the PN order, if it is not already set. This will be used in numerous places below. This is the exponent of the largest power of $x$, or half the exponent of the largest power of $v$ that will appear beyond leading orders in the various quantities. Note that, because of Python's convention that intervals are half-open at the end, most occurrences of PNOrbitalEvolutionOrder in the code will have 1 added to them; the actual value of PNOrbitalEvolutionOrder will be what we normally expect. Step2: TaylorT1, TaylorT4, and TaylorT5* Step3: Next, we calculate the expansions needed for TaylorT4 and T5. These will be the right-hand sides in our evolution equations for $dv/dt$. TaylorT1 simply numerically evaluates a ratio of the terms imported above Step4: Now, the precession terms
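A toy sympy sketch of the TaylorT1 idea described above: the orbital evolution is dv/dt = -Flux/(dE/dv), evaluated as a ratio without re-expanding. The leading-order flux and binding energy below are standalone placeholders, not the full PN series assembled in the notebook:

import sympy as sp

v, nu = sp.symbols('v nu', positive=True)
flux = sp.Rational(32, 5) * nu**2 * v**10   # leading-order GW flux
energy = -sp.Rational(1, 2) * nu * v**2     # leading-order binding energy
dvdt_T1 = -flux / sp.diff(energy, v)
print(sp.simplify(dvdt_T1))                 # 32*nu*v**9/5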
3,992
<ASSISTANT_TASK:> Python Code: import numpy as np import matplotlib.pyplot as plt import h5py import scipy from PIL import Image from scipy import ndimage from lr_utils import load_dataset %matplotlib inline # Loading the data (cat/non-cat) train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset() # Example of a picture index = 23 plt.imshow(train_set_x_orig[index]) print ("y = " + str(train_set_y[:,index]) + ", it's a '" + classes[np.squeeze(train_set_y[:,index])].decode("utf-8") + "' picture.") ### START CODE HERE ### (≈ 3 lines of code) m_train = train_set_y.shape[1] m_test = test_set_y.shape[1] num_px = train_set_x_orig[0].shape[0] ### END CODE HERE ### print ("Number of training examples: m_train = " + str(m_train)) print ("Number of testing examples: m_test = " + str(m_test)) print ("Height/Width of each image: num_px = " + str(num_px)) print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)") print ("train_set_x shape: " + str(train_set_x_orig.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x shape: " + str(test_set_x_orig.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) # Reshape the training and test examples ### START CODE HERE ### (≈ 2 lines of code) train_set_x_flatten = train_set_x_orig.reshape(m_train, -1).T test_set_x_flatten = test_set_x_orig.reshape(m_test, -1).T ### END CODE HERE ### print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape)) print ("train_set_y shape: " + str(train_set_y.shape)) print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape)) print ("test_set_y shape: " + str(test_set_y.shape)) print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0])) train_set_x = train_set_x_flatten / 255. test_set_x = test_set_x_flatten / 255. # GRADED FUNCTION: sigmoid def sigmoid(z): """Compute the sigmoid of z Arguments: z -- A scalar or numpy array of any size. Return: s -- sigmoid(z)""" ### START CODE HERE ### (≈ 1 line of code) s = 1.0 / (1 + np.exp(-z)) ### END CODE HERE ### return s print ("sigmoid(0) = " + str(sigmoid(0))) print ("sigmoid(9.2) = " + str(sigmoid(9.2))) # GRADED FUNCTION: initialize_with_zeros def initialize_with_zeros(dim): """This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument: dim -- size of the w vector we want (or number of parameters in this case) Returns: w -- initialized vector of shape (dim, 1) b -- initialized scalar (corresponds to the bias)""" ### START CODE HERE ### (≈ 1 line of code) w, b = np.zeros((dim, 1)), 0 ### END CODE HERE ### assert(w.shape == (dim, 1)) assert(isinstance(b, float) or isinstance(b, int)) return w, b dim = 2 w, b = initialize_with_zeros(dim) print ("w = " + str(w)) print ("b = " + str(b)) # GRADED FUNCTION: propagate def propagate(w, b, X, Y): """Implement the cost function and its gradient for the propagation explained above Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples) Return: cost -- negative log-likelihood cost for logistic regression dw -- gradient of the loss with respect to w, thus same shape as w db -- gradient of the loss with respect to b, thus same shape as b Tips: - Write your code step by step for the propagation""" m = X.shape[1] # FORWARD PROPAGATION (FROM X TO COST) ### START CODE HERE ### (≈ 2 lines of code) A = sigmoid(np.dot(w.T, X) + b) cost = -1.0 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) ### END CODE HERE ### # BACKWARD PROPAGATION (TO FIND GRAD) ### START CODE HERE ### (≈ 2 lines of code) dw = 1.0 / m * np.dot(X, (A - Y).T) db = 1.0 / m * np.sum(A - Y) ### END CODE HERE ### assert(dw.shape == w.shape) assert(db.dtype == float) cost = np.squeeze(cost) assert(cost.shape == ()) grads = {"dw": dw, "db": db} return grads, cost w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1,2], [3,4]]), np.array([[1, 0]]) grads, cost = propagate(w, b, X, Y) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) print ("cost = " + str(cost)) # GRADED FUNCTION: optimize def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False): """This function optimizes w and b by running a gradient descent algorithm Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of shape (num_px * num_px * 3, number of examples) Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- True to print the loss every 100 steps Returns: params -- dictionary containing the weights w and bias b grads -- dictionary containing the gradients of the weights and bias with respect to the cost function costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve. Tips: You basically need to write down two steps and iterate through them: 1) Calculate the cost and the gradient for the current parameters. Use propagate(). 2) Update the parameters using gradient descent rule for w and b."""
costs = [] for i in range(num_iterations): # Cost and gradient calculation (≈ 1-4 lines of code) ### START CODE HERE ### grads, cost = propagate(w, b, X, Y) ### END CODE HERE ### # Retrieve derivatives from grads dw = grads["dw"] db = grads["db"] # update rule (≈ 2 lines of code) ### START CODE HERE ### w = w - learning_rate * dw b = b - learning_rate * db ### END CODE HERE ### # Record the costs if i % 100 == 0: costs.append(cost) # Print the cost every 100 training examples if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" % (i, cost)) params = {"w": w, "b": b} grads = {"dw": dw, "db": db} return params, grads, costs params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False) print ("w = " + str(params["w"])) print ("b = " + str(params["b"])) print ("dw = " + str(grads["dw"])) print ("db = " + str(grads["db"])) # GRADED FUNCTION: predict def predict(w, b, X): ''' Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b) Arguments: w -- weights, a numpy array of size (num_px * num_px * 3, 1) b -- bias, a scalar X -- data of size (num_px * num_px * 3, number of examples) Returns: Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X ''' m = X.shape[1] Y_prediction = np.zeros((1, m)) w = w.reshape(X.shape[0], 1) # Compute vector "A" predicting the probabilities of a cat being present in the picture ### START CODE HERE ### (≈ 1 line of code) A = sigmoid(np.dot(w.T, X) + b) ### END CODE HERE ### # Y_prediction[A >= 0.5] = int(1) # Y_prediction[A < 0.5] = int(0) for i in range(A.shape[1]): # Convert probabilities a[0,i] to actual predictions p[0,i] ### START CODE HERE ### (≈ 4 lines of code) if A[0][i] > 0.5: Y_prediction[0][i] = 1 else: Y_prediction[0][i] = 0 ### END CODE HERE ### assert(Y_prediction.shape == (1, m)) return Y_prediction.astype(int) print("predictions = " + str(predict(w, b, X))) # GRADED FUNCTION: model def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False): """Builds the logistic regression model by calling the function you've implemented previously Arguments: X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train) Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train) X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test) Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test) num_iterations -- hyperparameter representing the number of iterations to optimize the parameters learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize() print_cost -- Set to true to print the cost every 100 iterations Returns: d -- dictionary containing information about the model."""
### START CODE HERE ### dim = X_train.shape[0] w, b = initialize_with_zeros(dim) params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations = num_iterations, learning_rate = learning_rate, print_cost = print_cost) w = params["w"] b = params["b"] Y_prediction_test = predict(w, b, X_test) Y_prediction_train = predict(w, b, X_train) ### END CODE HERE ### # Print train/test Errors print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100)) print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100)) d = {"costs": costs, "Y_prediction_test": Y_prediction_test, "Y_prediction_train" : Y_prediction_train, "w" : w, "b" : b, "learning_rate" : learning_rate, "num_iterations": num_iterations} return d d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True) # Example of a picture that was wrongly classified. index = 5 plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3))) print(d["Y_prediction_test"][0, index]) print ("y = " + str(test_set_y[0, index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0, index]].decode("utf-8") + "\" picture.") # Plot learning curve (with costs) costs = np.squeeze(d['costs']) plt.plot(costs) plt.ylabel('cost') plt.xlabel('iterations (per hundreds)') plt.title("Learning rate =" + str(d["learning_rate"])) plt.show() learning_rates = [0.01, 0.001, 0.0001] models = {} for i in learning_rates: print ("learning rate is: " + str(i)) models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False) print ('\n' + "-------------------------------------------------------" + '\n') for i in learning_rates: plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"])) plt.ylabel('cost') plt.xlabel('iterations') legend = plt.legend(loc='upper center', shadow=True) frame = legend.get_frame() frame.set_facecolor('0.90') plt.show() ## START CODE HERE ## (PUT YOUR IMAGE NAME) ## END CODE HERE ## # We preprocess the image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(num_px, num_px)).reshape((1, num_px * num_px * 3)).T my_predicted_image = predict(d["w"], d["b"], my_image) plt.imshow(image) print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.") <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: 2 - Overview of the Problem set Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing). Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. Step4: Expected Output for m_train, m_test and num_px Step5: Expected Output Step9: Expected Output Step11: Expected Output Step13: Expected Output Step14: Expected Output Step16: Expected Output Step17: Run the following cell to train your model. Step18: Expected Output Step19: Let's also plot the cost function and the gradients. Step20: Interpretation Step21: Interpretation
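As an aside, the per-element loop in predict() above can be replaced by one vectorized numpy comparison, which is what the commented-out lines in that function were reaching for; a standalone sketch:

import numpy as np

A = np.array([[0.9, 0.2, 0.6]])        # example activations
Y_prediction = (A > 0.5).astype(int)   # threshold all entries at once
print(Y_prediction)                    # [[1 0 1]]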
3,993
<ASSISTANT_TASK:> Python Code: import os import matplotlib.pyplot as plt import cv2 %matplotlib inline def readRGBImage(imagepath): image = cv2.imread(imagepath) # Height, Width, Channel (major, minor, _) = cv2.__version__.split(".") if major == '3': # version 3 is used, need to convert image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) else: # Version 2 is used, not necessary to convert pass return image # Read image from file, save image with matplotlib using `imshow` function basedir = './src/cnn/images' imagepath = os.path.join(basedir, 'sample.jpeg') #image = cv2.imread(imagepath, cv2.IMREAD_GRAYSCALE) image = readRGBImage(imagepath) # Width and Height shows pixel size of this image # Channel=3 indicates the RGB channel print('image.shape (Height, Width, Channel) = ', image.shape) # Save image with openCV # This may be blue image because the color format RGB is opposite. cv2.imwrite('./src/cnn/images/out.jpg', image) # bgr_image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) # cv2.imwrite('./src/cnn/images/out.jpg', bgr_image) # Plotting plt.imshow(image) plt.savefig('./src/cnn/images/out_plt.jpg') gray_image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) # Grayscale image is 2-dimensional, no channel dimension. print('gray_image.shape (Height, Width) = ', gray_image.shape) cv2.imwrite('./src/cnn/images/out_gray.jpg', gray_image) %matplotlib inline print('image.shape (Height, Width, Channel) = ', image.shape) # Resize image to half size height, width = image.shape[:2] half_image = cv2.resize(image, (width//2, height//2)) # size must be int print('half_image.shape (Height, Width, Channel) = ', half_image.shape) plt.imshow(half_image) plt.savefig('./src/cnn/images/out_half.jpg') # Resize image by specifying longer side size def resize_longedge(image, pixel): """Resize the input image. Longer edge size will be `pixel`, and aspect ratio doesn't change.""" height, width = image.shape[:2] longer_side = max(height, width) ratio = float(pixel) / longer_side return cv2.resize(image, None, fx=ratio, fy=ratio) # scale by ratio instead of explicit size resized128_image = resize_longedge(image, 128) print('resized128_image.shape (Height, Width, Channel) = ', resized128_image.shape) plt.imshow(resized128_image) plt.savefig('./src/cnn/images/out_resized128.jpg') %matplotlib inline # Crop center of half_image height, width = half_image.shape[:2] crop_length = min(height, width) height_start = (height - crop_length) // 2 width_start = (width - crop_length) // 2 cropped_image = half_image[ height_start:height_start+crop_length, width_start:width_start+crop_length, :] print('cropped_image.shape (Height, Width, Channel) = ', cropped_image.shape) plt.imshow(cropped_image) plt.savefig('./src/cnn/images/out_cropped.jpg') %matplotlib inline # Show RGB channel separately in gray scale fig, axes = plt.subplots(1, 3) # image[:, :, 0] is R channel. axes[0].set_title('R channel') axes[0].imshow(image[:, :, 0], cmap='gray') # image[:, :, 1] is G channel. axes[1].set_title('G channel') axes[1].imshow(image[:, :, 1], cmap='gray') # image[:, :, 2] is B channel. axes[2].set_title('B channel') axes[2].imshow(image[:, :, 2], cmap='gray') plt.savefig(os.path.join(basedir, 'RGB_gray.jpg')) # Show RGB channel separately in color fig, axes = plt.subplots(1, 3) # image[:, :, 0] is R channel, replace the rest by 0. imageR = image.copy() imageR[:, :, 1:3] = 0 axes[0].set_title('R channel') axes[0].imshow(imageR) # image[:, :, 1] is G channel, replace the rest by 0.
imageG = image.copy() imageG[:, :, [0, 2]] = 0 axes[1].set_title('G channel') axes[1].imshow(imageG) # image[:, :, 2] is B channel, replace the rest by 0. imageB = image.copy() imageB[:, :, 0:2] = 0 axes[2].set_title('B channel') axes[2].imshow(imageB) plt.savefig(os.path.join(basedir, 'RGB_color.jpg')) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Loading and saving an image Step2: Change color format Step4: Resize Step5: Crop Step6: Image processing with channels
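A short sketch of the BGR/RGB pitfall discussed above: OpenCV reads and writes BGR while matplotlib expects RGB, so a correct save of an RGB array needs one cvtColor call. It assumes the same sample.jpeg used in the notebook:

import cv2

bgr = cv2.imread('./src/cnn/images/sample.jpeg')   # BGR as loaded by OpenCV
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)         # RGB for matplotlib
cv2.imwrite('./src/cnn/images/roundtrip.jpg',
            cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR))  # convert back before saving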
3,994
<ASSISTANT_TASK:> Python Code: # Imports import numpy as np import pandas as pd import xarray as xr from functools import lru_cache # For function result caching # Special functions # from scipy.special import sph_harm import spherical_functions as sf import quaternion # Performance & benchmarking libraries from joblib import Memory import xyzpy as xyz import numba as nb # Timings with ttictoc # https://github.com/hector-sab/ttictoc from ttictoc import TicToc # Package fns. # For module testing, include path to module here import sys import os modPath = r'D:\code\github\ePSproc' sys.path.append(modPath) import epsproc as ep # TODO: tidy this up! from epsproc.util import matEleSelector from epsproc.geomFunc import geomCalc # Calculate some values. # w3jTable will output all values up to l=lp=Lmax (hence L=2Lmax) lmax = 1 w3jlist = geomCalc.w3jTable(Lmax = Lmax, form = '2d') # For form = '2d', the function will output only valid entries as a coordinate table print(w3jlist.shape) print(f'Max value: {w3jlist[:,-1].max()}, min value: {w3jlist[:,-1].min()}\n') # Print the table - output format has rows (l, lp, L, m, mp, M, 3j) print(w3jlist) # Recalculate and set to Xarray output format, then plot with ep.lmPlot() w3j = geomCalc.w3jTable(Lmax = lmax, form = 'xdaLM') # Check number of valid entries matches basic table above print(f'Number of valid (non-NaN) elements: {w3j.count()}') # Set parameters to restack the Xarray into (L,M) pairs plotDimsRed = ['l', 'm', 'lp', 'mp'] xDim = {'LM':['L','M']} # Plot with ep.lmPlot(), real values daPlot, daPlotpd, legendList, gFig = ep.lmPlot(w3j, plotDims=plotDimsRed, xDim=xDim, pType = 'r') # Print out values by QNs (Pandas table) daPlotpd # Calculate and output in Sparse array format w3jSparse = geomCalc.w3jTable(Lmax = lmax, form = 'ndsparse') w3jSparse # Try a larger Lmax and plot only. lmax = 3 w3j = geomCalc.w3jTable(Lmax = lmax, form = 'xdaLM') # Check number of valid entries matches basic table above print(f'Number of valid (non-NaN) elements: {w3j.count()}') # Set parameters to restack the Xarray into (L,M) pairs plotDimsRed = ['l', 'm', 'lp', 'mp'] xDim = {'LM':['L','M']} # Plot with ep.lmPlot(), real values daPlot, daPlotpd, legendList, gFig = ep.lmPlot(w3j, plotDims=plotDimsRed, xDim=xDim, pType = 'r') # Resort axis by (l,lp) plotDimsRed = ['l', 'lp', 'm', 'mp'] xDim = {'LM':['L','M']} # Plot with ep.lmPlot(), real values daPlot, daPlotpd, legendList, gFig = ep.lmPlot(w3j, plotDims=plotDimsRed, xDim=xDim, pType = 'r') # A complementary visulization is to call directly the sns.clustermap plot, use clustering and plot by category labels - see https://seaborn.pydata.org/index.html # (ep.lpPlot uses a modified version of this routine.) ep.snsMatMod.clustermap(daPlotpd.fillna(0), center=0, cmap="vlag", row_cluster=True, col_cluster=True) ep.snsMatMod.clustermap(daPlotpd.fillna(0).T.corr(), center=0, cmap="vlag", row_cluster=True, col_cluster=True) # Test some other Seaborn methods... these likely won't scale well for large lmax! import seaborn as sns # Recalculate for small lmax lmax = 1 w3j = geomCalc.w3jTable(Lmax = lmax, form = 'xdaLM') # Set parameters to restack the Xarray into (L,M) pairs plotDimsRed = ['l', 'm', 'lp', 'mp'] xDim = {'LM':['L','M']} # Plot with ep.lmPlot(), real values daPlot, daPlotpd, legendList, gFig = ep.lmPlot(w3j, plotDims=plotDimsRed, xDim=xDim, pType = 'r') # Try sns pairplot # sns.pairplot(daPlotpd.fillna(0).T) # Big grids! 
sns.pairplot(daPlotpd.fillna(0)) # OK, not particularly informative # sns.pairplot(daPlotpd.fillna(0), hue = 'l') # Doesn't work - multi-index issue? # Calculate EPR terms, all QNs, with field strengths e = 1 EPRtable = geomCalc.EPR(form = '2d') # Output values as list, [l, lp, P, p, R-p, R, EPR] print(EPRtable) lmax = 1 EPRX = geomCalc.EPR(form = 'xarray') # Set parameters to restack the Xarray into (L,M) pairs plotDimsRed = ['l', 'p', 'lp', 'R-p'] xDim = {'LM':['P','R']} # Plot with ep.lmPlot(), real values daPlot, daPlotpd, legendList, gFig = ep.lmPlot(EPRX, plotDims=plotDimsRed, xDim=xDim, pType = 'r') Lmax = 1 BLMtable = geomCalc.betaTerm(Lmax = Lmax, form = 'xdaLM') # Output as stacked Xarray BLMtable plotDimsRed = ['l', 'm', 'lp', 'mp'] xDim = {'LM':['L','M']} # daPlot = BLMtable.unstack().stack({'LM':['L','M']}) # ep.lmPlot(w3jXcombMult.unstack().stack({'LM':['l','m']}), plotDims=['lp', 'L', 'mp', 'M'], xDim='LM', SFflag = False) # daPlot, daPlotpd, legendList, gFig = ep.lmPlot(BLMtable.unstack().stack(xDim), plotDims=plotDimsRed, xDim=xDim, SFflag = False, squeeze = False) daPlot, daPlotpd, legendList, gFig = ep.lmPlot(BLMtable, plotDims=plotDimsRed, xDim=xDim, SFflag = False, squeeze = False) # ep.lmPlot(w3jXcombMult.unstack(), plotDims=['lp', 'L', 'mp', 'M'], xDim='L', SFflag = False) # daPlotpd = daPlot.unstack().stack(plotDim = plotDimsRed).to_pandas().dropna(axis = 1).T # daPlot, daPlotpd, legendList, gFig = ep.lmPlot(BLMtable.unstack().stack(xDim), plotDims=plotDimsRed, xDim=xDim, pType = 'r') daPlot, daPlotpd, legendList, gFig = ep.lmPlot(BLMtable, plotDims=plotDimsRed, xDim=xDim, pType = 'r') daPlotpd # NOW FIXED # Test PD conversion - seems to be giving issue for lmPlot() routine here. # daPlotpd = daPlot.stack(plotDim = plotDimsRed).to_pandas().T # daPlotpd = daPlot.stack(plotDim = plotDimsRed).to_pandas().dropna(axis = 0).T # Drop na here seems to remove everything - might be issue in lmPlot # daPlotpd = daPlot.stack(plotDim = plotDimsRed).dropna(dim='plotDim', how='all').to_pandas().T # This seems to reduce NaNs OK # daPlotpd = daPlot.stack(xDim).stack(plotDim = plotDimsRed).dropna(dim='plotDim', how='all').dropna(dim='LM',how='all').to_pandas().T # This seems to reduce NaNs OK # daPlotpd # Test reductions by # of non-NaN elements # print(daPlot.count()) # print(daPlot.stack(plotDim = plotDimsRed).dropna(dim='plotDim', how='any').count()) # how='any' will drop all elements it seems. # print(daPlot.stack(plotDim = plotDimsRed).dropna(dim='plotDim', how='all').count()) # print(daPlot.stack(plotDim = plotDimsRed).dropna(dim='plotDim', how='all').dropna(dim='LM',how='all').count()) print(daPlot.count()) print(daPlot.dropna(dim='plotDim', how='any').count()) # how='any' will drop all elements it seems. print(daPlot.dropna(dim='plotDim', how='all').count()) print(daPlot.dropna(dim='plotDim', how='all').dropna(dim='LM',how='all').count()) # Test correlation fns.
# This fails with NaNs present it seems # ep.snsMatMod.clustermap(daPlot.dropna(dim='plotDim', how='all').dropna(dim='LM',how='all').fillna(0).to_pandas().T.corr()) # ep.snsMatMod.clustermap(daPlot.dropna(dim='plotDim', how='all').dropna(dim='LM',how='all').fillna(0).to_pandas().corr()) ep.snsMatMod.clustermap(daPlotpd.fillna(0).T.corr()) ep.snsMatMod.clustermap(daPlotpd.fillna(0).corr()) # Switch plotting dims plotDimsRed = ['L','M'] xDim = {'llpmmp':['l', 'm', 'lp', 'mp']} daPlot, daPlotpd, legendList, gFig = ep.lmPlot(BLMtable, plotDims=plotDimsRed, xDim=xDim, mMax = 2, pType = 'r') import holoviews as hv hv_ds = hv.Dataset(daPlotpd.unstack()) daPlotpd.plot(kind = 'bar') <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: Exploring Wigner 3js Step2: This ends up as a relatively sparse array, since many combinations are invalid (do not follow angular momentum selection rules), hence there are many NaN terms. Step3: Here nnz is the number of non-zero elements. Step4: This clearly shows that the valid terms become sparser at higher $l$, and the couplings become smaller. Step5: $E_{P,R}$ tensor Step6: As before, we can visualise these values... Step7: Testing betaTerm() Step8: Other plotting methods... Step9: Pandas
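An independent cross-check for tables like the ones above: the Wigner 3j symbol (1 1 0; 0 0 0) has the closed form -1/sqrt(3), so any generated table should reproduce it. This uses sympy rather than the epsproc wrappers:

from sympy import sqrt, simplify
from sympy.physics.wigner import wigner_3j

val = wigner_3j(1, 1, 0, 0, 0, 0)
print(val)                                # -sqrt(3)/3
print(simplify(val + 1/sqrt(3)) == 0)     # True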
3,995
<ASSISTANT_TASK:> Python Code: import numpy as np import matplotlib.pyplot as plt N = 100 # points per class D = 2 # dimensionality at 2 so we can eyeball it K = 3 # number of classes X = np.zeros((N*K, D)) # generate an empty matrix to hold X features y = np.zeros(N*K, dtype='int32') # switching this to int32 # for 3 classes, evenly generates spiral arms for j in xrange(K): ix = range(N*j, N*(j+1)) r = np.linspace(0.0,1,N) #radius t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2 # theta X[ix] = np.c_[r*np.sin(t), r*np.cos(t)] y[ix] = j plt.scatter(X[:,0], X[:,1], c=y, s=20, cmap=plt.cm.Spectral) plt.show() import tensorflow as tf # what should the classifier expect in terms of features feature_columns = [tf.contrib.layers.real_valued_column("", dimension=D)] # defining the actual classifier dnn_spiral_classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns, activation_fn = tf.nn.softmax, # softmax activation optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01), #GD with LR of 0.01 hidden_units = [10], # one hidden layer, containing 10 neurons n_classes = K, # K target classes model_dir="/tmp/spiral_model") # directory for saving model checkpoints # turn data into tensors to feed into the computational graph # honestly input_fn could also handle these as np.arrays but this is here to show you that the tf.constant operation can run on np.array input def get_inputs(): X_tensor = tf.constant(X) y_tensor = tf.constant(y) return X_tensor, y_tensor # fit the model dnn_spiral_classifier.fit(input_fn=get_inputs, steps=200) # interestingsly, you can continue training the model by continuing to call fit dnn_spiral_classifier.fit(input_fn=get_inputs, steps=300) #evaluating the accuracy accuracy_score = dnn_spiral_classifier.evaluate(input_fn=get_inputs, steps=1)["accuracy"] print("\n Accuracy: {0:f}\n".format(accuracy_score)) %ls '/tmp/spiral_model/' def new_points(): return np.array([[1.0, 1.0], [-1.5, -1.0]], dtype = np.int32) predictions = list(dnn_spiral_classifier.predict(input_fn=new_points)) print( "New Samples, Class Predictions: {}\n" .format(predictions)) # watch out for this, tf.classifier.evaluate is going to be deprecated, so keep an eye out for a long-term solution to calculating accuracy accuracy_score = dnn_spiral_classifier.evaluate(input_fn=get_inputs, steps=1)["accuracy"] # sample code to use for the gold star challenge from https://www.tensorflow.org/get_started/get_started import numpy as np import tensorflow as tf # Model parameters W = tf.Variable([.3], dtype=tf.float32) b = tf.Variable([-.3], dtype=tf.float32) # Model input and output x = tf.placeholder(tf.float32) linear_model = W * x + b y = tf.placeholder(tf.float32) # loss loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares # optimizer optimizer = tf.train.GradientDescentOptimizer(0.01) train = optimizer.minimize(loss) # training data x_train = [1,2,3,4] y_train = [0,-1,-2,-3] # training loop init = tf.global_variables_initializer() sess = tf.Session() sess.run(init) # reset values to wrong for i in range(1000): sess.run(train, {x:x_train, y:y_train}) # evaluate training accuracy curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x:x_train, y:y_train}) print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: TensorFlow Step2: Notice the following Step3: Predicting on a new value Step4: Digging into the DNNClassifier Step5: Exercises
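The tf.contrib.learn API above has long been removed; a present-day Keras sketch of a comparable one-hidden-layer classifier on the spiral (X, y) data is below. It mirrors hidden_units=[10] and K=3 classes, but swaps the notebook's softmax hidden activation for the more conventional relu:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(2,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(X, y, epochs=200)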
3,996
<ASSISTANT_TASK:> Python Code: import riip ri = riip.RiiDataFrame() ri.catalog.head(3) ri.raw_data.loc[3].head(5) # first 5 rows for the material whose id is 3 ri.catalog.tail(3) ri.raw_data.loc[2911].head(5) # first 5 rows for the material whose id is 2911 grid_data = ri.load_grid_data(3) grid_data import matplotlib.pyplot as plt ri.plot(3, "n") plt.show() ri.plot(3, "k") plt.show() ri.plot(3, "eps") plt.show() ri.search("NaCl") ri.search("sodium") # upper or lower case is not significant ri.select("2.5 < n < 3 and 0.4 < wl < 0.8").head(10) ri.plot(157) ri.show(1) print(ri.read(0)) ri.read(0, as_dict=True) ri.references(20) water = ri.material({'id': 428}) water.catalog <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: RiiDataFrame has an attribute named catalog that is a Pandas DataFrame providing the catalog of experimental data as shown below. Step2: The experimental data are given by raw_data Step3: where n is the refractive index and k is the extinction coefficient at the vacuum wavelength wl_n (wl_k) in the unit of μm. Step4: Using the method load_grid_data(), you can get grid data calculated at 200 wavelength values in the range [wl_min, wl_max], which is the intersection between the domain of n [wl_n_min, wl_n_max] and the domain of k [wl_k_min, wl_k_max]. These values are shown in catalog. Step5: Helper Methods Step6: search Step7: select Step8: show Step9: read Step10: references Step11: material
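Once load_grid_data() has returned a table of wavelength/n/k values, interpolating the index at an arbitrary wavelength is one numpy call; in this sketch the 'wl' and 'n' column names are assumptions, so check them against the grid_data frame shown above:

import numpy as np

def n_at(grid_data, wavelength_um):
    # Linear interpolation over the tabulated grid (wavelengths in micrometres).
    return np.interp(wavelength_um, grid_data['wl'], grid_data['n'])

# e.g. n_at(ri.load_grid_data(3), 0.5893)  # sodium D line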
3,997
<ASSISTANT_TASK:> Python Code: from ctypes import cdll basics = cdll.LoadLibrary('./lib_basics.so') basics.mul(2, 5) from ctypes import c_int first = (c_int * 3)(1, 2, 3) vector3D = c_int * 3 second = vector3D(4, 5, 6) c_result = basics.dot(first, second, 3) python_result = sum(a * b for a, b in zip([1, 2, 3], [4, 5, 6])) print('C code returned', c_result, 'and Python code returned', python_result) basics.dot((c_int*1)(2), (c_int*1)(*[3]), 1) try: vector3D([1, 2, 3]) except: print('You cannot pass lists') try: vector3D(0, 1, 2, 3) except: print('Forbidden to provide more elements than it should accept') from ctypes import cdll, c_long, c_size_t, c_voidp setters = cdll.LoadLibrary('./lib_setters.so') def change_long(a, b=0, digit=0): setters.set_long(c_voidp(id(a)), c_long(b), c_size_t(digit)) from ctypes import c_long, c_size_t, c_voidp def change_long(a, b=0, digit=0): args = (a, b, digit) if not all(type(a) is int for a in args): raise TypeError('All parameters should be of type "int", ' 'but {} provided'.format(map(type, args))) if a + 0 is a: raise ValueError('No way. You don\'t want to break ' 'your interpreter, right?') setters.set_long(c_voidp(id(a)), c_long(b), c_size_t(digit)) def variable_info(text, variable): print('{:^30}: {:#05x} ({:#x})'.format(text, variable, id(variable))) def foo(a, new_value): a = new_value a = 2**10 variable_info('Before function call', a) foo(a, 5) variable_info('After function call', a) a = 2**10 b = a variable_info('Before function call', a) change_long(a, 2, 0) variable_info('After function call', a) variable_info('What\'s about b? Here it is', b) from numpy import array, cross basis = [ [1, 0, 0], [0, 1, 0], [0, 0, 1] ] def py_cross(u, v): return [ u[1] * v[2] - u[2] * v[1], u[2] * v[0] - u[0] * v[2], u[0] * v[1] - u[1] * v[0] ] py_cross(basis[0], basis[1]) cross(array(basis[0]), array(basis[1])) from ctypes import cdll from numpy import empty_like c_cross = cdll.LoadLibrary('./lib_cross.so') u = array(basis[0]).astype('f') v = array(basis[1]).astype('f') w = empty_like(u) def cross_wrapper(u, v, w): return c_cross.cross(u.ctypes.get_as_parameter(), v.ctypes.get_as_parameter(), w.ctypes.get_as_parameter()) cross_wrapper(u, v, w) print(w) from numpy.random import rand BIG_ENOUGH_INTEGER = int(1E5) vectors_u = rand(BIG_ENOUGH_INTEGER, 3).astype('f') vectors_v = rand(BIG_ENOUGH_INTEGER, 3).astype('f') print('Vectors u:', vectors_u) %%timeit for i in range(BIG_ENOUGH_INTEGER): py_cross(vectors_u[i], vectors_v[i]) %%timeit cross(vectors_u, vectors_v) %%timeit vectors_w = empty_like(vectors_u) for i in range(BIG_ENOUGH_INTEGER): cross_wrapper(vectors_u[i], vectors_v[i], vectors_w[i]) from numpy import allclose np_result = cross(vectors_u, vectors_v) py_result = [py_cross(vectors_u[i], vectors_v[i]) for i in range(BIG_ENOUGH_INTEGER)] print(allclose(np_result, py_result)) vectors_w = empty_like(vectors_u) assert sum([cross_wrapper(vectors_u[i], vectors_v[i], vectors_w[i]) for i in range(BIG_ENOUGH_INTEGER)]) == 0 print(allclose(np_result, vectors_w)) vectors_w = empty_like(vectors_u) c_vectors_u = vectors_u.ctypes.get_as_parameter() c_vectors_v = vectors_v.ctypes.get_as_parameter() c_vectors_w = vectors_w.ctypes.get_as_parameter() %%timeit vectors_w = empty_like(vectors_u) c_vectors_w = vectors_w.ctypes.get_as_parameter() c_cross.cross_vectors(c_vectors_u, c_vectors_v, c_vectors_w, len(vectors_u)) c_cross.cross_vectors(c_vectors_u, c_vectors_v, c_vectors_w, len(vectors_u)) print(allclose(np_result, vectors_w)) <END_TASK>
<SYSTEM_TASK:> Given the following text description, write Python code to implement the functionality described below step by step <END_TASK> <USER_TASK:> Description: Step1: In Linux Step2: In Windows Step3: See? It's easy Step4: Say, we need to multiply 3-dimensional vectors Step5: We can create an alias for this data type and use it Step6: Call the function Step7: The following examples will cause errors Step8: Available types Step9: Don't forget that the Python interpreter will not create new objects for small integers like 0, so we should avoid assigning new values to such numbers, because they will be changed everywhere they're used Step10: Recall that we cannot change values of integers inside Python functions Step11: Now forget it and take a look at what we've done Step12: Cross product Step13: Plain Python Step14: NumPy Step15: C Step16: Let's run performance tests Step17: Are the calculations right? Step18: NumPy versus human
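One safety refinement the snippets above skip: declaring argtypes/restype so ctypes checks call signatures instead of silently casting. This assumes the same lib_basics.so with the signature int dot(int*, int*, int):

from ctypes import cdll, c_int, POINTER

basics = cdll.LoadLibrary('./lib_basics.so')
basics.dot.argtypes = [POINTER(c_int), POINTER(c_int), c_int]
basics.dot.restype = c_int

vec = (c_int * 3)(1, 2, 3)
print(basics.dot(vec, vec, 3))  # 1*1 + 2*2 + 3*3 = 14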
3,998
<ASSISTANT_TASK:> Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@test {"skip": true} !pip install --quiet --upgrade tensorflow-federated !pip install --quiet --upgrade nest-asyncio import nest_asyncio nest_asyncio.apply() import tensorflow as tf import tensorflow_federated as tff emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data() NUM_CLIENTS = 10 BATCH_SIZE = 20 def preprocess(dataset): def batch_format_fn(element): """Flatten a batch of EMNIST data and return a (features, label) tuple.""" return (tf.reshape(element['pixels'], [-1, 784]), tf.reshape(element['label'], [-1, 1])) return dataset.batch(BATCH_SIZE).map(batch_format_fn) client_ids = sorted(emnist_train.client_ids)[:NUM_CLIENTS] federated_train_data = [preprocess(emnist_train.create_tf_dataset_for_client(x)) for x in client_ids ] def create_keras_model(): initializer = tf.keras.initializers.GlorotNormal(seed=0) return tf.keras.models.Sequential([ tf.keras.layers.Input(shape=(784,)), tf.keras.layers.Dense(10, kernel_initializer=initializer), tf.keras.layers.Softmax(), ]) def model_fn(): keras_model = create_keras_model() return tff.learning.from_keras_model( keras_model, input_spec=federated_train_data[0].element_spec, loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()]) def initialize_fn(): model = model_fn() return model.trainable_variables def next_fn(server_weights, federated_dataset): # Broadcast the server weights to the clients. server_weights_at_client = broadcast(server_weights) # Each client computes their updated weights. client_weights = client_update(federated_dataset, server_weights_at_client) # The server averages these updates. mean_client_weights = mean(client_weights) # The server updates its model. server_weights = server_update(mean_client_weights) return server_weights @tf.function def client_update(model, dataset, server_weights, client_optimizer): """Performs training (using the server model weights) on the client's dataset.""" # Initialize the client model with the current server weights. client_weights = model.trainable_variables # Assign the server weights to the client model. tf.nest.map_structure(lambda x, y: x.assign(y), client_weights, server_weights) # Use the client_optimizer to update the local model. for batch in dataset: with tf.GradientTape() as tape: # Compute a forward pass on the batch of data outputs = model.forward_pass(batch) # Compute the corresponding gradient grads = tape.gradient(outputs.loss, client_weights) grads_and_vars = zip(grads, client_weights) # Apply the gradient using a client optimizer. client_optimizer.apply_gradients(grads_and_vars) return client_weights @tf.function def server_update(model, mean_client_weights): """Updates the server model weights as the average of the client model weights.""" model_weights = model.trainable_variables # Assign the mean client weights to the server model.
tf.nest.map_structure(lambda x, y: x.assign(y), model_weights, mean_client_weights) return model_weights federated_float_on_clients = tff.FederatedType(tf.float32, tff.CLIENTS) str(federated_float_on_clients) @tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS)) def get_average_temperature(client_temperatures): return tff.federated_mean(client_temperatures) str(get_average_temperature.type_signature) get_average_temperature([68.5, 70.3, 69.8]) @tff.tf_computation(tf.float32) def add_half(x): return tf.add(x, 0.5) str(add_half.type_signature) @tff.federated_computation(tff.FederatedType(tf.float32, tff.CLIENTS)) def add_half_on_clients(x): return tff.federated_map(add_half, x) str(add_half_on_clients.type_signature) @tff.tf_computation def server_init(): model = model_fn() return model.trainable_variables @tff.federated_computation def initialize_fn(): return tff.federated_value(server_init(), tff.SERVER) whimsy_model = model_fn() tf_dataset_type = tff.SequenceType(whimsy_model.input_spec) str(tf_dataset_type) model_weights_type = server_init.type_signature.result str(model_weights_type) @tff.tf_computation(tf_dataset_type, model_weights_type) def client_update_fn(tf_dataset, server_weights): model = model_fn() client_optimizer = tf.keras.optimizers.SGD(learning_rate=0.01) return client_update(model, tf_dataset, server_weights, client_optimizer) @tff.tf_computation(model_weights_type) def server_update_fn(mean_client_weights): model = model_fn() return server_update(model, mean_client_weights) federated_server_type = tff.FederatedType(model_weights_type, tff.SERVER) federated_dataset_type = tff.FederatedType(tf_dataset_type, tff.CLIENTS) @tff.federated_computation(federated_server_type, federated_dataset_type) def next_fn(server_weights, federated_dataset): # Broadcast the server weights to the clients. server_weights_at_client = tff.federated_broadcast(server_weights) # Each client computes their updated weights. client_weights = tff.federated_map( client_update_fn, (federated_dataset, server_weights_at_client)) # The server averages these updates. mean_client_weights = tff.federated_mean(client_weights) # The server updates its model. server_weights = tff.federated_map(server_update_fn, mean_client_weights) return server_weights federated_algorithm = tff.templates.IterativeProcess( initialize_fn=initialize_fn, next_fn=next_fn ) str(federated_algorithm.initialize.type_signature) str(federated_algorithm.next.type_signature) central_emnist_test = emnist_test.create_tf_dataset_from_all_clients() central_emnist_test = preprocess(central_emnist_test) def evaluate(server_state): keras_model = create_keras_model() keras_model.compile( loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=[tf.keras.metrics.SparseCategoricalAccuracy()] ) keras_model.set_weights(server_state) keras_model.evaluate(central_emnist_test) server_state = federated_algorithm.initialize() evaluate(server_state) for round in range(15): server_state = federated_algorithm.next(server_state, federated_train_data) evaluate(server_state) <END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <table class="tfo-notebook-buttons" align="left">
Step2: NOTE
Step4: In order to feed the dataset into our model, we flatten the data, and convert each example into a tuple of the form (flattened_image_vector, label).
Step5: We now select a small number of clients, and apply the preprocessing above to their datasets.
Step6: Preparing the model
Step7: In order to use this model in TFF, we wrap the Keras model as a tff.learning.Model. This allows us to perform the model's forward pass within TFF and extract model outputs. For more details, also see the image classification tutorial.
Step8: While we used tf.keras to create a tff.learning.Model, TFF supports much more general models. These models have the following relevant attributes capturing the model weights
Step9: This function looks good, but as we will see later, we will need to make a small modification to make it a "TFF computation".
Step11: We'll focus on implementing these four components separately. We first focus on the parts that can be implemented in pure TensorFlow, namely the client and server update steps.
Step13: Server Update
Step14: The snippet could be simplified by simply returning mean_client_weights. However, more advanced implementations of Federated Averaging use mean_client_weights with more sophisticated techniques, such as momentum or adaptivity.
Step15: Federated types are specified by a type T of its member constituents (e.g. tf.float32) and a group G of devices. We will focus on the cases where G is either tff.CLIENTS or tff.SERVER. Such a federated type is represented as {T}@G, as shown below.
Step16: Why do we care so much about placements? A key goal of TFF is to enable writing code that could be deployed on a real distributed system. This means that it is vital to reason about which subsets of devices execute which code, and where different pieces of data reside.
Step17: You might ask, how is this different from the tf.function decorator in TensorFlow? The key answer is that the code generated by tff.federated_computation is neither TensorFlow nor Python code; it is a specification of a distributed system in an internal platform-independent glue language.
Step18: This tff.federated_computation accepts arguments of federated type {float32}@CLIENTS and returns values of federated type {float32}@SERVER. Federated computations may also go from server to client, from client to client, or from server to server. Federated computations can also be composed like normal functions, as long as their type signatures match up.
Step19: Non-eager computations and TensorFlow
Step20: These also have type signatures, but without placements. For example, we can call
Step21: Here we see an important difference between tff.federated_computation and tff.tf_computation. The former has explicit placements, while the latter does not.
Step22: This function is almost identical to add_half, except that it only accepts values with placement at tff.CLIENTS, and returns values with the same placement. We can see this in its type signature
Step23: In summary
Step24: We can then pass this directly into a federated computation using tff.federated_value.
Step25: Creating the next_fn
Step26: Let's look at the dataset type signature. Remember that we took 28-by-28 images (with integer labels) and flattened them.
Step27: We can also extract the model weights type by using our server_init function above.
Step28: Examining the type signature, we'll be able to see the architecture of our model!
Step29: We can now create our tff.tf_computation for the client update.
Step30: The tff.tf_computation version of the server update can be defined in a similar way, using types we've already extracted.
Step31: Last, but not least, we need to create the tff.federated_computation that brings this all together. This function will accept two federated values, one corresponding to the server weights (with placement tff.SERVER), and the other corresponding to the client datasets (with placement tff.CLIENTS).
Step32: Remember the 4 elements of an FL algorithm?
Step33: We now have a tff.federated_computation for both the algorithm initialization and for running one step of the algorithm. To finish our algorithm, we pass these into tff.templates.IterativeProcess.
Step34: Let's look at the type signature of the initialize and next functions of our iterative process.
Step35: This reflects the fact that federated_algorithm.initialize is a no-arg function that returns a single-layer model (with a 784-by-10 weight matrix and 10 bias units).
Step36: Here, we see that federated_algorithm.next accepts a server model and client data, and returns an updated server model.
Step37: Next, we write a function that accepts a server state and uses Keras to evaluate on the test dataset. If you're familiar with tf.keras, this will all look familiar, though note the use of set_weights!
Step38: Now, let's initialize our algorithm and evaluate on the test set.
Step39: Let's train for a few rounds and see if anything changes.
3,999
<ASSISTANT_TASK:>
Python Code:
# Run the swirlcons code (serial mode).
!swirlcons --user:example=2 --user:rp-solver=4
# Or run in parallel mode (customize for your MPI installation):
#!mpirun -n 4 swirlcons
# Create PNG files for web-browser viewing or animation.
%run make_plots.py
%pylab inline
import glob
from matplotlib import image
# Imported for its side effect of enabling inline animation display.
from clawpack.visclaw.JSAnimation import IPython_display
from matplotlib import animation

# Collect the PNG frames for the chosen figure number.
figno = 0
fname = '_plots/*fig' + str(figno) + '.png'
filenames = sorted(glob.glob(fname))

fig = plt.figure()
im = plt.imshow(image.imread(filenames[0]))

def init():
    im.set_data(image.imread(filenames[0]))
    return im,

def animate(i):
    image_i = image.imread(filenames[i])
    im.set_data(image_i)
    return im,

animation.FuncAnimation(fig, animate, init_func=init,
                        frames=len(filenames), interval=500, blit=True)
<END_TASK>
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Or, run code in parallel mode (command may need to be customized, depending on your MPI installation).
Step2: Create PNG files for web-browser viewing, or animation.
Step3: View PNG files in browser, using URL above, or create an animation of all PNG files, using code below.