Columns: code (string, 75 to 104k chars), docstring (string, 1 to 46.9k chars), text (string, 164 to 112k chars)
def get_next_batch(self):
    """
    This method is called from the manager. It must return a list or a
    generator of BaseRecord objects. When it has nothing else to read,
    it must set class variable "finished" to True.
    """
    if self.collection_scanner.is_enabled:
        batch = self.collection_scanner.get_new_batch()
        for item in batch:
            base_item = BaseRecord(item)
            self.increase_read()
            self.last_position['last_key'] = item['_key']
            yield base_item
        self.logger.debug('Done reading batch')
    else:
        self.logger.debug('No more batches')
        self.finished = True
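The reader above follows a simple contract: yield records batch by batch, track the last key read, and flip `finished` once the source is exhausted. A minimal, dependency-free sketch of that contract (the class and the list-backed "scanner" here are illustrative stand-ins, not the original manager's API):

```python
class ListReader:
    """Sketch of the batch-reader contract: yield records until the
    source runs dry, then set `finished` to True."""

    def __init__(self, batches):
        # Hypothetical stand-in for collection_scanner: a fixed list of batches.
        self.batches = iter(batches)
        self.finished = False
        self.read_count = 0
        self.last_position = {}

    def get_next_batch(self):
        batch = next(self.batches, None)
        if batch is not None:
            for item in batch:
                self.read_count += 1
                self.last_position['last_key'] = item['_key']
                yield item
        else:
            # Nothing left to read: signal the manager to stop calling us.
            self.finished = True


reader = ListReader([[{'_key': 'a'}, {'_key': 'b'}], [{'_key': 'c'}]])
records = list(reader.get_next_batch())   # first batch: keys 'a', 'b'
list(reader.get_next_batch())             # second batch: key 'c'
list(reader.get_next_batch())             # empty call flips `finished`
```

Note that `finished` only becomes True on the call that finds no batch, mirroring the else-branch in the original method.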
def write(settings_path, settings_data, **kwargs):
    """Write data to .env file"""
    for key, value in settings_data.items():
        dotenv_cli.set_key(str(settings_path), key.upper(), str(value))
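The writer above delegates each key to python-dotenv's `set_key`, upper-casing keys and stringifying values along the way. A dependency-free sketch of the same upper-cased `KEY=value` behavior (the `write_env` helper is illustrative; unlike dotenv it does no quoting or in-place merging):

```python
import tempfile
from pathlib import Path


def write_env(settings_path, settings_data):
    # Mirror the writer above: upper-case each key, stringify each value,
    # one KEY=value line per setting.
    lines = ['{}={}'.format(key.upper(), value)
             for key, value in settings_data.items()]
    Path(settings_path).write_text('\n'.join(lines) + '\n')


path = Path(tempfile.mkdtemp()) / '.env'
write_env(path, {'debug': True, 'port': 8080})
content = path.read_text()  # "DEBUG=True\nPORT=8080\n"
```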
def generative_model(A, D, m, eta, gamma=None, model_type='matching',
                     model_var='powerlaw', epsilon=1e-6, copy=True, seed=None):
    '''
    Generates synthetic networks using the models described in Betzel et al.
    (2016) Neuroimage. See this paper for more details.

    Succinctly, the probability of forming a connection between nodes u and v
    is

        P(u,v) = E(u,v)**eta * K(u,v)**gamma

    where eta and gamma are hyperparameters, E(u,v) is the euclidean or
    similar distance measure, and K(u,v) is the algorithm that defines the
    model.

    This describes the power law formulation; an alternative formulation uses
    the exponential function

        P(u,v) = exp(E(u,v)*eta) * exp(K(u,v)*gamma)

    Parameters
    ----------
    A : np.ndarray
        Binary network of seed connections
    D : np.ndarray
        Matrix of euclidean distances or other distances between nodes
    m : int
        Number of connections that should be present in the final synthetic
        network
    eta : np.ndarray
        A vector describing a range of values to estimate for eta, the
        hyperparameter describing exponential weighting of the euclidean
        distance.
    gamma : np.ndarray
        A vector describing a range of values to estimate for gamma, the
        hyperparameter describing exponential weighting of the basis
        algorithm. If model_type='euclidean' or another distance metric,
        this can be None.
    model_type : Enum(str)
        euclidean : Uses only euclidean distances to generate connection
            probabilities
        neighbors : count of common neighbors
        matching : matching index, the normalized overlap in neighborhoods
        clu-avg : Average clustering coefficient
        clu-min : Minimum clustering coefficient
        clu-max : Maximum clustering coefficient
        clu-diff : Difference in clustering coefficient
        clu-prod : Product of clustering coefficient
        deg-avg : Average degree
        deg-min : Minimum degree
        deg-max : Maximum degree
        deg-diff : Difference in degree
        deg-prod : Product of degrees
    model_var : Enum(str)
        Default value is powerlaw. If so, uses the power law formulation of
        P(u,v) described above. Alternate value is exponential. If so, uses

            P(u,v) = exp(E(u,v)*eta) * exp(K(u,v)*gamma)

    epsilon : float
        A small positive value added to all P(u,v). The default value is
        1e-6.
    copy : bool
        Some algorithms add edges directly to the input matrix. Set this
        flag to make a copy of the input matrix instead. Defaults to True.
    seed : hashable, optional
        If None (default), use the np.random's global random state to
        generate random numbers. Otherwise, use a new np.random.RandomState
        instance seeded with the given value.
    '''
    rng = get_rng(seed)
    if copy:
        A = A.copy()
    n = len(D)

    # These parameters don't do any of the voronoi narrowing.
    # It's a list of eta values paired with gamma values.
    # To try 3 eta and 3 gamma pairs, 9 list values should be used.
    if len(eta) != len(gamma):
        raise BCTParamError('Eta and gamma hyperparameters must be lists of '
                            'the same size')

    nparams = len(eta)
    B = np.zeros((n, n, nparams))

    def k_avg(K):
        return ((np.tile(K, (n, 1)) + np.transpose(np.tile(K, (n, 1)))) / 2
                + epsilon)

    def k_diff(K):
        return (np.abs(np.tile(K, (n, 1)) - np.transpose(np.tile(K, (n, 1))))
                + epsilon)

    def k_max(K):
        return np.max(np.dstack((np.tile(K, (n, 1)),
                                 np.transpose(np.tile(K, (n, 1))))),
                      axis=2) + epsilon

    def k_min(K):
        return np.min(np.dstack((np.tile(K, (n, 1)),
                                 np.transpose(np.tile(K, (n, 1))))),
                      axis=2) + epsilon

    def k_prod(K):
        return np.outer(K, np.transpose(K)) + epsilon

    def s_avg(K, sc):
        return (K + sc) / 2 + epsilon

    def s_diff(K, sc):
        return np.abs(K - sc) + epsilon

    def s_min(K, sc):
        return np.where(K < sc, K + epsilon, sc + epsilon)

    def s_max(K, sc):
        return np.where(K > sc, K + epsilon, sc + epsilon)

    def s_prod(K, sc):
        return K * sc + epsilon

    def x_avg(K, ixes):
        nr_ixes = np.size(np.where(ixes))
        Ksc = np.tile(K, (nr_ixes, 1))
        Kix = np.transpose(np.tile(K[ixes], (n, 1)))
        return s_avg(Ksc, Kix)

    def x_diff(K, ixes):
        nr_ixes = np.size(np.where(ixes))
        Ksc = np.tile(K, (nr_ixes, 1))
        Kix = np.transpose(np.tile(K[ixes], (n, 1)))
        return s_diff(Ksc, Kix)

    def x_max(K, ixes):
        nr_ixes = np.size(np.where(ixes))
        Ksc = np.tile(K, (nr_ixes, 1))
        Kix = np.transpose(np.tile(K[ixes], (n, 1)))
        return s_max(Ksc, Kix)

    def x_min(K, ixes):
        nr_ixes = np.size(np.where(ixes))
        Ksc = np.tile(K, (nr_ixes, 1))
        Kix = np.transpose(np.tile(K[ixes], (n, 1)))
        return s_min(Ksc, Kix)

    def x_prod(K, ixes):
        nr_ixes = np.size(np.where(ixes))
        Ka = np.reshape(K[ixes], (nr_ixes, 1))
        Kb = np.reshape(np.transpose(K), (1, n))
        return np.outer(Ka, Kb) + epsilon

    def clu_gen(A, K, D, m, eta, gamma, model_var, x_fun):
        mseed = np.size(np.where(A.flat)) // 2
        A = A > 0
        if type(model_var) == tuple:
            mv1, mv2 = model_var
        else:
            mv1, mv2 = model_var, model_var

        if mv1 in ('powerlaw', 'power_law'):
            Fd = D ** eta
        elif mv1 in ('exponential',):
            Fd = np.exp(eta * D)

        if mv2 in ('powerlaw', 'power_law'):
            Fk = K ** gamma
        elif mv2 in ('exponential',):
            Fk = np.exp(gamma * K)

        c = clustering_coef_bu(A)
        k = np.sum(A, axis=1)

        Ff = Fd * Fk * np.logical_not(A)
        u, v = np.where(np.triu(np.ones((n, n)), 1))

        for i in range(mseed + 1, m):
            C = np.append(0, np.cumsum(Ff[u, v]))
            r = np.sum(rng.random_sample() * C[-1] >= C)
            uu = u[r]
            vv = v[r]
            A[uu, vv] = A[vv, uu] = 1
            k[uu] += 1
            k[vv] += 1

            bu = A[uu, :].astype(bool)
            bv = A[vv, :].astype(bool)
            su = A[np.ix_(bu, bu)]
            sv = A[np.ix_(bu, bu)]

            bth = np.logical_and(bu, bv)
            c[bth] += 2 / (k[bth] ** 2 - k[bth])
            c[uu] = np.size(np.where(su.flat)) / (k[uu] * (k[uu] - 1))
            c[vv] = np.size(np.where(sv.flat)) / (k[vv] * (k[vv] - 1))
            c[k <= 1] = 0
            bth[uu] = 1
            bth[vv] = 1

            k_result = x_fun(c, bth)
            K[bth, :] = k_result
            K[:, bth] = k_result.T

            if mv2 in ('powerlaw', 'power_law'):
                Ff[bth, :] = Fd[bth, :] * K[bth, :] ** gamma
                Ff[:, bth] = Fd[:, bth] * K[:, bth] ** gamma
            elif mv2 in ('exponential',):
                Ff[bth, :] = Fd[bth, :] * np.exp(K[bth, :]) * gamma
                Ff[:, bth] = Fd[:, bth] * np.exp(K[:, bth]) * gamma

            Ff = Ff * np.logical_not(A)

        return A

    def deg_gen(A, K, D, m, eta, gamma, model_var, s_fun):
        mseed = np.size(np.where(A.flat)) // 2
        k = np.sum(A, axis=1)

        if type(model_var) == tuple:
            mv1, mv2 = model_var
        else:
            mv1, mv2 = model_var, model_var

        if mv1 in ('powerlaw', 'power_law'):
            Fd = D ** eta
        elif mv1 in ('exponential',):
            Fd = np.exp(eta * D)

        if mv2 in ('powerlaw', 'power_law'):
            Fk = K ** gamma
        elif mv2 in ('exponential',):
            Fk = np.exp(gamma * K)

        P = Fd * Fk * np.logical_not(A)
        u, v = np.where(np.triu(np.ones((n, n)), 1))
        b = np.zeros((m,), dtype=int)
        b[:mseed] = np.squeeze(np.where(A[u, v]))

        for i in range(mseed, m):
            C = np.append(0, np.cumsum(P[u, v]))
            r = np.sum(rng.random_sample() * C[-1] >= C)
            uu = u[r]
            vv = v[r]
            k[uu] += 1
            k[vv] += 1

            if mv2 in ('powerlaw', 'power_law'):
                Fk[:, uu] = Fk[uu, :] = s_fun(k, k[uu]) ** gamma
                Fk[:, vv] = Fk[vv, :] = s_fun(k, k[vv]) ** gamma
            elif mv2 in ('exponential',):
                Fk[:, uu] = Fk[uu, :] = np.exp(s_fun(k, k[uu]) * gamma)
                Fk[:, vv] = Fk[vv, :] = np.exp(s_fun(k, k[vv]) * gamma)

            P = Fd * Fk

            b[i] = r
            P[u[b[:i]], v[b[:i]]] = P[v[b[:i]], u[b[:i]]] = 0
            A[u[r], v[r]] = A[v[r], u[r]] = 1

        return A

    def matching_gen(A, K, D, m, eta, gamma, model_var):
        K += epsilon
        mseed = np.size(np.where(A.flat)) // 2

        if type(model_var) == tuple:
            mv1, mv2 = model_var
        else:
            mv1, mv2 = model_var, model_var

        if mv1 in ('powerlaw', 'power_law'):
            Fd = D ** eta
        elif mv1 in ('exponential',):
            Fd = np.exp(eta * D)

        if mv2 in ('powerlaw', 'power_law'):
            Fk = K ** gamma
        elif mv2 in ('exponential',):
            Fk = np.exp(gamma * K)

        Ff = Fd * Fk * np.logical_not(A)
        u, v = np.where(np.triu(np.ones((n, n)), 1))

        for ii in range(mseed, m):
            C = np.append(0, np.cumsum(Ff[u, v]))
            r = np.sum(rng.random_sample() * C[-1] >= C)
            uu = u[r]
            vv = v[r]
            A[uu, vv] = A[vv, uu] = 1

            updateuu, = np.where(np.inner(A, A[:, uu]))
            np.delete(updateuu, np.where(updateuu == uu))
            np.delete(updateuu, np.where(updateuu == vv))

            c1 = np.append(A[:, uu], A[uu, :])
            for i in range(len(updateuu)):
                j = updateuu[i]
                c2 = np.append(A[:, j], A[j, :])

                use = np.logical_or(c1, c2)
                use[uu] = use[uu + n] = use[j] = use[j + n] = 0
                ncon = np.sum(c1[use]) + np.sum(c2[use])
                if ncon == 0:
                    K[uu, j] = K[j, uu] = epsilon
                else:
                    K[uu, j] = K[j, uu] = (
                        2 / ncon * np.sum(np.logical_and(c1[use], c2[use]))
                        + epsilon)

            updatevv, = np.where(np.inner(A, A[:, vv]))
            np.delete(updatevv, np.where(updatevv == uu))
            np.delete(updatevv, np.where(updatevv == vv))

            c1 = np.append(A[:, vv], A[vv, :])
            for i in range(len(updatevv)):
                j = updatevv[i]
                c2 = np.append(A[:, j], A[j, :])

                use = np.logical_or(c1, c2)
                use[vv] = use[vv + n] = use[j] = use[j + n] = 0
                ncon = np.sum(c1[use]) + np.sum(c2[use])
                if ncon == 0:
                    K[vv, j] = K[j, vv] = epsilon
                else:
                    K[vv, j] = K[j, vv] = (
                        2 / ncon * np.sum(np.logical_and(c1[use], c2[use]))
                        + epsilon)

            Ff = Fd * Fk * np.logical_not(A)

        return A

    def neighbors_gen(A, K, D, m, eta, gamma, model_var):
        K += epsilon
        mseed = np.size(np.where(A.flat)) // 2

        if type(model_var) == tuple:
            mv1, mv2 = model_var
        else:
            mv1, mv2 = model_var, model_var

        if mv1 in ('powerlaw', 'power_law'):
            Fd = D ** eta
        elif mv1 in ('exponential',):
            Fd = np.exp(eta * D)

        if mv2 in ('powerlaw', 'power_law'):
            Fk = K ** gamma
        elif mv2 in ('exponential',):
            Fk = np.exp(gamma * K)

        Ff = Fd * Fk * np.logical_not(A)
        u, v = np.where(np.triu(np.ones((n, n)), 1))

        for ii in range(mseed, m):
            C = np.append(0, np.cumsum(Ff[u, v]))
            r = np.sum(rng.random_sample() * C[-1] >= C)
            uu = u[r]
            vv = v[r]
            A[uu, vv] = A[vv, uu] = 1

            x = A[uu, :].astype(int)
            y = A[:, vv].astype(int)

            K[uu, y] += 1
            K[y, uu] += 1
            K[vv, x] += 1
            K[x, vv] += 1

            if mv2 in ('powerlaw', 'power_law'):
                Fk = K ** gamma
            elif mv2 in ('exponential',):
                Fk = np.exp(gamma * K)

            if mv2 in ('powerlaw', 'power_law'):
                Ff[uu, y] = Ff[y, uu] = Fd[uu, y] * (K[uu, y] ** gamma)
                Ff[vv, x] = Ff[x, vv] = Fd[vv, x] * (K[vv, x] ** gamma)
            elif mv2 in ('exponential',):
                Ff[uu, y] = Ff[y, uu] = Fd[uu, y] * np.exp(gamma * K[uu, y])
                Ff[vv, x] = Ff[x, vv] = Fd[vv, x] * np.exp(gamma * K[vv, x])

            Ff[np.where(A)] = 0

        return A

    def euclidean_gen(A, D, m, eta, model_var):
        mseed = np.size(np.where(A.flat)) // 2

        if type(model_var) == tuple:
            mv1, mv2 = model_var
        else:
            mv1, mv2 = model_var, model_var

        if mv1 != mv2:
            raise BCTParamError('Too many hyperparameters specified')

        if mv1 in ('powerlaw', 'power_law'):
            Fd = D ** eta
        elif mv1 in ('exponential',):
            Fd = np.exp(eta ** D)

        u, v = np.where(np.triu(np.ones((n, n)), 1))
        P = Fd * np.logical_not(A)
        b = np.zeros((m,), dtype=int)
        b[:mseed] = np.squeeze(np.where(A[u, v]))

        for i in range(mseed, m):
            C = np.append(0, np.cumsum(P[u, v]))
            r = np.sum(rng.random_sample() * C[-1] >= C)
            b[i] = r

            P = Fd
            P[u[b[:i]], v[b[:i]]] = P[v[b[:i]], u[b[:i]]] = 0
            A[u[r], v[r]] = A[v[r], u[r]] = 1

        return A

    if model_type in ('clu-avg', 'clu_avg'):
        Kseed = k_avg(clustering_coef_bu(A))
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = clu_gen(A, Kseed, D, m, ep, gp, model_var, x_avg)
    elif model_type in ('clu-diff', 'clu_diff'):
        Kseed = k_diff(clustering_coef_bu(A))
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = clu_gen(A, Kseed, D, m, ep, gp, model_var, x_diff)
    elif model_type in ('clu-max', 'clu_max'):
        Kseed = k_max(clustering_coef_bu(A))
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = clu_gen(A, Kseed, D, m, ep, gp, model_var, x_max)
    elif model_type in ('clu-min', 'clu_min'):
        Kseed = k_min(clustering_coef_bu(A))
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = clu_gen(A, Kseed, D, m, ep, gp, model_var, x_min)
    elif model_type in ('clu-prod', 'clu_prod'):
        Kseed = k_prod(clustering_coef_bu(A))
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = clu_gen(A, Kseed, D, m, ep, gp, model_var, x_prod)
    elif model_type in ('deg-avg', 'deg_avg'):
        Kseed = k_avg(np.sum(A, axis=1))
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = deg_gen(A, Kseed, D, m, ep, gp, model_var, s_avg)
    elif model_type in ('deg-diff', 'deg_diff'):
        Kseed = k_diff(np.sum(A, axis=1))
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = deg_gen(A, Kseed, D, m, ep, gp, model_var, s_diff)
    elif model_type in ('deg-max', 'deg_max'):
        Kseed = k_max(np.sum(A, axis=1))
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = deg_gen(A, Kseed, D, m, ep, gp, model_var, s_max)
    elif model_type in ('deg-min', 'deg_min'):
        Kseed = k_min(np.sum(A, axis=1))
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = deg_gen(A, Kseed, D, m, ep, gp, model_var, s_min)
    elif model_type in ('deg-prod', 'deg_prod'):
        Kseed = k_prod(np.sum(A, axis=1))
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = deg_gen(A, Kseed, D, m, ep, gp, model_var, s_prod)
    elif model_type in ('neighbors',):
        Kseed = np.inner(A, A)
        np.fill_diagonal(Kseed, 0)
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = neighbors_gen(A, Kseed, D, m, ep, gp, model_var)
    elif model_type in ('matching', 'matching-ind', 'matching_ind'):
        mi, _, _ = matching_ind(A)
        Kseed = mi + mi.T
        for j, (ep, gp) in enumerate(zip(eta, gamma)):
            B[:, :, j] = matching_gen(A, Kseed, D, m, ep, gp, model_var)
    elif model_type in ('spatial', 'geometric', 'euclidean'):
        for j, ep in enumerate(eta):
            B[:, :, j] = euclidean_gen(A, D, m, ep, model_var)

    return np.squeeze(B)
def _get_file_from_iso_fp(self, outfp, blocksize, iso_path, rr_path,
                          joliet_path):
    # type: (BinaryIO, int, Optional[bytes], Optional[bytes], Optional[bytes]) -> None
    '''
    An internal method to fetch a single file from the ISO and write it out
    to the file object.

    Parameters:
     outfp - The file object to write data to.
     blocksize - The number of bytes in each transfer.
     iso_path - The absolute ISO9660 path to lookup on the ISO (exclusive
                with rr_path and joliet_path).
     rr_path - The absolute Rock Ridge path to lookup on the ISO (exclusive
               with iso_path and joliet_path).
     joliet_path - The absolute Joliet path to lookup on the ISO (exclusive
                   with iso_path and rr_path).
    Returns:
     Nothing.
    '''
    if joliet_path is not None:
        if self.joliet_vd is None:
            raise pycdlibexception.PyCdlibInvalidInput('Cannot fetch a joliet_path from a non-Joliet ISO')
        found_record = self._find_joliet_record(joliet_path)
    elif rr_path is not None:
        if not self.rock_ridge:
            raise pycdlibexception.PyCdlibInvalidInput('Cannot fetch a rr_path from a non-Rock Ridge ISO')
        found_record = self._find_rr_record(rr_path)
    elif iso_path is not None:
        found_record = self._find_iso_record(iso_path)
    else:
        raise pycdlibexception.PyCdlibInternalError('Invalid path passed to get_file_from_iso_fp')

    if found_record.is_dir():
        raise pycdlibexception.PyCdlibInvalidInput('Cannot write out a directory')

    if rr_path is not None or iso_path is not None:
        if found_record.rock_ridge is not None and found_record.rock_ridge.is_symlink():
            # If this Rock Ridge record is a symlink, it has no data
            # associated with it, so it makes no sense to try and get the
            # data.  In theory, we could follow the symlink to the
            # appropriate place and get the data of the thing it points to.
            # However, Rock Ridge symlinks are allowed to point *outside*
            # of this ISO, so it is really not clear that this is something
            # we want to do.  For now we make the user follow the symlink
            # themselves if they want to get the data.  We can revisit this
            # decision in the future if we need to.
            raise pycdlibexception.PyCdlibInvalidInput('Symlinks have no data associated with them')

    if found_record.inode is None:
        raise pycdlibexception.PyCdlibInvalidInput('Cannot write out a file without data')

    while found_record.get_data_length() > 0:
        with inode.InodeOpenData(found_record.inode, self.pvd.logical_block_size()) as (data_fp, data_len):
            # Here we copy the data into the output file descriptor.  If a
            # boot info table is present, we overlay the table over bytes
            # 8-64 of the file.  Note, however, that we never return more
            # bytes than the length of the file, so the boot info table may
            # get truncated.
            if found_record.inode.boot_info_table is not None:
                header_len = min(data_len, 8)
                outfp.write(data_fp.read(header_len))
                data_len -= header_len
                if data_len > 0:
                    rec = found_record.inode.boot_info_table.record()
                    table_len = min(data_len, len(rec))
                    outfp.write(rec[:table_len])
                    data_len -= table_len
                    if data_len > 0:
                        data_fp.seek(len(rec), os.SEEK_CUR)
                        utils.copy_data(data_len, blocksize, data_fp, outfp)
            else:
                utils.copy_data(data_len, blocksize, data_fp, outfp)

        if found_record.data_continuation is not None:
            found_record = found_record.data_continuation
        else:
            break
An internal method to fetch a single file from the ISO and write it out
to the file object.

Parameters:
 outfp - The file object to write data to.
 blocksize - The number of bytes in each transfer.
 iso_path - The absolute ISO9660 path to lookup on the ISO (exclusive
            with rr_path and joliet_path).
 rr_path - The absolute Rock Ridge path to lookup on the ISO (exclusive
           with iso_path and joliet_path).
 joliet_path - The absolute Joliet path to lookup on the ISO (exclusive
               with iso_path and rr_path).
Returns:
 Nothing.
Below is the instruction that describes the task:
### Input:
An internal method to fetch a single file from the ISO and write it out
to the file object.

Parameters:
 outfp - The file object to write data to.
 blocksize - The number of bytes in each transfer.
 iso_path - The absolute ISO9660 path to lookup on the ISO (exclusive
            with rr_path and joliet_path).
 rr_path - The absolute Rock Ridge path to lookup on the ISO (exclusive
           with iso_path and joliet_path).
 joliet_path - The absolute Joliet path to lookup on the ISO (exclusive
               with iso_path and rr_path).
Returns:
 Nothing.
### Response:
def _get_file_from_iso_fp(self, outfp, blocksize, iso_path, rr_path,
                          joliet_path):
    # type: (BinaryIO, int, Optional[bytes], Optional[bytes], Optional[bytes]) -> None
    '''
    An internal method to fetch a single file from the ISO and write it out
    to the file object.

    Parameters:
     outfp - The file object to write data to.
     blocksize - The number of bytes in each transfer.
     iso_path - The absolute ISO9660 path to lookup on the ISO (exclusive
                with rr_path and joliet_path).
     rr_path - The absolute Rock Ridge path to lookup on the ISO (exclusive
               with iso_path and joliet_path).
     joliet_path - The absolute Joliet path to lookup on the ISO (exclusive
                   with iso_path and rr_path).
    Returns:
     Nothing.
    '''
    if joliet_path is not None:
        if self.joliet_vd is None:
            raise pycdlibexception.PyCdlibInvalidInput('Cannot fetch a joliet_path from a non-Joliet ISO')
        found_record = self._find_joliet_record(joliet_path)
    elif rr_path is not None:
        if not self.rock_ridge:
            raise pycdlibexception.PyCdlibInvalidInput('Cannot fetch a rr_path from a non-Rock Ridge ISO')
        found_record = self._find_rr_record(rr_path)
    elif iso_path is not None:
        found_record = self._find_iso_record(iso_path)
    else:
        raise pycdlibexception.PyCdlibInternalError('Invalid path passed to get_file_from_iso_fp')

    if found_record.is_dir():
        raise pycdlibexception.PyCdlibInvalidInput('Cannot write out a directory')

    if rr_path is not None or iso_path is not None:
        if found_record.rock_ridge is not None and found_record.rock_ridge.is_symlink():
            # If this Rock Ridge record is a symlink, it has no data
            # associated with it, so it makes no sense to try and get the
            # data.  In theory, we could follow the symlink to the
            # appropriate place and get the data of the thing it points to.
            # However, Rock Ridge symlinks are allowed to point *outside*
            # of this ISO, so it is really not clear that this is something
            # we want to do.  For now we make the user follow the symlink
            # themselves if they want to get the data.  We can revisit this
            # decision in the future if we need to.
            raise pycdlibexception.PyCdlibInvalidInput('Symlinks have no data associated with them')

    if found_record.inode is None:
        raise pycdlibexception.PyCdlibInvalidInput('Cannot write out a file without data')

    while found_record.get_data_length() > 0:
        with inode.InodeOpenData(found_record.inode, self.pvd.logical_block_size()) as (data_fp, data_len):
            # Here we copy the data into the output file descriptor.  If a
            # boot info table is present, we overlay the table over bytes
            # 8-64 of the file.  Note, however, that we never return more
            # bytes than the length of the file, so the boot info table may
            # get truncated.
            if found_record.inode.boot_info_table is not None:
                header_len = min(data_len, 8)
                outfp.write(data_fp.read(header_len))
                data_len -= header_len
                if data_len > 0:
                    rec = found_record.inode.boot_info_table.record()
                    table_len = min(data_len, len(rec))
                    outfp.write(rec[:table_len])
                    data_len -= table_len
                    if data_len > 0:
                        data_fp.seek(len(rec), os.SEEK_CUR)
                        utils.copy_data(data_len, blocksize, data_fp, outfp)
            else:
                utils.copy_data(data_len, blocksize, data_fp, outfp)

        if found_record.data_continuation is not None:
            found_record = found_record.data_continuation
        else:
            break
def main():
    """
    GATK germline pipeline with variant filtering and annotation.
    """
    # Define Parser object and add to jobTree
    parser = argparse.ArgumentParser(description=__doc__,
                                     formatter_class=argparse.RawTextHelpFormatter)

    # Generate subparsers
    subparsers = parser.add_subparsers(dest='command')
    subparsers.add_parser('generate-config',
                          help='Generates an editable config in the current working directory.')
    subparsers.add_parser('generate-manifest',
                          help='Generates an editable manifest in the current working directory.')
    subparsers.add_parser('generate',
                          help='Generates a config and manifest in the current working directory.')

    # Run subparser
    parser_run = subparsers.add_parser('run', help='Runs the GATK germline pipeline')
    parser_run.add_argument('--config', required=True, type=str,
                            help='Path to the (filled in) config file, generated with '
                                 '"generate-config".')
    parser_run.add_argument('--manifest', type=str,
                            help='Path to the (filled in) manifest file, generated with '
                                 '"generate-manifest".\nDefault value: "%(default)s".')
    parser_run.add_argument('--sample', default=None, nargs=2, type=str,
                            help='Input sample identifier and BAM file URL or local path')
    parser_run.add_argument('--output-dir', default=None,
                            help='Path/URL to output directory')
    parser_run.add_argument('-s', '--suffix', default=None,
                            help='Additional suffix to add to the names of the output files')
    parser_run.add_argument('--preprocess-only', action='store_true',
                            help='Only runs preprocessing steps')

    Job.Runner.addToilOptions(parser_run)
    options = parser.parse_args()

    cwd = os.getcwd()
    if options.command == 'generate-config' or options.command == 'generate':
        generate_file(os.path.join(cwd, 'config-toil-germline.yaml'), generate_config)
    if options.command == 'generate-manifest' or options.command == 'generate':
        generate_file(os.path.join(cwd, 'manifest-toil-germline.tsv'), generate_manifest)
    elif options.command == 'run':
        # Program checks
        for program in ['curl', 'docker']:
            require(next(which(program)),
                    program + ' must be installed on every node.'.format(program))

        require(os.path.exists(options.config),
                '{} not found. Please run "generate-config"'.format(options.config))

        # Read sample manifest
        samples = []
        if options.manifest:
            samples.extend(parse_manifest(options.manifest))

        # Add BAM sample from command line
        if options.sample:
            uuid, url = options.sample
            # samples tuple: (uuid, url, paired_url, rg_line)
            # BAM samples should not have as paired URL or read group line
            samples.append(GermlineSample(uuid, url, None, None))

        require(len(samples) > 0,
                'No samples were detected in the manifest or on the command line')

        # Parse inputs
        inputs = {x.replace('-', '_'): y for x, y in
                  yaml.load(open(options.config).read()).iteritems()}

        required_fields = {'genome_fasta', 'output_dir', 'run_bwa', 'sorted',
                           'snp_filter_annotations', 'indel_filter_annotations',
                           'preprocess', 'preprocess_only', 'run_vqsr',
                           'joint_genotype', 'run_oncotator', 'cores',
                           'file_size', 'xmx', 'suffix'}

        input_fields = set(inputs.keys())

        require(input_fields > required_fields,
                'Missing config parameters:\n{}'.format(', '.join(required_fields - input_fields)))

        if inputs['output_dir'] is None:
            inputs['output_dir'] = options.output_dir

        require(inputs['output_dir'] is not None,
                'Missing output directory PATH/URL')

        if inputs['suffix'] is None:
            inputs['suffix'] = options.suffix if options.suffix else ''

        if inputs['preprocess_only'] is None:
            inputs['preprocess_only'] = options.preprocess_only

        if inputs['run_vqsr']:
            # Check that essential VQSR parameters are present
            vqsr_fields = {'g1k_snp', 'mills', 'dbsnp', 'hapmap', 'omni'}
            require(input_fields > vqsr_fields,
                    'Missing parameters for VQSR:\n{}'.format(', '.join(vqsr_fields - input_fields)))

        # Check that hard filtering parameters are present. If only running
        # preprocessing steps, then we do not need filtering information.
        elif not inputs['preprocess_only']:
            hard_filter_fields = {'snp_filter_name', 'snp_filter_expression',
                                  'indel_filter_name', 'indel_filter_expression'}
            require(input_fields > hard_filter_fields,
                    'Missing parameters for hard filtering:\n{}'.format(', '.join(hard_filter_fields - input_fields)))

            # Check for falsey hard filtering parameters
            for hard_filter_field in hard_filter_fields:
                require(inputs[hard_filter_field],
                        'Missing %s value for hard filtering, '
                        'got %s.' % (hard_filter_field, inputs[hard_filter_field]))

        # Set resource parameters
        inputs['xmx'] = human2bytes(inputs['xmx'])
        inputs['file_size'] = human2bytes(inputs['file_size'])
        inputs['cores'] = int(inputs['cores'])

        inputs['annotations'] = set(inputs['snp_filter_annotations'] +
                                    inputs['indel_filter_annotations'])

        # HaplotypeCaller test data for testing
        inputs['hc_output'] = inputs.get('hc_output', None)

        # It is a toil-scripts convention to store input parameters in a
        # Namespace object
        config = argparse.Namespace(**inputs)

        root = Job.wrapJobFn(run_gatk_germline_pipeline, samples, config)
        Job.Runner.startToil(root, options)
GATK germline pipeline with variant filtering and annotation.
Below is the instruction that describes the task:
### Input:
GATK germline pipeline with variant filtering and annotation.
### Response:
def main():
    """
    GATK germline pipeline with variant filtering and annotation.
    """
    # Define Parser object and add to jobTree
    parser = argparse.ArgumentParser(description=__doc__,
                                     formatter_class=argparse.RawTextHelpFormatter)

    # Generate subparsers
    subparsers = parser.add_subparsers(dest='command')
    subparsers.add_parser('generate-config',
                          help='Generates an editable config in the current working directory.')
    subparsers.add_parser('generate-manifest',
                          help='Generates an editable manifest in the current working directory.')
    subparsers.add_parser('generate',
                          help='Generates a config and manifest in the current working directory.')

    # Run subparser
    parser_run = subparsers.add_parser('run', help='Runs the GATK germline pipeline')
    parser_run.add_argument('--config', required=True, type=str,
                            help='Path to the (filled in) config file, generated with '
                                 '"generate-config".')
    parser_run.add_argument('--manifest', type=str,
                            help='Path to the (filled in) manifest file, generated with '
                                 '"generate-manifest".\nDefault value: "%(default)s".')
    parser_run.add_argument('--sample', default=None, nargs=2, type=str,
                            help='Input sample identifier and BAM file URL or local path')
    parser_run.add_argument('--output-dir', default=None,
                            help='Path/URL to output directory')
    parser_run.add_argument('-s', '--suffix', default=None,
                            help='Additional suffix to add to the names of the output files')
    parser_run.add_argument('--preprocess-only', action='store_true',
                            help='Only runs preprocessing steps')

    Job.Runner.addToilOptions(parser_run)
    options = parser.parse_args()

    cwd = os.getcwd()
    if options.command == 'generate-config' or options.command == 'generate':
        generate_file(os.path.join(cwd, 'config-toil-germline.yaml'), generate_config)
    if options.command == 'generate-manifest' or options.command == 'generate':
        generate_file(os.path.join(cwd, 'manifest-toil-germline.tsv'), generate_manifest)
    elif options.command == 'run':
        # Program checks
        for program in ['curl', 'docker']:
            require(next(which(program)),
                    program + ' must be installed on every node.'.format(program))

        require(os.path.exists(options.config),
                '{} not found. Please run "generate-config"'.format(options.config))

        # Read sample manifest
        samples = []
        if options.manifest:
            samples.extend(parse_manifest(options.manifest))

        # Add BAM sample from command line
        if options.sample:
            uuid, url = options.sample
            # samples tuple: (uuid, url, paired_url, rg_line)
            # BAM samples should not have as paired URL or read group line
            samples.append(GermlineSample(uuid, url, None, None))

        require(len(samples) > 0,
                'No samples were detected in the manifest or on the command line')

        # Parse inputs
        inputs = {x.replace('-', '_'): y for x, y in
                  yaml.load(open(options.config).read()).iteritems()}

        required_fields = {'genome_fasta', 'output_dir', 'run_bwa', 'sorted',
                           'snp_filter_annotations', 'indel_filter_annotations',
                           'preprocess', 'preprocess_only', 'run_vqsr',
                           'joint_genotype', 'run_oncotator', 'cores',
                           'file_size', 'xmx', 'suffix'}

        input_fields = set(inputs.keys())

        require(input_fields > required_fields,
                'Missing config parameters:\n{}'.format(', '.join(required_fields - input_fields)))

        if inputs['output_dir'] is None:
            inputs['output_dir'] = options.output_dir

        require(inputs['output_dir'] is not None,
                'Missing output directory PATH/URL')

        if inputs['suffix'] is None:
            inputs['suffix'] = options.suffix if options.suffix else ''

        if inputs['preprocess_only'] is None:
            inputs['preprocess_only'] = options.preprocess_only

        if inputs['run_vqsr']:
            # Check that essential VQSR parameters are present
            vqsr_fields = {'g1k_snp', 'mills', 'dbsnp', 'hapmap', 'omni'}
            require(input_fields > vqsr_fields,
                    'Missing parameters for VQSR:\n{}'.format(', '.join(vqsr_fields - input_fields)))

        # Check that hard filtering parameters are present. If only running
        # preprocessing steps, then we do not need filtering information.
        elif not inputs['preprocess_only']:
            hard_filter_fields = {'snp_filter_name', 'snp_filter_expression',
                                  'indel_filter_name', 'indel_filter_expression'}
            require(input_fields > hard_filter_fields,
                    'Missing parameters for hard filtering:\n{}'.format(', '.join(hard_filter_fields - input_fields)))

            # Check for falsey hard filtering parameters
            for hard_filter_field in hard_filter_fields:
                require(inputs[hard_filter_field],
                        'Missing %s value for hard filtering, '
                        'got %s.' % (hard_filter_field, inputs[hard_filter_field]))

        # Set resource parameters
        inputs['xmx'] = human2bytes(inputs['xmx'])
        inputs['file_size'] = human2bytes(inputs['file_size'])
        inputs['cores'] = int(inputs['cores'])

        inputs['annotations'] = set(inputs['snp_filter_annotations'] +
                                    inputs['indel_filter_annotations'])

        # HaplotypeCaller test data for testing
        inputs['hc_output'] = inputs.get('hc_output', None)

        # It is a toil-scripts convention to store input parameters in a
        # Namespace object
        config = argparse.Namespace(**inputs)

        root = Job.wrapJobFn(run_gatk_germline_pipeline, samples, config)
        Job.Runner.startToil(root, options)
def screen_dumper(**kwargs):
    """Dump data to screen."""
    farms = kwargs["farms"]
    engine = kwargs["engine"]
    logging.info("dumping to screen")
    print(f"\n[Screen dumper] ({engine})")
    try:
        if len(farms) == 1:
            print(f"You have one farm with little pandas.")
        else:
            print(f"You have {len(farms)} farms with little pandas.")
    except TypeError:
        print(" - your farm has burned to the ground.")
    else:
        for number, farm in enumerate(farms):
            print(f"[#{number+1}]You have {len(farm)} "
                  f"little pandas in this farm.")
            for animal in farm:
                print(80*"=")
                try:
                    print(animal.name)
                except AttributeError:
                    print("no-name")
                print(80*"-")
                print(animal.head(5))
                print()
Dump data to screen.
Below is the instruction that describes the task:
### Input:
Dump data to screen.
### Response:
def screen_dumper(**kwargs):
    """Dump data to screen."""
    farms = kwargs["farms"]
    engine = kwargs["engine"]
    logging.info("dumping to screen")
    print(f"\n[Screen dumper] ({engine})")
    try:
        if len(farms) == 1:
            print(f"You have one farm with little pandas.")
        else:
            print(f"You have {len(farms)} farms with little pandas.")
    except TypeError:
        print(" - your farm has burned to the ground.")
    else:
        for number, farm in enumerate(farms):
            print(f"[#{number+1}]You have {len(farm)} "
                  f"little pandas in this farm.")
            for animal in farm:
                print(80*"=")
                try:
                    print(animal.name)
                except AttributeError:
                    print("no-name")
                print(80*"-")
                print(animal.head(5))
                print()
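A runnable sketch of the record above. The dumper expects pandas-like objects with a `name` attribute and a `head(n)` method; `StubFrame` is a hypothetical stand-in used here only so the snippet is self-contained and testable without pandas.

```python
import contextlib
import io
import logging


def screen_dumper(**kwargs):
    """Dump data to screen."""
    farms = kwargs["farms"]
    engine = kwargs["engine"]
    logging.info("dumping to screen")
    print(f"\n[Screen dumper] ({engine})")
    try:
        if len(farms) == 1:
            print(f"You have one farm with little pandas.")
        else:
            print(f"You have {len(farms)} farms with little pandas.")
    except TypeError:
        print(" - your farm has burned to the ground.")
    else:
        for number, farm in enumerate(farms):
            print(f"[#{number+1}]You have {len(farm)} "
                  f"little pandas in this farm.")
            for animal in farm:
                print(80*"=")
                try:
                    print(animal.name)
                except AttributeError:
                    print("no-name")
                print(80*"-")
                print(animal.head(5))
                print()


class StubFrame:
    """Hypothetical stand-in for the pandas objects the dumper expects."""

    def __init__(self, name):
        self.name = name

    def head(self, n):
        # Mimics DataFrame.head(n) just closely enough for the demo.
        return f"<first {n} rows of {self.name}>"


# Capture what the dumper prints for a one-farm, one-panda setup.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    screen_dumper(farms=[[StubFrame("po")]], engine="demo")
output = buf.getvalue()
```

Passing `farms=None` instead would trigger the `TypeError` branch and print the "burned to the ground" message.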
def importElementTree(module_names=None):
    """Find a working ElementTree implementation, trying the standard
    places that such a thing might show up.

    >>> ElementTree = importElementTree()

    @param module_names: The names of modules to try to use as
        ElementTree. Defaults to C{L{elementtree_modules}}

    @returns: An ElementTree module
    """
    if module_names is None:
        module_names = elementtree_modules

    for mod_name in module_names:
        try:
            ElementTree = __import__(mod_name, None, None, ['unused'])
        except ImportError:
            pass
        else:
            # Make sure it can actually parse XML
            try:
                ElementTree.XML('<unused/>')
            except (SystemExit, MemoryError, AssertionError):
                raise
            except:
                logging.exception('Not using ElementTree library %r because it '
                                  'failed to parse a trivial document'
                                  % (mod_name,))
            else:
                return ElementTree
    else:
        raise ImportError('No ElementTree library found. '
                          'You may need to install one. '
                          'Tried importing %r' % (module_names,))
Find a working ElementTree implementation, trying the standard
places that such a thing might show up.

>>> ElementTree = importElementTree()

@param module_names: The names of modules to try to use as
    ElementTree. Defaults to C{L{elementtree_modules}}

@returns: An ElementTree module
Below is the instruction that describes the task:
### Input:
Find a working ElementTree implementation, trying the standard
places that such a thing might show up.

>>> ElementTree = importElementTree()

@param module_names: The names of modules to try to use as
    ElementTree. Defaults to C{L{elementtree_modules}}

@returns: An ElementTree module
### Response:
def importElementTree(module_names=None):
    """Find a working ElementTree implementation, trying the standard
    places that such a thing might show up.

    >>> ElementTree = importElementTree()

    @param module_names: The names of modules to try to use as
        ElementTree. Defaults to C{L{elementtree_modules}}

    @returns: An ElementTree module
    """
    if module_names is None:
        module_names = elementtree_modules

    for mod_name in module_names:
        try:
            ElementTree = __import__(mod_name, None, None, ['unused'])
        except ImportError:
            pass
        else:
            # Make sure it can actually parse XML
            try:
                ElementTree.XML('<unused/>')
            except (SystemExit, MemoryError, AssertionError):
                raise
            except:
                logging.exception('Not using ElementTree library %r because it '
                                  'failed to parse a trivial document'
                                  % (mod_name,))
            else:
                return ElementTree
    else:
        raise ImportError('No ElementTree library found. '
                          'You may need to install one. '
                          'Tried importing %r' % (module_names,))
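A self-contained usage sketch of the record above. The `elementtree_modules` list is an assumption standing in for the module-level constant the function references (the stdlib parser is always available, so the fallback chain terminates); the function body is repeated so the snippet runs on its own.

```python
import logging

# Assumed candidate list; in the original library this is a module-level
# constant named elementtree_modules.
elementtree_modules = ['lxml.etree', 'xml.etree.cElementTree',
                       'xml.etree.ElementTree']


def importElementTree(module_names=None):
    """Return the first candidate module that imports and parses XML."""
    if module_names is None:
        module_names = elementtree_modules
    for mod_name in module_names:
        try:
            ElementTree = __import__(mod_name, None, None, ['unused'])
        except ImportError:
            pass
        else:
            # Make sure it can actually parse XML
            try:
                ElementTree.XML('<unused/>')
            except (SystemExit, MemoryError, AssertionError):
                raise
            except Exception:
                logging.exception('Not using ElementTree library %r because it '
                                  'failed to parse a trivial document'
                                  % (mod_name,))
            else:
                return ElementTree
    raise ImportError('No ElementTree library found. '
                      'You may need to install one. '
                      'Tried importing %r' % (module_names,))


# A missing module is skipped; the stdlib implementation is picked up next.
ET = importElementTree(['definitely_not_a_real_module',
                        'xml.etree.ElementTree'])
```

The probe `ElementTree.XML('<unused/>')` is what distinguishes "importable" from "actually usable": a module that imports but cannot parse a trivial document is logged and skipped.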
def _remove(self, client_kwargs):
    """
    Remove an object.

    args:
        client_kwargs (dict): Client arguments.
    """
    with _handle_client_exception():
        # Object
        if 'obj' in client_kwargs:
            return self.client.delete_object(
                client_kwargs['container'], client_kwargs['obj'])

        # Container
        return self.client.delete_container(client_kwargs['container'])
Remove an object.

args:
    client_kwargs (dict): Client arguments.
Below is the instruction that describes the task:
### Input:
Remove an object.

args:
    client_kwargs (dict): Client arguments.
### Response:
def _remove(self, client_kwargs):
    """
    Remove an object.

    args:
        client_kwargs (dict): Client arguments.
    """
    with _handle_client_exception():
        # Object
        if 'obj' in client_kwargs:
            return self.client.delete_object(
                client_kwargs['container'], client_kwargs['obj'])

        # Container
        return self.client.delete_container(client_kwargs['container'])
def select_from_fv_by_seeds(fv, seeds, unique_cls):
    """
    Tool to make simple feature functions take features from feature array
    by seeds.

    :param fv: ndarray with linearized features. Its shape is MxN, where M
        is the number of image pixels and N is the number of features
    :param seeds: ndarray with seeds. Does not need to be linear.
    :param unique_cls: used seed classes, e.g. [1, 2]
    :return: fv_selection, seeds_selection - selection from feature vector
        and selection from seeds
    """
    logger.debug("seeds" + str(seeds))
    # fvlin = fv.reshape(-1, int(fv.size/seeds.size))
    expected_shape = [seeds.size, int(fv.size/seeds.size)]
    if fv.shape[0] != expected_shape[0] or fv.shape[1] != expected_shape[1]:
        raise AssertionError("Wrong shape of input feature vector array fv")
    # sd = seeds.reshape(-1, 1)
    selection = np.in1d(seeds, unique_cls)
    fv_selection = fv[selection]
    seeds_selection = seeds.flatten()[selection]
    # sd = sd[]
    return fv_selection, seeds_selection
Tool to make simple feature functions take features from feature array
by seeds.

:param fv: ndarray with linearized features. Its shape is MxN, where M
    is the number of image pixels and N is the number of features
:param seeds: ndarray with seeds. Does not need to be linear.
:param unique_cls: used seed classes, e.g. [1, 2]
:return: fv_selection, seeds_selection - selection from feature vector
    and selection from seeds
Below is the instruction that describes the task:
### Input:
Tool to make simple feature functions take features from feature array
by seeds.

:param fv: ndarray with linearized features. Its shape is MxN, where M
    is the number of image pixels and N is the number of features
:param seeds: ndarray with seeds. Does not need to be linear.
:param unique_cls: used seed classes, e.g. [1, 2]
:return: fv_selection, seeds_selection - selection from feature vector
    and selection from seeds
### Response:
def select_from_fv_by_seeds(fv, seeds, unique_cls):
    """
    Tool to make simple feature functions take features from feature array
    by seeds.

    :param fv: ndarray with linearized features. Its shape is MxN, where M
        is the number of image pixels and N is the number of features
    :param seeds: ndarray with seeds. Does not need to be linear.
    :param unique_cls: used seed classes, e.g. [1, 2]
    :return: fv_selection, seeds_selection - selection from feature vector
        and selection from seeds
    """
    logger.debug("seeds" + str(seeds))
    # fvlin = fv.reshape(-1, int(fv.size/seeds.size))
    expected_shape = [seeds.size, int(fv.size/seeds.size)]
    if fv.shape[0] != expected_shape[0] or fv.shape[1] != expected_shape[1]:
        raise AssertionError("Wrong shape of input feature vector array fv")
    # sd = seeds.reshape(-1, 1)
    selection = np.in1d(seeds, unique_cls)
    fv_selection = fv[selection]
    seeds_selection = seeds.flatten()[selection]
    # sd = sd[]
    return fv_selection, seeds_selection
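A small worked example for the record above (requires NumPy). The function body is repeated, minus its docstring, so the snippet is self-contained; the pixel/feature values are illustrative.

```python
import logging

import numpy as np

logger = logging.getLogger(__name__)


def select_from_fv_by_seeds(fv, seeds, unique_cls):
    # Same logic as the record above: keep only the rows of fv whose
    # corresponding seed pixel carries one of the requested classes.
    logger.debug("seeds" + str(seeds))
    expected_shape = [seeds.size, int(fv.size / seeds.size)]
    if fv.shape[0] != expected_shape[0] or fv.shape[1] != expected_shape[1]:
        raise AssertionError("Wrong shape of input feature vector array fv")
    selection = np.in1d(seeds, unique_cls)
    fv_selection = fv[selection]
    seeds_selection = seeds.flatten()[selection]
    return fv_selection, seeds_selection


# Four pixels with two features each; the 2x2 seed image marks pixel 0
# with class 1 and pixel 3 with class 2.
fv = np.array([[10., 0.], [11., 1.], [12., 2.], [13., 3.]])
seeds = np.array([[1, 0], [0, 2]])
fv_sel, seeds_sel = select_from_fv_by_seeds(fv, seeds, [1, 2])
```

Note that `np.in1d` flattens `seeds` in C order, which is why `fv` must be linearized in the same pixel order.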
def jobSetCompleted(self, jobID, completionReason, completionMsg,
                    useConnectionID = True):
    """ Change the status on the given job to completed

    Parameters:
    ----------------------------------------------------------------
    job:        jobID of the job to mark as completed
    completionReason: completionReason string
    completionMsg: completionMsg string
    useConnectionID: True if the connection id of the calling function
                  must be the same as the connection that created the job.
                  Set to False for hypersearch workers
    """
    # Get a database connection and cursor
    with ConnectionFactory.get() as conn:
        query = 'UPDATE %s SET status=%%s, ' \
                '          completion_reason=%%s, ' \
                '          completion_msg=%%s, ' \
                '          end_time=UTC_TIMESTAMP(), ' \
                '          _eng_last_update_time=UTC_TIMESTAMP() ' \
                '      WHERE job_id=%%s' \
                % (self.jobsTableName,)
        sqlParams = [self.STATUS_COMPLETED, completionReason, completionMsg,
                     jobID]

        if useConnectionID:
            query += ' AND _eng_cjm_conn_id=%s'
            sqlParams.append(self._connectionID)

        result = conn.cursor.execute(query, sqlParams)

        if result != 1:
            raise RuntimeError("Tried to change the status of jobID=%s to "
                               "completed, but this job could not be found or "
                               "belongs to some other CJM" % (jobID))
Change the status on the given job to completed

Parameters:
----------------------------------------------------------------
job:        jobID of the job to mark as completed
completionReason: completionReason string
completionMsg: completionMsg string
useConnectionID: True if the connection id of the calling function
              must be the same as the connection that created the job.
              Set to False for hypersearch workers
Below is the instruction that describes the task:
### Input:
Change the status on the given job to completed

Parameters:
----------------------------------------------------------------
job:        jobID of the job to mark as completed
completionReason: completionReason string
completionMsg: completionMsg string
useConnectionID: True if the connection id of the calling function
              must be the same as the connection that created the job.
              Set to False for hypersearch workers
### Response:
def jobSetCompleted(self, jobID, completionReason, completionMsg,
                    useConnectionID = True):
    """ Change the status on the given job to completed

    Parameters:
    ----------------------------------------------------------------
    job:        jobID of the job to mark as completed
    completionReason: completionReason string
    completionMsg: completionMsg string
    useConnectionID: True if the connection id of the calling function
                  must be the same as the connection that created the job.
                  Set to False for hypersearch workers
    """
    # Get a database connection and cursor
    with ConnectionFactory.get() as conn:
        query = 'UPDATE %s SET status=%%s, ' \
                '          completion_reason=%%s, ' \
                '          completion_msg=%%s, ' \
                '          end_time=UTC_TIMESTAMP(), ' \
                '          _eng_last_update_time=UTC_TIMESTAMP() ' \
                '      WHERE job_id=%%s' \
                % (self.jobsTableName,)
        sqlParams = [self.STATUS_COMPLETED, completionReason, completionMsg,
                     jobID]

        if useConnectionID:
            query += ' AND _eng_cjm_conn_id=%s'
            sqlParams.append(self._connectionID)

        result = conn.cursor.execute(query, sqlParams)

        if result != 1:
            raise RuntimeError("Tried to change the status of jobID=%s to "
                               "completed, but this job could not be found or "
                               "belongs to some other CJM" % (jobID))
def multiline_merge(lines, current_event, re_after, re_before):
    """ Merge multi-line events based on regular expressions.

    Some events (like a Python traceback or a Java stacktrace) span
    multiple lines. This method will merge them using two regular
    expressions: re_after and re_before. If a line matches re_after, it
    will be merged with the next line. If a line matches re_before, it
    will be merged with the previous line.

    This function returns a list of complete events. Note that because we
    don't know if an event is complete before another new event starts,
    the last event will not be returned but stored in current_event. You
    should pass the same current_event to successive calls to
    multiline_merge. current_event is a list of lines that belong to the
    same event.
    """
    events = []
    for line in lines:
        if re_before and re_before.match(line):
            current_event.append(line)
        elif re_after and current_event and re_after.match(current_event[-1]):
            current_event.append(line)
        else:
            if current_event:
                events.append('\n'.join(current_event))
            current_event.clear()
            current_event.append(line)

    return events
Merge multi-line events based on regular expressions.

Some events (like a Python traceback or a Java stacktrace) span
multiple lines. This method will merge them using two regular
expressions: re_after and re_before. If a line matches re_after, it
will be merged with the next line. If a line matches re_before, it
will be merged with the previous line.

This function returns a list of complete events. Note that because we
don't know if an event is complete before another new event starts,
the last event will not be returned but stored in current_event. You
should pass the same current_event to successive calls to
multiline_merge. current_event is a list of lines that belong to the
same event.
Below is the instruction that describes the task:
### Input:
Merge multi-line events based on regular expressions.

Some events (like a Python traceback or a Java stacktrace) span
multiple lines. This method will merge them using two regular
expressions: re_after and re_before. If a line matches re_after, it
will be merged with the next line. If a line matches re_before, it
will be merged with the previous line.

This function returns a list of complete events. Note that because we
don't know if an event is complete before another new event starts,
the last event will not be returned but stored in current_event. You
should pass the same current_event to successive calls to
multiline_merge. current_event is a list of lines that belong to the
same event.
### Response:
def multiline_merge(lines, current_event, re_after, re_before):
    """ Merge multi-line events based on regular expressions.

    Some events (like a Python traceback or a Java stacktrace) span
    multiple lines. This method will merge them using two regular
    expressions: re_after and re_before. If a line matches re_after, it
    will be merged with the next line. If a line matches re_before, it
    will be merged with the previous line.

    This function returns a list of complete events. Note that because we
    don't know if an event is complete before another new event starts,
    the last event will not be returned but stored in current_event. You
    should pass the same current_event to successive calls to
    multiline_merge. current_event is a list of lines that belong to the
    same event.
    """
    events = []
    for line in lines:
        if re_before and re_before.match(line):
            current_event.append(line)
        elif re_after and current_event and re_after.match(current_event[-1]):
            current_event.append(line)
        else:
            if current_event:
                events.append('\n'.join(current_event))
            current_event.clear()
            current_event.append(line)

    return events
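A runnable usage sketch of the record above, merging a Python traceback out of a log stream. The function body is repeated so the snippet is self-contained; the "indented lines continue the previous event" regex is an assumed convention, not part of the record.

```python
import re


def multiline_merge(lines, current_event, re_after, re_before):
    """Merge multi-line events; see the record above for the full docstring."""
    events = []
    for line in lines:
        if re_before and re_before.match(line):
            current_event.append(line)
        elif re_after and current_event and re_after.match(current_event[-1]):
            current_event.append(line)
        else:
            if current_event:
                events.append('\n'.join(current_event))
            current_event.clear()
            current_event.append(line)

    return events


# Assumed convention: indented lines continue the previous event
# (typical for Python tracebacks).
re_before = re.compile(r'^\s+')
current_event = []
events = multiline_merge(
    ['first log line',
     'Traceback (most recent call last):',
     '  File "app.py", line 1, in <module>',
     '    raise ValueError("boom")',
     'second log line'],
    current_event, None, re_before)
```

After the call, `events` holds the two finished events (the plain line and the three-line traceback), while the trailing `'second log line'` stays in `current_event` until a later call flushes it.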
def notifylists(self): """ Return a new raw REST interface to notify list resources :rtype: :py:class:`ns1.rest.monitoring.NotifyLists` """ import ns1.rest.monitoring return ns1.rest.monitoring.NotifyLists(self.config)
Return a new raw REST interface to notify list resources :rtype: :py:class:`ns1.rest.monitoring.NotifyLists`
Below is the the instruction that describes the task: ### Input: Return a new raw REST interface to notify list resources :rtype: :py:class:`ns1.rest.monitoring.NotifyLists` ### Response: def notifylists(self): """ Return a new raw REST interface to notify list resources :rtype: :py:class:`ns1.rest.monitoring.NotifyLists` """ import ns1.rest.monitoring return ns1.rest.monitoring.NotifyLists(self.config)
def _merge_metadata(self_or_cls, obj, fn, *dicts): """ Returns a merged metadata info dictionary from the supplied function and additional dictionaries """ merged = dict([(k,v) for d in dicts for (k,v) in d.items()]) return dict(merged, **fn(obj)) if fn else merged
Returns a merged metadata info dictionary from the supplied function and additional dictionaries
Below is the the instruction that describes the task: ### Input: Returns a merged metadata info dictionary from the supplied function and additional dictionaries ### Response: def _merge_metadata(self_or_cls, obj, fn, *dicts): """ Returns a merged metadata info dictionary from the supplied function and additional dictionaries """ merged = dict([(k,v) for d in dicts for (k,v) in d.items()]) return dict(merged, **fn(obj)) if fn else merged
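The merge order matters here: later dictionaries override earlier ones, and the metadata function (when given) overrides everything. A standalone sketch of the same logic, with illustrative names and values:

```python
def merge_metadata(obj, fn, *dicts):
    # Flatten the dicts left to right (later keys win), then let the
    # metadata function, if any, override the result.
    merged = {k: v for d in dicts for k, v in d.items()}
    return dict(merged, **fn(obj)) if fn else merged

base = {'title': 'untitled', 'dpi': 72}
override = {'dpi': 150}
# fn supplies per-object metadata that wins over both dicts.
meta = merge_metadata('plot.png', lambda o: {'filename': o}, base, override)
```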
def auto_download(status, credentials=None, subjects_path=None, overwrite=False, release='HCP_1200', database='hcp-openaccess', retinotopy_path=None, retinotopy_cache=True): ''' auto_download(True) enables automatic downloading of HCP subject data when the subject ID is requested. The optional arguments are identical to those required for the function download(), and they are passed to download() when auto-downloading occurs. auto_download(False) disables automatic downloading. Automatic downloading is disabled by default unless the environment variable HCP_AUTO_DOWNLOAD is set to true. In this case, the database and release are derived from the environment variables HCP_AUTO_DATABASE and HCP_AUTO_RELEASE, and the variable HCP_AUTO_PATH can be used to override the default subjects path. ''' global _auto_download_options, _retinotopy_path status = (['structure','retinotopy'] if status is True else [] if status is False else [status] if pimms.is_str(status) else status) _auto_download_options = {'structure':False, 'retinotopy':False} for s in status: if s.lower() == 'structure': if s3fs is None: raise RuntimeError( 's3fs was not successfully loaded, so downloads may not occur; check' ' your Python configuration to make sure that s3fs is installed. See' ' http://s3fs.readthedocs.io/en/latest/install.html for details.') if credentials is None: credentials = config['hcp_credentials'] if credentials is None: raise ValueError('No HCP credentials detected or found') (s3fs_key, s3fs_secret) = to_credentials(credentials) if subjects_path is None: sdirs = config['hcp_subject_paths'] subjects_path = next((sd for sd in sdirs if os.path.isdir(sd)), None) if subjects_path is None: raise ValueError('No subjects path given or found') else: subjects_path = os.path.expanduser(subjects_path) fs = s3fs.S3FileSystem(key=s3fs_key, secret=s3fs_secret) hcpbase = '/'.join([database, release]) if not fs.exists(hcpbase): raise ValueError('database/release (%s/%s) not found' % (database, release)) sids = set([]) for f in fs.ls(hcpbase): f = os.path.split(f)[-1] if len(f) == 6 and f[0] != '0': try: sids.add(int(f)) except Exception: pass _auto_download_options['structure'] = True _auto_download_options['subjects_path'] = subjects_path _auto_download_options['overwrite'] = overwrite _auto_download_options['release'] = release _auto_download_options['database'] = database _auto_download_options['subject_ids'] = frozenset(sids) _auto_download_options['s3fs'] = fs elif s.lower() == 'retinotopy': if retinotopy_path is None: dirs = config['hcp_subject_paths'] if subjects_path is not None: dirs = [subjects_path] + list(dirs) if _retinotopy_path is not None: dirs = [_retinotopy_path] + list(dirs) retinotopy_path = next((sd for sd in dirs if os.path.isdir(sd)), None) if retinotopy_path is None: raise ValueError('No retinotopy path given or found') else: retinotopy_path = os.path.expanduser(retinotopy_path) _auto_download_options['retinotopy'] = True _auto_download_options['retinotopy_path'] = retinotopy_path _auto_download_options['retinotopy_cache'] = retinotopy_cache else: raise ValueError('unrecognized auto_download argument: %s' % s) if all(v is False for v in six.itervalues(_auto_download_options)): _auto_download_options = None
auto_download(True) enables automatic downloading of HCP subject data when the subject ID is requested. The optional arguments are identical to those required for the function download(), and they are passed to download() when auto-downloading occurs. auto_download(False) disables automatic downloading. Automatic downloading is disabled by default unless the environment variable HCP_AUTO_DOWNLOAD is set to true. In this case, the database and release are derived from the environment variables HCP_AUTO_DATABASE and HCP_AUTO_RELEASE, and the variable HCP_AUTO_PATH can be used to override the default subjects path.
Below is the instruction that describes the task: ### Input: auto_download(True) enables automatic downloading of HCP subject data when the subject ID is requested. The optional arguments are identical to those required for the function download(), and they are passed to download() when auto-downloading occurs. auto_download(False) disables automatic downloading. Automatic downloading is disabled by default unless the environment variable HCP_AUTO_DOWNLOAD is set to true. In this case, the database and release are derived from the environment variables HCP_AUTO_DATABASE and HCP_AUTO_RELEASE, and the variable HCP_AUTO_PATH can be used to override the default subjects path. ### Response: def auto_download(status, credentials=None, subjects_path=None, overwrite=False, release='HCP_1200', database='hcp-openaccess', retinotopy_path=None, retinotopy_cache=True): ''' auto_download(True) enables automatic downloading of HCP subject data when the subject ID is requested. The optional arguments are identical to those required for the function download(), and they are passed to download() when auto-downloading occurs. auto_download(False) disables automatic downloading. Automatic downloading is disabled by default unless the environment variable HCP_AUTO_DOWNLOAD is set to true. In this case, the database and release are derived from the environment variables HCP_AUTO_DATABASE and HCP_AUTO_RELEASE, and the variable HCP_AUTO_PATH can be used to override the default subjects path. ''' global _auto_download_options, _retinotopy_path status = (['structure','retinotopy'] if status is True else [] if status is False else [status] if pimms.is_str(status) else status) _auto_download_options = {'structure':False, 'retinotopy':False} for s in status: if s.lower() == 'structure': if s3fs is None: raise RuntimeError( 's3fs was not successfully loaded, so downloads may not occur; check' ' your Python configuration to make sure that s3fs is installed. See' ' http://s3fs.readthedocs.io/en/latest/install.html for details.') if credentials is None: credentials = config['hcp_credentials'] if credentials is None: raise ValueError('No HCP credentials detected or found') (s3fs_key, s3fs_secret) = to_credentials(credentials) if subjects_path is None: sdirs = config['hcp_subject_paths'] subjects_path = next((sd for sd in sdirs if os.path.isdir(sd)), None) if subjects_path is None: raise ValueError('No subjects path given or found') else: subjects_path = os.path.expanduser(subjects_path) fs = s3fs.S3FileSystem(key=s3fs_key, secret=s3fs_secret) hcpbase = '/'.join([database, release]) if not fs.exists(hcpbase): raise ValueError('database/release (%s/%s) not found' % (database, release)) sids = set([]) for f in fs.ls(hcpbase): f = os.path.split(f)[-1] if len(f) == 6 and f[0] != '0': try: sids.add(int(f)) except Exception: pass _auto_download_options['structure'] = True _auto_download_options['subjects_path'] = subjects_path _auto_download_options['overwrite'] = overwrite _auto_download_options['release'] = release _auto_download_options['database'] = database _auto_download_options['subject_ids'] = frozenset(sids) _auto_download_options['s3fs'] = fs elif s.lower() == 'retinotopy': if retinotopy_path is None: dirs = config['hcp_subject_paths'] if subjects_path is not None: dirs = [subjects_path] + list(dirs) if _retinotopy_path is not None: dirs = [_retinotopy_path] + list(dirs) retinotopy_path = next((sd for sd in dirs if os.path.isdir(sd)), None) if retinotopy_path is None: raise ValueError('No retinotopy path given or found') else: retinotopy_path = os.path.expanduser(retinotopy_path) _auto_download_options['retinotopy'] = True _auto_download_options['retinotopy_path'] = retinotopy_path _auto_download_options['retinotopy_cache'] = retinotopy_cache else: raise ValueError('unrecognized auto_download argument: %s' % s) if all(v is False for v in six.itervalues(_auto_download_options)): _auto_download_options = None
def _generate_sequences(self, primary_label, secondary_label, ngrams): """Generates aligned sequences between each witness labelled `primary_label` and each witness labelled `secondary_label`, based around `ngrams`. :param primary_label: label for one side of the pairs of witnesses to align :type primary_label: `str` :param secondary_label: label for the other side of the pairs of witnesses to align :type secondary_label: `str` :param ngrams: n-grams to base sequences off :type ngrams: `list` of `str` """ cols = [constants.WORK_FIELDNAME, constants.SIGLUM_FIELDNAME] primary_works = self._matches[self._matches[ constants.LABEL_FIELDNAME] == primary_label][ cols].drop_duplicates() secondary_works = self._matches[self._matches[ constants.LABEL_FIELDNAME] == secondary_label][ cols].drop_duplicates() for index, (work1, siglum1) in primary_works.iterrows(): text1 = self._get_text(self._corpus.get_witness(work1, siglum1)) label1 = '{}_{}'.format(work1, siglum1) for index, (work2, siglum2) in secondary_works.iterrows(): text2 = self._get_text(self._corpus.get_witness( work2, siglum2)) label2 = '{}_{}'.format(work2, siglum2) self._generate_sequences_for_texts(label1, text1, label2, text2, ngrams)
Generates aligned sequences between each witness labelled `primary_label` and each witness labelled `secondary_label`, based around `ngrams`. :param primary_label: label for one side of the pairs of witnesses to align :type primary_label: `str` :param secondary_label: label for the other side of the pairs of witnesses to align :type secondary_label: `str` :param ngrams: n-grams to base sequences off :type ngrams: `list` of `str`
Below is the the instruction that describes the task: ### Input: Generates aligned sequences between each witness labelled `primary_label` and each witness labelled `secondary_label`, based around `ngrams`. :param primary_label: label for one side of the pairs of witnesses to align :type primary_label: `str` :param secondary_label: label for the other side of the pairs of witnesses to align :type secondary_label: `str` :param ngrams: n-grams to base sequences off :type ngrams: `list` of `str` ### Response: def _generate_sequences(self, primary_label, secondary_label, ngrams): """Generates aligned sequences between each witness labelled `primary_label` and each witness labelled `secondary_label`, based around `ngrams`. :param primary_label: label for one side of the pairs of witnesses to align :type primary_label: `str` :param secondary_label: label for the other side of the pairs of witnesses to align :type secondary_label: `str` :param ngrams: n-grams to base sequences off :type ngrams: `list` of `str` """ cols = [constants.WORK_FIELDNAME, constants.SIGLUM_FIELDNAME] primary_works = self._matches[self._matches[ constants.LABEL_FIELDNAME] == primary_label][ cols].drop_duplicates() secondary_works = self._matches[self._matches[ constants.LABEL_FIELDNAME] == secondary_label][ cols].drop_duplicates() for index, (work1, siglum1) in primary_works.iterrows(): text1 = self._get_text(self._corpus.get_witness(work1, siglum1)) label1 = '{}_{}'.format(work1, siglum1) for index, (work2, siglum2) in secondary_works.iterrows(): text2 = self._get_text(self._corpus.get_witness( work2, siglum2)) label2 = '{}_{}'.format(work2, siglum2) self._generate_sequences_for_texts(label1, text1, label2, text2, ngrams)
def default(*args, **kwargs): """ Return first argument which is "truthy" >>> default(None, None, 1) 1 >>> default(None, None, 123) 123 >>> print(default(None, None)) None """ default = kwargs.get('default', None) for arg in args: if arg: return arg return default
Return first argument which is "truthy" >>> default(None, None, 1) 1 >>> default(None, None, 123) 123 >>> print(default(None, None)) None
Below is the the instruction that describes the task: ### Input: Return first argument which is "truthy" >>> default(None, None, 1) 1 >>> default(None, None, 123) 123 >>> print(default(None, None)) None ### Response: def default(*args, **kwargs): """ Return first argument which is "truthy" >>> default(None, None, 1) 1 >>> default(None, None, 123) 123 >>> print(default(None, None)) None """ default = kwargs.get('default', None) for arg in args: if arg: return arg return default
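Beyond the doctests above, one subtlety in default() is that it tests truthiness rather than `is not None`, so falsy values like 0 and '' are skipped; the function is restated below so the sketch runs on its own:

```python
def default(*args, **kwargs):
    # Restated from above: return the first truthy argument,
    # falling back to the 'default' keyword (None if absent).
    fallback = kwargs.get('default', None)
    for arg in args:
        if arg:
            return arg
    return fallback

first = default(0, '', 'x')            # 0 and '' are falsy, so 'x' wins
nothing = default(0, '', default=-1)   # no truthy args: keyword fallback
```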
def listBlockChildren(self, block_name=""): """ list children of a block """ if (not block_name) or re.search("['%','*']", block_name): dbsExceptionHandler("dbsException-invalid-input", "DBSBlock/listBlockChildren. Block_name must be provided." ) conn = self.dbi.connection() try: results = self.blockchildlist.execute(conn, block_name) return results finally: if conn: conn.close()
list children of a block
Below is the instruction that describes the task: ### Input: list children of a block ### Response: def listBlockChildren(self, block_name=""): """ list children of a block """ if (not block_name) or re.search("['%','*']", block_name): dbsExceptionHandler("dbsException-invalid-input", "DBSBlock/listBlockChildren. Block_name must be provided." ) conn = self.dbi.connection() try: results = self.blockchildlist.execute(conn, block_name) return results finally: if conn: conn.close()
def perm(lst, func): ''' Produce permutations of `lst`, where permutations are mutated by `func`. Used for flipping constraints. highly possible that returned constraints can be unsat this does it blindly, without any attention to the constraints themselves Considering lst as a list of constraints, e.g. [ C1, C2, C3 ] we'd like to consider scenarios of all possible permutations of flipped constraints, excluding the original list. So we'd like to generate: [ func(C1), C2 , C3 ], [ C1 , func(C2), C3 ], [ func(C1), func(C2), C3 ], [ C1 , C2 , func(C3)], .. etc This is effectively treating the list of constraints as a bitmask of width len(lst) and counting up, skipping the 0th element (unmodified array). The code below yields lists of constraints permuted as above by treating list indeces as bitmasks from 1 to 2**len(lst) and applying func to all the set bit offsets. ''' for i in range(1, 2**len(lst)): yield [func(item) if (1<<j)&i else item for (j, item) in enumerate(lst)]
Produce permutations of `lst`, where permutations are mutated by `func`. Used for flipping constraints. It is highly possible that the returned constraints are unsat; this does it blindly, without any attention to the constraints themselves. Considering lst as a list of constraints, e.g. [ C1, C2, C3 ], we'd like to consider scenarios of all possible permutations of flipped constraints, excluding the original list. So we'd like to generate: [ func(C1), C2 , C3 ], [ C1 , func(C2), C3 ], [ func(C1), func(C2), C3 ], [ C1 , C2 , func(C3)], .. etc. This is effectively treating the list of constraints as a bitmask of width len(lst) and counting up, skipping the 0th element (unmodified array). The code below yields lists of constraints permuted as above by treating list indices as bitmasks from 1 to 2**len(lst) and applying func to all the set bit offsets.
Below is the the instruction that describes the task: ### Input: Produce permutations of `lst`, where permutations are mutated by `func`. Used for flipping constraints. highly possible that returned constraints can be unsat this does it blindly, without any attention to the constraints themselves Considering lst as a list of constraints, e.g. [ C1, C2, C3 ] we'd like to consider scenarios of all possible permutations of flipped constraints, excluding the original list. So we'd like to generate: [ func(C1), C2 , C3 ], [ C1 , func(C2), C3 ], [ func(C1), func(C2), C3 ], [ C1 , C2 , func(C3)], .. etc This is effectively treating the list of constraints as a bitmask of width len(lst) and counting up, skipping the 0th element (unmodified array). The code below yields lists of constraints permuted as above by treating list indeces as bitmasks from 1 to 2**len(lst) and applying func to all the set bit offsets. ### Response: def perm(lst, func): ''' Produce permutations of `lst`, where permutations are mutated by `func`. Used for flipping constraints. highly possible that returned constraints can be unsat this does it blindly, without any attention to the constraints themselves Considering lst as a list of constraints, e.g. [ C1, C2, C3 ] we'd like to consider scenarios of all possible permutations of flipped constraints, excluding the original list. So we'd like to generate: [ func(C1), C2 , C3 ], [ C1 , func(C2), C3 ], [ func(C1), func(C2), C3 ], [ C1 , C2 , func(C3)], .. etc This is effectively treating the list of constraints as a bitmask of width len(lst) and counting up, skipping the 0th element (unmodified array). The code below yields lists of constraints permuted as above by treating list indeces as bitmasks from 1 to 2**len(lst) and applying func to all the set bit offsets. ''' for i in range(1, 2**len(lst)): yield [func(item) if (1<<j)&i else item for (j, item) in enumerate(lst)]
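The bitmask counting described above can be seen concretely with two constraints, which yield 2**2 - 1 = 3 flipped permutations (function restated so the sketch runs; the flip function is illustrative):

```python
def perm(lst, func):
    # i runs over every non-zero bitmask of width len(lst);
    # set bits mark which items get flipped by func.
    for i in range(1, 2 ** len(lst)):
        yield [func(item) if (1 << j) & i else item
               for j, item in enumerate(lst)]

flip = lambda c: 'not ' + c
flipped = list(perm(['C1', 'C2'], flip))
# masks 0b01, 0b10, 0b11 flip the first, the second, then both constraints
```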
def start(address=None, port=5000, ssl_crt=None, ssl_key=None): ''' Api to listen for webhooks to send to the reactor. Implement the webhook behavior in an engine. :py:class:`rest_cherrypy Webhook docs <salt.netapi.rest_cherrypy.app.Webhook>` Unlike the rest_cherrypy Webhook, this is only an unauthenticated webhook endpoint. If an authenticated webhook endpoint is needed, use the salt-api webhook which runs on the master and authenticates through eauth. .. note: This is really meant to be used on the minion, because salt-api needs to be run on the master for use with eauth. .. warning:: Unauthenticated endpoint This engine sends webhook calls to the event stream. If the engine is running on a minion with `file_client: local` the event is sent to the minion event stream. Otherwise it is sent to the master event stream. Example Config .. code-block:: yaml engines: - webhook: {} .. code-block:: yaml engines: - webhook: port: 8000 address: 10.128.1.145 ssl_crt: /etc/pki/tls/certs/localhost.crt ssl_key: /etc/pki/tls/certs/localhost.key .. note: For making an unsigned key, use the following command `salt-call --local tls.create_self_signed_cert` ''' if __opts__.get('__role') == 'master': fire_master = salt.utils.event.get_master_event(__opts__, __opts__['sock_dir']).fire_event else: fire_master = None def fire(tag, msg): ''' How to fire the event ''' if fire_master: fire_master(msg, tag) else: __salt__['event.send'](tag, msg) class WebHook(tornado.web.RequestHandler): # pylint: disable=abstract-method def post(self, tag): # pylint: disable=arguments-differ body = self.request.body headers = self.request.headers payload = { 'headers': headers if isinstance(headers, dict) else dict(headers), 'body': salt.utils.stringutils.to_str(body), } fire('salt/engines/hook/' + tag, payload) application = tornado.web.Application([(r"/(.*)", WebHook), ]) ssl_options = None if all([ssl_crt, ssl_key]): ssl_options = {"certfile": ssl_crt, "keyfile": ssl_key} io_loop = tornado.ioloop.IOLoop(make_current=False) io_loop.make_current() http_server = tornado.httpserver.HTTPServer(application, ssl_options=ssl_options) http_server.listen(port, address=address) io_loop.start()
Api to listen for webhooks to send to the reactor. Implement the webhook behavior in an engine. :py:class:`rest_cherrypy Webhook docs <salt.netapi.rest_cherrypy.app.Webhook>` Unlike the rest_cherrypy Webhook, this is only an unauthenticated webhook endpoint. If an authenticated webhook endpoint is needed, use the salt-api webhook which runs on the master and authenticates through eauth. .. note: This is really meant to be used on the minion, because salt-api needs to be run on the master for use with eauth. .. warning:: Unauthenticated endpoint This engine sends webhook calls to the event stream. If the engine is running on a minion with `file_client: local` the event is sent to the minion event stream. Otherwise it is sent to the master event stream. Example Config .. code-block:: yaml engines: - webhook: {} .. code-block:: yaml engines: - webhook: port: 8000 address: 10.128.1.145 ssl_crt: /etc/pki/tls/certs/localhost.crt ssl_key: /etc/pki/tls/certs/localhost.key .. note: For making an unsigned key, use the following command `salt-call --local tls.create_self_signed_cert`
Below is the instruction that describes the task: ### Input: Api to listen for webhooks to send to the reactor. Implement the webhook behavior in an engine. :py:class:`rest_cherrypy Webhook docs <salt.netapi.rest_cherrypy.app.Webhook>` Unlike the rest_cherrypy Webhook, this is only an unauthenticated webhook endpoint. If an authenticated webhook endpoint is needed, use the salt-api webhook which runs on the master and authenticates through eauth. .. note: This is really meant to be used on the minion, because salt-api needs to be run on the master for use with eauth. .. warning:: Unauthenticated endpoint This engine sends webhook calls to the event stream. If the engine is running on a minion with `file_client: local` the event is sent to the minion event stream. Otherwise it is sent to the master event stream. Example Config .. code-block:: yaml engines: - webhook: {} .. code-block:: yaml engines: - webhook: port: 8000 address: 10.128.1.145 ssl_crt: /etc/pki/tls/certs/localhost.crt ssl_key: /etc/pki/tls/certs/localhost.key .. note: For making an unsigned key, use the following command `salt-call --local tls.create_self_signed_cert` ### Response: def start(address=None, port=5000, ssl_crt=None, ssl_key=None): ''' Api to listen for webhooks to send to the reactor. Implement the webhook behavior in an engine. :py:class:`rest_cherrypy Webhook docs <salt.netapi.rest_cherrypy.app.Webhook>` Unlike the rest_cherrypy Webhook, this is only an unauthenticated webhook endpoint. If an authenticated webhook endpoint is needed, use the salt-api webhook which runs on the master and authenticates through eauth. .. note: This is really meant to be used on the minion, because salt-api needs to be run on the master for use with eauth. .. warning:: Unauthenticated endpoint This engine sends webhook calls to the event stream. If the engine is running on a minion with `file_client: local` the event is sent to the minion event stream. Otherwise it is sent to the master event stream. Example Config .. code-block:: yaml engines: - webhook: {} .. code-block:: yaml engines: - webhook: port: 8000 address: 10.128.1.145 ssl_crt: /etc/pki/tls/certs/localhost.crt ssl_key: /etc/pki/tls/certs/localhost.key .. note: For making an unsigned key, use the following command `salt-call --local tls.create_self_signed_cert` ''' if __opts__.get('__role') == 'master': fire_master = salt.utils.event.get_master_event(__opts__, __opts__['sock_dir']).fire_event else: fire_master = None def fire(tag, msg): ''' How to fire the event ''' if fire_master: fire_master(msg, tag) else: __salt__['event.send'](tag, msg) class WebHook(tornado.web.RequestHandler): # pylint: disable=abstract-method def post(self, tag): # pylint: disable=arguments-differ body = self.request.body headers = self.request.headers payload = { 'headers': headers if isinstance(headers, dict) else dict(headers), 'body': salt.utils.stringutils.to_str(body), } fire('salt/engines/hook/' + tag, payload) application = tornado.web.Application([(r"/(.*)", WebHook), ]) ssl_options = None if all([ssl_crt, ssl_key]): ssl_options = {"certfile": ssl_crt, "keyfile": ssl_key} io_loop = tornado.ioloop.IOLoop(make_current=False) io_loop.make_current() http_server = tornado.httpserver.HTTPServer(application, ssl_options=ssl_options) http_server.listen(port, address=address) io_loop.start()
def _escape_token(token, alphabet): """Escape away underscores and OOV characters and append '_'. This allows the token to be expressed as the concatenation of a list of subtokens from the vocabulary. The underscore acts as a sentinel which allows us to invertibly concatenate multiple such lists. Args: token: A unicode string to be escaped. alphabet: A set of all characters in the vocabulary's alphabet. Returns: escaped_token: An escaped unicode string. Raises: ValueError: If the provided token is not unicode. """ if not isinstance(token, six.text_type): raise ValueError("Expected string type for token, got %s" % type(token)) token = token.replace(u"\\", u"\\\\").replace(u"_", u"\\u") ret = [c if c in alphabet and c != u"\n" else r"\%d;" % ord(c) for c in token] return u"".join(ret) + "_"
Escape away underscores and OOV characters and append '_'. This allows the token to be expressed as the concatenation of a list of subtokens from the vocabulary. The underscore acts as a sentinel which allows us to invertibly concatenate multiple such lists. Args: token: A unicode string to be escaped. alphabet: A set of all characters in the vocabulary's alphabet. Returns: escaped_token: An escaped unicode string. Raises: ValueError: If the provided token is not unicode.
Below is the the instruction that describes the task: ### Input: Escape away underscores and OOV characters and append '_'. This allows the token to be expressed as the concatenation of a list of subtokens from the vocabulary. The underscore acts as a sentinel which allows us to invertibly concatenate multiple such lists. Args: token: A unicode string to be escaped. alphabet: A set of all characters in the vocabulary's alphabet. Returns: escaped_token: An escaped unicode string. Raises: ValueError: If the provided token is not unicode. ### Response: def _escape_token(token, alphabet): """Escape away underscores and OOV characters and append '_'. This allows the token to be expressed as the concatenation of a list of subtokens from the vocabulary. The underscore acts as a sentinel which allows us to invertibly concatenate multiple such lists. Args: token: A unicode string to be escaped. alphabet: A set of all characters in the vocabulary's alphabet. Returns: escaped_token: An escaped unicode string. Raises: ValueError: If the provided token is not unicode. """ if not isinstance(token, six.text_type): raise ValueError("Expected string type for token, got %s" % type(token)) token = token.replace(u"\\", u"\\\\").replace(u"_", u"\\u") ret = [c if c in alphabet and c != u"\n" else r"\%d;" % ord(c) for c in token] return u"".join(ret) + "_"
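The escaping rule can be traced on a tiny alphabet. The sketch below restates the logic without the six dependency; note that the backslash and 'u' characters must themselves be in the alphabet, or the escaped underscore would be numerically escaped again.

```python
def escape_token(token, alphabet):
    # Restated from above: backslashes double, "_" becomes "\u", anything
    # outside the alphabet (or a newline) becomes "\<ord>;", and a trailing
    # "_" terminates the token so concatenations stay invertible.
    token = token.replace('\\', '\\\\').replace('_', '\\u')
    return ''.join(c if c in alphabet and c != '\n' else r'\%d;' % ord(c)
                   for c in token) + '_'

# '\\' and 'u' are included so the escaped underscore survives intact.
alphabet = set('ac\\u')
```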
def load_file(self, filename): """Load config from a YAML file.""" filename = os.path.abspath(filename) with open(filename) as f: self.load_dict(yaml.load(f)) self._loaded_files.append(filename)
Load config from a YAML file.
Below is the the instruction that describes the task: ### Input: Load config from a YAML file. ### Response: def load_file(self, filename): """Load config from a YAML file.""" filename = os.path.abspath(filename) with open(filename) as f: self.load_dict(yaml.load(f)) self._loaded_files.append(filename)
def nvmlDeviceGetSerial(handle): r""" /** * Retrieves the globally unique board serial number associated with this device's board. * * For all products with an inforom. * * The serial number is an alphanumeric string that will not exceed 30 characters (including the NULL terminator). * This number matches the serial number tag that is physically attached to the board. See \ref * nvmlConstants::NVML_DEVICE_SERIAL_BUFFER_SIZE. * * @param device The identifier of the target device * @param serial Reference in which to return the board/module serial number * @param length The maximum allowed length of the string returned in \a serial * * @return * - \ref NVML_SUCCESS if \a serial has been set * - \ref NVML_ERROR_UNINITIALIZED if the library has not been successfully initialized * - \ref NVML_ERROR_INVALID_ARGUMENT if \a device is invalid, or \a serial is NULL * - \ref NVML_ERROR_INSUFFICIENT_SIZE if \a length is too small * - \ref NVML_ERROR_NOT_SUPPORTED if the device does not support this feature * - \ref NVML_ERROR_GPU_IS_LOST if the target GPU has fallen off the bus or is otherwise inaccessible * - \ref NVML_ERROR_UNKNOWN on any unexpected error */ nvmlReturn_t DECLDIR nvmlDeviceGetSerial """ c_serial = create_string_buffer(NVML_DEVICE_SERIAL_BUFFER_SIZE) fn = _nvmlGetFunctionPointer("nvmlDeviceGetSerial") ret = fn(handle, c_serial, c_uint(NVML_DEVICE_SERIAL_BUFFER_SIZE)) _nvmlCheckReturn(ret) return bytes_to_str(c_serial.value)
r""" /** * Retrieves the globally unique board serial number associated with this device's board. * * For all products with an inforom. * * The serial number is an alphanumeric string that will not exceed 30 characters (including the NULL terminator). * This number matches the serial number tag that is physically attached to the board. See \ref * nvmlConstants::NVML_DEVICE_SERIAL_BUFFER_SIZE. * * @param device The identifier of the target device * @param serial Reference in which to return the board/module serial number * @param length The maximum allowed length of the string returned in \a serial * * @return * - \ref NVML_SUCCESS if \a serial has been set * - \ref NVML_ERROR_UNINITIALIZED if the library has not been successfully initialized * - \ref NVML_ERROR_INVALID_ARGUMENT if \a device is invalid, or \a serial is NULL * - \ref NVML_ERROR_INSUFFICIENT_SIZE if \a length is too small * - \ref NVML_ERROR_NOT_SUPPORTED if the device does not support this feature * - \ref NVML_ERROR_GPU_IS_LOST if the target GPU has fallen off the bus or is otherwise inaccessible * - \ref NVML_ERROR_UNKNOWN on any unexpected error */ nvmlReturn_t DECLDIR nvmlDeviceGetSerial
Below is the instruction that describes the task: ### Input: r""" /** * Retrieves the globally unique board serial number associated with this device's board. * * For all products with an inforom. * * The serial number is an alphanumeric string that will not exceed 30 characters (including the NULL terminator). * This number matches the serial number tag that is physically attached to the board. See \ref * nvmlConstants::NVML_DEVICE_SERIAL_BUFFER_SIZE. * * @param device The identifier of the target device * @param serial Reference in which to return the board/module serial number * @param length The maximum allowed length of the string returned in \a serial * * @return * - \ref NVML_SUCCESS if \a serial has been set * - \ref NVML_ERROR_UNINITIALIZED if the library has not been successfully initialized * - \ref NVML_ERROR_INVALID_ARGUMENT if \a device is invalid, or \a serial is NULL * - \ref NVML_ERROR_INSUFFICIENT_SIZE if \a length is too small * - \ref NVML_ERROR_NOT_SUPPORTED if the device does not support this feature * - \ref NVML_ERROR_GPU_IS_LOST if the target GPU has fallen off the bus or is otherwise inaccessible * - \ref NVML_ERROR_UNKNOWN on any unexpected error */ nvmlReturn_t DECLDIR nvmlDeviceGetSerial ### Response: def nvmlDeviceGetSerial(handle): r""" /** * Retrieves the globally unique board serial number associated with this device's board. * * For all products with an inforom. * * The serial number is an alphanumeric string that will not exceed 30 characters (including the NULL terminator). * This number matches the serial number tag that is physically attached to the board. See \ref * nvmlConstants::NVML_DEVICE_SERIAL_BUFFER_SIZE. * * @param device The identifier of the target device * @param serial Reference in which to return the board/module serial number * @param length The maximum allowed length of the string returned in \a serial * * @return * - \ref NVML_SUCCESS if \a serial has been set * - \ref NVML_ERROR_UNINITIALIZED if the library has not been successfully initialized * - \ref NVML_ERROR_INVALID_ARGUMENT if \a device is invalid, or \a serial is NULL * - \ref NVML_ERROR_INSUFFICIENT_SIZE if \a length is too small * - \ref NVML_ERROR_NOT_SUPPORTED if the device does not support this feature * - \ref NVML_ERROR_GPU_IS_LOST if the target GPU has fallen off the bus or is otherwise inaccessible * - \ref NVML_ERROR_UNKNOWN on any unexpected error */ nvmlReturn_t DECLDIR nvmlDeviceGetSerial """ c_serial = create_string_buffer(NVML_DEVICE_SERIAL_BUFFER_SIZE) fn = _nvmlGetFunctionPointer("nvmlDeviceGetSerial") ret = fn(handle, c_serial, c_uint(NVML_DEVICE_SERIAL_BUFFER_SIZE)) _nvmlCheckReturn(ret) return bytes_to_str(c_serial.value)
def get_metric_parsers(metric_packages=tuple(), include_defaults=True): """Gets all of the metric parsers. Args: metric_packages - Defaults to no extra packages. An iterable of metric containing packages. A metric inherits DiffParserBase and does not have __metric__ = False A metric package must be imported using import a.b.c include_defaults - Whether to include the generic metric parsers """ metric_parsers = set() if include_defaults: import git_code_debt.metrics metric_parsers.update(discover(git_code_debt.metrics, is_metric_cls)) for metric_package in metric_packages: metric_parsers.update(discover(metric_package, is_metric_cls)) return metric_parsers
Gets all of the metric parsers. Args: metric_packages - Defaults to no extra packages. An iterable of metric containing packages. A metric inherits DiffParserBase and does not have __metric__ = False A metric package must be imported using import a.b.c include_defaults - Whether to include the generic metric parsers
Below is the instruction that describes the task: ### Input: Gets all of the metric parsers. Args: metric_packages - Defaults to no extra packages. An iterable of metric containing packages. A metric inherits DiffParserBase and does not have __metric__ = False A metric package must be imported using import a.b.c include_defaults - Whether to include the generic metric parsers ### Response: def get_metric_parsers(metric_packages=tuple(), include_defaults=True): """Gets all of the metric parsers. Args: metric_packages - Defaults to no extra packages. An iterable of metric containing packages. A metric inherits DiffParserBase and does not have __metric__ = False A metric package must be imported using import a.b.c include_defaults - Whether to include the generic metric parsers """ metric_parsers = set() if include_defaults: import git_code_debt.metrics metric_parsers.update(discover(git_code_debt.metrics, is_metric_cls)) for metric_package in metric_packages: metric_parsers.update(discover(metric_package, is_metric_cls)) return metric_parsers
def get_vnetwork_portgroups_input_last_rcvd_instance(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") get_vnetwork_portgroups = ET.Element("get_vnetwork_portgroups") config = get_vnetwork_portgroups input = ET.SubElement(get_vnetwork_portgroups, "input") last_rcvd_instance = ET.SubElement(input, "last-rcvd-instance") last_rcvd_instance.text = kwargs.pop('last_rcvd_instance') callback = kwargs.pop('callback', self._callback) return callback(config)
Auto Generated Code
Below is the instruction that describes the task: ### Input: Auto Generated Code ### Response: def get_vnetwork_portgroups_input_last_rcvd_instance(self, **kwargs): """Auto Generated Code """ config = ET.Element("config") get_vnetwork_portgroups = ET.Element("get_vnetwork_portgroups") config = get_vnetwork_portgroups input = ET.SubElement(get_vnetwork_portgroups, "input") last_rcvd_instance = ET.SubElement(input, "last-rcvd-instance") last_rcvd_instance.text = kwargs.pop('last_rcvd_instance') callback = kwargs.pop('callback', self._callback) return callback(config)
def callBigDlFunc(bigdl_type, name, *args): """ Call API in PythonBigDL """ gateway = _get_gateway() error = Exception("Cannot find function: %s" % name) for jinvoker in JavaCreator.instance(bigdl_type, gateway).value: # hasattr(jinvoker, name) always return true here, # so you need to invoke the method to check if it exist or not try: api = getattr(jinvoker, name) result = callJavaFunc(api, *args) except Exception as e: error = e if "does not exist" not in str(e): raise e else: return result raise error
Call API in PythonBigDL
Below is the instruction that describes the task: ### Input: Call API in PythonBigDL ### Response: def callBigDlFunc(bigdl_type, name, *args): """ Call API in PythonBigDL """ gateway = _get_gateway() error = Exception("Cannot find function: %s" % name) for jinvoker in JavaCreator.instance(bigdl_type, gateway).value: # hasattr(jinvoker, name) always return true here, # so you need to invoke the method to check if it exist or not try: api = getattr(jinvoker, name) result = callJavaFunc(api, *args) except Exception as e: error = e if "does not exist" not in str(e): raise e else: return result raise error
def imshow(*imgs, **options): """ Plots multiple images using matplotlib by dynamically finding the required number of rows and cols. :param imgs: Images as any number of arguments :param options: Dict of options - cmap: Color map for gray scale images - vmin: Minimum value to be used in color map - vmax: Maximum value to be used in color map """ n = len(imgs) nrows = int(math.ceil(math.sqrt(n))) ncols = int(math.ceil(n / nrows)) for row in range(nrows): for col in range(ncols): i = row * ncols + col if i >= n: break plt.subplot(nrows, ncols, i+1) show_img(imgs[i], options) plt.show()
Plots multiple images using matplotlib by dynamically finding the required number of rows and cols. :param imgs: Images as any number of arguments :param options: Dict of options - cmap: Color map for gray scale images - vmin: Minimum value to be used in color map - vmax: Maximum value to be used in color map
Below is the instruction that describes the task: ### Input: Plots multiple images using matplotlib by dynamically finding the required number of rows and cols. :param imgs: Images as any number of arguments :param options: Dict of options - cmap: Color map for gray scale images - vmin: Minimum value to be used in color map - vmax: Maximum value to be used in color map ### Response: def imshow(*imgs, **options): """ Plots multiple images using matplotlib by dynamically finding the required number of rows and cols. :param imgs: Images as any number of arguments :param options: Dict of options - cmap: Color map for gray scale images - vmin: Minimum value to be used in color map - vmax: Maximum value to be used in color map """ n = len(imgs) nrows = int(math.ceil(math.sqrt(n))) ncols = int(math.ceil(n / nrows)) for row in range(nrows): for col in range(ncols): i = row * ncols + col if i >= n: break plt.subplot(nrows, ncols, i+1) show_img(imgs[i], options) plt.show()
def _api_args_item(self, item): """Glances API RESTful implementation. Return the JSON representation of the Glances command line arguments item HTTP/200 if OK HTTP/400 if item is not found HTTP/404 if others error """ response.content_type = 'application/json; charset=utf-8' if item not in self.args: abort(400, "Unknown argument item %s" % item) try: # Get the JSON value of the args' dict # Use vars to convert namespace to dict # Source: https://docs.python.org/%s/library/functions.html#vars args_json = json.dumps(vars(self.args)[item]) except Exception as e: abort(404, "Cannot get args item (%s)" % str(e)) return args_json
Glances API RESTful implementation. Return the JSON representation of the Glances command line arguments item HTTP/200 if OK HTTP/400 if item is not found HTTP/404 if others error
Below is the instruction that describes the task: ### Input: Glances API RESTful implementation. Return the JSON representation of the Glances command line arguments item HTTP/200 if OK HTTP/400 if item is not found HTTP/404 if others error ### Response: def _api_args_item(self, item): """Glances API RESTful implementation. Return the JSON representation of the Glances command line arguments item HTTP/200 if OK HTTP/400 if item is not found HTTP/404 if others error """ response.content_type = 'application/json; charset=utf-8' if item not in self.args: abort(400, "Unknown argument item %s" % item) try: # Get the JSON value of the args' dict # Use vars to convert namespace to dict # Source: https://docs.python.org/%s/library/functions.html#vars args_json = json.dumps(vars(self.args)[item]) except Exception as e: abort(404, "Cannot get args item (%s)" % str(e)) return args_json
def _extract_axes_for_slice(self, axes): """ Return the slice dictionary for these axes. """ return {self._AXIS_SLICEMAP[i]: a for i, a in zip(self._AXIS_ORDERS[self._AXIS_LEN - len(axes):], axes)}
Return the slice dictionary for these axes.
Below is the instruction that describes the task: ### Input: Return the slice dictionary for these axes. ### Response: def _extract_axes_for_slice(self, axes): """ Return the slice dictionary for these axes. """ return {self._AXIS_SLICEMAP[i]: a for i, a in zip(self._AXIS_ORDERS[self._AXIS_LEN - len(axes):], axes)}
def set(self, id, translation, domain='messages'): """ Sets a message translation. """ assert isinstance(id, (str, unicode)) assert isinstance(translation, (str, unicode)) assert isinstance(domain, (str, unicode)) self.add({id: translation}, domain)
Sets a message translation.
Below is the instruction that describes the task: ### Input: Sets a message translation. ### Response: def set(self, id, translation, domain='messages'): """ Sets a message translation. """ assert isinstance(id, (str, unicode)) assert isinstance(translation, (str, unicode)) assert isinstance(domain, (str, unicode)) self.add({id: translation}, domain)
def lookup_id_action(self, text, loc, var): """Code executed after recognising an identificator in expression""" exshared.setpos(loc, text) if DEBUG > 0: print("EXP_VAR:",var) if DEBUG == 2: self.symtab.display() if DEBUG > 2: return var_index = self.symtab.lookup_symbol(var.name, [SharedData.KINDS.GLOBAL_VAR, SharedData.KINDS.PARAMETER, SharedData.KINDS.LOCAL_VAR]) if var_index == None: raise SemanticException("'%s' undefined" % var.name) return var_index
Code executed after recognising an identificator in expression
Below is the instruction that describes the task: ### Input: Code executed after recognising an identificator in expression ### Response: def lookup_id_action(self, text, loc, var): """Code executed after recognising an identificator in expression""" exshared.setpos(loc, text) if DEBUG > 0: print("EXP_VAR:",var) if DEBUG == 2: self.symtab.display() if DEBUG > 2: return var_index = self.symtab.lookup_symbol(var.name, [SharedData.KINDS.GLOBAL_VAR, SharedData.KINDS.PARAMETER, SharedData.KINDS.LOCAL_VAR]) if var_index == None: raise SemanticException("'%s' undefined" % var.name) return var_index
def height_to_geopotential(height): r"""Compute geopotential for a given height. Parameters ---------- height : `pint.Quantity` Height above sea level (array_like) Returns ------- `pint.Quantity` The corresponding geopotential value(s) Examples -------- >>> from metpy.constants import g, G, me, Re >>> import metpy.calc >>> from metpy.units import units >>> height = np.linspace(0,10000, num = 11) * units.m >>> geopot = metpy.calc.height_to_geopotential(height) >>> geopot <Quantity([ 0. 9817.46806283 19631.85526579 29443.16305888 39251.39289118 49056.54621087 58858.62446525 68657.62910064 78453.56156253 88246.42329545 98036.21574306], 'meter ** 2 / second ** 2')> Notes ----- Derived from definition of geopotential in [Hobbs2006]_ pg.14 Eq.1.8. """ # Calculate geopotential geopot = mpconsts.G * mpconsts.me * ((1 / mpconsts.Re) - (1 / (mpconsts.Re + height))) return geopot
r"""Compute geopotential for a given height. Parameters ---------- height : `pint.Quantity` Height above sea level (array_like) Returns ------- `pint.Quantity` The corresponding geopotential value(s) Examples -------- >>> from metpy.constants import g, G, me, Re >>> import metpy.calc >>> from metpy.units import units >>> height = np.linspace(0,10000, num = 11) * units.m >>> geopot = metpy.calc.height_to_geopotential(height) >>> geopot <Quantity([ 0. 9817.46806283 19631.85526579 29443.16305888 39251.39289118 49056.54621087 58858.62446525 68657.62910064 78453.56156253 88246.42329545 98036.21574306], 'meter ** 2 / second ** 2')> Notes ----- Derived from definition of geopotential in [Hobbs2006]_ pg.14 Eq.1.8.
Below is the the instruction that describes the task: ### Input: r"""Compute geopotential for a given height. Parameters ---------- height : `pint.Quantity` Height above sea level (array_like) Returns ------- `pint.Quantity` The corresponding geopotential value(s) Examples -------- >>> from metpy.constants import g, G, me, Re >>> import metpy.calc >>> from metpy.units import units >>> height = np.linspace(0,10000, num = 11) * units.m >>> geopot = metpy.calc.height_to_geopotential(height) >>> geopot <Quantity([ 0. 9817.46806283 19631.85526579 29443.16305888 39251.39289118 49056.54621087 58858.62446525 68657.62910064 78453.56156253 88246.42329545 98036.21574306], 'meter ** 2 / second ** 2')> Notes ----- Derived from definition of geopotential in [Hobbs2006]_ pg.14 Eq.1.8. ### Response: def height_to_geopotential(height): r"""Compute geopotential for a given height. Parameters ---------- height : `pint.Quantity` Height above sea level (array_like) Returns ------- `pint.Quantity` The corresponding geopotential value(s) Examples -------- >>> from metpy.constants import g, G, me, Re >>> import metpy.calc >>> from metpy.units import units >>> height = np.linspace(0,10000, num = 11) * units.m >>> geopot = metpy.calc.height_to_geopotential(height) >>> geopot <Quantity([ 0. 9817.46806283 19631.85526579 29443.16305888 39251.39289118 49056.54621087 58858.62446525 68657.62910064 78453.56156253 88246.42329545 98036.21574306], 'meter ** 2 / second ** 2')> Notes ----- Derived from definition of geopotential in [Hobbs2006]_ pg.14 Eq.1.8. """ # Calculate geopotential geopot = mpconsts.G * mpconsts.me * ((1 / mpconsts.Re) - (1 / (mpconsts.Re + height))) return geopot
def _update_enabled(self, name, enabled_value): ''' Update whether an individual beacon is enabled ''' if isinstance(self.opts['beacons'][name], dict): # Backwards compatibility self.opts['beacons'][name]['enabled'] = enabled_value else: enabled_index = self._get_index(self.opts['beacons'][name], 'enabled') if enabled_index >= 0: self.opts['beacons'][name][enabled_index]['enabled'] = enabled_value else: self.opts['beacons'][name].append({'enabled': enabled_value})
Update whether an individual beacon is enabled
Below is the instruction that describes the task: ### Input: Update whether an individual beacon is enabled ### Response: def _update_enabled(self, name, enabled_value): ''' Update whether an individual beacon is enabled ''' if isinstance(self.opts['beacons'][name], dict): # Backwards compatibility self.opts['beacons'][name]['enabled'] = enabled_value else: enabled_index = self._get_index(self.opts['beacons'][name], 'enabled') if enabled_index >= 0: self.opts['beacons'][name][enabled_index]['enabled'] = enabled_value else: self.opts['beacons'][name].append({'enabled': enabled_value})
def reset_parameter(**kwargs): """Create a callback that resets the parameter after the first iteration. Note ---- The initial parameter will still take in-effect on first iteration. Parameters ---------- **kwargs : value should be list or function List of parameters for each boosting round or a customized function that calculates the parameter in terms of current number of round (e.g. yields learning rate decay). If list lst, parameter = lst[current_round]. If function func, parameter = func(current_round). Returns ------- callback : function The callback that resets the parameter after the first iteration. """ def _callback(env): new_parameters = {} for key, value in kwargs.items(): if key in ['num_class', 'num_classes', 'boosting', 'boost', 'boosting_type', 'metric', 'metrics', 'metric_types']: raise RuntimeError("cannot reset {} during training".format(repr(key))) if isinstance(value, list): if len(value) != env.end_iteration - env.begin_iteration: raise ValueError("Length of list {} has to equal to 'num_boost_round'." .format(repr(key))) new_param = value[env.iteration - env.begin_iteration] else: new_param = value(env.iteration - env.begin_iteration) if new_param != env.params.get(key, None): new_parameters[key] = new_param if new_parameters: env.model.reset_parameter(new_parameters) env.params.update(new_parameters) _callback.before_iteration = True _callback.order = 10 return _callback
Create a callback that resets the parameter after the first iteration. Note ---- The initial parameter will still take in-effect on first iteration. Parameters ---------- **kwargs : value should be list or function List of parameters for each boosting round or a customized function that calculates the parameter in terms of current number of round (e.g. yields learning rate decay). If list lst, parameter = lst[current_round]. If function func, parameter = func(current_round). Returns ------- callback : function The callback that resets the parameter after the first iteration.
Below is the the instruction that describes the task: ### Input: Create a callback that resets the parameter after the first iteration. Note ---- The initial parameter will still take in-effect on first iteration. Parameters ---------- **kwargs : value should be list or function List of parameters for each boosting round or a customized function that calculates the parameter in terms of current number of round (e.g. yields learning rate decay). If list lst, parameter = lst[current_round]. If function func, parameter = func(current_round). Returns ------- callback : function The callback that resets the parameter after the first iteration. ### Response: def reset_parameter(**kwargs): """Create a callback that resets the parameter after the first iteration. Note ---- The initial parameter will still take in-effect on first iteration. Parameters ---------- **kwargs : value should be list or function List of parameters for each boosting round or a customized function that calculates the parameter in terms of current number of round (e.g. yields learning rate decay). If list lst, parameter = lst[current_round]. If function func, parameter = func(current_round). Returns ------- callback : function The callback that resets the parameter after the first iteration. """ def _callback(env): new_parameters = {} for key, value in kwargs.items(): if key in ['num_class', 'num_classes', 'boosting', 'boost', 'boosting_type', 'metric', 'metrics', 'metric_types']: raise RuntimeError("cannot reset {} during training".format(repr(key))) if isinstance(value, list): if len(value) != env.end_iteration - env.begin_iteration: raise ValueError("Length of list {} has to equal to 'num_boost_round'." 
.format(repr(key))) new_param = value[env.iteration - env.begin_iteration] else: new_param = value(env.iteration - env.begin_iteration) if new_param != env.params.get(key, None): new_parameters[key] = new_param if new_parameters: env.model.reset_parameter(new_parameters) env.params.update(new_parameters) _callback.before_iteration = True _callback.order = 10 return _callback
def train_on_batch(self, data: List[Iterable], labels: Iterable[list]) -> None: """Trains model on a single batch Args: data: a batch of word sequences labels: a batch of correct tag sequences Returns: the trained model """ X, Y = self._transform_batch(data, labels) self.model_.train_on_batch(X, Y)
Trains model on a single batch Args: data: a batch of word sequences labels: a batch of correct tag sequences Returns: the trained model
Below is the instruction that describes the task: ### Input: Trains model on a single batch Args: data: a batch of word sequences labels: a batch of correct tag sequences Returns: the trained model ### Response: def train_on_batch(self, data: List[Iterable], labels: Iterable[list]) -> None: """Trains model on a single batch Args: data: a batch of word sequences labels: a batch of correct tag sequences Returns: the trained model """ X, Y = self._transform_batch(data, labels) self.model_.train_on_batch(X, Y)
def column(self, name): """ Returns the index of the column at the given name. :param name | <str> :return <int> (-1 if not found) """ columns = self.columns() if name in columns: return columns.index(name) else: check = projex.text.underscore(name) for i, column in enumerate(columns): if projex.text.underscore(column) == check: return i return -1
Returns the index of the column at the given name. :param name | <str> :return <int> (-1 if not found)
Below is the instruction that describes the task: ### Input: Returns the index of the column at the given name. :param name | <str> :return <int> (-1 if not found) ### Response: def column(self, name): """ Returns the index of the column at the given name. :param name | <str> :return <int> (-1 if not found) """ columns = self.columns() if name in columns: return columns.index(name) else: check = projex.text.underscore(name) for i, column in enumerate(columns): if projex.text.underscore(column) == check: return i return -1
def get_rtr_by_name(self, rtr_name): """Search a router by its name. """ upd_rtr_list = [] try: rtr_list = self.neutronclient.list_routers() for rtr in rtr_list.get('routers'): if rtr_name == rtr['name']: upd_rtr_list.append(rtr) except Exception as exc: LOG.error("Failed to get router by name %(name)s, " "Exc %(exc)s", {'name': rtr_name, 'exc': str(exc)}) return upd_rtr_list
Search a router by its name.
Below is the instruction that describes the task: ### Input: Search a router by its name. ### Response: def get_rtr_by_name(self, rtr_name): """Search a router by its name. """ upd_rtr_list = [] try: rtr_list = self.neutronclient.list_routers() for rtr in rtr_list.get('routers'): if rtr_name == rtr['name']: upd_rtr_list.append(rtr) except Exception as exc: LOG.error("Failed to get router by name %(name)s, " "Exc %(exc)s", {'name': rtr_name, 'exc': str(exc)}) return upd_rtr_list
def _process_mrk_marker_view(self, limit): """ This is the definition of markers (as in genes, but other genomic loci types as well). It looks up the identifiers in the hashmap This includes their labels, specific class, and identifiers TODO should we use the mrk_mouse_view instead? Triples: <marker_id> a owl:Class OR owl:NamedIndividual GENO:marker_type rdf:label <symbol> RO:in_taxon <NCBITaxon_id> :param limit: :return: """ if self.test_mode: graph = self.testgraph else: graph = self.graph model = Model(graph) geno = Genotype(graph) line_counter = 0 raw = '/'.join((self.rawdir, 'mrk_marker_view')) LOG.info("getting markers and assigning types") with open(raw, 'r') as f: f.readline() # read the header row; skip for line in f: line = line.rstrip("\n") line_counter += 1 (marker_key, organism_key, marker_status_key, symbol, name, latin_name, marker_type) = line.split('\t') if self.test_mode is True: if int(marker_key) not in self.test_keys.get('marker'): continue # use only non-withdrawn markers if marker_status_key != '2': marker_id = self.idhash['marker'].get(marker_key) # only pull info for mouse genes for now # other species should come from other dbs if organism_key != '1': continue if marker_id is None: LOG.error( "can't find %s %s in the id hash", marker_key, symbol) mapped_marker_type = self.resolve(marker_type.strip()) # if it's unlocated, or is not a gene, # then don't add it as a class because # it's not added as a gene. 
# everything except for genes are modeled as individuals if mapped_marker_type in [ self.globaltt['gene'], self.globaltt['pseudogene']]: model.addClassToGraph( marker_id, symbol, mapped_marker_type, name) model.addSynonym( marker_id, name, self.globaltt['has_exact_synonym']) self.markers['classes'].append(marker_id) else: model.addIndividualToGraph( marker_id, symbol, mapped_marker_type, name) model.addSynonym( marker_id, name, self.globaltt['has_exact_synonym']) self.markers['indiv'].append(marker_id) self.label_hash[marker_id] = symbol # add the taxon taxon_id = self.resolve(latin_name) # not always proper binomial geno.addTaxon(taxon_id, marker_id) # make MGI the leader for mouse genes. if taxon_id == self.globaltt['Mus musculus']: model.makeLeader(marker_id) if not self.test_mode and limit is not None and line_counter > limit: break return
This is the definition of markers (as in genes, but other genomic loci types as well). It looks up the identifiers in the hashmap This includes their labels, specific class, and identifiers TODO should we use the mrk_mouse_view instead? Triples: <marker_id> a owl:Class OR owl:NamedIndividual GENO:marker_type rdf:label <symbol> RO:in_taxon <NCBITaxon_id> :param limit: :return:
Below is the the instruction that describes the task: ### Input: This is the definition of markers (as in genes, but other genomic loci types as well). It looks up the identifiers in the hashmap This includes their labels, specific class, and identifiers TODO should we use the mrk_mouse_view instead? Triples: <marker_id> a owl:Class OR owl:NamedIndividual GENO:marker_type rdf:label <symbol> RO:in_taxon <NCBITaxon_id> :param limit: :return: ### Response: def _process_mrk_marker_view(self, limit): """ This is the definition of markers (as in genes, but other genomic loci types as well). It looks up the identifiers in the hashmap This includes their labels, specific class, and identifiers TODO should we use the mrk_mouse_view instead? Triples: <marker_id> a owl:Class OR owl:NamedIndividual GENO:marker_type rdf:label <symbol> RO:in_taxon <NCBITaxon_id> :param limit: :return: """ if self.test_mode: graph = self.testgraph else: graph = self.graph model = Model(graph) geno = Genotype(graph) line_counter = 0 raw = '/'.join((self.rawdir, 'mrk_marker_view')) LOG.info("getting markers and assigning types") with open(raw, 'r') as f: f.readline() # read the header row; skip for line in f: line = line.rstrip("\n") line_counter += 1 (marker_key, organism_key, marker_status_key, symbol, name, latin_name, marker_type) = line.split('\t') if self.test_mode is True: if int(marker_key) not in self.test_keys.get('marker'): continue # use only non-withdrawn markers if marker_status_key != '2': marker_id = self.idhash['marker'].get(marker_key) # only pull info for mouse genes for now # other species should come from other dbs if organism_key != '1': continue if marker_id is None: LOG.error( "can't find %s %s in the id hash", marker_key, symbol) mapped_marker_type = self.resolve(marker_type.strip()) # if it's unlocated, or is not a gene, # then don't add it as a class because # it's not added as a gene. 
# everything except for genes are modeled as individuals if mapped_marker_type in [ self.globaltt['gene'], self.globaltt['pseudogene']]: model.addClassToGraph( marker_id, symbol, mapped_marker_type, name) model.addSynonym( marker_id, name, self.globaltt['has_exact_synonym']) self.markers['classes'].append(marker_id) else: model.addIndividualToGraph( marker_id, symbol, mapped_marker_type, name) model.addSynonym( marker_id, name, self.globaltt['has_exact_synonym']) self.markers['indiv'].append(marker_id) self.label_hash[marker_id] = symbol # add the taxon taxon_id = self.resolve(latin_name) # not always proper binomial geno.addTaxon(taxon_id, marker_id) # make MGI the leader for mouse genes. if taxon_id == self.globaltt['Mus musculus']: model.makeLeader(marker_id) if not self.test_mode and limit is not None and line_counter > limit: break return
def parse_get(prs, conn): """Retrieve records. Arguments: prs: parser object of argparse conn: dictionary of connection information """ prs_get = prs.add_parser( 'get', help='retrieve all zones or records with a specific zone') prs_get.add_argument('--domain', action='store', help='specify domain FQDN') conn_options(prs_get, conn) set_option(prs_get, 'search') prs_get.set_defaults(func=get)
Retrieve records. Arguments: prs: parser object of argparse conn: dictionary of connection information
Below is the instruction that describes the task: ### Input: Retrieve records. Arguments: prs: parser object of argparse conn: dictionary of connection information ### Response: def parse_get(prs, conn): """Retrieve records. Arguments: prs: parser object of argparse conn: dictionary of connection information """ prs_get = prs.add_parser( 'get', help='retrieve all zones or records with a specific zone') prs_get.add_argument('--domain', action='store', help='specify domain FQDN') conn_options(prs_get, conn) set_option(prs_get, 'search') prs_get.set_defaults(func=get)
def solutions_as_2d_trajectories(self, x_axis, y_axis): """ Returns the :attr:`InferenceResult.solutions` as a plottable 2d trajectory. :param x_axis: the variable to be on the x axis of projection :param y_axis: the variable to be on the y axis of projection :return: a tuple x, y specifying lists of x and y coordinates of projection """ if not self.solutions: raise Exception('No intermediate solutions returned. ' 'Re-run inference with return_intermediate_solutions=True') index_x = self.parameter_index(x_axis) index_y = self.parameter_index(y_axis) x, y = [], [] for parameters, initial_conditions in self.solutions: all_values = parameters + initial_conditions x.append(all_values[index_x]) y.append(all_values[index_y]) return x, y
Returns the :attr:`InferenceResult.solutions` as a plottable 2d trajectory. :param x_axis: the variable to be on the x axis of projection :param y_axis: the variable to be on the y axis of projection :return: a tuple x, y specifying lists of x and y coordinates of projection
Below is the instruction that describes the task: ### Input: Returns the :attr:`InferenceResult.solutions` as a plottable 2d trajectory. :param x_axis: the variable to be on the x axis of projection :param y_axis: the variable to be on the y axis of projection :return: a tuple x, y specifying lists of x and y coordinates of projection ### Response: def solutions_as_2d_trajectories(self, x_axis, y_axis): """ Returns the :attr:`InferenceResult.solutions` as a plottable 2d trajectory. :param x_axis: the variable to be on the x axis of projection :param y_axis: the variable to be on the y axis of projection :return: a tuple x, y specifying lists of x and y coordinates of projection """ if not self.solutions: raise Exception('No intermediate solutions returned. ' 'Re-run inference with return_intermediate_solutions=True') index_x = self.parameter_index(x_axis) index_y = self.parameter_index(y_axis) x, y = [], [] for parameters, initial_conditions in self.solutions: all_values = parameters + initial_conditions x.append(all_values[index_x]) y.append(all_values[index_y]) return x, y
def gp_norm(infile): """indentify normalization region""" inDir, outDir = getWorkDirs() data, titles = [], [] for eidx,energy in enumerate(['19', '27', '39', '62']): file_url = os.path.realpath(os.path.join( inDir, 'rawdata', energy, 'pt-integrated', infile+'.dat' )) data_import = np.loadtxt(open(file_url, 'rb')) data_import[:,1] += eidx * 0.2 data_import[:,4] = data_import[:,3] data_import[:,(2,3)] = 0 data.append(data_import) titles.append(' '.join([getEnergy4Key(energy), 'GeV'])) nData = len(data) lines = dict( ('x={}'.format(1+i*0.2), 'lc {} lt 2 lw 4'.format(default_colors[-2])) for i in range(nData) ) lines.update(dict( ('x={}'.format(1+i*0.2+0.02), 'lc {} lt 3 lw 4'.format(default_colors[-5])) for i in range(nData) )) lines.update(dict( ('x={}'.format(1+i*0.2-0.02), 'lc {} lt 3 lw 4'.format(default_colors[-5])) for i in range(nData) )) lines.update({'y=0.9': 'lc {} lt 1 lw 4'.format(default_colors[-2])}) charges = '++' if infile == 'rpp' else '--' make_plot( name = '%s/norm_range_%s' % (outDir,infile), xr = [0,2], yr = [0.9,1.7], data = data, properties = [ 'lt 1 lw 3 lc %s pt 1' % (default_colors[i]) # (i/2)%4 for i in range(nData) ], titles = titles, size = '8in,8in', lmargin = 0.05, rmargin = 0.99, tmargin = 0.93, bmargin = 0.14, xlabel = 'dielectron invariant mass, M_{ee} (GeV/c^{2})', lines = lines, key = [ 'maxrows 1', 'nobox', 'samplen 0.1', 'width -1', 'at graph 1,1.1' ], labels = { 'SE_{%s} / ME@_{%s}^N' % (charges, charges): (0.3, 1.3) }, gpcalls = [ 'ytics (1,"1" 1.2, "1" 1.4, "1" 1.6)', 'boxwidth 0.002', ], )
identify normalization region
Below is the instruction that describes the task: ### Input: identify normalization region ### Response: def gp_norm(infile): """identify normalization region""" inDir, outDir = getWorkDirs() data, titles = [], [] for eidx,energy in enumerate(['19', '27', '39', '62']): file_url = os.path.realpath(os.path.join( inDir, 'rawdata', energy, 'pt-integrated', infile+'.dat' )) data_import = np.loadtxt(open(file_url, 'rb')) data_import[:,1] += eidx * 0.2 data_import[:,4] = data_import[:,3] data_import[:,(2,3)] = 0 data.append(data_import) titles.append(' '.join([getEnergy4Key(energy), 'GeV'])) nData = len(data) lines = dict( ('x={}'.format(1+i*0.2), 'lc {} lt 2 lw 4'.format(default_colors[-2])) for i in range(nData) ) lines.update(dict( ('x={}'.format(1+i*0.2+0.02), 'lc {} lt 3 lw 4'.format(default_colors[-5])) for i in range(nData) )) lines.update(dict( ('x={}'.format(1+i*0.2-0.02), 'lc {} lt 3 lw 4'.format(default_colors[-5])) for i in range(nData) )) lines.update({'y=0.9': 'lc {} lt 1 lw 4'.format(default_colors[-2])}) charges = '++' if infile == 'rpp' else '--' make_plot( name = '%s/norm_range_%s' % (outDir,infile), xr = [0,2], yr = [0.9,1.7], data = data, properties = [ 'lt 1 lw 3 lc %s pt 1' % (default_colors[i]) # (i/2)%4 for i in range(nData) ], titles = titles, size = '8in,8in', lmargin = 0.05, rmargin = 0.99, tmargin = 0.93, bmargin = 0.14, xlabel = 'dielectron invariant mass, M_{ee} (GeV/c^{2})', lines = lines, key = [ 'maxrows 1', 'nobox', 'samplen 0.1', 'width -1', 'at graph 1,1.1' ], labels = { 'SE_{%s} / ME@_{%s}^N' % (charges, charges): (0.3, 1.3) }, gpcalls = [ 'ytics (1,"1" 1.2, "1" 1.4, "1" 1.6)', 'boxwidth 0.002', ], )
def remove_field(self, field_name): """Remove the field with the received field name from model.""" field = self._fields.pop(field_name, None) if field is not None and field.default is not None: if six.callable(field.default): self._default_callables.pop(field.key, None) else: self._defaults.pop(field.key, None)
Remove the field with the received field name from model.
Below is the instruction that describes the task: ### Input: Remove the field with the received field name from model. ### Response: def remove_field(self, field_name): """Remove the field with the received field name from model.""" field = self._fields.pop(field_name, None) if field is not None and field.default is not None: if six.callable(field.default): self._default_callables.pop(field.key, None) else: self._defaults.pop(field.key, None)
def mean(l, ignore_nan=False, empty=0): """ nanmean compatible with generators. """ l = iter(l) if ignore_nan: l = ifilterfalse(np.isnan, l) try: n = 1 acc = next(l) except StopIteration: if empty == 'raise': raise ValueError('Empty mean') return empty for n, v in enumerate(l, 2): acc += v if n == 1: return acc return acc / n
nanmean compatible with generators.
Below is the instruction that describes the task: ### Input: nanmean compatible with generators. ### Response: def mean(l, ignore_nan=False, empty=0): """ nanmean compatible with generators. """ l = iter(l) if ignore_nan: l = ifilterfalse(np.isnan, l) try: n = 1 acc = next(l) except StopIteration: if empty == 'raise': raise ValueError('Empty mean') return empty for n, v in enumerate(l, 2): acc += v if n == 1: return acc return acc / n
def assert_not_in(obj, seq, message=None, extra=None): """Raises an AssertionError if obj is in iter.""" # for very long strings, provide a truncated error if isinstance(seq, six.string_types) and obj in seq and len(seq) > 200: index = seq.find(obj) start_index = index - 50 if start_index > 0: truncated = "(truncated) ..." else: truncated = "" start_index = 0 end_index = index + len(obj) + 50 truncated += seq[start_index:end_index] if end_index < len(seq): truncated += "... (truncated)" assert False, _assert_fail_message(message, obj, truncated, "is in", extra) assert obj not in seq, _assert_fail_message(message, obj, seq, "is in", extra)
Raises an AssertionError if obj is in iter.
Below is the instruction that describes the task: ### Input: Raises an AssertionError if obj is in iter. ### Response: def assert_not_in(obj, seq, message=None, extra=None): """Raises an AssertionError if obj is in iter.""" # for very long strings, provide a truncated error if isinstance(seq, six.string_types) and obj in seq and len(seq) > 200: index = seq.find(obj) start_index = index - 50 if start_index > 0: truncated = "(truncated) ..." else: truncated = "" start_index = 0 end_index = index + len(obj) + 50 truncated += seq[start_index:end_index] if end_index < len(seq): truncated += "... (truncated)" assert False, _assert_fail_message(message, obj, truncated, "is in", extra) assert obj not in seq, _assert_fail_message(message, obj, seq, "is in", extra)
def decode(self, fd, mtu, max_len=2560): """ Read the media transport descriptor, depay the RTP payload and decode the SBC frames into a byte array. The maximum number of bytes to be returned may be passed as an argument and all available bytes are returned to the caller. :param int fd: Media transport file descriptor :param int mtu: Media transport MTU size as returned when the media transport was acquired. :param int max_len: Optional. Set maximum number of bytes to read. :return data: Decoded data bytes as an array. :rtype: array{byte} """ output_buffer = ffi.new('char[]', max_len) sz = self.codec.rtp_sbc_decode_from_fd(self.config, output_buffer, max_len, mtu, fd) return ffi.buffer(output_buffer[0:sz])
Read the media transport descriptor, depay the RTP payload and decode the SBC frames into a byte array. The maximum number of bytes to be returned may be passed as an argument and all available bytes are returned to the caller. :param int fd: Media transport file descriptor :param int mtu: Media transport MTU size as returned when the media transport was acquired. :param int max_len: Optional. Set maximum number of bytes to read. :return data: Decoded data bytes as an array. :rtype: array{byte}
Below is the instruction that describes the task: ### Input: Read the media transport descriptor, depay the RTP payload and decode the SBC frames into a byte array. The maximum number of bytes to be returned may be passed as an argument and all available bytes are returned to the caller. :param int fd: Media transport file descriptor :param int mtu: Media transport MTU size as returned when the media transport was acquired. :param int max_len: Optional. Set maximum number of bytes to read. :return data: Decoded data bytes as an array. :rtype: array{byte} ### Response: def decode(self, fd, mtu, max_len=2560): """ Read the media transport descriptor, depay the RTP payload and decode the SBC frames into a byte array. The maximum number of bytes to be returned may be passed as an argument and all available bytes are returned to the caller. :param int fd: Media transport file descriptor :param int mtu: Media transport MTU size as returned when the media transport was acquired. :param int max_len: Optional. Set maximum number of bytes to read. :return data: Decoded data bytes as an array. :rtype: array{byte} """ output_buffer = ffi.new('char[]', max_len) sz = self.codec.rtp_sbc_decode_from_fd(self.config, output_buffer, max_len, mtu, fd) return ffi.buffer(output_buffer[0:sz])
def loadFromCheckpoint(savedModelDir, newSerialization=False): """ Load saved model. :param savedModelDir: (string) Directory of where the experiment is to be or was saved :returns: (:class:`nupic.frameworks.opf.model.Model`) The loaded model instance. """ if newSerialization: return HTMPredictionModel.readFromCheckpoint(savedModelDir) else: return Model.load(savedModelDir)
Load saved model. :param savedModelDir: (string) Directory of where the experiment is to be or was saved :returns: (:class:`nupic.frameworks.opf.model.Model`) The loaded model instance.
Below is the instruction that describes the task: ### Input: Load saved model. :param savedModelDir: (string) Directory of where the experiment is to be or was saved :returns: (:class:`nupic.frameworks.opf.model.Model`) The loaded model instance. ### Response: def loadFromCheckpoint(savedModelDir, newSerialization=False): """ Load saved model. :param savedModelDir: (string) Directory of where the experiment is to be or was saved :returns: (:class:`nupic.frameworks.opf.model.Model`) The loaded model instance. """ if newSerialization: return HTMPredictionModel.readFromCheckpoint(savedModelDir) else: return Model.load(savedModelDir)
def _make_graph(self): """Init common graph svg structure""" self.nodes['graph'] = self.svg.node( class_='graph %s-graph %s' % ( self.__class__.__name__.lower(), 'horizontal' if self.horizontal else 'vertical' ) ) self.svg.node( self.nodes['graph'], 'rect', class_='background', x=0, y=0, width=self.width, height=self.height ) self.nodes['plot'] = self.svg.node( self.nodes['graph'], class_="plot", transform="translate(%d, %d)" % (self.margin_box.left, self.margin_box.top) ) self.svg.node( self.nodes['plot'], 'rect', class_='background', x=0, y=0, width=self.view.width, height=self.view.height ) self.nodes['title'] = self.svg.node( self.nodes['graph'], class_="titles" ) self.nodes['overlay'] = self.svg.node( self.nodes['graph'], class_="plot overlay", transform="translate(%d, %d)" % (self.margin_box.left, self.margin_box.top) ) self.nodes['text_overlay'] = self.svg.node( self.nodes['graph'], class_="plot text-overlay", transform="translate(%d, %d)" % (self.margin_box.left, self.margin_box.top) ) self.nodes['tooltip_overlay'] = self.svg.node( self.nodes['graph'], class_="plot tooltip-overlay", transform="translate(%d, %d)" % (self.margin_box.left, self.margin_box.top) ) self.nodes['tooltip'] = self.svg.node( self.nodes['tooltip_overlay'], transform='translate(0 0)', style="opacity: 0", **{'class': 'tooltip'} ) self.svg.node( self.nodes['tooltip'], 'rect', rx=self.tooltip_border_radius, ry=self.tooltip_border_radius, width=0, height=0, **{'class': 'tooltip-box'} ) self.svg.node(self.nodes['tooltip'], 'g', class_='text')
Init common graph svg structure
Below is the instruction that describes the task: ### Input: Init common graph svg structure ### Response: def _make_graph(self): """Init common graph svg structure""" self.nodes['graph'] = self.svg.node( class_='graph %s-graph %s' % ( self.__class__.__name__.lower(), 'horizontal' if self.horizontal else 'vertical' ) ) self.svg.node( self.nodes['graph'], 'rect', class_='background', x=0, y=0, width=self.width, height=self.height ) self.nodes['plot'] = self.svg.node( self.nodes['graph'], class_="plot", transform="translate(%d, %d)" % (self.margin_box.left, self.margin_box.top) ) self.svg.node( self.nodes['plot'], 'rect', class_='background', x=0, y=0, width=self.view.width, height=self.view.height ) self.nodes['title'] = self.svg.node( self.nodes['graph'], class_="titles" ) self.nodes['overlay'] = self.svg.node( self.nodes['graph'], class_="plot overlay", transform="translate(%d, %d)" % (self.margin_box.left, self.margin_box.top) ) self.nodes['text_overlay'] = self.svg.node( self.nodes['graph'], class_="plot text-overlay", transform="translate(%d, %d)" % (self.margin_box.left, self.margin_box.top) ) self.nodes['tooltip_overlay'] = self.svg.node( self.nodes['graph'], class_="plot tooltip-overlay", transform="translate(%d, %d)" % (self.margin_box.left, self.margin_box.top) ) self.nodes['tooltip'] = self.svg.node( self.nodes['tooltip_overlay'], transform='translate(0 0)', style="opacity: 0", **{'class': 'tooltip'} ) self.svg.node( self.nodes['tooltip'], 'rect', rx=self.tooltip_border_radius, ry=self.tooltip_border_radius, width=0, height=0, **{'class': 'tooltip-box'} ) self.svg.node(self.nodes['tooltip'], 'g', class_='text')
def extract_feature_dependent_feature(self, extractor, force_extraction=False, verbose=0, add_args=None, custom_name=None): """ Extracts a feature which may be dependent on other features and stores it in the database Parameters ---------- extractor : function, which takes the path of a data point, a dictionary of all other features and *args as parameters and returns a feature force_extraction : boolean, if True - will re-extract feature even if a feature with this name already exists in the database, otherwise, will only extract if the feature doesn't exist in the database. default value: False verbose : int, if bigger than 0, will print the current number of the file for which data is being extracted add_args : optional arguments for the extractor (list/dictionary/tuple/whatever). if None, the extractor should take only one input argument - the file path. default value: None custom_name : string, optional name for the feature (it will be stored in the database with the custom_name instead of extractor function name). if None, the extractor function name will be used. default value: None Returns ------- None """ if self._prepopulated is False: raise errors.EmptyDatabase(self.dbpath) else: return extract_feature_dependent_feature_base(self.dbpath, self.path_to_set, self._set_object, extractor, force_extraction, verbose, add_args, custom_name)
Extracts a feature which may be dependent on other features and stores it in the database Parameters ---------- extractor : function, which takes the path of a data point, a dictionary of all other features and *args as parameters and returns a feature force_extraction : boolean, if True - will re-extract feature even if a feature with this name already exists in the database, otherwise, will only extract if the feature doesn't exist in the database. default value: False verbose : int, if bigger than 0, will print the current number of the file for which data is being extracted add_args : optional arguments for the extractor (list/dictionary/tuple/whatever). if None, the extractor should take only one input argument - the file path. default value: None custom_name : string, optional name for the feature (it will be stored in the database with the custom_name instead of extractor function name). if None, the extractor function name will be used. default value: None Returns ------- None
Below is the instruction that describes the task: ### Input: Extracts a feature which may be dependent on other features and stores it in the database Parameters ---------- extractor : function, which takes the path of a data point, a dictionary of all other features and *args as parameters and returns a feature force_extraction : boolean, if True - will re-extract feature even if a feature with this name already exists in the database, otherwise, will only extract if the feature doesn't exist in the database. default value: False verbose : int, if bigger than 0, will print the current number of the file for which data is being extracted add_args : optional arguments for the extractor (list/dictionary/tuple/whatever). if None, the extractor should take only one input argument - the file path. default value: None custom_name : string, optional name for the feature (it will be stored in the database with the custom_name instead of extractor function name). if None, the extractor function name will be used. default value: None Returns ------- None ### Response: def extract_feature_dependent_feature(self, extractor, force_extraction=False, verbose=0, add_args=None, custom_name=None): """ Extracts a feature which may be dependent on other features and stores it in the database Parameters ---------- extractor : function, which takes the path of a data point, a dictionary of all other features and *args as parameters and returns a feature force_extraction : boolean, if True - will re-extract feature even if a feature with this name already exists in the database, otherwise, will only extract if the feature doesn't exist in the database. default value: False verbose : int, if bigger than 0, will print the current number of the file for which data is being extracted add_args : optional arguments for the extractor (list/dictionary/tuple/whatever). if None, the extractor should take only one input argument - the file path. default value: None custom_name : string, optional name for the feature (it will be stored in the database with the custom_name instead of extractor function name). if None, the extractor function name will be used. default value: None Returns ------- None """ if self._prepopulated is False: raise errors.EmptyDatabase(self.dbpath) else: return extract_feature_dependent_feature_base(self.dbpath, self.path_to_set, self._set_object, extractor, force_extraction, verbose, add_args, custom_name)
def sni2route(self, sni: SchemaNodeId, sctx: SchemaContext) -> SchemaRoute: """Translate schema node identifier to a schema route. Args: sni: Schema node identifier (absolute or relative). sctx: Schema context. Raises: ModuleNotRegistered: If `mid` is not registered in the data model. UnknownPrefix: If a prefix specified in `sni` is not declared. """ nlist = sni.split("/") res = [] for qn in (nlist[1:] if sni[0] == "/" else nlist): res.append(self.translate_node_id(qn, sctx)) return res
Translate schema node identifier to a schema route. Args: sni: Schema node identifier (absolute or relative). sctx: Schema context. Raises: ModuleNotRegistered: If `mid` is not registered in the data model. UnknownPrefix: If a prefix specified in `sni` is not declared.
Below is the instruction that describes the task: ### Input: Translate schema node identifier to a schema route. Args: sni: Schema node identifier (absolute or relative). sctx: Schema context. Raises: ModuleNotRegistered: If `mid` is not registered in the data model. UnknownPrefix: If a prefix specified in `sni` is not declared. ### Response: def sni2route(self, sni: SchemaNodeId, sctx: SchemaContext) -> SchemaRoute: """Translate schema node identifier to a schema route. Args: sni: Schema node identifier (absolute or relative). sctx: Schema context. Raises: ModuleNotRegistered: If `mid` is not registered in the data model. UnknownPrefix: If a prefix specified in `sni` is not declared. """ nlist = sni.split("/") res = [] for qn in (nlist[1:] if sni[0] == "/" else nlist): res.append(self.translate_node_id(qn, sctx)) return res
def run(model_specification, results_directory, verbose, log, with_debugger): """Run a simulation from the command line. The simulation itself is defined by the given MODEL_SPECIFICATION yaml file. Within the results directory, which defaults to ~/vivarium_results if none is provided, a subdirectory will be created with the same name as the MODEL_SPECIFICATION if one does not exist. Results will be written to a further subdirectory named after the start time of the simulation run.""" log_level = logging.DEBUG if verbose else logging.ERROR logging.basicConfig(filename=log, level=log_level) try: run_simulation(model_specification, results_directory) except (BdbQuit, KeyboardInterrupt): raise except Exception as e: if with_debugger: import pdb import traceback traceback.print_exc() pdb.post_mortem() else: logging.exception("Uncaught exception {}".format(e)) raise
Run a simulation from the command line. The simulation itself is defined by the given MODEL_SPECIFICATION yaml file. Within the results directory, which defaults to ~/vivarium_results if none is provided, a subdirectory will be created with the same name as the MODEL_SPECIFICATION if one does not exist. Results will be written to a further subdirectory named after the start time of the simulation run.
Below is the instruction that describes the task: ### Input: Run a simulation from the command line. The simulation itself is defined by the given MODEL_SPECIFICATION yaml file. Within the results directory, which defaults to ~/vivarium_results if none is provided, a subdirectory will be created with the same name as the MODEL_SPECIFICATION if one does not exist. Results will be written to a further subdirectory named after the start time of the simulation run. ### Response: def run(model_specification, results_directory, verbose, log, with_debugger): """Run a simulation from the command line. The simulation itself is defined by the given MODEL_SPECIFICATION yaml file. Within the results directory, which defaults to ~/vivarium_results if none is provided, a subdirectory will be created with the same name as the MODEL_SPECIFICATION if one does not exist. Results will be written to a further subdirectory named after the start time of the simulation run.""" log_level = logging.DEBUG if verbose else logging.ERROR logging.basicConfig(filename=log, level=log_level) try: run_simulation(model_specification, results_directory) except (BdbQuit, KeyboardInterrupt): raise except Exception as e: if with_debugger: import pdb import traceback traceback.print_exc() pdb.post_mortem() else: logging.exception("Uncaught exception {}".format(e)) raise
def get_parent_log_ids(self, log_id): """Gets the parent ``Ids`` of the given log. arg: log_id (osid.id.Id): the ``Id`` of a log return: (osid.id.IdList) - the parent ``Ids`` of the log raise: NotFound - ``log_id`` is not found raise: NullArgument - ``log_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.* """ # Implemented from template for # osid.resource.BinHierarchySession.get_parent_bin_ids if self._catalog_session is not None: return self._catalog_session.get_parent_catalog_ids(catalog_id=log_id) return self._hierarchy_session.get_parents(id_=log_id)
Gets the parent ``Ids`` of the given log. arg: log_id (osid.id.Id): the ``Id`` of a log return: (osid.id.IdList) - the parent ``Ids`` of the log raise: NotFound - ``log_id`` is not found raise: NullArgument - ``log_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.*
Below is the instruction that describes the task: ### Input: Gets the parent ``Ids`` of the given log. arg: log_id (osid.id.Id): the ``Id`` of a log return: (osid.id.IdList) - the parent ``Ids`` of the log raise: NotFound - ``log_id`` is not found raise: NullArgument - ``log_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.* ### Response: def get_parent_log_ids(self, log_id): """Gets the parent ``Ids`` of the given log. arg: log_id (osid.id.Id): the ``Id`` of a log return: (osid.id.IdList) - the parent ``Ids`` of the log raise: NotFound - ``log_id`` is not found raise: NullArgument - ``log_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.* """ # Implemented from template for # osid.resource.BinHierarchySession.get_parent_bin_ids if self._catalog_session is not None: return self._catalog_session.get_parent_catalog_ids(catalog_id=log_id) return self._hierarchy_session.get_parents(id_=log_id)
def auth_oauth2(self) -> dict: """ Authorizes a user by OAuth2 to get access token """ oauth_data = { 'client_id': self._app_id, 'display': 'mobile', 'response_type': 'token', 'scope': '+66560', 'v': self.API_VERSION } response = self.post(self.OAUTH_URL, oauth_data) url_params = get_url_params(response.url, fragment=True) if 'access_token' in url_params: return url_params action_url = get_base_url(response.text) if action_url: response = self.get(action_url) return get_url_params(response.url) response_json = response.json() if 'error' in response_json['error']: exception_msg = '{}: {}'.format(response_json['error'], response_json['error_description']) raise VVKAuthException(exception_msg)
Authorizes a user by OAuth2 to get access token
Below is the instruction that describes the task: ### Input: Authorizes a user by OAuth2 to get access token ### Response: def auth_oauth2(self) -> dict: """ Authorizes a user by OAuth2 to get access token """ oauth_data = { 'client_id': self._app_id, 'display': 'mobile', 'response_type': 'token', 'scope': '+66560', 'v': self.API_VERSION } response = self.post(self.OAUTH_URL, oauth_data) url_params = get_url_params(response.url, fragment=True) if 'access_token' in url_params: return url_params action_url = get_base_url(response.text) if action_url: response = self.get(action_url) return get_url_params(response.url) response_json = response.json() if 'error' in response_json['error']: exception_msg = '{}: {}'.format(response_json['error'], response_json['error_description']) raise VVKAuthException(exception_msg)
def fap_baluev(Z, fmax, t, y, dy, normalization='standard'): """Alias-free approximation to false alarm probability (Eqn 6 of Baluev 2008) """ cdf = cdf_single(Z, len(t), normalization) tau = tau_davies(Z, fmax, t, y, dy, normalization=normalization) return 1 - cdf * np.exp(-tau)
Alias-free approximation to false alarm probability (Eqn 6 of Baluev 2008)
Below is the instruction that describes the task: ### Input: Alias-free approximation to false alarm probability (Eqn 6 of Baluev 2008) ### Response: def fap_baluev(Z, fmax, t, y, dy, normalization='standard'): """Alias-free approximation to false alarm probability (Eqn 6 of Baluev 2008) """ cdf = cdf_single(Z, len(t), normalization) tau = tau_davies(Z, fmax, t, y, dy, normalization=normalization) return 1 - cdf * np.exp(-tau)
def Architecture_var(cls, v, serializerVars, extraTypes, extraTypes_serialized, ctx, childCtx): """ :return: list of extra discovered processes """ t = v._dtype # if type requires extra definition if isinstance(t, HArray) and v.defVal.vldMask: if v.drivers: raise SerializerException("Verilog does not support RAMs" " with initialized value") eProcs, eVars = cls.hardcodeRomIntoProcess(v) for _v in eVars: _procs = cls.Architecture_var(_v, serializerVars, extraTypes, extraTypes_serialized, ctx, childCtx) eProcs.extend(_procs) return eProcs v.name = ctx.scope.checkedName(v.name, v) serializedVar = cls.SignalItem(v, childCtx, declaration=True) serializerVars.append(serializedVar) return []
:return: list of extra discovered processes
Below is the instruction that describes the task: ### Input: :return: list of extra discovered processes ### Response: def Architecture_var(cls, v, serializerVars, extraTypes, extraTypes_serialized, ctx, childCtx): """ :return: list of extra discovered processes """ t = v._dtype # if type requires extra definition if isinstance(t, HArray) and v.defVal.vldMask: if v.drivers: raise SerializerException("Verilog does not support RAMs" " with initialized value") eProcs, eVars = cls.hardcodeRomIntoProcess(v) for _v in eVars: _procs = cls.Architecture_var(_v, serializerVars, extraTypes, extraTypes_serialized, ctx, childCtx) eProcs.extend(_procs) return eProcs v.name = ctx.scope.checkedName(v.name, v) serializedVar = cls.SignalItem(v, childCtx, declaration=True) serializerVars.append(serializedVar) return []
def t(self, point): ''' :point: Point subclass :return: float If :point: is collinear, determine the 't' coefficient of the parametric equation: xyz = A<xyz> + t ( B<xyz> - A<xyz> ) if t < 0, point is less than A and B on the line if t >= 0 and <= 1, point is between A and B if t > 1 point is greater than B ''' # XXX could use for an ordering on points? if point not in self: msg = "'{p}' is not collinear with '{l}'" raise CollinearPoints(msg.format(p=point, l=self)) # p = A + t ( B - A) # p - A = t ( B - A) # p - A / (B -A) = t return (point - self.A) / self.m
:point: Point subclass :return: float If :point: is collinear, determine the 't' coefficient of the parametric equation: xyz = A<xyz> + t ( B<xyz> - A<xyz> ) if t < 0, point is less than A and B on the line if t >= 0 and <= 1, point is between A and B if t > 1 point is greater than B
Below is the instruction that describes the task: ### Input: :point: Point subclass :return: float If :point: is collinear, determine the 't' coefficient of the parametric equation: xyz = A<xyz> + t ( B<xyz> - A<xyz> ) if t < 0, point is less than A and B on the line if t >= 0 and <= 1, point is between A and B if t > 1 point is greater than B ### Response: def t(self, point): ''' :point: Point subclass :return: float If :point: is collinear, determine the 't' coefficient of the parametric equation: xyz = A<xyz> + t ( B<xyz> - A<xyz> ) if t < 0, point is less than A and B on the line if t >= 0 and <= 1, point is between A and B if t > 1 point is greater than B ''' # XXX could use for an ordering on points? if point not in self: msg = "'{p}' is not collinear with '{l}'" raise CollinearPoints(msg.format(p=point, l=self)) # p = A + t ( B - A) # p - A = t ( B - A) # p - A / (B -A) = t return (point - self.A) / self.m
def LargestComponent(self): """ Returns (i, val) where i is the component index (0 - 2) which has largest absolute value and val is the value of the component. """ if abs(self.x) > abs(self.y): if abs(self.x) > abs(self.z): return (0, self.x) else: return (2, self.z) else: if abs(self.y) > abs(self.z): return (1, self.y) else: return (2, self.z)
Returns (i, val) where i is the component index (0 - 2) which has largest absolute value and val is the value of the component.
Below is the instruction that describes the task: ### Input: Returns (i, val) where i is the component index (0 - 2) which has largest absolute value and val is the value of the component. ### Response: def LargestComponent(self): """ Returns (i, val) where i is the component index (0 - 2) which has largest absolute value and val is the value of the component. """ if abs(self.x) > abs(self.y): if abs(self.x) > abs(self.z): return (0, self.x) else: return (2, self.z) else: if abs(self.y) > abs(self.z): return (1, self.y) else: return (2, self.z)
def path_to_attr(path): """ Transform path to ast.Attribute. >>> import gast as ast >>> path = ('__builtin__', 'my', 'constant') >>> value = path_to_attr(path) >>> ref = ast.Attribute( ... value=ast.Attribute(value=ast.Name(id="__builtin__", ... ctx=ast.Load(), ... annotation=None), ... attr="my", ctx=ast.Load()), ... attr="constant", ctx=ast.Load()) >>> ast.dump(ref) == ast.dump(value) True """ return reduce(lambda hpath, last: ast.Attribute(hpath, last, ast.Load()), path[1:], ast.Name(mangle(path[0]), ast.Load(), None))
Transform path to ast.Attribute. >>> import gast as ast >>> path = ('__builtin__', 'my', 'constant') >>> value = path_to_attr(path) >>> ref = ast.Attribute( ... value=ast.Attribute(value=ast.Name(id="__builtin__", ... ctx=ast.Load(), ... annotation=None), ... attr="my", ctx=ast.Load()), ... attr="constant", ctx=ast.Load()) >>> ast.dump(ref) == ast.dump(value) True
Below is the instruction that describes the task: ### Input: Transform path to ast.Attribute. >>> import gast as ast >>> path = ('__builtin__', 'my', 'constant') >>> value = path_to_attr(path) >>> ref = ast.Attribute( ... value=ast.Attribute(value=ast.Name(id="__builtin__", ... ctx=ast.Load(), ... annotation=None), ... attr="my", ctx=ast.Load()), ... attr="constant", ctx=ast.Load()) >>> ast.dump(ref) == ast.dump(value) True ### Response: def path_to_attr(path): """ Transform path to ast.Attribute. >>> import gast as ast >>> path = ('__builtin__', 'my', 'constant') >>> value = path_to_attr(path) >>> ref = ast.Attribute( ... value=ast.Attribute(value=ast.Name(id="__builtin__", ... ctx=ast.Load(), ... annotation=None), ... attr="my", ctx=ast.Load()), ... attr="constant", ctx=ast.Load()) >>> ast.dump(ref) == ast.dump(value) True """ return reduce(lambda hpath, last: ast.Attribute(hpath, last, ast.Load()), path[1:], ast.Name(mangle(path[0]), ast.Load(), None))
def VerifyStructure(self, parser_mediator, line): """Verifies if a line from a text file is in the expected format. Args: parser_mediator (ParserMediator): parser mediator. line (str): line from a text file. Returns: bool: True if the line is in the expected format, False if not. """ try: structure = self._DPKG_LOG_LINE.parseString(line) except pyparsing.ParseException as exception: logger.debug( 'Unable to parse Debian dpkg.log file with error: {0!s}'.format( exception)) return False return 'date_time' in structure and 'body' in structure
Verifies if a line from a text file is in the expected format. Args: parser_mediator (ParserMediator): parser mediator. line (str): line from a text file. Returns: bool: True if the line is in the expected format, False if not.
Below is the instruction that describes the task: ### Input: Verifies if a line from a text file is in the expected format.

    Args:
      parser_mediator (ParserMediator): parser mediator.
      line (str): line from a text file.

    Returns:
      bool: True if the line is in the expected format, False if not. ### Response: def VerifyStructure(self, parser_mediator, line):
    """Verifies if a line from a text file is in the expected format.

    Args:
      parser_mediator (ParserMediator): parser mediator.
      line (str): line from a text file.

    Returns:
      bool: True if the line is in the expected format, False if not.
    """
    try:
      structure = self._DPKG_LOG_LINE.parseString(line)
    except pyparsing.ParseException as exception:
      logger.debug(
          'Unable to parse Debian dpkg.log file with error: {0!s}'.format(
              exception))
      return False

    return 'date_time' in structure and 'body' in structure
def assign_properties(thing): """Assign properties to an object. When creating something via a post request (e.g. a node), you can pass the properties of the object in the request. This function gets those values from the request and fills in the relevant columns of the table. """ details = request_parameter(parameter="details", optional=True) if details: setattr(thing, "details", loads(details)) for p in range(5): property_name = "property" + str(p + 1) property = request_parameter(parameter=property_name, optional=True) if property: setattr(thing, property_name, property) session.commit()
Assign properties to an object. When creating something via a post request (e.g. a node), you can pass the properties of the object in the request. This function gets those values from the request and fills in the relevant columns of the table.
Below is the instruction that describes the task: ### Input: Assign properties to an object.

    When creating something via a post request (e.g. a node), you can pass
    the properties of the object in the request. This function gets those
    values from the request and fills in the relevant columns of the table. ### Response: def assign_properties(thing):
    """Assign properties to an object.

    When creating something via a post request (e.g. a node), you can pass
    the properties of the object in the request. This function gets those
    values from the request and fills in the relevant columns of the table.
    """
    details = request_parameter(parameter="details", optional=True)
    if details:
        setattr(thing, "details", loads(details))

    for p in range(5):
        property_name = "property" + str(p + 1)
        property = request_parameter(parameter=property_name, optional=True)
        if property:
            setattr(thing, property_name, property)

    session.commit()
def get_return_page(self,prior=False): ''' This is just a wrapper for the getReturnPage helper function. ''' siteHistory = self.request.session.get('SITE_HISTORY',{}) return getReturnPage(siteHistory,prior=prior)
This is just a wrapper for the getReturnPage helper function.
Below is the instruction that describes the task: ### Input: This is just a wrapper for the getReturnPage helper function. ### Response: def get_return_page(self,prior=False):
        '''
        This is just a wrapper for the getReturnPage helper function.
        '''
        siteHistory = self.request.session.get('SITE_HISTORY',{})
        return getReturnPage(siteHistory,prior=prior)
def simxReadCollision(clientID, collisionObjectHandle, operationMode): ''' Please have a look at the function description/documentation in the V-REP user manual ''' collisionState = ct.c_ubyte() return c_ReadCollision(clientID, collisionObjectHandle, ct.byref(collisionState), operationMode), bool(collisionState.value!=0)
Please have a look at the function description/documentation in the V-REP user manual
Below is the instruction that describes the task: ### Input: Please have a look at the function description/documentation in the V-REP user manual ### Response: def simxReadCollision(clientID, collisionObjectHandle, operationMode):
    '''
    Please have a look at the function description/documentation in the V-REP user manual
    '''
    collisionState = ct.c_ubyte()
    return c_ReadCollision(clientID, collisionObjectHandle, ct.byref(collisionState), operationMode), bool(collisionState.value!=0)
def poll(self): """Check if the operation has finished. :rtype: bool :returns: A boolean indicating if the current operation has completed. :raises ValueError: if the operation has already completed. """ if self.complete: raise ValueError("The operation has completed.") operation_pb = self._get_operation() self._update_state(operation_pb) return self.complete
Check if the operation has finished. :rtype: bool :returns: A boolean indicating if the current operation has completed. :raises ValueError: if the operation has already completed.
Below is the instruction that describes the task: ### Input: Check if the operation has finished.

        :rtype: bool
        :returns: A boolean indicating if the current operation has completed.
        :raises ValueError: if the operation has already completed. ### Response: def poll(self):
        """Check if the operation has finished.

        :rtype: bool
        :returns: A boolean indicating if the current operation has completed.
        :raises ValueError: if the operation has already completed.
        """
        if self.complete:
            raise ValueError("The operation has completed.")

        operation_pb = self._get_operation()
        self._update_state(operation_pb)

        return self.complete
def write_csvs(self, dirname: PathLike, skip_data: bool = True, sep: str = ','): """Write annotation to ``.csv`` files. It is not possible to recover the full :class:`~anndata.AnnData` from the output of this function. Use :meth:`~anndata.AnnData.write` for this. Parameters ---------- dirname Name of directory to which to export. skip_data Skip the data matrix :attr:`X`. sep Separator for the data. """ from .readwrite.write import write_csvs write_csvs(dirname, self, skip_data=skip_data, sep=sep)
Write annotation to ``.csv`` files. It is not possible to recover the full :class:`~anndata.AnnData` from the output of this function. Use :meth:`~anndata.AnnData.write` for this. Parameters ---------- dirname Name of directory to which to export. skip_data Skip the data matrix :attr:`X`. sep Separator for the data.
Below is the instruction that describes the task: ### Input: Write annotation to ``.csv`` files.

        It is not possible to recover the full :class:`~anndata.AnnData` from the
        output of this function. Use :meth:`~anndata.AnnData.write` for this.

        Parameters
        ----------
        dirname
            Name of directory to which to export.
        skip_data
            Skip the data matrix :attr:`X`.
        sep
            Separator for the data. ### Response: def write_csvs(self, dirname: PathLike, skip_data: bool = True, sep: str = ','):
        """Write annotation to ``.csv`` files.

        It is not possible to recover the full :class:`~anndata.AnnData` from the
        output of this function. Use :meth:`~anndata.AnnData.write` for this.

        Parameters
        ----------
        dirname
            Name of directory to which to export.
        skip_data
            Skip the data matrix :attr:`X`.
        sep
            Separator for the data.
        """
        from .readwrite.write import write_csvs
        write_csvs(dirname, self, skip_data=skip_data, sep=sep)
def primary_keys_for(self, cls: ClassDefinition) -> List[SlotDefinitionName]: """ Return all primary keys / identifiers for cls @param cls: class to get keys for @return: List of primary keys """ return [slot_name for slot_name in self.all_slots_for(cls) if self.schema.slots[slot_name].primary_key or self.schema.slots[slot_name].identifier]
Return all primary keys / identifiers for cls @param cls: class to get keys for @return: List of primary keys
Below is the instruction that describes the task: ### Input: Return all primary keys / identifiers for cls

        @param cls: class to get keys for
        @return: List of primary keys ### Response: def primary_keys_for(self, cls: ClassDefinition) -> List[SlotDefinitionName]:
        """ Return all primary keys / identifiers for cls

        @param cls: class to get keys for
        @return: List of primary keys
        """
        return [slot_name for slot_name in self.all_slots_for(cls)
                if self.schema.slots[slot_name].primary_key or self.schema.slots[slot_name].identifier]
def _print_p(self): """ m._print_p() -- Print probability (frequency) matrix """ print "# ", for i in range(self.width): print " %4d "%i, print for L in ['A', 'C', 'T', 'G']: print "#%s "%L, for i in range(self.width): print "%8.3f "%math.pow(2,self.logP[i][L]), print
m._print_p() -- Print probability (frequency) matrix
Below is the instruction that describes the task: ### Input: m._print_p() -- Print probability (frequency) matrix ### Response: def _print_p(self):
        """
        m._print_p() -- Print probability (frequency) matrix
        """
        print "#  ",
        for i in range(self.width):
            print " %4d  "%i,
        print
        for L in ['A', 'C', 'T', 'G']:
            print "#%s "%L,
            for i in range(self.width):
                print "%8.3f "%math.pow(2,self.logP[i][L]),
            print
def train_tf(tokens_stream, out=None, **kwargs): """ Train a map of term frequencies on a list of files (parallelized). """ print('Counting terms...') results = parallel(count_tf, tokens_stream, n_jobs=-1) print('Merging...') tf = merge(results) if out is not None: with open(out, 'w') as f: json.dump(tf, f) return tf
Train a map of term frequencies on a list of files (parallelized).
Below is the instruction that describes the task: ### Input: Train a map of term frequencies on a list of files (parallelized). ### Response: def train_tf(tokens_stream, out=None, **kwargs):
    """
    Train a map of term frequencies on a list of files (parallelized).
    """
    print('Counting terms...')
    results = parallel(count_tf, tokens_stream, n_jobs=-1)

    print('Merging...')
    tf = merge(results)

    if out is not None:
        with open(out, 'w') as f:
            json.dump(tf, f)

    return tf
def delete_unit(unit_id, **kwargs): """ Delete a unit from the DB. Raises and exception if the unit does not exist """ try: db_unit = db.DBSession.query(Unit).filter(Unit.id==unit_id).one() db.DBSession.delete(db_unit) db.DBSession.flush() return True except NoResultFound: raise ResourceNotFoundError("Unit (ID=%s) does not exist"%(unit_id))
Delete a unit from the DB. Raises and exception if the unit does not exist
Below is the instruction that describes the task: ### Input: Delete a unit from the DB.
        Raises and exception if the unit does not exist ### Response: def delete_unit(unit_id, **kwargs):
    """
        Delete a unit from the DB.
        Raises and exception if the unit does not exist
    """
    try:
        db_unit = db.DBSession.query(Unit).filter(Unit.id==unit_id).one()
        db.DBSession.delete(db_unit)
        db.DBSession.flush()
        return True
    except NoResultFound:
        raise ResourceNotFoundError("Unit (ID=%s) does not exist"%(unit_id))
def encode(self, data: mx.sym.Symbol, data_length: Optional[mx.sym.Symbol], seq_len: int) -> Tuple[mx.sym.Symbol, mx.sym.Symbol, int]: """ Encodes data given sequence lengths of individual examples and maximum sequence length. :param data: Input data. :param data_length: Vector with sequence lengths. :param seq_len: Maximum sequence length. :return: Encoded versions of input data (data, data_length, seq_len). """ outputs, _ = self.rnn.unroll(seq_len, inputs=data, merge_outputs=True, layout=self.layout) return outputs, data_length, seq_len
Encodes data given sequence lengths of individual examples and maximum sequence length. :param data: Input data. :param data_length: Vector with sequence lengths. :param seq_len: Maximum sequence length. :return: Encoded versions of input data (data, data_length, seq_len).
Below is the instruction that describes the task: ### Input: Encodes data given sequence lengths of individual examples and maximum sequence length.

        :param data: Input data.
        :param data_length: Vector with sequence lengths.
        :param seq_len: Maximum sequence length.
        :return: Encoded versions of input data (data, data_length, seq_len). ### Response: def encode(self,
               data: mx.sym.Symbol,
               data_length: Optional[mx.sym.Symbol],
               seq_len: int) -> Tuple[mx.sym.Symbol, mx.sym.Symbol, int]:
        """
        Encodes data given sequence lengths of individual examples and maximum sequence length.

        :param data: Input data.
        :param data_length: Vector with sequence lengths.
        :param seq_len: Maximum sequence length.
        :return: Encoded versions of input data (data, data_length, seq_len).
        """
        outputs, _ = self.rnn.unroll(seq_len, inputs=data, merge_outputs=True, layout=self.layout)

        return outputs, data_length, seq_len
def _try_methods(methods, to_find=None): # type: (list, Optional[str]) -> Optional[str] """Runs the methods specified by _hunt_for_mac(). We try every method and see if it returned a MAC address. If it returns None or raises an exception, we continue and try the next method. """ found = None for m in methods: try: if isinstance(m, tuple): for arg in m[3]: # list(str) if DEBUG: log.debug("Trying: '%s %s'", m[2], arg) # Arguments: (regex, _popen(command, arg), regex index) found = _search(m[0], _popen(m[2], arg), m[1]) if DEBUG: log.debug("Result: %s\n", found) if found: # Skip remaining args AND remaining methods break elif callable(m): if DEBUG: log.debug("Trying: '%s' (to_find: '%s')", m.__name__, str(to_find)) if to_find is not None: found = m(to_find) else: found = m() if DEBUG: log.debug("Result: %s\n", found) else: log.critical("Invalid type '%s' for method '%s'", type(m), str(m)) except Exception as ex: if DEBUG: log.debug("Exception: %s", str(ex)) if DEBUG >= 2: log.debug(traceback.format_exc()) continue if found: # Skip remaining methods break return found
Runs the methods specified by _hunt_for_mac(). We try every method and see if it returned a MAC address. If it returns None or raises an exception, we continue and try the next method.
Below is the instruction that describes the task: ### Input: Runs the methods specified by _hunt_for_mac().

    We try every method and see if it returned a MAC address. If it returns
    None or raises an exception, we continue and try the next method. ### Response: def _try_methods(methods, to_find=None):
    # type: (list, Optional[str]) -> Optional[str]
    """Runs the methods specified by _hunt_for_mac().

    We try every method and see if it returned a MAC address. If it returns
    None or raises an exception, we continue and try the next method.
    """
    found = None
    for m in methods:
        try:
            if isinstance(m, tuple):
                for arg in m[3]:  # list(str)
                    if DEBUG:
                        log.debug("Trying: '%s %s'", m[2], arg)
                    # Arguments: (regex, _popen(command, arg), regex index)
                    found = _search(m[0], _popen(m[2], arg), m[1])
                    if DEBUG:
                        log.debug("Result: %s\n", found)
                    if found:  # Skip remaining args AND remaining methods
                        break
            elif callable(m):
                if DEBUG:
                    log.debug("Trying: '%s' (to_find: '%s')",
                              m.__name__, str(to_find))
                if to_find is not None:
                    found = m(to_find)
                else:
                    found = m()
                if DEBUG:
                    log.debug("Result: %s\n", found)
            else:
                log.critical("Invalid type '%s' for method '%s'",
                             type(m), str(m))
        except Exception as ex:
            if DEBUG:
                log.debug("Exception: %s", str(ex))
            if DEBUG >= 2:
                log.debug(traceback.format_exc())
            continue
        if found:  # Skip remaining methods
            break
    return found
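The `_try_methods` record above illustrates a common fallback-chain pattern: probe a list of methods in order, treat exceptions and `None` as "keep going", and stop at the first usable result. A minimal standalone sketch of that pattern (the names here are illustrative, not the library's actual API):

```python
def first_successful(methods):
    """Return the first non-None result from a sequence of callables.

    Exceptions are swallowed so a failed probe simply hands control to the
    next fallback, mirroring the try/continue loop in _try_methods above.
    """
    for method in methods:
        try:
            result = method()
        except Exception:
            continue  # this probe failed; try the next one
        if result is not None:
            return result  # first usable answer wins
    return None
```

Keeping the probes as plain callables makes the chain trivial to extend and to unit-test in isolation.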
def get_tags(name=None, instance_id=None, call=None, location=None, kwargs=None, resource_id=None): # pylint: disable=W0613 ''' Retrieve tags for a resource. Normally a VM name or instance_id is passed in, but a resource_id may be passed instead. If both are passed in, the instance_id will be used. CLI Examples: .. code-block:: bash salt-cloud -a get_tags mymachine salt-cloud -a get_tags resource_id=vol-3267ab32 ''' if location is None: location = get_location() if instance_id is None: if resource_id is None: if name: instance_id = _get_node(name)['instanceId'] elif 'instance_id' in kwargs: instance_id = kwargs['instance_id'] elif 'resource_id' in kwargs: instance_id = kwargs['resource_id'] else: instance_id = resource_id params = {'Action': 'DescribeTags', 'Filter.1.Name': 'resource-id', 'Filter.1.Value': instance_id} return aws.query(params, setname='tagSet', location=location, provider=get_provider(), opts=__opts__, sigver='4')
Retrieve tags for a resource. Normally a VM name or instance_id is passed in, but a resource_id may be passed instead. If both are passed in, the instance_id will be used. CLI Examples: .. code-block:: bash salt-cloud -a get_tags mymachine salt-cloud -a get_tags resource_id=vol-3267ab32
Below is the instruction that describes the task: ### Input: Retrieve tags for a resource. Normally a VM name or instance_id is passed
    in, but a resource_id may be passed instead. If both are passed in, the
    instance_id will be used.

    CLI Examples:

    .. code-block:: bash

        salt-cloud -a get_tags mymachine
        salt-cloud -a get_tags resource_id=vol-3267ab32 ### Response: def get_tags(name=None,
             instance_id=None,
             call=None,
             location=None,
             kwargs=None,
             resource_id=None):  # pylint: disable=W0613
    '''
    Retrieve tags for a resource. Normally a VM name or instance_id is passed
    in, but a resource_id may be passed instead. If both are passed in, the
    instance_id will be used.

    CLI Examples:

    .. code-block:: bash

        salt-cloud -a get_tags mymachine
        salt-cloud -a get_tags resource_id=vol-3267ab32
    '''
    if location is None:
        location = get_location()

    if instance_id is None:
        if resource_id is None:
            if name:
                instance_id = _get_node(name)['instanceId']
            elif 'instance_id' in kwargs:
                instance_id = kwargs['instance_id']
            elif 'resource_id' in kwargs:
                instance_id = kwargs['resource_id']
        else:
            instance_id = resource_id

    params = {'Action': 'DescribeTags',
              'Filter.1.Name': 'resource-id',
              'Filter.1.Value': instance_id}
    return aws.query(params,
                     setname='tagSet',
                     location=location,
                     provider=get_provider(),
                     opts=__opts__,
                     sigver='4')
def clip_matrix(left, right, bottom, top, near, far, perspective=False): """Return matrix to obtain normalized device coordinates from frustum. The frustum bounds are axis-aligned along x (left, right), y (bottom, top) and z (near, far). Normalized device coordinates are in range [-1, 1] if coordinates are inside the frustum. If perspective is True the frustum is a truncated pyramid with the perspective point at origin and direction along z axis, otherwise an orthographic canonical view volume (a box). Homogeneous coordinates transformed by the perspective clip matrix need to be dehomogenized (divided by w coordinate). >>> frustum = np.random.rand(6) >>> frustum[1] += frustum[0] >>> frustum[3] += frustum[2] >>> frustum[5] += frustum[4] >>> M = clip_matrix(perspective=False, *frustum) >>> a = np.dot(M, [frustum[0], frustum[2], frustum[4], 1]) >>> np.allclose(a, [-1., -1., -1., 1.]) True >>> b = np.dot(M, [frustum[1], frustum[3], frustum[5], 1]) >>> np.allclose(b, [ 1., 1., 1., 1.]) True >>> M = clip_matrix(perspective=True, *frustum) >>> v = np.dot(M, [frustum[0], frustum[2], frustum[4], 1]) >>> c = v / v[3] >>> np.allclose(c, [-1., -1., -1., 1.]) True >>> v = np.dot(M, [frustum[1], frustum[3], frustum[4], 1]) >>> d = v / v[3] >>> np.allclose(d, [ 1., 1., -1., 1.]) True """ if left >= right or bottom >= top or near >= far: raise ValueError("invalid frustum") if perspective: if near <= _EPS: raise ValueError("invalid frustum: near <= 0") t = 2.0 * near M = [[t / (left - right), 0.0, (right + left) / (right - left), 0.0], [0.0, t / (bottom - top), (top + bottom) / (top - bottom), 0.0], [0.0, 0.0, (far + near) / (near - far), t * far / (far - near)], [0.0, 0.0, -1.0, 0.0]] else: M = [[2.0 / (right - left), 0.0, 0.0, (right + left) / (left - right)], [0.0, 2.0 / (top - bottom), 0.0, (top + bottom) / (bottom - top)], [0.0, 0.0, 2.0 / (far - near), (far + near) / (near - far)], [0.0, 0.0, 0.0, 1.0]] return np.array(M)
Return matrix to obtain normalized device coordinates from frustum. The frustum bounds are axis-aligned along x (left, right), y (bottom, top) and z (near, far). Normalized device coordinates are in range [-1, 1] if coordinates are inside the frustum. If perspective is True the frustum is a truncated pyramid with the perspective point at origin and direction along z axis, otherwise an orthographic canonical view volume (a box). Homogeneous coordinates transformed by the perspective clip matrix need to be dehomogenized (divided by w coordinate). >>> frustum = np.random.rand(6) >>> frustum[1] += frustum[0] >>> frustum[3] += frustum[2] >>> frustum[5] += frustum[4] >>> M = clip_matrix(perspective=False, *frustum) >>> a = np.dot(M, [frustum[0], frustum[2], frustum[4], 1]) >>> np.allclose(a, [-1., -1., -1., 1.]) True >>> b = np.dot(M, [frustum[1], frustum[3], frustum[5], 1]) >>> np.allclose(b, [ 1., 1., 1., 1.]) True >>> M = clip_matrix(perspective=True, *frustum) >>> v = np.dot(M, [frustum[0], frustum[2], frustum[4], 1]) >>> c = v / v[3] >>> np.allclose(c, [-1., -1., -1., 1.]) True >>> v = np.dot(M, [frustum[1], frustum[3], frustum[4], 1]) >>> d = v / v[3] >>> np.allclose(d, [ 1., 1., -1., 1.]) True
Below is the instruction that describes the task: ### Input: Return matrix to obtain normalized device coordinates from frustum.

    The frustum bounds are axis-aligned along x (left, right),
    y (bottom, top) and z (near, far).

    Normalized device coordinates are in range [-1, 1] if coordinates are
    inside the frustum.

    If perspective is True the frustum is a truncated pyramid with the
    perspective point at origin and direction along z axis, otherwise an
    orthographic canonical view volume (a box).

    Homogeneous coordinates transformed by the perspective clip matrix
    need to be dehomogenized (divided by w coordinate).

    >>> frustum = np.random.rand(6)
    >>> frustum[1] += frustum[0]
    >>> frustum[3] += frustum[2]
    >>> frustum[5] += frustum[4]
    >>> M = clip_matrix(perspective=False, *frustum)
    >>> a = np.dot(M, [frustum[0], frustum[2], frustum[4], 1])
    >>> np.allclose(a, [-1., -1., -1., 1.])
    True
    >>> b = np.dot(M, [frustum[1], frustum[3], frustum[5], 1])
    >>> np.allclose(b, [ 1.,  1.,  1.,  1.])
    True
    >>> M = clip_matrix(perspective=True, *frustum)
    >>> v = np.dot(M, [frustum[0], frustum[2], frustum[4], 1])
    >>> c = v / v[3]
    >>> np.allclose(c, [-1., -1., -1., 1.])
    True
    >>> v = np.dot(M, [frustum[1], frustum[3], frustum[4], 1])
    >>> d = v / v[3]
    >>> np.allclose(d, [ 1.,  1., -1.,  1.])
    True ### Response: def clip_matrix(left, right, bottom, top, near, far, perspective=False):
    """Return matrix to obtain normalized device coordinates from frustum.

    The frustum bounds are axis-aligned along x (left, right), y (bottom, top) and z (near, far). Normalized device coordinates are in range [-1, 1] if coordinates are inside the frustum. If perspective is True the frustum is a truncated pyramid with the perspective point at origin and direction along z axis, otherwise an orthographic canonical view volume (a box). Homogeneous coordinates transformed by the perspective clip matrix need to be dehomogenized (divided by w coordinate).
>>> frustum = np.random.rand(6) >>> frustum[1] += frustum[0] >>> frustum[3] += frustum[2] >>> frustum[5] += frustum[4] >>> M = clip_matrix(perspective=False, *frustum) >>> a = np.dot(M, [frustum[0], frustum[2], frustum[4], 1]) >>> np.allclose(a, [-1., -1., -1., 1.]) True >>> b = np.dot(M, [frustum[1], frustum[3], frustum[5], 1]) >>> np.allclose(b, [ 1., 1., 1., 1.]) True >>> M = clip_matrix(perspective=True, *frustum) >>> v = np.dot(M, [frustum[0], frustum[2], frustum[4], 1]) >>> c = v / v[3] >>> np.allclose(c, [-1., -1., -1., 1.]) True >>> v = np.dot(M, [frustum[1], frustum[3], frustum[4], 1]) >>> d = v / v[3] >>> np.allclose(d, [ 1., 1., -1., 1.]) True """ if left >= right or bottom >= top or near >= far: raise ValueError("invalid frustum") if perspective: if near <= _EPS: raise ValueError("invalid frustum: near <= 0") t = 2.0 * near M = [[t / (left - right), 0.0, (right + left) / (right - left), 0.0], [0.0, t / (bottom - top), (top + bottom) / (top - bottom), 0.0], [0.0, 0.0, (far + near) / (near - far), t * far / (far - near)], [0.0, 0.0, -1.0, 0.0]] else: M = [[2.0 / (right - left), 0.0, 0.0, (right + left) / (left - right)], [0.0, 2.0 / (top - bottom), 0.0, (top + bottom) / (bottom - top)], [0.0, 0.0, 2.0 / (far - near), (far + near) / (near - far)], [0.0, 0.0, 0.0, 1.0]] return np.array(M)
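The orthographic branch of `clip_matrix` above reduces to a small affine map that sends each frustum bound to -1 or +1. A self-contained sketch of just that branch, assuming the same `(left, right, bottom, top, near, far)` convention as the record:

```python
import numpy as np

def ortho_clip_matrix(left, right, bottom, top, near, far):
    """Orthographic normalized-device-coordinate matrix (no perspective).

    Maps the box [left, right] x [bottom, top] x [near, far] onto the
    canonical cube [-1, 1]^3; the homogeneous w coordinate stays 1, so
    no dehomogenization (division by w) is needed afterwards.
    """
    if left >= right or bottom >= top or near >= far:
        raise ValueError("invalid frustum")
    return np.array([
        [2.0 / (right - left), 0.0, 0.0, (right + left) / (left - right)],
        [0.0, 2.0 / (top - bottom), 0.0, (top + bottom) / (bottom - top)],
        [0.0, 0.0, 2.0 / (far - near), (far + near) / (near - far)],
        [0.0, 0.0, 0.0, 1.0],
    ])
```

Checking the corners of the box against the doctest expectations confirms the sign conventions: the (left, bottom, near) corner maps to (-1, -1, -1) and (right, top, far) maps to (1, 1, 1).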
async def set_lock(self, resource, lock_identifier, lock_timeout): """ Lock this instance and set lock expiration time to lock_timeout :param resource: redis key to set :param lock_identifier: uniquie id of lock :param lock_timeout: timeout for lock in seconds :raises: LockError if lock is not acquired """ lock_timeout_ms = int(lock_timeout * 1000) try: with await self.connect() as redis: await redis.eval( self.set_lock_script, keys=[resource], args=[lock_identifier, lock_timeout_ms] ) except aioredis.errors.ReplyError as exc: # script fault self.log.debug('Can not set lock "%s" on %s', resource, repr(self)) raise LockError('Can not set lock') from exc except (aioredis.errors.RedisError, OSError) as exc: self.log.error('Can not set lock "%s" on %s: %s', resource, repr(self), repr(exc)) raise LockError('Can not set lock') from exc except asyncio.CancelledError: self.log.debug('Lock "%s" is cancelled on %s', resource, repr(self)) raise except Exception as exc: self.log.exception('Can not set lock "%s" on %s', resource, repr(self)) raise else: self.log.debug('Lock "%s" is set on %s', resource, repr(self))
Lock this instance and set lock expiration time to lock_timeout :param resource: redis key to set :param lock_identifier: uniquie id of lock :param lock_timeout: timeout for lock in seconds :raises: LockError if lock is not acquired
Below is the instruction that describes the task: ### Input: Lock this instance and set lock expiration time to lock_timeout

        :param resource: redis key to set
        :param lock_identifier: uniquie id of lock
        :param lock_timeout: timeout for lock in seconds
        :raises: LockError if lock is not acquired ### Response: async def set_lock(self, resource, lock_identifier, lock_timeout):
        """
        Lock this instance and set lock expiration time to lock_timeout

        :param resource: redis key to set
        :param lock_identifier: uniquie id of lock
        :param lock_timeout: timeout for lock in seconds
        :raises: LockError if lock is not acquired
        """
        lock_timeout_ms = int(lock_timeout * 1000)
        try:
            with await self.connect() as redis:
                await redis.eval(
                    self.set_lock_script,
                    keys=[resource],
                    args=[lock_identifier, lock_timeout_ms]
                )
        except aioredis.errors.ReplyError as exc:  # script fault
            self.log.debug('Can not set lock "%s" on %s',
                           resource, repr(self))
            raise LockError('Can not set lock') from exc
        except (aioredis.errors.RedisError, OSError) as exc:
            self.log.error('Can not set lock "%s" on %s: %s',
                           resource, repr(self), repr(exc))
            raise LockError('Can not set lock') from exc
        except asyncio.CancelledError:
            self.log.debug('Lock "%s" is cancelled on %s',
                           resource, repr(self))
            raise
        except Exception as exc:
            self.log.exception('Can not set lock "%s" on %s',
                               resource, repr(self))
            raise
        else:
            self.log.debug('Lock "%s" is set on %s', resource, repr(self))
def squeeze(self, axis=None): """Return the partition with removed degenerate (length 1) dimensions. Parameters ---------- axis : None or index expression, optional Subset of the axes to squeeze. Default: All axes. Returns ------- squeezed : `RectPartition` Squeezed partition. Examples -------- >>> p = odl.uniform_partition([0, -1], [1, 2], (3, 1)) >>> p.squeeze() uniform_partition(0.0, 1.0, 3) The axis argument can be used to only squeeze some axes (if applicable) >>> p.squeeze(axis=0) uniform_partition([ 0., -1.], [ 1., 2.], (3, 1)) Notes ----- This is not equivalent to ``RectPartiton(self.set.squeeze(), self.grid.squeeze())`` since the definition of degenerate is different in sets and grids. This method follow the definition used in grids, that is, an axis is degenerate if it has only one element. See Also -------- odl.discr.grid.RectGrid.squeeze odl.set.domain.IntervalProd.squeeze """ if axis is None: rng = range(self.ndim) else: rng = list(np.atleast_1d(np.arange(self.ndim)[axis])) new_indcs = [i for i in range(self.ndim) if i not in rng or self.grid.nondegen_byaxis[i]] newset = self.set[new_indcs] return RectPartition(newset, self.grid.squeeze(axis))
Return the partition with removed degenerate (length 1) dimensions. Parameters ---------- axis : None or index expression, optional Subset of the axes to squeeze. Default: All axes. Returns ------- squeezed : `RectPartition` Squeezed partition. Examples -------- >>> p = odl.uniform_partition([0, -1], [1, 2], (3, 1)) >>> p.squeeze() uniform_partition(0.0, 1.0, 3) The axis argument can be used to only squeeze some axes (if applicable) >>> p.squeeze(axis=0) uniform_partition([ 0., -1.], [ 1., 2.], (3, 1)) Notes ----- This is not equivalent to ``RectPartiton(self.set.squeeze(), self.grid.squeeze())`` since the definition of degenerate is different in sets and grids. This method follow the definition used in grids, that is, an axis is degenerate if it has only one element. See Also -------- odl.discr.grid.RectGrid.squeeze odl.set.domain.IntervalProd.squeeze
Below is the instruction that describes the task: ### Input: Return the partition with removed degenerate (length 1) dimensions.

        Parameters
        ----------
        axis : None or index expression, optional
            Subset of the axes to squeeze. Default: All axes.

        Returns
        -------
        squeezed : `RectPartition`
            Squeezed partition.

        Examples
        --------
        >>> p = odl.uniform_partition([0, -1], [1, 2], (3, 1))
        >>> p.squeeze()
        uniform_partition(0.0, 1.0, 3)

        The axis argument can be used to only squeeze some axes (if applicable)

        >>> p.squeeze(axis=0)
        uniform_partition([ 0., -1.], [ 1.,  2.], (3, 1))

        Notes
        -----
        This is not equivalent to
        ``RectPartiton(self.set.squeeze(), self.grid.squeeze())`` since the
        definition of degenerate is different in sets and grids. This method
        follow the definition used in grids, that is, an axis is degenerate
        if it has only one element.

        See Also
        --------
        odl.discr.grid.RectGrid.squeeze
        odl.set.domain.IntervalProd.squeeze ### Response: def squeeze(self, axis=None):
        """Return the partition with removed degenerate (length 1) dimensions.

        Parameters
        ----------
        axis : None or index expression, optional
            Subset of the axes to squeeze. Default: All axes.

        Returns
        -------
        squeezed : `RectPartition`
            Squeezed partition.

        Examples
        --------
        >>> p = odl.uniform_partition([0, -1], [1, 2], (3, 1))
        >>> p.squeeze()
        uniform_partition(0.0, 1.0, 3)

        The axis argument can be used to only squeeze some axes (if applicable)

        >>> p.squeeze(axis=0)
        uniform_partition([ 0., -1.], [ 1.,  2.], (3, 1))

        Notes
        -----
        This is not equivalent to
        ``RectPartiton(self.set.squeeze(), self.grid.squeeze())`` since the
        definition of degenerate is different in sets and grids. This method
        follow the definition used in grids, that is, an axis is degenerate
        if it has only one element.
See Also -------- odl.discr.grid.RectGrid.squeeze odl.set.domain.IntervalProd.squeeze """ if axis is None: rng = range(self.ndim) else: rng = list(np.atleast_1d(np.arange(self.ndim)[axis])) new_indcs = [i for i in range(self.ndim) if i not in rng or self.grid.nondegen_byaxis[i]] newset = self.set[new_indcs] return RectPartition(newset, self.grid.squeeze(axis))
def MakePmfFromList(t, name=''): """Makes a PMF from an unsorted sequence of values. Args: t: sequence of numbers name: string name for this PMF Returns: Pmf object """ hist = MakeHistFromList(t) d = hist.GetDict() pmf = Pmf(d, name) pmf.Normalize() return pmf
Makes a PMF from an unsorted sequence of values. Args: t: sequence of numbers name: string name for this PMF Returns: Pmf object
Below is the the instruction that describes the task: ### Input: Makes a PMF from an unsorted sequence of values. Args: t: sequence of numbers name: string name for this PMF Returns: Pmf object ### Response: def MakePmfFromList(t, name=''): """Makes a PMF from an unsorted sequence of values. Args: t: sequence of numbers name: string name for this PMF Returns: Pmf object """ hist = MakeHistFromList(t) d = hist.GetDict() pmf = Pmf(d, name) pmf.Normalize() return pmf
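The `MakePmfFromList` record above is just "count, then normalize": build a histogram of the values, then divide every count by the total. A minimal dictionary-based sketch of the same idea (a plain dict standing in for the original `Pmf`/`Hist` classes):

```python
from collections import Counter

def make_pmf_from_list(values):
    """Build a probability mass function (value -> probability) dict.

    Counter gives the histogram of raw frequencies; dividing by the
    total count is the Normalize step, so the result sums to 1.
    """
    counts = Counter(values)          # histogram of raw frequencies
    total = sum(counts.values())
    return {v: n / total for v, n in counts.items()}
```

For example, `make_pmf_from_list([1, 1, 2, 4])` assigns probability 0.5 to the value 1 and 0.25 to each of 2 and 4.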
def _split_by_regions(dirname, out_ext, in_key): """Split a BAM file data analysis into chromosomal regions. """ def _do_work(data): # XXX Need to move retrieval of regions into preparation to avoid # need for files when running in non-shared filesystems regions = _get_parallel_regions(data) def _sort_by_size(region): _, start, end = region return end - start regions.sort(key=_sort_by_size, reverse=True) bam_file = data[in_key] if bam_file is None: return None, [] part_info = [] base_out = os.path.splitext(os.path.basename(bam_file))[0] nowork = [["nochrom"], ["noanalysis", data["config"]["algorithm"]["non_callable_regions"]]] for region in regions + nowork: out_dir = os.path.join(data["dirs"]["work"], dirname, data["name"][-1], region[0]) region_outfile = os.path.join(out_dir, "%s-%s%s" % (base_out, to_safestr(region), out_ext)) part_info.append((region, region_outfile)) out_file = os.path.join(data["dirs"]["work"], dirname, data["name"][-1], "%s%s" % (base_out, out_ext)) return out_file, part_info return _do_work
Split a BAM file data analysis into chromosomal regions.
Below is the instruction that describes the task:
### Input:
Split a BAM file data analysis into chromosomal regions.
### Response:
def _split_by_regions(dirname, out_ext, in_key):
    """Split a BAM file data analysis into chromosomal regions.
    """
    def _do_work(data):
        # XXX Need to move retrieval of regions into preparation to avoid
        # need for files when running in non-shared filesystems
        regions = _get_parallel_regions(data)

        def _sort_by_size(region):
            _, start, end = region
            return end - start
        regions.sort(key=_sort_by_size, reverse=True)
        bam_file = data[in_key]
        if bam_file is None:
            return None, []
        part_info = []
        base_out = os.path.splitext(os.path.basename(bam_file))[0]
        nowork = [["nochrom"], ["noanalysis", data["config"]["algorithm"]["non_callable_regions"]]]
        for region in regions + nowork:
            out_dir = os.path.join(data["dirs"]["work"], dirname, data["name"][-1], region[0])
            region_outfile = os.path.join(out_dir, "%s-%s%s" %
                                          (base_out, to_safestr(region), out_ext))
            part_info.append((region, region_outfile))
        out_file = os.path.join(data["dirs"]["work"], dirname, data["name"][-1],
                                "%s%s" % (base_out, out_ext))
        return out_file, part_info
    return _do_work
def ji_windows(self, ij_win):  # TODO: 'ij_win' is neither an intuitive value to pass nor a good name yet
    """For a given specific window, i.e. an element of :attr:`windows`, get the windows of all resolutions.

    Arguments:
        ij_win {int} -- The index specifying the window for which to return the resolution-windows.
    """
    ji_windows = {}
    transform_src = self._layer_meta[self._res_indices[self._windows_res][0]]["transform"]
    for res in self._res_indices:
        transform_dst = self._layer_meta[self._res_indices[res][0]]["transform"]
        ji_windows[res] = window_from_window(window_src=self.windows[ij_win],
                                             transform_src=transform_src,
                                             transform_dst=transform_dst)
    return ji_windows
For a given specific window, i.e. an element of :attr:`windows`, get the windows of all resolutions. Arguments: ij_win {int} -- The index specifying the window for which to return the resolution-windows.
Below is the instruction that describes the task:
### Input:
For a given specific window, i.e. an element of :attr:`windows`, get the windows of all resolutions.

Arguments:
ij_win {int} -- The index specifying the window for which to return the resolution-windows.
### Response:
def ji_windows(self, ij_win):  # TODO: 'ij_win' is neither an intuitive value to pass nor a good name yet
    """For a given specific window, i.e. an element of :attr:`windows`, get the windows of all resolutions.

    Arguments:
        ij_win {int} -- The index specifying the window for which to return the resolution-windows.
    """
    ji_windows = {}
    transform_src = self._layer_meta[self._res_indices[self._windows_res][0]]["transform"]
    for res in self._res_indices:
        transform_dst = self._layer_meta[self._res_indices[res][0]]["transform"]
        ji_windows[res] = window_from_window(window_src=self.windows[ij_win],
                                             transform_src=transform_src,
                                             transform_dst=transform_dst)
    return ji_windows
def get_file_link(node, use_metadata=False, include_size=False, include_extension=False, include_icon=False, href=None, extra_class='', extra=''):
    """
    Returns a formatted HTML link tag to the FileNode's file, optionally
    including some meta information about the file.
    """
    link_text = None
    if use_metadata:
        link_text = node.get_metadata_display()
    if not link_text:
        link_text = node.__unicode__()
    if node.node_type != media_types.FOLDER:
        if include_extension:
            if extra != '':
                extra += ' '
            extra += '<span class="file-extension">%s</span>' % node.extension.upper()
        if include_size:
            if extra != '':
                extra += ', '
            extra += '<span class="file-size">%s</span>' % filesizeformat(node.size)
        if extra:
            extra = ' <span class="details">(%s)</span>' % extra
        link_class = 'file %s' % node.extension
    else:
        link_class = 'folder'
    if extra_class:
        link_class = '%s %s' % (link_class, extra_class)
    if node.node_type != media_types.FOLDER and not href:
        href = node.file.url
    icon = ''
    if include_icon:
        icon_file = node.get_icon_file()
        if icon_file:
            icon = '<span class="icon"><img src="%s" alt="%s" /></span>' % (
                icon_file.url, node.alt)
    if href:
        link = u'<a class="%s" href="%s">%s%s</a>%s' % (
            link_class, href, icon, link_text, extra)
    else:
        link = u'<span class="%s">%s%s</span>%s' % (
            link_class, icon, link_text, extra)
    return force_unicode(mark_safe(link))
Returns a formatted HTML link tag to the FileNode's file, optionally including some meta information about the file.
Below is the instruction that describes the task:
### Input:
Returns a formatted HTML link tag to the FileNode's file, optionally
including some meta information about the file.
### Response:
def get_file_link(node, use_metadata=False, include_size=False, include_extension=False, include_icon=False, href=None, extra_class='', extra=''):
    """
    Returns a formatted HTML link tag to the FileNode's file, optionally
    including some meta information about the file.
    """
    link_text = None
    if use_metadata:
        link_text = node.get_metadata_display()
    if not link_text:
        link_text = node.__unicode__()
    if node.node_type != media_types.FOLDER:
        if include_extension:
            if extra != '':
                extra += ' '
            extra += '<span class="file-extension">%s</span>' % node.extension.upper()
        if include_size:
            if extra != '':
                extra += ', '
            extra += '<span class="file-size">%s</span>' % filesizeformat(node.size)
        if extra:
            extra = ' <span class="details">(%s)</span>' % extra
        link_class = 'file %s' % node.extension
    else:
        link_class = 'folder'
    if extra_class:
        link_class = '%s %s' % (link_class, extra_class)
    if node.node_type != media_types.FOLDER and not href:
        href = node.file.url
    icon = ''
    if include_icon:
        icon_file = node.get_icon_file()
        if icon_file:
            icon = '<span class="icon"><img src="%s" alt="%s" /></span>' % (
                icon_file.url, node.alt)
    if href:
        link = u'<a class="%s" href="%s">%s%s</a>%s' % (
            link_class, href, icon, link_text, extra)
    else:
        link = u'<span class="%s">%s%s</span>%s' % (
            link_class, icon, link_text, extra)
    return force_unicode(mark_safe(link))
def _update_data(self): """Update altfunc""" func = self.owner.formula.func codeobj = func.__code__ name = func.__name__ # self.cells.name # func.__name__ namespace_impl = self.owner._namespace_impl.get_updated() namespace = namespace_impl.interfaces selfnode = get_node(self.owner, None, None) for name in self.owner.formula.srcnames: if name in namespace_impl and isinstance( namespace_impl[name], ReferenceImpl ): refnode = get_node(namespace_impl[name], None, None) self.owner.model.lexdep.add_path([selfnode, refnode]) closure = func.__closure__ # None normally. if closure is not None: # pytest fails without this. closure = create_closure(self.owner.interface) self.altfunc = FunctionType( codeobj, namespace, name=name, closure=closure )
Update altfunc
Below is the instruction that describes the task:
### Input:
Update altfunc
### Response:
def _update_data(self):
    """Update altfunc"""
    func = self.owner.formula.func
    codeobj = func.__code__
    name = func.__name__  # self.cells.name # func.__name__
    namespace_impl = self.owner._namespace_impl.get_updated()
    namespace = namespace_impl.interfaces
    selfnode = get_node(self.owner, None, None)
    for name in self.owner.formula.srcnames:
        if name in namespace_impl and isinstance(
            namespace_impl[name], ReferenceImpl
        ):
            refnode = get_node(namespace_impl[name], None, None)
            self.owner.model.lexdep.add_path([selfnode, refnode])
    closure = func.__closure__  # None normally.
    if closure is not None:  # pytest fails without this.
        closure = create_closure(self.owner.interface)

    self.altfunc = FunctionType(
        codeobj, namespace, name=name, closure=closure
    )
def delete(self): """ Deletes the object from the database """ self.__dmlquery__(self.__class__, self, batch=self._batch, timestamp=self._timestamp, consistency=self.__consistency__, timeout=self._timeout, conditional=self._conditional, if_exists=self._if_exists).delete()
Deletes the object from the database
Below is the instruction that describes the task:
### Input:
Deletes the object from the database
### Response:
def delete(self):
    """
    Deletes the object from the database
    """
    self.__dmlquery__(self.__class__, self,
                      batch=self._batch,
                      timestamp=self._timestamp,
                      consistency=self.__consistency__,
                      timeout=self._timeout,
                      conditional=self._conditional,
                      if_exists=self._if_exists).delete()
def update_one(self, mongo_collection, filter_doc, update_doc, mongo_db=None, **kwargs): """ Updates a single document in a mongo collection. https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.update_one :param mongo_collection: The name of the collection to update. :type mongo_collection: str :param filter_doc: A query that matches the documents to update. :type filter_doc: dict :param update_doc: The modifications to apply. :type update_doc: dict :param mongo_db: The name of the database to use. Can be omitted; then the database from the connection string is used. :type mongo_db: str """ collection = self.get_collection(mongo_collection, mongo_db=mongo_db) return collection.update_one(filter_doc, update_doc, **kwargs)
Updates a single document in a mongo collection. https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.update_one :param mongo_collection: The name of the collection to update. :type mongo_collection: str :param filter_doc: A query that matches the documents to update. :type filter_doc: dict :param update_doc: The modifications to apply. :type update_doc: dict :param mongo_db: The name of the database to use. Can be omitted; then the database from the connection string is used. :type mongo_db: str
Below is the instruction that describes the task:
### Input:
Updates a single document in a mongo collection.
https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.update_one

:param mongo_collection: The name of the collection to update.
:type mongo_collection: str
:param filter_doc: A query that matches the documents to update.
:type filter_doc: dict
:param update_doc: The modifications to apply.
:type update_doc: dict
:param mongo_db: The name of the database to use.
Can be omitted; then the database from the connection string is used.
:type mongo_db: str
### Response:
def update_one(self, mongo_collection, filter_doc, update_doc,
               mongo_db=None, **kwargs):
    """
    Updates a single document in a mongo collection.
    https://api.mongodb.com/python/current/api/pymongo/collection.html#pymongo.collection.Collection.update_one

    :param mongo_collection: The name of the collection to update.
    :type mongo_collection: str
    :param filter_doc: A query that matches the documents to update.
    :type filter_doc: dict
    :param update_doc: The modifications to apply.
    :type update_doc: dict
    :param mongo_db: The name of the database to use.
        Can be omitted; then the database from the connection string is used.
    :type mongo_db: str
    """
    collection = self.get_collection(mongo_collection, mongo_db=mongo_db)

    return collection.update_one(filter_doc, update_doc, **kwargs)
def parse_etag_header(header): """Parse a header containing one or more ETags or a wildcard ('*'). Returns the string '*' or a list of ETags as (weak, etag) tuples. `weak` is the prefix designating a weak ETag, or the empty string. `etag` is the ETag (including quotes) with the weak prefix stripped off. Returns an empty list if the header could not be parsed. Example: >>> parse_etag_header('*') '*' >>> parse_etag_header('"foo" ') [('', '"foo"')] >>> parse_etag_header('"foo", w/"bar", W/"baz"') [('', '"foo"'), ('w/', '"bar"'), ('W/', '"baz"')] >>> parse_etag_header('invalid') [] """ m = etag_header_re.match(header.strip()) if not m: return [] if m.group(1): # star return m.group(1) else: # list of entity tags return etag_re.findall(header)
Parse a header containing one or more ETags or a wildcard ('*'). Returns the string '*' or a list of ETags as (weak, etag) tuples. `weak` is the prefix designating a weak ETag, or the empty string. `etag` is the ETag (including quotes) with the weak prefix stripped off. Returns an empty list if the header could not be parsed. Example: >>> parse_etag_header('*') '*' >>> parse_etag_header('"foo" ') [('', '"foo"')] >>> parse_etag_header('"foo", w/"bar", W/"baz"') [('', '"foo"'), ('w/', '"bar"'), ('W/', '"baz"')] >>> parse_etag_header('invalid') []
Below is the instruction that describes the task:
### Input:
Parse a header containing one or more ETags or a wildcard ('*').

Returns the string '*' or a list of ETags as (weak, etag) tuples.
`weak` is the prefix designating a weak ETag, or the empty string.
`etag` is the ETag (including quotes) with the weak prefix stripped off.
Returns an empty list if the header could not be parsed.

Example:
>>> parse_etag_header('*')
'*'
>>> parse_etag_header('"foo" ')
[('', '"foo"')]
>>> parse_etag_header('"foo", w/"bar", W/"baz"')
[('', '"foo"'), ('w/', '"bar"'), ('W/', '"baz"')]
>>> parse_etag_header('invalid')
[]
### Response:
def parse_etag_header(header):
    """Parse a header containing one or more ETags or a wildcard ('*').

    Returns the string '*' or a list of ETags as (weak, etag) tuples.
    `weak` is the prefix designating a weak ETag, or the empty string.
    `etag` is the ETag (including quotes) with the weak prefix stripped off.
    Returns an empty list if the header could not be parsed.

    Example:
    >>> parse_etag_header('*')
    '*'
    >>> parse_etag_header('"foo" ')
    [('', '"foo"')]
    >>> parse_etag_header('"foo", w/"bar", W/"baz"')
    [('', '"foo"'), ('w/', '"bar"'), ('W/', '"baz"')]
    >>> parse_etag_header('invalid')
    []
    """
    m = etag_header_re.match(header.strip())
    if not m:
        return []
    if m.group(1):  # star
        return m.group(1)
    else:  # list of entity tags
        return etag_re.findall(header)
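The function above depends on two precompiled patterns, `etag_header_re` and `etag_re`, that are not shown in this excerpt. The sketch below is a self-contained reimplementation of the same contract with assumed regexes; the patterns and the `_sketch` name are illustrative, not the originals:

```python
import re

# assumed patterns -- the original module's etag_header_re / etag_re are not shown
_ETAG_LIST_RE = re.compile(r'(?:[Ww]/)?"[^"]*"(?:\s*,\s*(?:[Ww]/)?"[^"]*")*')
_ETAG_RE = re.compile(r'([Ww]/|)("[^"]*")')

def parse_etag_header_sketch(header):
    # '*' matches everything; otherwise the header must be a comma-separated
    # list of (optionally weak) quoted entity tags, or we return [].
    header = header.strip()
    if header == '*':
        return '*'
    if not _ETAG_LIST_RE.fullmatch(header):
        return []
    # findall yields (weak_prefix, quoted_etag) tuples; the empty alternative
    # in the first group makes the prefix '' for strong ETags.
    return _ETAG_RE.findall(header)
```

This reproduces the doctest behavior shown above, e.g. `parse_etag_header_sketch('"foo", w/"bar"')` gives `[('', '"foo"'), ('w/', '"bar"')]`.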
def echo_json_response(response, pretty, limit=None, ndjson=False):
    '''Wrapper to echo JSON with optional 'pretty' printing. If pretty is not
    provided explicitly and stdout is a terminal (and not redirected or piped),
    the default will be to indent and sort keys'''
    indent = None
    sort_keys = False
    nl = False
    if not ndjson and (pretty or (pretty is None and sys.stdout.isatty())):
        indent = 2
        sort_keys = True
        nl = True
    try:
        if ndjson and hasattr(response, 'items_iter'):
            items = response.items_iter(limit)
            for item in items:
                click.echo(json.dumps(item))
        elif not ndjson and hasattr(response, 'json_encode'):
            response.json_encode(click.get_text_stream('stdout'), limit=limit,
                                 indent=indent, sort_keys=sort_keys)
        else:
            res = response.get_raw()
            res = json.dumps(json.loads(res), indent=indent,
                             sort_keys=sort_keys)
            click.echo(res)
        if nl:
            click.echo()
    except IOError as ioe:
        # hide scary looking broken pipe stack traces
        raise click.ClickException(str(ioe))
Wrapper to echo JSON with optional 'pretty' printing. If pretty is not
provided explicitly and stdout is a terminal (and not redirected or piped),
the default will be to indent and sort keys
Below is the instruction that describes the task:
### Input:
Wrapper to echo JSON with optional 'pretty' printing. If pretty is not
provided explicitly and stdout is a terminal (and not redirected or piped),
the default will be to indent and sort keys
### Response:
def echo_json_response(response, pretty, limit=None, ndjson=False):
    '''Wrapper to echo JSON with optional 'pretty' printing. If pretty is not
    provided explicitly and stdout is a terminal (and not redirected or piped),
    the default will be to indent and sort keys'''
    indent = None
    sort_keys = False
    nl = False
    if not ndjson and (pretty or (pretty is None and sys.stdout.isatty())):
        indent = 2
        sort_keys = True
        nl = True
    try:
        if ndjson and hasattr(response, 'items_iter'):
            items = response.items_iter(limit)
            for item in items:
                click.echo(json.dumps(item))
        elif not ndjson and hasattr(response, 'json_encode'):
            response.json_encode(click.get_text_stream('stdout'), limit=limit,
                                 indent=indent, sort_keys=sort_keys)
        else:
            res = response.get_raw()
            res = json.dumps(json.loads(res), indent=indent,
                             sort_keys=sort_keys)
            click.echo(res)
        if nl:
            click.echo()
    except IOError as ioe:
        # hide scary looking broken pipe stack traces
        raise click.ClickException(str(ioe))
def attention_bias_batch(batch_coordinates_q,
                         batch_coordinates_k=None,
                         condition_fn=None):
  """Generate a mask to prevent batches from attending to each other.

  Args:
    batch_coordinates_q: Int-like Tensor of shape [length_q, 1] containing the
      coordinates of the batches
    batch_coordinates_k: Int-like Tensor of shape [length_k, 1] containing the
      coordinates of the batches. If None, do self-attention.
    condition_fn: Callable defining the attention mask.

  Returns:
    Float-like Tensor of shape [length_q, length_k] containing either 0 or
    -infinity (-1e9).
  """
  if batch_coordinates_k is None:
    batch_coordinates_k = batch_coordinates_q

  # Convert to float first because of b/25387198.
  def to_float(bc):
    bc = tf.squeeze(bc, 1)
    bc = tf.to_float(bc)
    return bc

  # Broadcast to create [length_q, length_k] mask.
  bc_v = tf.expand_dims(to_float(batch_coordinates_q), 1)
  bc_h = tf.expand_dims(to_float(batch_coordinates_k), 0)
  bias_batch = bc_h - bc_v
  bias_batch = condition_fn(bias_batch)
  bias_batch *= -1e9
  return bias_batch
Generate a mask to prevent batches from attending to each other.

Args:
batch_coordinates_q: Int-like Tensor of shape [length_q, 1] containing the
coordinates of the batches
batch_coordinates_k: Int-like Tensor of shape [length_k, 1] containing the
coordinates of the batches. If None, do self-attention.
condition_fn: Callable defining the attention mask.

Returns:
Float-like Tensor of shape [length_q, length_k] containing either 0 or
-infinity (-1e9).
Below is the instruction that describes the task:
### Input:
Generate a mask to prevent batches from attending to each other.

Args:
batch_coordinates_q: Int-like Tensor of shape [length_q, 1] containing the
coordinates of the batches
batch_coordinates_k: Int-like Tensor of shape [length_k, 1] containing the
coordinates of the batches. If None, do self-attention.
condition_fn: Callable defining the attention mask.

Returns:
Float-like Tensor of shape [length_q, length_k] containing either 0 or
-infinity (-1e9).
### Response:
def attention_bias_batch(batch_coordinates_q,
                         batch_coordinates_k=None,
                         condition_fn=None):
  """Generate a mask to prevent batches from attending to each other.

  Args:
    batch_coordinates_q: Int-like Tensor of shape [length_q, 1] containing the
      coordinates of the batches
    batch_coordinates_k: Int-like Tensor of shape [length_k, 1] containing the
      coordinates of the batches. If None, do self-attention.
    condition_fn: Callable defining the attention mask.

  Returns:
    Float-like Tensor of shape [length_q, length_k] containing either 0 or
    -infinity (-1e9).
  """
  if batch_coordinates_k is None:
    batch_coordinates_k = batch_coordinates_q

  # Convert to float first because of b/25387198.
  def to_float(bc):
    bc = tf.squeeze(bc, 1)
    bc = tf.to_float(bc)
    return bc

  # Broadcast to create [length_q, length_k] mask.
  bc_v = tf.expand_dims(to_float(batch_coordinates_q), 1)
  bc_h = tf.expand_dims(to_float(batch_coordinates_k), 0)
  bias_batch = bc_h - bc_v
  bias_batch = condition_fn(bias_batch)
  bias_batch *= -1e9
  return bias_batch
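The broadcasting trick above is easier to see outside TensorFlow. The NumPy sketch below mirrors the same steps (column vector minus row vector, then a condition turned into a large negative bias); the function name is an illustrative assumption, not part of the original library:

```python
import numpy as np

def attention_bias_batch_np(bc_q, bc_k=None, condition_fn=None):
    # NumPy illustration of the TF function: positions where
    # condition_fn(coord_k - coord_q) is truthy receive a -1e9 bias,
    # so a downstream softmax effectively ignores them.
    if bc_k is None:
        bc_k = bc_q
    bc_v = bc_q.astype(float).reshape(-1, 1)  # [length_q, 1]
    bc_h = bc_k.astype(float).reshape(1, -1)  # [1, length_k]
    bias = condition_fn(bc_h - bc_v).astype(float)
    return bias * -1e9
```

With `condition_fn=lambda d: d != 0`, queries and keys from different batches (different coordinates) get the -1e9 penalty while same-batch pairs stay at 0.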
def compile_obj(self, obj): """ generate a context based on the given obj :param obj: an instance of the model """ res = {} for column in self.columns: if isinstance(column['__col__'], ColumnProperty): value = self._get_column_value(obj, column) elif isinstance(column['__col__'], RelationshipProperty): value = self._get_relationship_value(obj, column) res[column['name']] = value return res
generate a context based on the given obj :param obj: an instance of the model
Below is the instruction that describes the task:
### Input:
generate a context based on the given obj

:param obj: an instance of the model
### Response:
def compile_obj(self, obj):
    """
    generate a context based on the given obj

    :param obj: an instance of the model
    """
    res = {}
    for column in self.columns:
        if isinstance(column['__col__'], ColumnProperty):
            value = self._get_column_value(obj, column)

        elif isinstance(column['__col__'], RelationshipProperty):
            value = self._get_relationship_value(obj, column)

        res[column['name']] = value

    return res
def preupdate(self, force_refresh=True):
    """Return a dict with all current options prior to submitting the request."""
    ddata = MANUAL_OP_DATA.copy()

    # force update to make sure status is accurate
    if force_refresh:
        self.update()

    # select current controller and faucet
    ddata['select_controller'] = \
        self._parent.controllers.index(self._controller)
    ddata['select_faucet'] = \
        self._controller.faucets.index(self._faucet)

    # check if zone is scheduled automatically (zone1_program_toggle)
    # only add zoneX_program_toggle to ddata when needed,
    # otherwise the field will be always on
    for zone in self._faucet.zones:
        attr = 'zone{}_program_toggle'.format(zone.id)
        if zone.auto_watering:
            ddata[attr] = 'on'

    # check if zone is currently watering manually (zone1_select_manual_mode)
    for zone in self._faucet.zones:
        attr = 'zone{}_select_manual_mode'.format(zone.id)
        if zone.watering_time and attr in ddata.keys():
            ddata[attr] = zone.watering_time

    # check if rain delay is selected (zone0_rain_delay_select)
    for zone in self._faucet.zones:
        attr = 'zone{}_rain_delay_select'.format(zone.id - 1)
        value = zone.rain_delay
        if value and attr in ddata.keys():
            if int(value) >= 2 and int(value) <= 7:
                value = str(value) + 'days'
            else:
                value = str(value) + 'day'
            ddata[attr] = value

    return ddata
Return a dict with all current options prior to submitting the request.
Below is the instruction that describes the task:
### Input:
Return a dict with all current options prior to submitting the request.
### Response:
def preupdate(self, force_refresh=True):
    """Return a dict with all current options prior to submitting the request."""
    ddata = MANUAL_OP_DATA.copy()

    # force update to make sure status is accurate
    if force_refresh:
        self.update()

    # select current controller and faucet
    ddata['select_controller'] = \
        self._parent.controllers.index(self._controller)
    ddata['select_faucet'] = \
        self._controller.faucets.index(self._faucet)

    # check if zone is scheduled automatically (zone1_program_toggle)
    # only add zoneX_program_toggle to ddata when needed,
    # otherwise the field will be always on
    for zone in self._faucet.zones:
        attr = 'zone{}_program_toggle'.format(zone.id)
        if zone.auto_watering:
            ddata[attr] = 'on'

    # check if zone is currently watering manually (zone1_select_manual_mode)
    for zone in self._faucet.zones:
        attr = 'zone{}_select_manual_mode'.format(zone.id)
        if zone.watering_time and attr in ddata.keys():
            ddata[attr] = zone.watering_time

    # check if rain delay is selected (zone0_rain_delay_select)
    for zone in self._faucet.zones:
        attr = 'zone{}_rain_delay_select'.format(zone.id - 1)
        value = zone.rain_delay
        if value and attr in ddata.keys():
            if int(value) >= 2 and int(value) <= 7:
                value = str(value) + 'days'
            else:
                value = str(value) + 'day'
            ddata[attr] = value

    return ddata
def _parse_gene_anatomy(self, fh, limit):
    """
    Process anat_entity files with columns:
    Ensembl gene ID, gene name, anatomical entity ID,
    anatomical entity name, rank score, XRefs to BTO

    :param fh: filehandle
    :param limit: int, limit per group
    :return: None

    """
    dataframe = pd.read_csv(fh, sep='\t')
    col = self.files['anat_entity']['columns']
    if list(dataframe) != col:
        LOG.warning(
            '\nExpected headers: %s\nReceived headers: %s',
            col, list(dataframe))

    gene_groups = dataframe.sort_values(
        'rank score', ascending=False).groupby('Ensembl gene ID')

    if limit is None:
        limit = 20
    gene_groups = gene_groups.head(limit).groupby('Ensembl gene ID')

    for gene, group in gene_groups:
        for index, row in group.iterrows():
            self._add_gene_anatomy_association(
                row['Ensembl gene ID'].strip(),
                row['anatomical entity ID'].strip(),
                row['rank score']
            )
    # uberon <==> bto equivalence?
    return
Process anat_entity files with columns:
Ensembl gene ID, gene name, anatomical entity ID,
anatomical entity name, rank score, XRefs to BTO

:param fh: filehandle
:param limit: int, limit per group
:return: None
Below is the instruction that describes the task:
### Input:
Process anat_entity files with columns:
Ensembl gene ID, gene name, anatomical entity ID,
anatomical entity name, rank score, XRefs to BTO

:param fh: filehandle
:param limit: int, limit per group
:return: None
### Response:
def _parse_gene_anatomy(self, fh, limit):
    """
    Process anat_entity files with columns:
    Ensembl gene ID, gene name, anatomical entity ID,
    anatomical entity name, rank score, XRefs to BTO

    :param fh: filehandle
    :param limit: int, limit per group
    :return: None

    """
    dataframe = pd.read_csv(fh, sep='\t')
    col = self.files['anat_entity']['columns']
    if list(dataframe) != col:
        LOG.warning(
            '\nExpected headers: %s\nReceived headers: %s',
            col, list(dataframe))

    gene_groups = dataframe.sort_values(
        'rank score', ascending=False).groupby('Ensembl gene ID')

    if limit is None:
        limit = 20
    gene_groups = gene_groups.head(limit).groupby('Ensembl gene ID')

    for gene, group in gene_groups:
        for index, row in group.iterrows():
            self._add_gene_anatomy_association(
                row['Ensembl gene ID'].strip(),
                row['anatomical entity ID'].strip(),
                row['rank score']
            )
    # uberon <==> bto equivalence?
    return
def _combine_files(orig_files, base_out_file, data, fill_paths=True): """Combine multiple input files, fixing file paths if needed. We fill in full paths from files in the data dictionary if we're not using basepath (old style GEMINI). """ orig_files = [x for x in orig_files if x and utils.file_exists(x)] if not orig_files: return None out_file = "%s-combine%s" % (utils.splitext_plus(base_out_file)[0], utils.splitext_plus(orig_files[0])[-1]) with open(out_file, "w") as out_handle: for orig_file in orig_files: with open(orig_file) as in_handle: for line in in_handle: if fill_paths and line.startswith("file"): line = _fill_file_path(line, data) out_handle.write(line) out_handle.write("\n\n") return out_file
Combine multiple input files, fixing file paths if needed. We fill in full paths from files in the data dictionary if we're not using basepath (old style GEMINI).
Below is the instruction that describes the task:
### Input:
Combine multiple input files, fixing file paths if needed.

We fill in full paths from files in the data dictionary if we're
not using basepath (old style GEMINI).
### Response:
def _combine_files(orig_files, base_out_file, data, fill_paths=True):
    """Combine multiple input files, fixing file paths if needed.

    We fill in full paths from files in the data dictionary if we're
    not using basepath (old style GEMINI).
    """
    orig_files = [x for x in orig_files if x and utils.file_exists(x)]
    if not orig_files:
        return None
    out_file = "%s-combine%s" % (utils.splitext_plus(base_out_file)[0],
                                 utils.splitext_plus(orig_files[0])[-1])
    with open(out_file, "w") as out_handle:
        for orig_file in orig_files:
            with open(orig_file) as in_handle:
                for line in in_handle:
                    if fill_paths and line.startswith("file"):
                        line = _fill_file_path(line, data)
                    out_handle.write(line)
            out_handle.write("\n\n")
    return out_file
def safe_power(a, b):
    """
    Safe power of a ^ b
    :param a: Number a
    :param b: Number b
    :return: a ^ b
    """
    if abs(a) > MAX_POWER or abs(b) > MAX_POWER:
        raise ValueError('Number too high!')
    return a ** b
Safe power of a ^ b
:param a: Number a
:param b: Number b
:return: a ^ b
Below is the instruction that describes the task:
### Input:
Safe power of a ^ b
:param a: Number a
:param b: Number b
:return: a ^ b
### Response:
def safe_power(a, b):
    """
    Safe power of a ^ b
    :param a: Number a
    :param b: Number b
    :return: a ^ b
    """
    if abs(a) > MAX_POWER or abs(b) > MAX_POWER:
        raise ValueError('Number too high!')
    return a ** b
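`safe_power` depends on a module-level `MAX_POWER` constant that is not shown in this excerpt. A runnable version with an assumed bound looks like this (the value 1000 is illustrative, not the original):

```python
MAX_POWER = 1000  # assumed bound; the original module-level constant is not shown

def safe_power(a, b):
    """Guarded exponentiation: refuse operands whose magnitude exceeds MAX_POWER."""
    if abs(a) > MAX_POWER or abs(b) > MAX_POWER:
        raise ValueError('Number too high!')
    return a ** b
```

The guard keeps `a ** b` from being attempted on huge operands, so `safe_power(2, 10)` returns 1024 while `safe_power(10**6, 2)` raises `ValueError`.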
def fit_transform(self, data):
    """
    Fits the transformer to the SFrame `data` and returns the transformed
    SFrame.

    Parameters
    ----------
    data : SFrame
        The data to be transformed.

    Returns
    -------
    out: SFrame
        A transformed SFrame.

    See Also
    --------
    fit, transform
    """
    self._setup_from_data(data)
    ret = self.transform_chain.fit_transform(data)
    self.__proxy__.update({"fitted" : True})
    return ret
Fits the transformer to the SFrame `data` and returns the transformed
SFrame.

Parameters
----------
data : SFrame
    The data to be transformed.

Returns
-------
out: SFrame
    A transformed SFrame.

See Also
--------
fit, transform
Below is the instruction that describes the task:
### Input:
Fits the transformer to the SFrame `data` and returns the transformed
SFrame.

Parameters
----------
data : SFrame
    The data to be transformed.

Returns
-------
out: SFrame
    A transformed SFrame.

See Also
--------
fit, transform
### Response:
def fit_transform(self, data):
    """
    Fits the transformer to the SFrame `data` and returns the transformed
    SFrame.

    Parameters
    ----------
    data : SFrame
        The data to be transformed.

    Returns
    -------
    out: SFrame
        A transformed SFrame.

    See Also
    --------
    fit, transform
    """
    self._setup_from_data(data)
    ret = self.transform_chain.fit_transform(data)
    self.__proxy__.update({"fitted" : True})
    return ret
def _get_hanging_wall_coeffs_rrup(self, dists): """ Returns the hanging wall rrup term defined in equation 13 """ fhngrrup = np.ones(len(dists.rrup)) idx = dists.rrup > 0.0 fhngrrup[idx] = (dists.rrup[idx] - dists.rjb[idx]) / dists.rrup[idx] return fhngrrup
Returns the hanging wall rrup term defined in equation 13
Below is the instruction that describes the task:
### Input:
Returns the hanging wall rrup term defined in equation 13
### Response:
def _get_hanging_wall_coeffs_rrup(self, dists):
    """
    Returns the hanging wall rrup term defined in equation 13
    """
    fhngrrup = np.ones(len(dists.rrup))
    idx = dists.rrup > 0.0
    fhngrrup[idx] = (dists.rrup[idx] - dists.rjb[idx]) / dists.rrup[idx]
    return fhngrrup
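The method above reads `rrup` and `rjb` off a `dists` context object; the same computation can be sketched as a standalone function taking the two arrays directly (the function name and signature below are illustrative, not the original API):

```python
import numpy as np

def hanging_wall_rrup_term(rrup, rjb):
    # Ones where rrup == 0 (to avoid division by zero), and
    # (rrup - rjb) / rrup elsewhere, as in equation 13.
    fhngrrup = np.ones(len(rrup))
    idx = rrup > 0.0
    fhngrrup[idx] = (rrup[idx] - rjb[idx]) / rrup[idx]
    return fhngrrup
```

For example, with `rrup = [0, 10]` and `rjb = [0, 5]` the term evaluates to `[1.0, 0.5]`.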
def venn3_circles(subsets, normalize_to=1.0, alpha=1.0, color='black', linestyle='solid', linewidth=2.0, ax=None, **kwargs): ''' Plots only the three circles for the corresponding Venn diagram. Useful for debugging or enhancing the basic venn diagram. parameters ``subsets``, ``normalize_to`` and ``ax`` are the same as in venn3() kwargs are passed as-is to matplotlib.patches.Circle. returns a list of three Circle patches. >>> plot = venn3_circles({'001': 10, '100': 20, '010': 21, '110': 13, '011': 14}) >>> plot = venn3_circles([set(['A','B','C']), set(['A','D','E','F']), set(['D','G','H'])]) ''' # Prepare parameters if isinstance(subsets, dict): subsets = [subsets.get(t, 0) for t in ['100', '010', '110', '001', '101', '011', '111']] elif len(subsets) == 3: subsets = compute_venn3_subsets(*subsets) areas = compute_venn3_areas(subsets, normalize_to) centers, radii = solve_venn3_circles(areas) if ax is None: ax = gca() prepare_venn_axes(ax, centers, radii) result = [] for (c, r) in zip(centers, radii): circle = Circle(c, r, alpha=alpha, edgecolor=color, facecolor='none', linestyle=linestyle, linewidth=linewidth, **kwargs) ax.add_patch(circle) result.append(circle) return result
Plots only the three circles for the corresponding Venn diagram. Useful for debugging or enhancing the basic venn diagram. parameters ``subsets``, ``normalize_to`` and ``ax`` are the same as in venn3() kwargs are passed as-is to matplotlib.patches.Circle. returns a list of three Circle patches. >>> plot = venn3_circles({'001': 10, '100': 20, '010': 21, '110': 13, '011': 14}) >>> plot = venn3_circles([set(['A','B','C']), set(['A','D','E','F']), set(['D','G','H'])])
Below is the instruction that describes the task:
### Input:
Plots only the three circles for the corresponding Venn diagram.
Useful for debugging or enhancing the basic venn diagram.
parameters ``subsets``, ``normalize_to`` and ``ax`` are the same as in venn3()
kwargs are passed as-is to matplotlib.patches.Circle.
returns a list of three Circle patches.

>>> plot = venn3_circles({'001': 10, '100': 20, '010': 21, '110': 13, '011': 14})
>>> plot = venn3_circles([set(['A','B','C']), set(['A','D','E','F']), set(['D','G','H'])])
### Response:
def venn3_circles(subsets, normalize_to=1.0, alpha=1.0, color='black', linestyle='solid', linewidth=2.0, ax=None, **kwargs):
    '''
    Plots only the three circles for the corresponding Venn diagram.
    Useful for debugging or enhancing the basic venn diagram.
    parameters ``subsets``, ``normalize_to`` and ``ax`` are the same as in venn3()
    kwargs are passed as-is to matplotlib.patches.Circle.
    returns a list of three Circle patches.

    >>> plot = venn3_circles({'001': 10, '100': 20, '010': 21, '110': 13, '011': 14})
    >>> plot = venn3_circles([set(['A','B','C']), set(['A','D','E','F']), set(['D','G','H'])])
    '''
    # Prepare parameters
    if isinstance(subsets, dict):
        subsets = [subsets.get(t, 0) for t in ['100', '010', '110', '001', '101', '011', '111']]
    elif len(subsets) == 3:
        subsets = compute_venn3_subsets(*subsets)
    areas = compute_venn3_areas(subsets, normalize_to)
    centers, radii = solve_venn3_circles(areas)

    if ax is None:
        ax = gca()
    prepare_venn_axes(ax, centers, radii)
    result = []
    for (c, r) in zip(centers, radii):
        circle = Circle(c, r, alpha=alpha, edgecolor=color, facecolor='none', linestyle=linestyle, linewidth=linewidth, **kwargs)
        ax.add_patch(circle)
        result.append(circle)
    return result
def solidity_names(code): # pylint: disable=too-many-branches """ Return the library and contract names in order of appearance. """ names = [] in_string = None backslash = False comment = None # "parse" the code by hand to handle the corner cases: # - the contract or library can be inside a comment or string # - multiline comments # - the contract and library keywords need not be at the start of the line for pos, char in enumerate(code): if in_string: if not backslash and in_string == char: in_string = None backslash = False if char == '\\': # pylint: disable=simplifiable-if-statement backslash = True else: backslash = False elif comment == '//': if char in ('\n', '\r'): comment = None elif comment == '/*': if char == '*' and code[pos + 1] == '/': comment = None else: if char == '"' or char == "'": in_string = char if char == '/': if code[pos + 1] == '/': comment = '//' if code[pos + 1] == '*': comment = '/*' if char == 'c' and code[pos: pos + 8] == 'contract': result = re.match( '^contract[^_$a-zA-Z]+([_$a-zA-Z][_$a-zA-Z0-9]*)', code[pos:]) if result: names.append(('contract', result.groups()[0])) if char == 'i' and code[pos: pos + 9] == 'interface': result = re.match( '^interface[^_$a-zA-Z]+([_$a-zA-Z][_$a-zA-Z0-9]*)', code[pos:]) if result: names.append(('contract', result.groups()[0])) if char == 'l' and code[pos: pos + 7] == 'library': result = re.match( '^library[^_$a-zA-Z]+([_$a-zA-Z][_$a-zA-Z0-9]*)', code[pos:]) if result: names.append(('library', result.groups()[0])) return names
Return the library and contract names in order of appearance.
Below is the instruction that describes the task: ### Input: Return the library and contract names in order of appearance. ### Response: def solidity_names(code): # pylint: disable=too-many-branches """ Return the library and contract names in order of appearance. """ names = [] in_string = None backslash = False comment = None # "parse" the code by hand to handle the corner cases: # - the contract or library can be inside a comment or string # - multiline comments # - the contract and library keywords need not be at the start of the line for pos, char in enumerate(code): if in_string: if not backslash and in_string == char: in_string = None backslash = False if char == '\\': # pylint: disable=simplifiable-if-statement backslash = True else: backslash = False elif comment == '//': if char in ('\n', '\r'): comment = None elif comment == '/*': if char == '*' and code[pos + 1] == '/': comment = None else: if char == '"' or char == "'": in_string = char if char == '/': if code[pos + 1] == '/': comment = '//' if code[pos + 1] == '*': comment = '/*' if char == 'c' and code[pos: pos + 8] == 'contract': result = re.match( '^contract[^_$a-zA-Z]+([_$a-zA-Z][_$a-zA-Z0-9]*)', code[pos:]) if result: names.append(('contract', result.groups()[0])) if char == 'i' and code[pos: pos + 9] == 'interface': result = re.match( '^interface[^_$a-zA-Z]+([_$a-zA-Z][_$a-zA-Z0-9]*)', code[pos:]) if result: names.append(('contract', result.groups()[0])) if char == 'l' and code[pos: pos + 7] == 'library': result = re.match( '^library[^_$a-zA-Z]+([_$a-zA-Z][_$a-zA-Z0-9]*)', code[pos:]) if result: names.append(('library', result.groups()[0])) return names
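The matching pattern used in `solidity_names` can be exercised on its own. The sketch below skips the comment/string state machine and simply scans an invented Solidity fragment, so it illustrates only the regex shape, not the full parser:

```python
import re

# Invented Solidity fragment for illustration only.
source = "pragma solidity ^0.4.0;\ncontract Token {}\nlibrary SafeMath {}"

# Same pattern shape as in solidity_names(): the keyword, at least one
# non-identifier character, then the captured identifier.
names = []
for keyword, kind in (('contract', 'contract'),
                      ('interface', 'contract'),
                      ('library', 'library')):
    pattern = keyword + r'[^_$a-zA-Z]+([_$a-zA-Z][_$a-zA-Z0-9]*)'
    for match in re.finditer(pattern, source):
        names.append((kind, match.group(1)))
```

Unlike the full function, this sketch would also match keywords inside comments or strings, which is exactly the corner case the hand-written scanner above exists to handle.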
def preTranslate(self, tx, ty): """Calculate pre translation and replace current matrix.""" self.e += tx * self.a + ty * self.c self.f += tx * self.b + ty * self.d return self
Calculate pre translation and replace current matrix.
Below is the instruction that describes the task: ### Input: Calculate pre translation and replace current matrix. ### Response: def preTranslate(self, tx, ty): """Calculate pre translation and replace current matrix.""" self.e += tx * self.a + ty * self.c self.f += tx * self.b + ty * self.d return self
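The arithmetic in `preTranslate` only touches the translation components `e` and `f`, scaled through the existing transform. A minimal stand-in class checks this; the `Matrix` name and the `a/b/c/d/e/f` layout here are assumptions for illustration:

```python
# Minimal 2D affine matrix sketch; only the fields preTranslate touches
# are modelled.
class Matrix:
    def __init__(self, a, b, c, d, e, f):
        self.a, self.b, self.c, self.d, self.e, self.f = a, b, c, d, e, f

    def preTranslate(self, tx, ty):
        # Apply (tx, ty) before the existing transform: only e and f change.
        self.e += tx * self.a + ty * self.c
        self.f += tx * self.b + ty * self.d
        return self  # returning self allows chaining

# Scale-by-(2, 3) with translation (5, 7), pre-translated by (1, 1).
m = Matrix(2, 0, 0, 3, 5, 7).preTranslate(1, 1)
```

With these numbers, `e` becomes 5 + 1·2 = 7 and `f` becomes 7 + 1·3 = 10, while the linear part is untouched.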
def license(self, license_id: str, token: dict = None, prot: str = "https") -> dict: """Get details about a specific license. :param str token: API auth token :param str license_id: license UUID :param str prot: https [DEFAULT] or http (use it only for dev and tracking needs). """ # handling request parameters payload = {"lid": license_id} # search request license_url = "{}://v1.{}.isogeo.com/licenses/{}".format( prot, self.api_url, license_id ) license_req = self.get( license_url, headers=self.header, params=payload, proxies=self.proxies, verify=self.ssl, ) # checking response checker.check_api_response(license_req) # end of method return license_req.json()
Get details about a specific license. :param str token: API auth token :param str license_id: license UUID :param str prot: https [DEFAULT] or http (use it only for dev and tracking needs).
Below is the instruction that describes the task: ### Input: Get details about a specific license. :param str token: API auth token :param str license_id: license UUID :param str prot: https [DEFAULT] or http (use it only for dev and tracking needs). ### Response: def license(self, license_id: str, token: dict = None, prot: str = "https") -> dict: """Get details about a specific license. :param str token: API auth token :param str license_id: license UUID :param str prot: https [DEFAULT] or http (use it only for dev and tracking needs). """ # handling request parameters payload = {"lid": license_id} # search request license_url = "{}://v1.{}.isogeo.com/licenses/{}".format( prot, self.api_url, license_id ) license_req = self.get( license_url, headers=self.header, params=payload, proxies=self.proxies, verify=self.ssl, ) # checking response checker.check_api_response(license_req) # end of method return license_req.json() 
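The URL construction can be sketched in isolation, without the network call. The `api_url` value below is an invented placeholder, not the real Isogeo platform value:

```python
# Sketch of just the URL formatting step from license(); "api" is an
# assumed placeholder for the api_url fragment.
prot = "https"
api_url = "api"
license_id = "00000000-0000-0000-0000-000000000000"

license_url = "{}://v1.{}.isogeo.com/licenses/{}".format(
    prot, api_url, license_id
)
```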
def wait(self, timeout=None): """Wait for a change in the journal. `timeout` is the maximum time in seconds to wait, or None which means to wait forever. Returns one of NOP (no change), APPEND (new entries have been added to the end of the journal), or INVALIDATE (journal files have been added or removed). """ us = -1 if timeout is None else int(timeout * 1000000) return super(Reader, self).wait(us)
Wait for a change in the journal. `timeout` is the maximum time in seconds to wait, or None which means to wait forever. Returns one of NOP (no change), APPEND (new entries have been added to the end of the journal), or INVALIDATE (journal files have been added or removed).
Below is the instruction that describes the task: ### Input: Wait for a change in the journal. `timeout` is the maximum time in seconds to wait, or None which means to wait forever. Returns one of NOP (no change), APPEND (new entries have been added to the end of the journal), or INVALIDATE (journal files have been added or removed). ### Response: def wait(self, timeout=None): """Wait for a change in the journal. `timeout` is the maximum time in seconds to wait, or None which means to wait forever. Returns one of NOP (no change), APPEND (new entries have been added to the end of the journal), or INVALIDATE (journal files have been added or removed). """ us = -1 if timeout is None else int(timeout * 1000000) return super(Reader, self).wait(us)
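The only logic in `wait` itself is the seconds-to-microseconds conversion, with `-1` as the "block forever" sentinel passed down to the underlying journal API. Isolated as a helper:

```python
# The timeout handling from wait() in isolation: None maps to -1
# (wait forever), otherwise seconds become integer microseconds.
def to_microseconds(timeout):
    return -1 if timeout is None else int(timeout * 1000000)
```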
def _get_metadata_path_for_display(self, name): """ Return the path to the given metadata file, if available. """ try: # We need to access _get_metadata_path() on the provider object # directly rather than through this class's __getattr__() # since _get_metadata_path() is marked private. path = self._provider._get_metadata_path(name) # Handle exceptions e.g. in case the distribution's metadata # provider doesn't support _get_metadata_path(). except Exception: return '[could not detect]' return path
Return the path to the given metadata file, if available.
Below is the instruction that describes the task: ### Input: Return the path to the given metadata file, if available. ### Response: def _get_metadata_path_for_display(self, name): """ Return the path to the given metadata file, if available. """ try: # We need to access _get_metadata_path() on the provider object # directly rather than through this class's __getattr__() # since _get_metadata_path() is marked private. path = self._provider._get_metadata_path(name) # Handle exceptions e.g. in case the distribution's metadata # provider doesn't support _get_metadata_path(). except Exception: return '[could not detect]' return path
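The defensive-access pattern generalizes: probe an optional method and fall back to a placeholder when it is missing. The provider classes below are stand-ins for illustration, not pip's real metadata providers:

```python
# Stand-in provider that supports the optional method.
class ModernProvider:
    def _get_metadata_path(self, name):
        return "/site-packages/pkg.dist-info/" + name

# Stand-in provider that deliberately lacks _get_metadata_path().
class LegacyProvider:
    pass

def metadata_path_for_display(provider, name):
    try:
        return provider._get_metadata_path(name)
    except Exception:
        # Same fallback string as the method above.
        return '[could not detect]'
```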
def create_parser(): """Creates the Namespace object to be used by the rest of the tool""" parser = argparse.ArgumentParser(description=__doc__) parser.add_argument('-d', '--dictionary', nargs='?', default='dictionaries/all_en_US.dict', help='Specify a non-default word dictionary to use.') parser.add_argument('-c', '--count', help='Specify the number of words to return.', type=int) parser.add_argument('-i', '--initials', type=str, help='String of letters used to form the word list') parser.add_argument('-s', '--seed', help='Specify the seed to use for the random number ' 'generator. Using the same seed without changing ' 'other settings will give repeatable results.', type=int) parser.add_argument('-ws', '--wordstyle', nargs='?', default='lowercase', type=str, help='Specify how to style the individual words. ' 'Default is lowercase.') parser.add_argument('-sep', '--separator', nargs='?', default=' ', type=str, help='How to separate words. Default is space.') return parser.parse_args()
Creates the Namespace object to be used by the rest of the tool
Below is the instruction that describes the task: ### Input: Creates the Namespace object to be used by the rest of the tool ### Response: def create_parser(): """Creates the Namespace object to be used by the rest of the tool""" parser = argparse.ArgumentParser(description=__doc__) parser.add_argument('-d', '--dictionary', nargs='?', default='dictionaries/all_en_US.dict', help='Specify a non-default word dictionary to use.') parser.add_argument('-c', '--count', help='Specify the number of words to return.', type=int) parser.add_argument('-i', '--initials', type=str, help='String of letters used to form the word list') parser.add_argument('-s', '--seed', help='Specify the seed to use for the random number ' 'generator. Using the same seed without changing ' 'other settings will give repeatable results.', type=int) parser.add_argument('-ws', '--wordstyle', nargs='?', default='lowercase', type=str, help='Specify how to style the individual words. ' 'Default is lowercase.') parser.add_argument('-sep', '--separator', nargs='?', default=' ', type=str, help='How to separate words. Default is space.') return parser.parse_args()
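A trimmed-down version of the parser above is testable by passing an explicit argv list to `parse_args()` instead of letting it read `sys.argv`:

```python
import argparse

# Two representative options from create_parser(); the rest follow the
# same pattern.
parser = argparse.ArgumentParser(description="demo")
parser.add_argument('-c', '--count', type=int)
parser.add_argument('-sep', '--separator', nargs='?', default=' ', type=str)

# Explicit argv keeps the sketch independent of how the script was launched.
args = parser.parse_args(['-c', '3'])
```

Options not present on the command line fall back to their `default` values, so `args.separator` stays a single space here.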
def load(self, val, **kwargs): """ Load the file contents into the supplied pandas dataframe or HoloViews Table. This allows a selection to be made over the metadata before loading the file contents (may be slow). """ if Table and isinstance(val, Table): return self.load_table(val, **kwargs) elif DataFrame and isinstance(val, DataFrame): return self.load_dframe(val, **kwargs) else: raise Exception("Type %s not a DataFrame or Table." % type(val))
Load the file contents into the supplied pandas dataframe or HoloViews Table. This allows a selection to be made over the metadata before loading the file contents (may be slow).
Below is the instruction that describes the task: ### Input: Load the file contents into the supplied pandas dataframe or HoloViews Table. This allows a selection to be made over the metadata before loading the file contents (may be slow). ### Response: def load(self, val, **kwargs): """ Load the file contents into the supplied pandas dataframe or HoloViews Table. This allows a selection to be made over the metadata before loading the file contents (may be slow). """ if Table and isinstance(val, Table): return self.load_table(val, **kwargs) elif DataFrame and isinstance(val, DataFrame): return self.load_dframe(val, **kwargs) else: raise Exception("Type %s not a DataFrame or Table." % type(val))
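The `isinstance` dispatch in `load` can be sketched with stand-in types in place of the real pandas `DataFrame` and HoloViews `Table` (which may each be absent, hence the truthiness guard in the original):

```python
# Stand-in types; in the original these are optional imports that may be
# None when the library is unavailable.
class Table:
    pass

class DataFrame:
    pass

def load(val):
    if isinstance(val, Table):
        return 'load_table'   # placeholder for self.load_table(...)
    elif isinstance(val, DataFrame):
        return 'load_dframe'  # placeholder for self.load_dframe(...)
    raise TypeError("Type %s not a DataFrame or Table." % type(val))
```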
def _roi_association(self, imgs_to_decode, value='z', binarize=None): """ Computes the strength of association between activation in a mask and presence/absence of a semantic feature. This is essentially a generalization of the voxel-wise reverse inference z-score to the multivoxel case. """ imgs_to_decode = imgs_to_decode.squeeze() x = average_within_regions(self.dataset, imgs_to_decode).astype(float) y = self.dataset.feature_table.data[self.feature_names].values if binarize is not None: y[y > binarize] = 1. y[y < 1.] = 0. r = self._xy_corr(x.T, y) if value == 'r': return r elif value == 'z': f_r = np.arctanh(r) return f_r * np.sqrt(y.shape[0] - 3)
Computes the strength of association between activation in a mask and presence/absence of a semantic feature. This is essentially a generalization of the voxel-wise reverse inference z-score to the multivoxel case.
Below is the instruction that describes the task: ### Input: Computes the strength of association between activation in a mask and presence/absence of a semantic feature. This is essentially a generalization of the voxel-wise reverse inference z-score to the multivoxel case. ### Response: def _roi_association(self, imgs_to_decode, value='z', binarize=None): """ Computes the strength of association between activation in a mask and presence/absence of a semantic feature. This is essentially a generalization of the voxel-wise reverse inference z-score to the multivoxel case. """ imgs_to_decode = imgs_to_decode.squeeze() x = average_within_regions(self.dataset, imgs_to_decode).astype(float) y = self.dataset.feature_table.data[self.feature_names].values if binarize is not None: y[y > binarize] = 1. y[y < 1.] = 0. r = self._xy_corr(x.T, y) if value == 'r': return r elif value == 'z': f_r = np.arctanh(r) return f_r * np.sqrt(y.shape[0] - 3)
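The `value == 'z'` branch above is a Fisher z-transform of the correlation, scaled by sqrt(n − 3) to give an approximate z-score for n observations. In isolation, for a single scalar correlation:

```python
import math

# Fisher z-transform of a correlation r for n observations:
# z = arctanh(r) * sqrt(n - 3).
def corr_to_z(r, n):
    return math.atanh(r) * math.sqrt(n - 3)
```

The vectorized version in the method does the same thing element-wise with `np.arctanh` and `np.sqrt`.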
def is_valid_short_number(numobj): """Tests whether a short number matches a valid pattern. If a country calling code is shared by multiple regions, this returns True if it's valid in any of them. Note that this doesn't verify the number is actually in use, which is impossible to tell by just looking at the number itself. See is_valid_short_number_for_region for details. Arguments: numobj - the short number for which we want to test the validity Return whether the short number matches a valid pattern """ region_codes = region_codes_for_country_code(numobj.country_code) region_code = _region_code_for_short_number_from_region_list(numobj, region_codes) if len(region_codes) > 1 and region_code is not None: # If a matching region had been found for the phone number from among two or more regions, # then we have already implicitly verified its validity for that region. return True return is_valid_short_number_for_region(numobj, region_code)
Tests whether a short number matches a valid pattern. If a country calling code is shared by multiple regions, this returns True if it's valid in any of them. Note that this doesn't verify the number is actually in use, which is impossible to tell by just looking at the number itself. See is_valid_short_number_for_region for details. Arguments: numobj - the short number for which we want to test the validity Return whether the short number matches a valid pattern
Below is the instruction that describes the task: ### Input: Tests whether a short number matches a valid pattern. If a country calling code is shared by multiple regions, this returns True if it's valid in any of them. Note that this doesn't verify the number is actually in use, which is impossible to tell by just looking at the number itself. See is_valid_short_number_for_region for details. Arguments: numobj - the short number for which we want to test the validity Return whether the short number matches a valid pattern ### Response: def is_valid_short_number(numobj): """Tests whether a short number matches a valid pattern. If a country calling code is shared by multiple regions, this returns True if it's valid in any of them. Note that this doesn't verify the number is actually in use, which is impossible to tell by just looking at the number itself. See is_valid_short_number_for_region for details. Arguments: numobj - the short number for which we want to test the validity Return whether the short number matches a valid pattern """ region_codes = region_codes_for_country_code(numobj.country_code) region_code = _region_code_for_short_number_from_region_list(numobj, region_codes) if len(region_codes) > 1 and region_code is not None: # If a matching region had been found for the phone number from among two or more regions, # then we have already implicitly verified its validity for that region. return True return is_valid_short_number_for_region(numobj, region_code)
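The shared-country-code logic can be illustrated with a toy numbering plan. Everything below is invented for illustration and is not real libphonenumber metadata:

```python
# Toy data: country calling code -> regions sharing it, and a made-up set
# of (region, short number) pairs considered valid.
REGIONS_FOR_CODE = {1: ['US', 'CA'], 39: ['IT']}
VALID_SHORT = {('US', '911'), ('IT', '112')}

def is_valid_short_number(country_code, number):
    region_codes = REGIONS_FOR_CODE.get(country_code, [])
    # Stand-in for _region_code_for_short_number_from_region_list().
    matches = [r for r in region_codes if (r, number) in VALID_SHORT]
    region_code = matches[0] if matches else None
    if len(region_codes) > 1 and region_code is not None:
        # Matching any one of the regions sharing the code is enough.
        return True
    return (region_code, number) in VALID_SHORT
```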
def lag_avgs(self): ''' same data as expo_avgs, but with keys as the average age of the data -- assuming evenly spaced data points -- rather than decay rates ''' if not self.interval: return interval = self.interval.mean return dict([(interval/alpha, val) for alpha, val in self.get_expo_avgs().items()])
same data as expo_avgs, but with keys as the average age of the data -- assuming evenly spaced data points -- rather than decay rates
Below is the instruction that describes the task: ### Input: same data as expo_avgs, but with keys as the average age of the data -- assuming evenly spaced data points -- rather than decay rates ### Response: def lag_avgs(self): ''' same data as expo_avgs, but with keys as the average age of the data -- assuming evenly spaced data points -- rather than decay rates ''' if not self.interval: return interval = self.interval.mean return dict([(interval/alpha, val) for alpha, val in self.get_expo_avgs().items()])
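The key transformation in `lag_avgs` maps each decay rate alpha to an average data age of interval/alpha, assuming evenly spaced samples. As a standalone function with invented inputs:

```python
# Re-keying sketch: decay rate alpha -> average age interval / alpha.
def lag_avgs(expo_avgs, interval):
    return dict((interval / alpha, val) for alpha, val in expo_avgs.items())

# Invented averages keyed by decay rate, with a 2-second mean interval.
lags = lag_avgs({0.5: 10.0, 0.25: 12.0}, interval=2.0)
```

A faster decay (larger alpha) weights recent data more heavily, so it corresponds to a shorter average lag.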