| text_prompt (string, 157 to 13.1k chars) | code_prompt (string, 7 to 19.8k chars, nullable) |
|---|---|
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute_esci(stat=None, nx=None, ny=None, paired=False, eftype='cohen', confidence=.95, decimals=2):
"""Parametric confidence intervals around a Cohen d or a correlation coefficient. Parameters stat : float Original effect size. Must be either a correlation coefficient or a Cohen-type effect size (Cohen d or Hedges g). nx, ny : int Length of vector x and y. paired : bool Indicates if the effect size was estimated from a paired sample. This is only relevant for cohen or hedges effect size. eftype : string Effect size type. Must be 'r' (correlation) or 'cohen' (Cohen d or Hedges g). confidence : float Confidence level (0.95 = 95%) decimals : int Number of rounded decimals. Returns ------- ci : array Confidence interval of the effect size Notes ----- To compute the parametric confidence interval around a **Pearson r correlation** coefficient, one must first apply a Fisher's r-to-z transformation: .. math:: z = 0.5 \\cdot \\ln \\frac{1 + r}{1 - r} = \\text{arctanh}(r) and compute the standard deviation: .. math:: se = \\frac{1}{\\sqrt{n - 3}} where :math:`n` is the sample size. The lower and upper confidence intervals - *in z-space* - are then given by: .. math:: ci_z = z \\pm crit \\cdot se where :math:`crit` is the critical value of the normal distribution corresponding to the desired confidence level (e.g. 1.96 in case of a 95% confidence interval). These confidence intervals can then be easily converted back to *r-space*: .. math:: ci_r = \\frac{\\exp(2 \\cdot ci_z) - 1}{\\exp(2 \\cdot ci_z) + 1} = \\text{tanh}(ci_z) A formula for calculating the confidence interval for a **Cohen d effect size** is given by Hedges and Olkin (1985, p86). If the effect size estimate from the sample is :math:`d`, then it is normally distributed, with standard deviation: .. math:: se = \\sqrt{\\frac{n_x + n_y}{n_x \\cdot n_y} + \\frac{d^2}{2 (n_x + n_y)}} where :math:`n_x` and :math:`n_y` are the sample sizes of the two groups. In a one-sample or paired test, this becomes: .. 
math:: se = \\sqrt{\\frac{1}{n_x} + \\frac{d^2}{2 \\cdot n_x}} The lower and upper confidence intervals are then given by: .. math:: ci_d = d \\pm crit \\cdot se where :math:`crit` is the critical value of the normal distribution corresponding to the desired confidence level (e.g. 1.96 in case of a 95% confidence interval). References .. [1] https://en.wikipedia.org/wiki/Fisher_transformation .. [2] Hedges, L., and Ingram Olkin. "Statistical models for meta-analysis." (1985). .. [3] http://www.leeds.ac.uk/educol/documents/00002182.htm Examples -------- 1. Confidence interval of a Pearson correlation coefficient 0.7468280049029223 [0.27 0.93] 2. Confidence interval of a Cohen d 0.1537753990658328 [-0.68 0.99] """ |
from scipy.stats import norm
# Safety check
assert eftype.lower() in ['r', 'pearson', 'spearman', 'cohen',
'd', 'g', 'hedges']
assert stat is not None and nx is not None
assert isinstance(confidence, float)
assert 0 < confidence < 1
# Note that we are using a normal dist and not a T dist:
# from scipy.stats import t
# crit = np.abs(t.ppf((1 - confidence) / 2), dof)
crit = np.abs(norm.ppf((1 - confidence) / 2))
if eftype.lower() in ['r', 'pearson', 'spearman']:
# Standardize correlation coefficient
z = np.arctanh(stat)
se = 1 / np.sqrt(nx - 3)
ci_z = np.array([z - crit * se, z + crit * se])
# Transform back to r
ci = np.tanh(ci_z)
else:
if ny == 1 or paired:
# One sample or paired
se = np.sqrt(1 / nx + stat**2 / (2 * nx))
else:
# Two-sample test
se = np.sqrt(((nx + ny) / (nx * ny)) + (stat**2) / (2 * (nx + ny)))
ci = np.array([stat - crit * se, stat + crit * se])
return np.round(ci, decimals) |
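The Fisher r-to-z branch above can be exercised on its own. A minimal sketch with hypothetical values (the `fisher_ci` helper name is illustrative, not part of the source):

```python
import numpy as np
from scipy.stats import norm

def fisher_ci(r, n, confidence=0.95):
    """Parametric CI around a Pearson r via Fisher's r-to-z transform."""
    z = np.arctanh(r)                              # r-to-z
    se = 1 / np.sqrt(n - 3)                        # standard error in z-space
    crit = np.abs(norm.ppf((1 - confidence) / 2))  # e.g. 1.96 for 95%
    lo, hi = np.tanh([z - crit * se, z + crit * se])  # back to r-space
    return lo, hi

lo, hi = fisher_ci(0.5, 30)
print(round(lo, 3), round(hi, 3))  # 0.17 0.729
```

Note how asymmetric the interval is around r = 0.5: the z-space interval is symmetric, but the tanh back-transform compresses the upper side.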
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def convert_effsize(ef, input_type, output_type, nx=None, ny=None):
"""Conversion between effect sizes. Parameters ef : float Original effect size input_type : string Effect size type of ef. Must be 'r' or 'd'. output_type : string Desired effect size type. Available methods are :: 'none' : no effect size 'cohen' : Unbiased Cohen d 'hedges' : Hedges g 'glass': Glass delta 'eta-square' : Eta-square 'odds-ratio' : Odds ratio 'AUC' : Area Under the Curve nx, ny : int, optional Length of vector x and y. nx and ny are required to convert to Hedges g Returns ------- ef : float Desired converted effect size See Also -------- compute_effsize : Calculate effect size between two set of observations. compute_effsize_from_t : Convert a T-statistic to an effect size. Notes ----- The formula to convert **r** to **d** is given in ref [1]: .. math:: d = \\frac{2r}{\\sqrt{1 - r^2}} The formula to convert **d** to **r** is given in ref [2]: .. math:: r = \\frac{d}{\\sqrt{d^2 + \\frac{(n_x + n_y)^2 - 2(n_x + n_y)} {n_xn_y}}} The formula to convert **d** to :math:`\\eta^2` is given in ref [3]: .. math:: \\eta^2 = \\frac{(0.5 * d)^2}{1 + (0.5 * d)^2} The formula to convert **d** to an odds-ratio is given in ref [4]: .. math:: OR = e(\\frac{d * \\pi}{\\sqrt{3}}) The formula to convert **d** to area under the curve is given in ref [5]: .. math:: AUC = \\mathcal{N}_{cdf}(\\frac{d}{\\sqrt{2}}) References .. [1] Rosenthal, Robert. "Parametric measures of effect size." The handbook of research synthesis 621 (1994):
231-244. .. [2] McGrath, Robert E., and Gregory J. Meyer. "When effect sizes disagree: the case of r and d." Psychological methods 11.4 (2006):
386. .. [3] Cohen, Jacob. "Statistical power analysis for the behavioral sciences. 2nd." (1988). .. [4] Borenstein, Michael, et al. "Effect sizes for continuous data." The handbook of research synthesis and meta-analysis 2 (2009):
221-235. .. [5] Ruscio, John. "A probability-based measure of effect size: Robustness to base rates and other factors." Psychological methods 13.1 (2008):
19. Examples -------- 1. Convert from Cohen d to eta-square 0.048185603807257595 2. Convert from Cohen d to Hedges g (requires the sample sizes of each group) 0.4309859154929578 3. Convert Pearson r to Cohen d 0.8728715609439696 4. Reverse operation: convert Cohen d to Pearson r 0.40004943911648533 """ |
it = input_type.lower()
ot = output_type.lower()
# Check input and output type
for typ in [it, ot]:
if not _check_eftype(typ):
err = "Could not interpret input '{}'".format(typ)
raise ValueError(err)
if it not in ['r', 'cohen']:
raise ValueError("Input type must be 'r' or 'cohen'")
if it == ot:
return ef
d = (2 * ef) / np.sqrt(1 - ef**2) if it == 'r' else ef # Rosenthal 1994
# Then convert to the desired output type
if ot == 'cohen':
return d
elif ot == 'hedges':
if all(v is not None for v in [nx, ny]):
return d * (1 - (3 / (4 * (nx + ny) - 9)))
else:
# If shapes of x and y are not known, return cohen's d
warnings.warn("You need to pass nx and ny arguments to compute "
"Hedges g. Returning Cohen's d instead")
return d
elif ot == 'glass':
warnings.warn("Returning original effect size instead of Glass "
"because variance is not known.")
return ef
elif ot == 'r':
# McGrath and Meyer 2006
if all(v is not None for v in [nx, ny]):
a = ((nx + ny)**2 - 2 * (nx + ny)) / (nx * ny)
else:
a = 4
return d / np.sqrt(d**2 + a)
elif ot == 'eta-square':
# Cohen 1988
return (d / 2)**2 / (1 + (d / 2)**2)
elif ot == 'odds-ratio':
# Borenstein et al. 2009
return np.exp(d * np.pi / np.sqrt(3))
elif ot in ['auc', 'cles']:
# Ruscio 2008
from scipy.stats import norm
return norm.cdf(d / np.sqrt(2))
else:
return None |
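The r-to-d and d-to-r formulas above are exact inverses when `a = 4` (i.e. when the group sizes are unknown and assumed equal). A quick check with a hypothetical r:

```python
import numpy as np

r = 0.4
d = 2 * r / np.sqrt(1 - r**2)       # Rosenthal 1994: r -> d
r_back = d / np.sqrt(d**2 + 4)      # McGrath & Meyer 2006 with a = 4
print(round(d, 4), round(r_back, 4))  # 0.8729 0.4
```

Algebraically, d^2 + 4 = 4 / (1 - r^2), so the back-conversion recovers r exactly; with unequal nx and ny the round trip is only approximate.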
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute_effsize(x, y, paired=False, eftype='cohen'):
"""Calculate effect size between two set of observations. Parameters x : np.array or list First set of observations. y : np.array or list Second set of observations. paired : boolean If True, uses Cohen d-avg formula to correct for repeated measurements (Cumming 2012) eftype : string Desired output effect size. Available methods are :: 'none' : no effect size 'cohen' : Unbiased Cohen d 'hedges' : Hedges g 'glass': Glass delta 'r' : correlation coefficient 'eta-square' : Eta-square 'odds-ratio' : Odds ratio 'AUC' : Area Under the Curve 'CLES' : Common language effect size Returns ------- ef : float Effect size See Also -------- convert_effsize : Conversion between effect sizes. compute_effsize_from_t : Convert a T-statistic to an effect size. Notes ----- Missing values are automatically removed from the data. If ``x`` and ``y`` are paired, the entire row is removed. If ``x`` and ``y`` are independent, the Cohen's d is: .. math:: d = \\frac{\\overline{X} - \\overline{Y}} {\\sqrt{\\frac{(n_{1} - 1)\\sigma_{1}^{2} + (n_{2} - 1) \\sigma_{2}^{2}}{n1 + n2 - 2}}} If ``x`` and ``y`` are paired, the Cohen :math:`d_{avg}` is computed: .. math:: d_{avg} = \\frac{\\overline{X} - \\overline{Y}} {0.5 * (\\sigma_1 + \\sigma_2)} The Cohen’s d is a biased estimate of the population effect size, especially for small samples (n < 20). It is often preferable to use the corrected effect size, or Hedges’g, instead: .. math:: g = d * (1 - \\frac{3}{4(n_1 + n_2) - 9}) If eftype = 'glass', the Glass :math:`\\delta` is reported, using the group with the lowest variance as the control group: .. math:: \\delta = \\frac{\\overline{X} - \\overline{Y}}{\\sigma_{control}} References .. [1] Lakens, D., 2013. Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Front. Psychol. 4, 863. https://doi.org/10.3389/fpsyg.2013.00863 .. [2] Cumming, Geoff. Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. 
Routledge, 2013. Examples -------- 1. Compute Cohen d from two independent set of observations. -0.2835170152506578 2. Compute Hedges g from two paired set of observations. 0.8370985097811404 3. Compute Glass delta from two independent set of observations. The group with the lowest variance will automatically be selected as the control. -0.1170721973604153 """ |
# Check arguments
if not _check_eftype(eftype):
err = "Could not interpret input '{}'".format(eftype)
raise ValueError(err)
x = np.asarray(x)
y = np.asarray(y)
if x.size != y.size and paired:
warnings.warn("x and y have unequal sizes. Switching to "
"paired == False.")
paired = False
# Remove rows with missing values
x, y = remove_na(x, y, paired=paired)
nx, ny = x.size, y.size
if ny == 1:
# Case 1: One-sample Test
d = (x.mean() - y) / x.std(ddof=1)
return d
if eftype.lower() == 'glass':
# Find group with lowest variance
sd_control = np.min([x.std(ddof=1), y.std(ddof=1)])
d = (x.mean() - y.mean()) / sd_control
return d
elif eftype.lower() == 'r':
# Return correlation coefficient (useful for CI bootstrapping)
from scipy.stats import pearsonr
r, _ = pearsonr(x, y)
return r
elif eftype.lower() == 'cles':
# Compute exact CLES
diff = x[:, None] - y
return max((diff < 0).sum(), (diff > 0).sum()) / diff.size
else:
# Test equality of variance of data with a stringent threshold
# equal_var, p = homoscedasticity(x, y, alpha=.001)
# if not equal_var:
# print('Unequal variances (p<.001). You should report',
# 'Glass delta instead.')
# Compute unbiased Cohen's d effect size
if not paired:
# https://en.wikipedia.org/wiki/Effect_size
dof = nx + ny - 2
poolsd = np.sqrt(((nx - 1) * x.var(ddof=1)
+ (ny - 1) * y.var(ddof=1)) / dof)
d = (x.mean() - y.mean()) / poolsd
else:
# Report Cohen d-avg (Cumming 2012; Lakens 2013)
d = (x.mean() - y.mean()) / (.5 * (x.std(ddof=1)
+ y.std(ddof=1)))
return convert_effsize(d, 'cohen', eftype, nx=nx, ny=ny) |
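For the independent-groups branch, the pooled-SD Cohen's d and its Hedges g correction can be reproduced by hand on small hypothetical samples:

```python
import numpy as np

x = np.array([5.5, 6.1, 4.8, 5.9, 6.4])
y = np.array([5.0, 5.2, 4.6, 5.1])
nx, ny = x.size, y.size
# Pooled standard deviation with dof = nx + ny - 2
poolsd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                 / (nx + ny - 2))
d = (x.mean() - y.mean()) / poolsd
g = d * (1 - 3 / (4 * (nx + ny) - 9))  # small-sample bias correction
print(round(d, 4), round(g, 4))  # 1.5346 1.3641
```

With nx + ny = 9 the correction factor is 8/9, so g shrinks d noticeably; for large samples the two estimates converge.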
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def compute_effsize_from_t(tval, nx=None, ny=None, N=None, eftype='cohen'):
"""Compute effect size from a T-value. Parameters tval : float T-value nx, ny : int, optional Group sample sizes. N : int, optional Total sample size (will not be used if nx and ny are specified) eftype : string, optional desired output effect size Returns ------- ef : float Effect size See Also -------- compute_effsize : Calculate effect size between two set of observations. convert_effsize : Conversion between effect sizes. Notes ----- If both nx and ny are specified, the formula to convert from *t* to *d* is: .. math:: d = t * \\sqrt{\\frac{1}{n_x} + \\frac{1}{n_y}} If only N (total sample size) is specified, the formula is: .. math:: d = \\frac{2t}{\\sqrt{N}} Examples -------- 1. Compute effect size from a T-value when both sample sizes are known. 0.7593982580212534 2. Compute effect size when only total sample size is known (nx+ny) 0.7487767802667672 """ |
if not _check_eftype(eftype):
err = "Could not interpret input '{}'".format(eftype)
raise ValueError(err)
if not isinstance(tval, float):
err = "T-value must be float"
raise ValueError(err)
# Compute Cohen d (Lakens, 2013)
if nx is not None and ny is not None:
d = tval * np.sqrt(1 / nx + 1 / ny)
elif N is not None:
d = 2 * tval / np.sqrt(N)
else:
raise ValueError('You must specify either nx + ny, or just N')
return convert_effsize(d, 'cohen', eftype, nx=nx, ny=ny) |
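The two t-to-d formulas give close but not identical results when group sizes are unbalanced; a hypothetical t = 2.5 illustrates the gap:

```python
import numpy as np

tval, nx, ny = 2.5, 20, 25
d_groups = tval * np.sqrt(1 / nx + 1 / ny)  # group sizes known
d_total = 2 * tval / np.sqrt(nx + ny)       # only N = nx + ny known
print(round(d_groups, 4), round(d_total, 4))  # 0.75 0.7454
```

The N-only formula assumes nx = ny = N/2, which is why it slightly underestimates d here.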
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bsmahal(a, b, n_boot=200):
""" Bootstraps Mahalanobis distances for Shepherd's pi correlation. Parameters a : ndarray (shape=(n, 2)) Data b : ndarray (shape=(n, 2)) Data n_boot : int Number of bootstrap samples to calculate. Returns ------- m : ndarray (shape=(n,)) Mahalanobis distance for each row in a, averaged across all the bootstrap resamples. """ |
n, m = b.shape
MD = np.zeros((n, n_boot))
nr = np.arange(n)
xB = np.random.choice(nr, size=(n_boot, n), replace=True)
# Bootstrap the MD
for i in np.arange(n_boot):
s1 = b[xB[i, :], 0]
s2 = b[xB[i, :], 1]
X = np.column_stack((s1, s2))
mu = X.mean(0)
_, R = np.linalg.qr(X - mu)
sol = np.linalg.solve(R.T, (a - mu).T)
MD[:, i] = np.sum(sol**2, 0) * (n - 1)
# Average across all bootstraps
return MD.mean(1) |
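The QR step inside the loop is a numerically stable shortcut: with the centered sample X - mu = QR, solving R.T @ s = (a - mu).T and summing s**2 times (n - 1) reproduces the classic squared Mahalanobis distance (a - mu) @ inv(cov) @ (a - mu).T. A sketch checking that equivalence on hypothetical data:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(50, 2))
mu = X.mean(0)

# QR-based squared Mahalanobis distances (as in the loop above)
_, R = np.linalg.qr(X - mu)
sol = np.linalg.solve(R.T, (X - mu).T)
md_qr = np.sum(sol**2, 0) * (X.shape[0] - 1)

# Classic formula for comparison: diff @ inv(cov) @ diff.T, row by row
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mu
md_classic = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
print(np.allclose(md_qr, md_classic))  # True
```

The identity follows from cov = R.T @ R / (n - 1), so inverting the triangular R avoids forming and inverting the covariance matrix in every bootstrap iteration.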
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def shepherd(x, y, n_boot=200):
""" Shepherd's Pi correlation, equivalent to Spearman's rho after outliers removal. Parameters x, y : array_like First and second set of observations. x and y must be independent. n_boot : int Number of bootstrap samples to calculate. Returns ------- r : float Pi correlation coefficient pval : float Two-tailed adjusted p-value. outliers : array of bool Indicate if value is an outlier or not Notes ----- It first bootstraps the Mahalanobis distances, removes all observations with m >= 6 and finally calculates the correlation of the remaining data. Pi is Spearman's Rho after outlier removal. """ |
from scipy.stats import spearmanr
X = np.column_stack((x, y))
# Bootstrapping on Mahalanobis distance
m = bsmahal(X, X, n_boot)
# Determine outliers
outliers = (m >= 6)
# Compute correlation
r, pval = spearmanr(x[~outliers], y[~outliers])
# (optional) double the p-value to achieve a nominal false alarm rate
# pval *= 2
# pval = 1 if pval > 1 else pval
return r, pval, outliers |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def rm_corr(data=None, x=None, y=None, subject=None, tail='two-sided'):
"""Repeated measures correlation. Parameters data : pd.DataFrame Dataframe. x, y : string Name of columns in ``data`` containing the two dependent variables. subject : string Name of column in ``data`` containing the subject indicator. tail : string Specify whether to return 'one-sided' or 'two-sided' p-value. Returns ------- stats : pandas DataFrame Test summary :: 'r' : Repeated measures correlation coefficient 'dof' : Degrees of freedom 'pval' : one or two tailed p-value 'CI95' : 95% parametric confidence intervals 'power' : achieved power of the test (= 1 - type II error). Notes ----- Repeated measures correlation (rmcorr) is a statistical technique for determining the common within-individual association for paired measures assessed on two or more occasions for multiple individuals. From Bakdash and Marusich (2017):
"Rmcorr accounts for non-independence among observations using analysis of covariance (ANCOVA) to statistically adjust for inter-individual variability. By removing measured variance between-participants, rmcorr provides the best linear fit for each participant using parallel regression lines (the same slope) with varying intercepts. Like a Pearson correlation coefficient, the rmcorr coefficient is bounded by − 1 to 1 and represents the strength of the linear association between two variables." Results have been tested against the `rmcorr` R package. Please note that NaN are automatically removed from the dataframe (listwise deletion). References .. [1] Bakdash, J.Z., Marusich, L.R., 2017. Repeated Measures Correlation. Front. Psychol. 8, 456. https://doi.org/10.3389/fpsyg.2017.00456 .. [2] Bland, J. M., & Altman, D. G. (1995). Statistics notes: Calculating correlation coefficients with repeated observations: Part 1—correlation within subjects. Bmj, 310(6977), 446. .. [3] https://github.com/cran/rmcorr Examples -------- r dof pval CI95% power rm_corr -0.507 38 0.000847 [-0.71, -0.23] 0.93 """ |
from pingouin import ancova, power_corr
# Safety checks
assert isinstance(data, pd.DataFrame), 'Data must be a DataFrame'
assert x in data, 'The %s column is not in data.' % x
assert y in data, 'The %s column is not in data.' % y
assert subject in data, 'The %s column is not in data.' % subject
if data[subject].nunique() < 3:
raise ValueError('rm_corr requires at least 3 unique subjects.')
# Remove missing values
data = data[[x, y, subject]].dropna(axis=0)
# Using PINGOUIN
aov, bw = ancova(dv=y, covar=x, between=subject, data=data,
return_bw=True)
sign = np.sign(bw)
dof = int(aov.loc[2, 'DF'])
n = dof + 2
ssfactor = aov.loc[1, 'SS']
sserror = aov.loc[2, 'SS']
rm = sign * np.sqrt(ssfactor / (ssfactor + sserror))
pval = aov.loc[1, 'p-unc']
pval *= 0.5 if tail == 'one-sided' else 1
ci = compute_esci(stat=rm, nx=n, eftype='pearson').tolist()
pwr = power_corr(r=rm, n=n, tail=tail)
# Convert to Dataframe
stats = pd.DataFrame({"r": round(rm, 3), "dof": int(dof),
"pval": pval, "CI95%": str(ci),
"power": round(pwr, 3)}, index=["rm_corr"])
return stats |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _dcorr(y, n2, A, dcov2_xx):
"""Helper function for distance correlation bootstrapping. """ |
# Pairwise Euclidean distances
b = squareform(pdist(y, metric='euclidean'))
# Double centering
B = b - b.mean(axis=0)[None, :] - b.mean(axis=1)[:, None] + b.mean()
# Compute squared distance covariances
dcov2_yy = np.vdot(B, B) / n2
dcov2_xy = np.vdot(A, B) / n2
return np.sqrt(dcov2_xy) / np.sqrt(np.sqrt(dcov2_xx) * np.sqrt(dcov2_yy)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def distance_corr(x, y, tail='upper', n_boot=1000, seed=None):
"""Distance correlation between two arrays. Statistical significance (p-value) is evaluated with a permutation test. Parameters x, y : np.ndarray 1D or 2D input arrays, shape (n_samples, n_features). x and y must have the same number of samples and must not contain missing values. tail : str Tail for p-value :: 'upper' : one-sided (upper tail) 'lower' : one-sided (lower tail) 'two-sided' : two-sided n_boot : int or None Number of bootstrap to perform. If None, no bootstrapping is performed and the function only returns the distance correlation (no p-value). Default is 1000 (thus giving a precision of 0.001). seed : int or None Random state seed. Returns ------- dcor : float Sample distance correlation (range from 0 to 1). pval : float P-value Notes ----- From Wikipedia: *Distance correlation is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension. The distance correlation coefficient is zero if and only if the random vectors are independent. Thus, distance correlation measures both linear and nonlinear association between two random variables or random vectors. This is in contrast to Pearson's correlation, which can only detect linear association between two random variables.* The distance correlation of two random variables is obtained by dividing their distance covariance by the product of their distance standard deviations: .. math:: \\text{dCor}(X, Y) = \\frac{\\text{dCov}(X, Y)} {\\sqrt{\\text{dVar}(X) \\cdot \\text{dVar}(Y)}} where :math:`\\text{dCov}(X, Y)` is the square root of the arithmetic average of the product of the double-centered pairwise Euclidean distance matrices. Note that by contrast to Pearson's correlation, the distance correlation cannot be negative, i.e :math:`0 \\leq \\text{dCor} \\leq 1`. Results have been tested against the 'energy' R package. To be consistent with this latter, only the one-sided p-value is computed, i.e. the upper tail of the T-statistic. References .. 
[1] https://en.wikipedia.org/wiki/Distance_correlation .. [2] Székely, G. J., Rizzo, M. L., & Bakirov, N. K. (2007). Measuring and testing dependence by correlation of distances. The annals of statistics, 35(6), 2769-2794. .. [3] https://gist.github.com/satra/aa3d19a12b74e9ab7941 .. [4] https://gist.github.com/wladston/c931b1495184fbb99bec .. [5] https://cran.r-project.org/web/packages/energy/energy.pdf Examples -------- 1. With two 1D vectors (0.7626762424168667, 0.312) 2. With two 2D arrays and no p-value 0.8799633012275321 """ |
assert tail in ['upper', 'lower', 'two-sided'], 'Wrong tail argument.'
x = np.asarray(x)
y = np.asarray(y)
# Check for NaN values
if any([np.isnan(np.min(x)), np.isnan(np.min(y))]):
raise ValueError('Input arrays must not contain NaN values.')
if x.ndim == 1:
x = x[:, None]
if y.ndim == 1:
y = y[:, None]
assert x.shape[0] == y.shape[0], 'x and y must have same number of samples'
# Extract number of samples
n = x.shape[0]
n2 = n**2
# Process first array to avoid redundancy when performing bootstrap
a = squareform(pdist(x, metric='euclidean'))
A = a - a.mean(axis=0)[None, :] - a.mean(axis=1)[:, None] + a.mean()
dcov2_xx = np.vdot(A, A) / n2
# Process second array and compute final distance correlation
dcor = _dcorr(y, n2, A, dcov2_xx)
# Compute p-value using a permutation test (y is shuffled, x is fixed)
if n_boot is not None and n_boot > 1:
# Define random seed and permutation
rng = np.random.RandomState(seed)
bootsam = rng.random_sample((n_boot, n)).argsort(axis=1)
bootstat = np.empty(n_boot)
for i in range(n_boot):
bootstat[i] = _dcorr(y[bootsam[i, :]], n2, A, dcov2_xx)
pval = _perm_pval(bootstat, dcor, tail=tail)
return dcor, pval
else:
return dcor |
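A self-contained sketch of the same computation for two 1D vectors (the `dcor` helper is illustrative, not part of the source); for an exact linear relationship the coefficient is 1:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def dcor(x, y):
    """Distance correlation via double-centered distance matrices."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    n2 = x.shape[0]**2
    a = squareform(pdist(x))
    b = squareform(pdist(y))
    # Double centering: subtract row and column means, add grand mean
    A = a - a.mean(0)[None, :] - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0)[None, :] - b.mean(1)[:, None] + b.mean()
    dcov2_xy = np.vdot(A, B) / n2
    dcov2_xx = np.vdot(A, A) / n2
    dcov2_yy = np.vdot(B, B) / n2
    return np.sqrt(dcov2_xy) / np.sqrt(np.sqrt(dcov2_xx) * np.sqrt(dcov2_yy))

x = np.arange(10.0)
print(round(dcor(x, 2 * x + 1), 6))  # perfect linear dependence -> 1.0
```

Unlike Pearson's r, `dcor(x, x**2)` is also well above zero, since distance correlation picks up nonlinear dependence.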
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _point_estimate(X_val, XM_val, M_val, y_val, idx, n_mediator, mtype='linear'):
"""Point estimate of indirect effect based on bootstrap sample.""" |
# Mediator(s) model (M(j) ~ X + covar)
beta_m = []
for j in range(n_mediator):
if mtype == 'linear':
beta_m.append(linear_regression(X_val[idx], M_val[idx, j],
coef_only=True)[1])
else:
beta_m.append(logistic_regression(X_val[idx], M_val[idx, j],
coef_only=True)[1])
# Full model (Y ~ X + M + covar)
beta_y = linear_regression(XM_val[idx], y_val[idx],
coef_only=True)[2:(2 + n_mediator)]
# Point estimate: product of the path coefficients (a * b) per mediator
return np.asarray(beta_m) * beta_y
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _pval_from_bootci(boot, estimate):
"""Compute p-value from bootstrap distribution. Similar to the pval function in the R package mediation. Note that this is less accurate than a permutation test because the bootstrap distribution is not conditioned on a true null hypothesis. """ |
if estimate == 0:
out = 1
else:
out = 2 * min(sum(boot > 0), sum(boot < 0)) / len(boot)
return min(out, 1) |
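A tiny worked case of the rule above, on hypothetical bootstrap values: 2 of 10 replicates fall below zero, so the two-sided p-value is twice the smaller tail proportion, 2 * 2/10 = 0.4.

```python
import numpy as np

boot = np.array([0.2, 0.5, -0.1, 0.3, 0.4, -0.2, 0.6, 0.1, 0.25, 0.35])
p = 2 * min((boot > 0).sum(), (boot < 0).sum()) / boot.size
p = min(p, 1)  # cap at 1, since both tails can overlap
print(p)  # 0.4
```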
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _anova(self, dv=None, between=None, detailed=False, export_filename=None):
"""Return one-way and two-way ANOVA.""" |
aov = anova(data=self, dv=dv, between=between, detailed=detailed,
export_filename=export_filename)
return aov |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _welch_anova(self, dv=None, between=None, export_filename=None):
"""Return one-way Welch ANOVA.""" |
aov = welch_anova(data=self, dv=dv, between=between,
export_filename=export_filename)
return aov |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _mixed_anova(self, dv=None, between=None, within=None, subject=None, correction=False, export_filename=None):
"""Two-way mixed ANOVA.""" |
aov = mixed_anova(data=self, dv=dv, between=between, within=within,
subject=subject, correction=correction,
export_filename=export_filename)
return aov |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _mediation_analysis(self, x=None, m=None, y=None, covar=None, alpha=0.05, n_boot=500, seed=None, return_dist=False):
"""Mediation analysis.""" |
stats = mediation_analysis(data=self, x=x, m=m, y=y, covar=covar,
alpha=alpha, n_boot=n_boot, seed=seed,
return_dist=return_dist)
return stats |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def mad(a, normalize=True, axis=0):
""" Median Absolute Deviation along given axis of an array. Parameters a : array-like Input array. normalize : boolean. If True, scale by a normalization constant (~0.67) axis : int, optional The defaul is 0. Can also be None. Returns ------- mad : float mad = median(abs(a - median(a))) / c References .. [1] https://en.wikipedia.org/wiki/Median_absolute_deviation Examples -------- 2.965204437011204 2.0 """ |
from scipy.stats import norm
a = np.asarray(a)
c = norm.ppf(3 / 4.) if normalize else 1
center = np.apply_over_axes(np.median, a, axis)
return np.median((np.fabs(a - center)) / c, axis=axis) |
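The effect of the normalization constant is easy to see on a hypothetical sample with one gross outlier: c = Phi^-1(3/4) ~ 0.6745 rescales the raw MAD so that it estimates the standard deviation for normal data.

```python
import numpy as np
from scipy.stats import norm

a = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one gross outlier
med = np.median(a)                          # 3.0
raw_mad = np.median(np.abs(a - med))        # 1.0
c = norm.ppf(3 / 4.)                        # ~0.6745, normal consistency
print(raw_mad, round(raw_mad / c, 4))  # 1.0 1.4826
```

Note that the outlier barely moves the MAD, whereas it would inflate `a.std()` enormously; that robustness is the point of the estimator.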
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def madmedianrule(a):
"""Outlier detection based on the MAD-median rule. Parameters a : array-like Input array. Returns ------- outliers: boolean (same shape as a) Boolean array indicating whether each sample is an outlier (True) or not (False). References .. [1] Hall, P., Welsh, A.H., 1985. Limit theorems for the median deviation. Ann. Inst. Stat. Math. 37, 27–36. https://doi.org/10.1007/BF02481078 Examples -------- array([False, False, False, False, False, True, False, False]) """ |
from scipy.stats import chi2
a = np.asarray(a)
k = np.sqrt(chi2.ppf(0.975, 1))
return (np.fabs(a - np.median(a)) / mad(a)) > k |
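Unrolling the rule on the same kind of hypothetical sample shows the threshold at work: a point is flagged when its absolute deviation from the median exceeds k ~ 2.24 normalized MADs.

```python
import numpy as np
from scipy.stats import chi2, norm

a = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
med = np.median(a)
mad_norm = np.median(np.abs(a - med)) / norm.ppf(3 / 4.)
k = np.sqrt(chi2.ppf(0.975, 1))  # ~2.2414
outliers = np.abs(a - med) / mad_norm > k
print(outliers)  # only the 100.0 entry is flagged
```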
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def wilcoxon(x, y, tail='two-sided'):
"""Wilcoxon signed-rank test. It is the non-parametric version of the paired T-test. Parameters x, y : array_like First and second set of observations. x and y must be related (e.g repeated measures). tail : string Specify whether to return 'one-sided' or 'two-sided' p-value. Returns ------- stats : pandas DataFrame Test summary :: 'W-val' : W-value 'p-val' : p-value 'RBC' : matched pairs rank-biserial correlation (effect size) 'CLES' : common language effect size Notes ----- The Wilcoxon signed-rank test tests the null hypothesis that two related paired samples come from the same distribution. A continuity correction is applied by default (see :py:func:`scipy.stats.wilcoxon` for details). The rank biserial correlation is the difference between the proportion of favorable evidence minus the proportion of unfavorable evidence (see Kerby 2014). The common language effect size is the probability (from 0 to 1) that a randomly selected observation from the first sample will be greater than a randomly selected observation from the second sample. References .. [1] Wilcoxon, F. (1945). Individual comparisons by ranking methods. Biometrics bulletin, 1(6), 80-83. .. [2] Kerby, D. S. (2014). The simple difference formula: An approach to teaching nonparametric correlation. Comprehensive Psychology, 3, 11-IT. .. [3] McGraw, K. O., & Wong, S. P. (1992). A common language effect size statistic. Psychological bulletin, 111(2), 361. Examples -------- 1. Wilcoxon test on two related samples. W-val p-val RBC CLES Wilcoxon 20.5 0.070844 0.333 0.583 """ |
from scipy.stats import wilcoxon
x = np.asarray(x)
y = np.asarray(y)
# Remove NA
x, y = remove_na(x, y, paired=True)
# Compute test
wval, pval = wilcoxon(x, y, zero_method='wilcox', correction=False)
pval *= .5 if tail == 'one-sided' else 1
# Effect size 1: common language effect size (McGraw and Wong 1992)
diff = x[:, None] - y
cles = max((diff < 0).sum(), (diff > 0).sum()) / diff.size
# Effect size 2: matched-pairs rank biserial correlation (Kerby 2014)
# Rank the absolute non-zero differences, then take the signed
# difference of the favorable and unfavorable rank proportions.
from scipy.stats import rankdata
d = x - y
d = d[d != 0]  # discard zero differences, as in the W statistic
rank = rankdata(np.abs(d))
rsum = rank.sum()
fav = rank[d > 0].sum()
unfav = rank[d < 0].sum()
rbc = fav / rsum - unfav / rsum
# Fill output DataFrame
stats = pd.DataFrame({}, index=['Wilcoxon'])
stats['W-val'] = round(wval, 3)
stats['p-val'] = pval
stats['RBC'] = round(rbc, 3)
stats['CLES'] = round(cles, 3)
col_order = ['W-val', 'p-val', 'RBC', 'CLES']
stats = stats.reindex(columns=col_order)
return stats |
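The rank-biserial effect size follows directly from Kerby's simple difference formula: rank the absolute non-zero differences, then take (favorable - unfavorable) / total rank sum. A sketch on hypothetical paired data, alongside the scipy W statistic:

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

x = np.array([20., 22., 19., 20., 22., 18., 24., 20.])
y = np.array([19., 22., 18., 18., 24., 17., 25., 19.])
d = x - y
d = d[d != 0]                     # Wilcoxon's rule: drop zero differences
r = rankdata(np.abs(d))           # ranks of absolute differences
rbc = (r[d > 0].sum() - r[d < 0].sum()) / r.sum()
wval, pval = wilcoxon(x, y)       # W = min of the two signed rank sums
print(round(rbc, 4), wval)
```

Here the favorable rank sum is 18.5 and the unfavorable one 9.5, giving rbc = 9/28 ~ 0.32 and W = 9.5.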
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def kruskal(dv=None, between=None, data=None, detailed=False, export_filename=None):
"""Kruskal-Wallis H-test for independent samples. Parameters dv : string Name of column containing the dependant variable. between : string Name of column containing the between factor. data : pandas DataFrame DataFrame export_filename : string Filename (without extension) for the output file. If None, do not export the table. By default, the file will be created in the current python console directory. To change that, specify the filename with full path. Returns ------- stats : DataFrame Test summary :: 'H' : The Kruskal-Wallis H statistic, corrected for ties 'p-unc' : Uncorrected p-value 'dof' : degrees of freedom Notes ----- The Kruskal-Wallis H-test tests the null hypothesis that the population median of all of the groups are equal. It is a non-parametric version of ANOVA. The test works on 2 or more independent samples, which may have different sizes. Due to the assumption that H has a chi square distribution, the number of samples in each group must not be too small. A typical rule is that each sample must have at least 5 measurements. NaN values are automatically removed. Examples -------- Compute the Kruskal-Wallis H-test for independent samples. Source ddof1 H p-unc Kruskal Hair color 3 10.589 0.014172 """ |
from scipy.stats import chi2, rankdata, tiecorrect
# Check data
_check_dataframe(dv=dv, between=between, data=data,
effects='between')
# Remove NaN values
data = data.dropna()
# Reset index (avoid duplicate axis error)
data = data.reset_index(drop=True)
# Extract number of groups and total sample size
groups = list(data[between].unique())
n_groups = len(groups)
n = data[dv].size
# Rank data, dealing with ties appropriately
data['rank'] = rankdata(data[dv])
# Find the total of rank per groups
grp = data.groupby(between)['rank']
sum_rk_grp = grp.sum().values
n_per_grp = grp.count().values
# Calculate chi-square statistic (H)
H = (12 / (n * (n + 1)) * np.sum(sum_rk_grp**2 / n_per_grp)) - 3 * (n + 1)
# Correct for ties
H /= tiecorrect(data['rank'].values)
# Calculate DOF and p-value
ddof1 = n_groups - 1
p_unc = chi2.sf(H, ddof1)
# Create output dataframe
stats = pd.DataFrame({'Source': between,
'ddof1': ddof1,
'H': np.round(H, 3),
'p-unc': p_unc,
}, index=['Kruskal'])
col_order = ['Source', 'ddof1', 'H', 'p-unc']
stats = stats.reindex(columns=col_order)
stats.dropna(how='all', axis=1, inplace=True)
# Export to .csv
if export_filename is not None:
_export_table(stats, export_filename)
return stats |
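As a sanity check on the rank-sum formula used above, the same H statistic can be recomputed by hand on toy data (invented here, tie-free so the tie correction is 1) and compared against `scipy.stats.kruskal`:

```python
import numpy as np
from scipy.stats import kruskal as scipy_kruskal, rankdata, tiecorrect

# Hypothetical samples for three independent groups
g1 = [2.9, 3.0, 2.5, 2.6, 3.2]
g2 = [3.8, 2.7, 4.0, 2.4]
g3 = [2.8, 3.4, 3.7, 2.2, 2.0]

all_vals = np.concatenate([g1, g2, g3])
ranks = rankdata(all_vals)
n = all_vals.size
sizes = np.array([len(g1), len(g2), len(g3)])
# Rank sum per group
groups = np.split(ranks, np.cumsum(sizes)[:-1])
sum_rk = np.array([g.sum() for g in groups])
H = 12 / (n * (n + 1)) * np.sum(sum_rk**2 / sizes) - 3 * (n + 1)
H /= tiecorrect(ranks)   # no-op here (no ties), kept for completeness

H_scipy, p_scipy = scipy_kruskal(g1, g2, g3)
```

Both computations should agree to floating-point precision.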
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def friedman(dv=None, within=None, subject=None, data=None, export_filename=None):
"""Friedman test for repeated measurements. Parameters dv : string Name of column containing the dependant variable. within : string Name of column containing the within-subject factor. subject : string Name of column containing the subject identifier. data : pandas DataFrame DataFrame export_filename : string Filename (without extension) for the output file. If None, do not export the table. By default, the file will be created in the current python console directory. To change that, specify the filename with full path. Returns ------- stats : DataFrame Test summary :: 'Q' : The Friedman Q statistic, corrected for ties 'p-unc' : Uncorrected p-value 'dof' : degrees of freedom Notes ----- The Friedman test is used for one-way repeated measures ANOVA by ranks. Data are expected to be in long-format. Note that if the dataset contains one or more other within subject factors, an automatic collapsing to the mean is applied on the dependant variable (same behavior as the ezANOVA R package). As such, results can differ from those of JASP. If you can, always double-check the results. Due to the assumption that the test statistic has a chi squared distribution, the p-value is only reliable for n > 10 and more than 6 repeated measurements. NaN values are automatically removed. Examples -------- Compute the Friedman test for repeated measurements. Source ddof1 Q p-unc Friedman Disgustingness 1 9.228 0.002384 """ |
from scipy.stats import rankdata, chi2, find_repeats
# Check data
_check_dataframe(dv=dv, within=within, data=data, subject=subject,
effects='within')
# Collapse to the mean
data = data.groupby([subject, within]).mean().reset_index()
# Remove NaN
if data[dv].isnull().any():
data = remove_rm_na(dv=dv, within=within, subject=subject,
data=data[[subject, within, dv]])
# Extract number of groups and total sample size
grp = data.groupby(within)[dv]
rm = list(data[within].unique())
k = len(rm)
X = np.array([grp.get_group(r).values for r in rm]).T
n = X.shape[0]
# Rank per subject
ranked = np.zeros(X.shape)
for i in range(n):
ranked[i] = rankdata(X[i, :])
ssbn = (ranked.sum(axis=0)**2).sum()
# Compute the test statistic
Q = (12 / (n * k * (k + 1))) * ssbn - 3 * n * (k + 1)
# Correct for ties
ties = 0
for i in range(n):
replist, repnum = find_repeats(X[i])
for t in repnum:
ties += t * (t * t - 1)
c = 1 - ties / float(k * (k * k - 1) * n)
Q /= c
# Approximate the p-value
ddof1 = k - 1
p_unc = chi2.sf(Q, ddof1)
# Create output dataframe
stats = pd.DataFrame({'Source': within,
'ddof1': ddof1,
'Q': np.round(Q, 3),
'p-unc': p_unc,
}, index=['Friedman'])
col_order = ['Source', 'ddof1', 'Q', 'p-unc']
stats = stats.reindex(columns=col_order)
stats.dropna(how='all', axis=1, inplace=True)
# Export to .csv
if export_filename is not None:
_export_table(stats, export_filename)
return stats |
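The within-subject ranking step above can be verified against `scipy.stats.friedmanchisquare` on a small invented subjects-by-conditions matrix (tie-free within each row, so the tie correction c equals 1):

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical repeated measurements: 6 subjects x 3 conditions
X = np.array([[4.0, 3.0, 5.0],
              [2.0, 1.0, 3.0],
              [5.0, 4.0, 6.0],
              [3.0, 2.0, 4.0],
              [6.0, 5.0, 7.0],
              [1.0, 2.0, 3.0]])
n, k = X.shape
ranked = np.apply_along_axis(rankdata, 1, X)        # rank within each subject
ssbn = (ranked.sum(axis=0)**2).sum()
Q = 12 / (n * k * (k + 1)) * ssbn - 3 * n * (k + 1)  # no ties -> c = 1

Q_scipy, p_scipy = friedmanchisquare(*X.T)           # one sample per condition
```

The hand-rolled Q and the SciPy statistic coincide when no ties are present.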
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cochran(dv=None, within=None, subject=None, data=None, export_filename=None):
"""Cochran Q test. Special case of the Friedman test when the dependant variable is binary. Parameters dv : string Name of column containing the binary dependant variable. within : string Name of column containing the within-subject factor. subject : string Name of column containing the subject identifier. data : pandas DataFrame DataFrame export_filename : string Filename (without extension) for the output file. If None, do not export the table. By default, the file will be created in the current python console directory. To change that, specify the filename with full path. Returns ------- stats : DataFrame Test summary :: 'Q' : The Cochran Q statistic 'p-unc' : Uncorrected p-value 'dof' : degrees of freedom Notes ----- The Cochran Q Test is a non-parametric test for ANOVA with repeated measures where the dependent variable is binary. Data are expected to be in long-format. NaN are automatically removed from the data. The Q statistics is defined as: .. math:: Q = \\frac{(r-1)(r\\sum_j^rx_j^2-N^2)}{rN-\\sum_i^nx_i^2} :math:`n` is the number of observations per condition. The p-value is then approximated using a chi-square distribution with :math:`r-1` degrees of freedom: .. math:: Q \\sim \\chi^2(r-1) References .. [1] Cochran, W.G., 1950. The comparison of percentages in matched samples. Biometrika 37, 256–266. https://doi.org/10.1093/biomet/37.3-4.256 Examples -------- Compute the Cochran Q test for repeated measurements. Source dof Q p-unc cochran Time 2 6.706 0.034981 """ |
from scipy.stats import chi2
# Check data
_check_dataframe(dv=dv, within=within, data=data, subject=subject,
effects='within')
# Remove NaN
if data[dv].isnull().any():
data = remove_rm_na(dv=dv, within=within, subject=subject,
data=data[[subject, within, dv]])
# Groupby and extract size
grp = data.groupby(within)[dv]
grp_s = data.groupby(subject)[dv]
k = data[within].nunique()
dof = k - 1
# n = grp.count().unique()[0]
# Q statistic and p-value
q = (dof * (k * np.sum(grp.sum()**2) - grp.sum().sum()**2)) / \
(k * grp.sum().sum() - np.sum(grp_s.sum()**2))
p_unc = chi2.sf(q, dof)
# Create output dataframe
stats = pd.DataFrame({'Source': within,
'dof': dof,
'Q': np.round(q, 3),
'p-unc': p_unc,
}, index=['cochran'])
# Export to .csv
if export_filename is not None:
_export_table(stats, export_filename)
return stats |
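The Q formula from the docstring can be checked on a small invented binary table, using condition totals :math:`x_j` and subject totals :math:`x_i` directly:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical binary outcomes: 8 subjects x 3 conditions (1 = success)
X = np.array([[1, 1, 0],
              [1, 1, 0],
              [1, 0, 0],
              [1, 1, 1],
              [0, 0, 0],
              [1, 1, 0],
              [1, 0, 0],
              [1, 1, 0]])
k = X.shape[1]            # number of conditions (r in the docstring)
col = X.sum(axis=0)       # condition totals x_j
row = X.sum(axis=1)       # subject totals x_i
N = X.sum()
# Q = (r-1) * (r * sum(x_j^2) - N^2) / (r * N - sum(x_i^2))
Q = (k - 1) * (k * np.sum(col**2) - N**2) / (k * N - np.sum(row**2))
p = chi2.sf(Q, k - 1)
```

With this table the success rate drops sharply across conditions, so Q is large and the approximate p-value falls below 0.05.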
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _multiline_width(multiline_s, line_width_fn=len):
"""Visible width of a potentially multiline content.""" |
return max(map(line_width_fn, re.split("[\r\n]", multiline_s))) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _choose_width_fn(has_invisible, enable_widechars, is_multiline):
"""Return a function to calculate visible cell width.""" |
if has_invisible:
line_width_fn = _visible_width
elif enable_widechars: # optional wide-character support if available
line_width_fn = wcwidth.wcswidth
else:
line_width_fn = len
if is_multiline:
def width_fn(s): return _multiline_width(s, line_width_fn)
else:
width_fn = line_width_fn
return width_fn |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def _align_header(header, alignment, width, visible_width, is_multiline=False,
width_fn=None):
"Pad string header to width chars given known visible_width of the header."
if is_multiline:
header_lines = re.split(_multiline_codes, header)
padded_lines = [_align_header(h, alignment, width, width_fn(h))
for h in header_lines]
return "\n".join(padded_lines)
# else: not multiline
ninvisible = len(header) - visible_width
width += ninvisible
if alignment == "left":
return _padright(width, header)
elif alignment == "center":
return _padboth(width, header)
elif not alignment:
return "{0}".format(header)
else:
return _padleft(width, header) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _prepend_row_index(rows, index):
"""Add a left-most index column.""" |
if index is None or index is False:
return rows
if len(index) != len(rows):
raise ValueError('index must be as long as the number of data rows: '
'got {} index values for {} data rows.'.format(len(index), len(rows)))
rows = [[v] + list(row) for v, row in zip(index, rows)]
return rows |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _expand_numparse(disable_numparse, column_count):
""" Return a list of bools of length `column_count` which indicates whether number parsing should be used on each column. If `disable_numparse` is a list of indices, each of those indices are False, and everything else is True. If `disable_numparse` is a bool, then the returned list is all the same. """ |
if isinstance(disable_numparse, Iterable):
numparses = [True] * column_count
for index in disable_numparse:
numparses[index] = False
return numparses
else:
return [not disable_numparse] * column_count |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def pairwise_tukey(dv=None, between=None, data=None, alpha=.05,
tail='two-sided', effsize='hedges'):
'''Pairwise Tukey-HSD post-hoc test.
Parameters
----------
dv : string
Name of column containing the dependent variable.
between: string
Name of column containing the between factor.
data : pandas DataFrame
DataFrame
alpha : float
Significance level
tail : string
Indicates whether to return the 'two-sided' or 'one-sided' p-values
effsize : string or None
Effect size type. Available methods are ::
'none' : no effect size
'cohen' : Unbiased Cohen d
'hedges' : Hedges g
'glass': Glass delta
'eta-square' : Eta-square
'odds-ratio' : Odds ratio
'AUC' : Area Under the Curve
Returns
-------
stats : DataFrame
Stats summary ::
'A' : Name of first measurement
'B' : Name of second measurement
'mean(A)' : Mean of first measurement
'mean(B)' : Mean of second measurement
'diff' : Mean difference
'SE' : Standard error
'tail' : indicate whether the p-values are one-sided or two-sided
'T' : T-values
'p-tukey' : Tukey-HSD corrected p-values
'efsize' : effect sizes
'eftype' : type of effect size
Notes
-----
Tukey HSD post-hoc is best for balanced one-way ANOVA.
It has been proven to be conservative for one-way ANOVA with unequal
sample sizes. However, it is not robust if the groups have unequal
variances, in which case the Games-Howell test is more adequate.
Tukey HSD is not valid for repeated measures ANOVA.
Note that when the sample sizes are unequal, this function actually
performs the Tukey-Kramer test (which allows for unequal sample sizes).
The T-values are defined as:
.. math::
t = \\frac{\\overline{x}_i - \\overline{x}_j}
{\\sqrt{2 \\cdot MS_w / n}}
where :math:`\\overline{x}_i` and :math:`\\overline{x}_j` are the means of
the first and second group, respectively, :math:`MS_w` the mean squares of
the error (computed using ANOVA) and :math:`n` the sample size.
If the sample sizes are unequal, the Tukey-Kramer procedure is
automatically used:
.. math::
t = \\frac{\\overline{x}_i - \\overline{x}_j}{\\sqrt{\\frac{MS_w}{n_i}
+ \\frac{MS_w}{n_j}}}
where :math:`n_i` and :math:`n_j` are the sample sizes of the first and
second group, respectively.
The p-values are then approximated using the Studentized range distribution
:math:`Q(\\sqrt2*|t_i|, r, N - r)` where :math:`r` is the total number of
groups and :math:`N` is the total sample size.
Note that the p-values might be slightly different than those obtained
using R or Matlab since the studentized range approximation is done using
the Gleason (1999) algorithm, which is more efficient and accurate than
the algorithms used in Matlab or R.
References
----------
.. [1] Tukey, John W. "Comparing individual means in the analysis of
variance." Biometrics (1949): 99-114.
.. [2] Gleason, John R. "An accurate, non-iterative approximation for
studentized range quantiles." Computational statistics & data
analysis 31.2 (1999): 147-158.
Examples
--------
Pairwise Tukey post-hocs on the pain threshold dataset.
>>> from pingouin import pairwise_tukey, read_dataset
>>> df = read_dataset('anova')
>>> pt = pairwise_tukey(dv='Pain threshold', between='Hair color', data=df)
'''
from pingouin.external.qsturng import psturng
# First compute the ANOVA
aov = anova(dv=dv, data=data, between=between, detailed=True)
df = aov.loc[1, 'DF']
ng = aov.loc[0, 'DF'] + 1
grp = data.groupby(between)[dv]
n = grp.count().values
gmeans = grp.mean().values
gvar = aov.loc[1, 'MS'] / n
# Pairwise combinations
g1, g2 = np.array(list(combinations(np.arange(ng), 2))).T
mn = gmeans[g1] - gmeans[g2]
se = np.sqrt(gvar[g1] + gvar[g2])
tval = mn / se
# Critical values and p-values
# from pingouin.external.qsturng import qsturng
# crit = qsturng(1 - alpha, ng, df) / np.sqrt(2)
pval = psturng(np.sqrt(2) * np.abs(tval), ng, df)
pval *= 0.5 if tail == 'one-sided' else 1
# Uncorrected p-values
# from scipy.stats import t
# punc = t.sf(np.abs(tval), n[g1].size + n[g2].size - 2) * 2
# Effect size
d = tval * np.sqrt(1 / n[g1] + 1 / n[g2])
ef = convert_effsize(d, 'cohen', effsize, n[g1], n[g2])
# Create dataframe
# Careful: pd.unique does NOT sort whereas numpy does
stats = pd.DataFrame({
'A': np.unique(data[between])[g1],
'B': np.unique(data[between])[g2],
'mean(A)': gmeans[g1],
'mean(B)': gmeans[g2],
'diff': mn,
'SE': np.round(se, 3),
'tail': tail,
'T': np.round(tval, 3),
# 'alpha': alpha,
# 'crit': np.round(crit, 3),
'p-tukey': pval,
'efsize': np.round(ef, 3),
'eftype': effsize,
})
return stats |
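The studentized-range p-value step can be sketched with SciPy's own distribution (`scipy.stats.studentized_range`, available in SciPy >= 1.7) instead of the Gleason approximation used above. All numbers below are hypothetical summary statistics for a single pairwise comparison:

```python
import numpy as np
from scipy.stats import studentized_range  # requires SciPy >= 1.7

# Hypothetical values for one comparison after a one-way ANOVA
ms_within = 2.5           # MS error from the ANOVA table
n_i, n_j = 8, 10          # unequal group sizes -> Tukey-Kramer standard error
mean_i, mean_j = 5.1, 3.4
r, N = 3, 26              # number of groups and total sample size

se = np.sqrt(ms_within / n_i + ms_within / n_j)
tval = (mean_i - mean_j) / se
# The studentized range statistic is sqrt(2) * |t|, with r groups and N - r df
p_tukey = studentized_range.sf(np.sqrt(2) * np.abs(tval), r, N - r)
```

Because SciPy integrates the exact distribution, the result may differ in the last decimals from the Gleason (1999) approximation used by `psturng`.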
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
| def pairwise_gameshowell(dv=None, between=None, data=None, alpha=.05,
tail='two-sided', effsize='hedges'):
'''Pairwise Games-Howell post-hoc test.
Parameters
----------
dv : string
Name of column containing the dependent variable.
between: string
Name of column containing the between factor.
data : pandas DataFrame
DataFrame
alpha : float
Significance level
tail : string
Indicates whether to return the 'two-sided' or 'one-sided' p-values
effsize : string or None
Effect size type. Available methods are ::
'none' : no effect size
'cohen' : Unbiased Cohen d
'hedges' : Hedges g
'glass': Glass delta
'eta-square' : Eta-square
'odds-ratio' : Odds ratio
'AUC' : Area Under the Curve
Returns
-------
stats : DataFrame
Stats summary ::
'A' : Name of first measurement
'B' : Name of second measurement
'mean(A)' : Mean of first measurement
'mean(B)' : Mean of second measurement
'diff' : Mean difference
'SE' : Standard error
'tail' : indicate whether the p-values are one-sided or two-sided
'T' : T-values
'df' : adjusted degrees of freedom
'pval' : Games-Howell corrected p-values
'efsize' : effect sizes
'eftype' : type of effect size
Notes
-----
Games-Howell is very similar to the Tukey HSD post-hoc test but is much
more robust to heterogeneity of variances. While the
Tukey-HSD post-hoc is optimal after a classic one-way ANOVA, the
Games-Howell is optimal after a Welch ANOVA.
Games-Howell is not valid for repeated measures ANOVA.
Compared to the Tukey-HSD test, the Games-Howell test uses different pooled
variances for each pair of variables instead of the same pooled variance.
The T-values are defined as:
.. math::
t = \\frac{\\overline{x}_i - \\overline{x}_j}
{\\sqrt{(\\frac{s_i^2}{n_i} + \\frac{s_j^2}{n_j})}}
and the corrected degrees of freedom are:
.. math::
v = \\frac{(\\frac{s_i^2}{n_i} + \\frac{s_j^2}{n_j})^2}
{\\frac{(\\frac{s_i^2}{n_i})^2}{n_i-1} +
\\frac{(\\frac{s_j^2}{n_j})^2}{n_j-1}}
where :math:`\\overline{x}_i`, :math:`s_i^2`, and :math:`n_i`
are the mean, variance and sample size of the first group and
:math:`\\overline{x}_j`, :math:`s_j^2`, and :math:`n_j` the mean, variance
and sample size of the second group.
The p-values are then approximated using the Studentized range distribution
:math:`Q(\\sqrt2*|t_i|, r, v_i)`.
Note that the p-values might be slightly different than those obtained
using R or Matlab since the studentized range approximation is done using
the Gleason (1999) algorithm, which is more efficient and accurate than
the algorithms used in Matlab or R.
References
----------
.. [1] Games, Paul A., and John F. Howell. "Pairwise multiple comparison
procedures with unequal n’s and/or variances: a Monte Carlo study."
Journal of Educational Statistics 1.2 (1976): 113-125.
.. [2] Gleason, John R. "An accurate, non-iterative approximation for
studentized range quantiles." Computational statistics & data
analysis 31.2 (1999): 147-158.
Examples
--------
Pairwise Games-Howell post-hocs on the pain threshold dataset.
>>> from pingouin import pairwise_gameshowell, read_dataset
>>> df = read_dataset('anova')
>>> pairwise_gameshowell(dv='Pain threshold', between='Hair color',
... data=df) # doctest: +SKIP
'''
from pingouin.external.qsturng import psturng
# Check the dataframe
_check_dataframe(dv=dv, between=between, effects='between', data=data)
# Reset index (avoid duplicate axis error)
data = data.reset_index(drop=True)
# Extract infos
ng = data[between].nunique()
grp = data.groupby(between)[dv]
n = grp.count().values
gmeans = grp.mean().values
gvars = grp.var().values
# Pairwise combinations
g1, g2 = np.array(list(combinations(np.arange(ng), 2))).T
mn = gmeans[g1] - gmeans[g2]
se = np.sqrt(0.5 * (gvars[g1] / n[g1] + gvars[g2] / n[g2]))
tval = mn / np.sqrt(gvars[g1] / n[g1] + gvars[g2] / n[g2])
df = (gvars[g1] / n[g1] + gvars[g2] / n[g2])**2 / \
((((gvars[g1] / n[g1])**2) / (n[g1] - 1)) +
(((gvars[g2] / n[g2])**2) / (n[g2] - 1)))
# Compute corrected p-values
pval = psturng(np.sqrt(2) * np.abs(tval), ng, df)
pval *= 0.5 if tail == 'one-sided' else 1
# Uncorrected p-values
# from scipy.stats import t
# punc = t.sf(np.abs(tval), n[g1].size + n[g2].size - 2) * 2
# Effect size
d = tval * np.sqrt(1 / n[g1] + 1 / n[g2])
ef = convert_effsize(d, 'cohen', effsize, n[g1], n[g2])
# Create dataframe
# Careful: pd.unique does NOT sort whereas numpy does
stats = pd.DataFrame({
'A': np.unique(data[between])[g1],
'B': np.unique(data[between])[g2],
'mean(A)': gmeans[g1],
'mean(B)': gmeans[g2],
'diff': mn,
'SE': se,
'tail': tail,
'T': tval,
'df': df,
'pval': pval,
'efsize': ef,
'eftype': effsize,
})
col_round = ['mean(A)', 'mean(B)', 'diff', 'SE', 'T', 'df', 'efsize']
stats[col_round] = stats[col_round].round(3)
return stats |
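The corrected degrees of freedom used above are the Welch-Satterthwaite approximation. A minimal sketch on hypothetical summary statistics; a useful sanity check is that the result always lies between the smaller group's df and the pooled df:

```python
import numpy as np

# Hypothetical summary statistics for two groups with unequal variances
s2_i, s2_j = 4.0, 9.0     # sample variances
n_i, n_j = 12, 15         # sample sizes

vi, vj = s2_i / n_i, s2_j / n_j
# Welch-Satterthwaite degrees of freedom
df = (vi + vj)**2 / (vi**2 / (n_i - 1) + vj**2 / (n_j - 1))
```

When the variances are equal and the groups balanced, `df` collapses to the classic `n_i + n_j - 2`.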
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def circ_axial(alpha, n):
"""Transforms n-axial data to a common scale. Parameters alpha : array Sample of angles in radians n : int Number of modes Returns ------- alpha : float Transformed angles Notes ----- Tranform data with multiple modes (known as axial data) to a unimodal sample, for the purpose of certain analysis such as computation of a mean resultant vector (see Berens 2009). Examples -------- Transform degrees to unimodal radians in the Berens 2009 neuro dataset. """ |
alpha = np.array(alpha)
return np.remainder(alpha * n, 2 * np.pi) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def circ_corrcc(x, y, tail='two-sided'):
"""Correlation coefficient between two circular variables. Parameters x : np.array First circular variable (expressed in radians) y : np.array Second circular variable (expressed in radians) tail : string Specify whether to return 'one-sided' or 'two-sided' p-value. Returns ------- r : float Correlation coefficient pval : float Uncorrected p-value Notes ----- Adapted from the CircStats MATLAB toolbox (Berens 2009). Use the np.deg2rad function to convert angles from degrees to radians. Please note that NaN are automatically removed. Examples -------- Compute the r and p-value of two circular variables 0.942 0.06579836070349088 """ |
from scipy.stats import norm
x = np.asarray(x)
y = np.asarray(y)
# Check size
if x.size != y.size:
raise ValueError('x and y must have the same length.')
# Remove NA
x, y = remove_na(x, y, paired=True)
n = x.size
# Compute correlation coefficient
x_sin = np.sin(x - circmean(x))
y_sin = np.sin(y - circmean(y))
# Similar to np.corrcoef(x_sin, y_sin)[0][1]
r = np.sum(x_sin * y_sin) / np.sqrt(np.sum(x_sin**2) * np.sum(y_sin**2))
# Compute T- and p-values
tval = np.sqrt((n * (x_sin**2).mean() * (y_sin**2).mean())
/ np.mean(x_sin**2 * y_sin**2)) * r
# Approximately distributed as a standard normal
pval = 2 * norm.sf(abs(tval))
pval = pval / 2 if tail == 'one-sided' else pval
return np.round(r, 3), pval |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def circ_corrcl(x, y, tail='two-sided'):
"""Correlation coefficient between one circular and one linear variable random variables. Parameters x : np.array First circular variable (expressed in radians) y : np.array Second circular variable (linear) tail : string Specify whether to return 'one-sided' or 'two-sided' p-value. Returns ------- r : float Correlation coefficient pval : float Uncorrected p-value Notes ----- Python code borrowed from brainpipe (based on the MATLAB toolbox CircStats) Please note that NaN are automatically removed from datasets. Examples -------- Compute the r and p-value between one circular and one linear variables. 0.109 0.9708899750629236 """ |
from scipy.stats import pearsonr, chi2
x = np.asarray(x)
y = np.asarray(y)
# Check size
if x.size != y.size:
raise ValueError('x and y must have the same length.')
# Remove NA
x, y = remove_na(x, y, paired=True)
n = x.size
# Compute correlation coefficient for sin and cos independently
rxs = pearsonr(y, np.sin(x))[0]
rxc = pearsonr(y, np.cos(x))[0]
rcs = pearsonr(np.sin(x), np.cos(x))[0]
# Compute angular-linear correlation (equ. 27.47)
r = np.sqrt((rxc**2 + rxs**2 - 2 * rxc * rxs * rcs) / (1 - rcs**2))
# Compute p-value
pval = chi2.sf(n * r**2, 2)
pval = pval / 2 if tail == 'one-sided' else pval
return np.round(r, 3), pval |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def circ_mean(alpha, w=None, axis=0):
"""Mean direction for circular data. Parameters alpha : array Sample of angles in radians w : array Number of incidences in case of binned angle data axis : int Compute along this dimension Returns ------- mu : float Mean direction Examples -------- Mean resultant vector of circular data 1.012962445838065 """ |
alpha = np.array(alpha)
if isinstance(w, (list, np.ndarray)):
w = np.array(w)
if alpha.shape != w.shape:
raise ValueError("w must have the same shape as alpha.")
else:
w = np.ones_like(alpha)
return np.angle(np.multiply(w, np.exp(1j * alpha)).sum(axis=axis)) |
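The resultant-vector formula above is what makes the mean "circular": for angles straddling 0 rad, the arithmetic mean is badly wrong while the circular mean is not. A standalone sketch of the unweighted case:

```python
import numpy as np

def circ_mean_sketch(alpha):
    # Same resultant-vector formula as circ_mean above (w = 1 everywhere)
    return np.angle(np.exp(1j * np.asarray(alpha)).sum())

# 0.1 rad and 2*pi - 0.1 rad sit just either side of 0: their arithmetic
# mean is ~pi, but their circular mean is 0.
alpha = np.array([0.1, 2 * np.pi - 0.1])
mu = circ_mean_sketch(alpha)
```

The arithmetic mean of the same sample is close to pi, i.e. the opposite direction.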
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def circ_r(alpha, w=None, d=None, axis=0):
"""Mean resultant vector length for circular data. Parameters alpha : array Sample of angles in radians w : array Number of incidences in case of binned angle data d : float Spacing (in radians) of bin centers for binned data. If supplied, a correction factor is used to correct for bias in the estimation of r. axis : int Compute along this dimension Returns ------- r : float Mean resultant length Notes ----- The length of the mean resultant vector is a crucial quantity for the measurement of circular spread or hypothesis testing in directional statistics. The closer it is to one, the more concentrated the data sample is around the mean direction (Berens 2009). Examples -------- Mean resultant vector length of circular data 0.49723034495605356 """ |
alpha = np.array(alpha)
w = np.array(w) if w is not None else np.ones(alpha.shape)
if alpha.size != w.size:
raise ValueError("Input dimensions do not match")
# Compute weighted sum of cos and sin of angles:
r = np.multiply(w, np.exp(1j * alpha)).sum(axis=axis)
# Obtain length:
r = np.abs(r) / w.sum(axis=axis)
# For data with known spacing, apply correction factor
if d is not None:
c = d / 2 / np.sin(d / 2)
r = c * r
return r |
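The resultant length r behaves as the docstring describes: near 1 for concentrated samples, near 0 for samples spread evenly around the circle. A minimal unweighted sketch:

```python
import numpy as np

def circ_r_sketch(alpha):
    # Unweighted resultant vector length (w = 1, no bin-spacing correction)
    alpha = np.asarray(alpha)
    return np.abs(np.exp(1j * alpha).sum()) / alpha.size

tight = circ_r_sketch([0.0, 0.05, -0.05, 0.1])                  # concentrated
spread = circ_r_sketch([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])  # uniform
```

`tight` is close to 1 while `spread` is (numerically) 0, the four unit vectors cancelling exactly.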
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def circ_rayleigh(alpha, w=None, d=None):
"""Rayleigh test for non-uniformity of circular data. Parameters alpha : np.array Sample of angles in radians. w : np.array Number of incidences in case of binned angle data. d : float Spacing (in radians) of bin centers for binned data. If supplied, a correction factor is used to correct for bias in the estimation of r. Returns ------- z : float Z-statistic pval : float P-value Notes ----- The Rayleigh test asks how large the resultant vector length R must be to indicate a non-uniform distribution (Fisher 1995). H0: the population is uniformly distributed around the circle HA: the populatoin is not distributed uniformly around the circle The assumptions for the Rayleigh test are that (1) the distribution has only one mode and (2) the data is sampled from a von Mises distribution. Examples -------- 1. Simple Rayleigh test for non-uniformity of circular data. 1.236 0.3048435876500138 2. Specifying w and d (0.278, 0.8069972000769801) """ |
alpha = np.array(alpha)
if w is None:
r = circ_r(alpha)
n = len(alpha)
else:
if len(alpha) != len(w):
raise ValueError("Input dimensions do not match")
r = circ_r(alpha, w, d)
n = np.sum(w)
# Compute Rayleigh's statistic
R = n * r
z = (R**2) / n
# Compute p-value using the approximation in Zar (1999), p. 617
pval = np.exp(np.sqrt(1 + 4 * n + 4 * (n**2 - R**2)) - (1 + 2 * n))
return np.round(z, 3), pval |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def bonf(pvals, alpha=0.05):
"""P-values correction with Bonferroni method. Parameters pvals : array_like Array of p-values of the individual tests. alpha : float Error rate (= alpha level). Returns ------- reject : array, bool True if a hypothesis is rejected, False if not pval_corrected : array P-values adjusted for multiple hypothesis testing using the Bonferroni procedure (= multiplied by the number of tests). See also -------- holm : Holm-Bonferroni correction fdr : Benjamini/Hochberg and Benjamini/Yekutieli FDR correction Notes ----- From Wikipedia: Statistical hypothesis testing is based on rejecting the null hypothesis if the likelihood of the observed data under the null hypotheses is low. If multiple hypotheses are tested, the chance of a rare event increases, and therefore, the likelihood of incorrectly rejecting a null hypothesis (i.e., making a Type I error) increases. The Bonferroni correction compensates for that increase by testing each individual hypothesis :math:`p_i` at a significance level of :math:`p_i = \\alpha / n` where :math:`\\alpha` is the desired overall alpha level and :math:`n` is the number of hypotheses. For example, if a trial is testing :math:`n=20` hypotheses with a desired :math:`\\alpha=0.05`, then the Bonferroni correction would test each individual hypothesis at :math:`\\alpha=0.05/20=0.0025``. The Bonferroni adjusted p-values are defined as: .. math:: \\widetilde {p}_{{(i)}}= n \\cdot p_{{(i)}} The Bonferroni correction tends to be a bit too conservative. Note that NaN values are not taken into account in the p-values correction. References - Bonferroni, C. E. (1935). Il calcolo delle assicurazioni su gruppi di teste. Studi in onore del professore salvatore ortu carboni, 13-60. - https://en.wikipedia.org/wiki/Bonferroni_correction Examples -------- [False True False False True] [1. 0.015 1. 0.27 0.0015] """ |
pvals = np.asarray(pvals)
num_nan = np.isnan(pvals).sum()
pvals_corrected = pvals * (float(pvals.size) - num_nan)
pvals_corrected = np.clip(pvals_corrected, None, 1)
with np.errstate(invalid='ignore'):
reject = np.less(pvals_corrected, alpha)
return reject, pvals_corrected |
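The docstring example can be reproduced by hand. The input vector below is inferred from the printed output (e.g. 0.015 = 5 x 0.003) and is only an illustration:

```python
import numpy as np

# Inferred input: five uncorrected p-values
pvals = np.array([0.50, 0.003, 0.32, 0.054, 0.0003])
# Bonferroni: multiply by the number of tests, then clip at 1
corrected = np.clip(pvals * pvals.size, None, 1)
reject = corrected < 0.05
```

Only the two smallest p-values survive the correction at alpha = 0.05, matching the docstring output.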
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def holm(pvals, alpha=.05):
"""P-values correction with Holm method. Parameters pvals : array_like Array of p-values of the individual tests. alpha : float Error rate (= alpha level). Returns ------- reject : array, bool True if a hypothesis is rejected, False if not pvals_corrected : array P-values adjusted for multiple hypothesis testing using the Holm procedure. See also -------- bonf : Bonferroni correction fdr : Benjamini/Hochberg and Benjamini/Yekutieli FDR correction Notes ----- From Wikipedia: In statistics, the Holm–Bonferroni method (also called the Holm method) is used to counteract the problem of multiple comparisons. It is intended to control the family-wise error rate and offers a simple test uniformly more powerful than the Bonferroni correction. The Holm adjusted p-values are the running maximum of the sorted p-values divided by the corresponding increasing alpha level: .. math:: where :math:`n` is the number of test. The full mathematical formula is: .. math:: \\widetilde {p}_{{(i)}}=\\max _{{j\\leq i}}\\left\\{(n-j+1)p_{{(j)}} \\right\\}_{{1}} Note that NaN values are not taken into account in the p-values correction. References - Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian journal of statistics, 65-70. - https://en.wikipedia.org/wiki/Holm%E2%80%93Bonferroni_method Examples -------- [False True False False True] [0.64 0.012 0.64 0.162 0.0015] """ |
# Convert to array and save original shape
pvals = np.asarray(pvals)
shape_init = pvals.shape
pvals = pvals.ravel()
num_nan = np.isnan(pvals).sum()
# Sort the (flattened) p-values
pvals_sortind = np.argsort(pvals)
pvals_sorted = pvals[pvals_sortind]
sortrevind = pvals_sortind.argsort()
ntests = pvals.size - num_nan
# Now we adjust the p-values
pvals_corr = pvals_sorted[:ntests] * np.arange(ntests, 0, -1)
pvals_corr = np.maximum.accumulate(pvals_corr)
pvals_corr = np.clip(pvals_corr, None, 1)
# And revert to the original shape and order
pvals_corr = np.append(pvals_corr, np.full(num_nan, np.nan))
pvals_corrected = pvals_corr[sortrevind].reshape(shape_init)
with np.errstate(invalid='ignore'):
reject = np.less(pvals_corrected, alpha)
return reject, pvals_corrected |
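The step-down logic above (sort, multiply by decreasing factors, running maximum, clip at 1, map back to the original order) can be illustrated on the same p-values inferred from the docstring example:

```python
import numpy as np

pvals = np.array([0.50, 0.003, 0.32, 0.054, 0.0003])  # inferred input
n = pvals.size
order = np.argsort(pvals)
# (n - j + 1) * p_(j) for the sorted p-values
stepdown = pvals[order] * np.arange(n, 0, -1)
# Enforce monotonicity, then cap at 1
stepdown = np.clip(np.maximum.accumulate(stepdown), None, 1)
# Map back to the original order
corrected = np.empty(n)
corrected[order] = stepdown
```

Note how the largest raw p-value (0.50) is pulled up to 0.64 by the running maximum, which is what makes Holm monotone.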
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def multicomp(pvals, alpha=0.05, method='holm'):
"""P-values correction for multiple comparisons. Parameters pvals : array_like uncorrected p-values. alpha : float Significance level. method : string Method used for testing and adjustment of pvalues. Can be either the full name or initial letters. Available methods are :: 'bonf' : one-step Bonferroni correction 'holm' : step-down method using Bonferroni adjustments 'fdr_bh' : Benjamini/Hochberg FDR correction 'fdr_by' : Benjamini/Yekutieli FDR correction 'none' : pass-through option (no correction applied) Returns ------- reject : array, boolean True for hypothesis that can be rejected for given alpha. pvals_corrected : array P-values corrected for multiple testing. See Also -------- bonf : Bonferroni correction holm : Holm-Bonferroni correction fdr : Benjamini/Hochberg and Benjamini/Yekutieli FDR correction pairwise_ttests : Pairwise post-hocs T-tests Notes ----- This function is similar to the `p.adjust` R function. The correction methods include the Bonferroni correction ("bonf") in which the p-values are multiplied by the number of comparisons. Less conservative methods are also included such as Holm (1979) ("holm"), Benjamini & Hochberg (1995) ("fdr_bh"), and Benjamini & Yekutieli (2001) ("fdr_by"), respectively. The first two methods are designed to give strong control of the family-wise error rate. Note that the Holm's method is usually preferred over the Bonferroni correction. The "fdr_bh" and "fdr_by" methods control the false discovery rate, i.e. the expected proportion of false discoveries amongst the rejected hypotheses. The false discovery rate is a less stringent condition than the family-wise error rate, so these methods are more powerful than the others. References - Bonferroni, C. E. (1935). Il calcolo delle assicurazioni su gruppi di teste. Studi in onore del professore salvatore ortu carboni, 13-60. - Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6, 65–70. 
- Benjamini, Y., and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B, 57, 289–300. - Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics, 29, 1165–1188. Examples -------- FDR correction of an array of p-values [False True False False True] [0.5 0.0075 0.4 0.09 0.0015] """ |
if not isinstance(pvals, (list, np.ndarray)):
err = "pvals must be a list or a np.ndarray"
raise ValueError(err)
if method.lower() in ['b', 'bonf', 'bonferroni']:
reject, pvals_corrected = bonf(pvals, alpha=alpha)
elif method.lower() in ['h', 'holm']:
reject, pvals_corrected = holm(pvals, alpha=alpha)
elif method.lower() in ['fdr', 'fdr_bh']:
reject, pvals_corrected = fdr(pvals, alpha=alpha, method='fdr_bh')
elif method.lower() in ['fdr_by']:
reject, pvals_corrected = fdr(pvals, alpha=alpha, method='fdr_by')
elif method.lower() == 'none':
pvals_corrected = pvals
with np.errstate(invalid='ignore'):
reject = np.less(pvals_corrected, alpha)
else:
raise ValueError('Multiple comparison method not recognized')
return reject, pvals_corrected |
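The `bonf`, `holm` and `fdr` helpers dispatched to above are defined elsewhere in the package. As an illustration of what the step-down Holm branch computes, here is a self-contained pure-Python sketch (the name `holm_adjust` is hypothetical, not the package's actual helper):

```python
def holm_adjust(pvals, alpha=0.05):
    """Step-down Holm adjustment: multiply the i-th smallest p-value
    by (m - i), cap at 1, and enforce monotonicity of adjusted values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        corrected = min((m - rank) * pvals[idx], 1.0)
        running_max = max(running_max, corrected)  # adjusted p must not decrease
        adjusted[idx] = running_max
    reject = [p < alpha for p in adjusted]
    return reject, adjusted
```

For `[0.01, 0.04, 0.03]` this matches R's `p.adjust(..., method="holm")`: the adjusted values are `[0.03, 0.06, 0.06]` and only the first hypothesis is rejected at alpha = 0.05.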
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def cronbach_alpha(data=None, items=None, scores=None, subject=None, remove_na=False, ci=.95):
"""Cronbach's alpha reliability measure. Parameters data : pandas dataframe Wide or long-format dataframe. items : str Column in ``data`` with the items names (long-format only). scores : str Column in ``data`` with the scores (long-format only). subject : str Column in ``data`` with the subject identifier (long-format only). remove_na : bool If True, remove the entire rows that contain missing values (= listwise deletion). If False, only pairwise missing values are removed when computing the covariance matrix. For more details, please refer to the :py:meth:`pandas.DataFrame.cov` method. ci : float Confidence interval (.95 = 95%) Returns ------- alpha : float Cronbach's alpha Notes ----- This function works with both wide and long format dataframe. If you pass a long-format dataframe, you must also pass the ``items``, ``scores`` and ``subj`` columns (in which case the data will be converted into wide format using the :py:meth:`pandas.DataFrame.pivot` method). Internal consistency is usually measured with Cronbach's alpha, a statistic calculated from the pairwise correlations between items. Internal consistency ranges between negative infinity and one. Coefficient alpha will be negative whenever there is greater within-subject variability than between-subject variability. Cronbach's :math:`\\alpha` is defined as .. math:: \\alpha ={k \\over k-1}\\left(1-{\\sum_{{i=1}}^{k}\\sigma_{{y_{i}}}^{2} \\over\\sigma_{x}^{2}}\\right) where :math:`k` refers to the number of items, :math:`\\sigma_{x}^{2}` is the variance of the observed total scores, and :math:`\\sigma_{{y_{i}}}^{2}` the variance of component :math:`i` for the current sample of subjects. Another formula for Cronbach's :math:`\\alpha` is .. math:: \\alpha = \\frac{k \\times \\bar c}{\\bar v + (k - 1) \\times \\bar c} where :math:`\\bar c` refers to the average of all covariances between items and :math:`\\bar v` to the average variance of each item. 95% confidence intervals are calculated using Feldt's method: .. 
math:: c_L = 1 - (1 - \\alpha) \\cdot F_{(0.025, n-1, (n-1)(k-1))} c_U = 1 - (1 - \\alpha) \\cdot F_{(0.975, n-1, (n-1)(k-1))} where :math:`n` is the number of subjects and :math:`k` the number of items. Results have been tested against the R package psych. References .. [1] https://en.wikipedia.org/wiki/Cronbach%27s_alpha .. [2] http://www.real-statistics.com/reliability/cronbachs-alpha/ .. [3] https://cran.r-project.org/web/packages/psych/psych.pdf .. [4] Feldt, Leonard S., Woodruff, David J., & Salih, Fathi A. (1987). Statistical inference for coefficient alpha. Applied Psychological Measurement, 11(1):
93-103. Examples -------- Binary wide-format dataframe (with missing values) (0.732661, array([0.435, 0.909])) After listwise deletion of missing values (remove the entire rows) (0.801695, array([0.581, 0.933])) After imputing the missing values with the median of each column (0.738019, array([0.447, 0.911])) Likert-type long-format dataframe (0.591719, array([0.195, 0.84 ])) """ |
# Safety check
assert isinstance(data, pd.DataFrame), 'data must be a dataframe.'
if all([v is not None for v in [items, scores, subject]]):
# Data in long-format: we first convert to a wide format
data = data.pivot(index=subject, values=scores, columns=items)
# From now we assume that data is in wide format
n, k = data.shape
assert k >= 2, 'At least two items are required.'
assert n >= 2, 'At least two raters/subjects are required.'
err = 'All columns must be numeric.'
assert all([data[c].dtype.kind in 'bfi' for c in data.columns]), err
    if data.isna().any().any() and remove_na:
        # In R = psych:alpha(data, use="complete.obs")
        data = data.dropna(axis=0, how='any')
        # Listwise deletion changes the number of subjects, which the
        # confidence-interval degrees of freedom below depend on
        n = data.shape[0]
# Compute covariance matrix and Cronbach's alpha
C = data.cov()
cronbach = (k / (k - 1)) * (1 - np.trace(C) / C.sum().sum())
# which is equivalent to
# v = np.diag(C).mean()
# c = C.values[np.tril_indices_from(C, k=-1)].mean()
# cronbach = (k * c) / (v + (k - 1) * c)
# Confidence intervals
alpha = 1 - ci
df1 = n - 1
df2 = df1 * (k - 1)
lower = 1 - (1 - cronbach) * f.isf(alpha / 2, df1, df2)
upper = 1 - (1 - cronbach) * f.isf(1 - alpha / 2, df1, df2)
return round(cronbach, 6), np.round([lower, upper], 3) |
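The variance-based formula from the docstring can be checked without pandas. Below is a minimal pure-Python sketch (`cronbach_sketch` is a hypothetical name; the real function above additionally handles NaNs, dtypes, and confidence intervals), taking one list per item:

```python
def cronbach_sketch(columns):
    """Cronbach's alpha from raw item columns (equal length, no missing
    values): alpha = k/(k-1) * (1 - sum(item variances) / var(total))."""
    k = len(columns)
    n = len(columns[0])

    def var(xs):  # unbiased sample variance, matching pandas' default ddof=1
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in columns) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(c) for c in columns) / var(totals))
```

A quick sanity check: two identical items give alpha = 1 (perfect internal consistency), since the total-score variance is exactly k times larger than needed to cancel the item variances.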
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def intraclass_corr(data=None, groups=None, raters=None, scores=None, ci=.95):
"""Intra-class correlation coefficient. Parameters data : pd.DataFrame Dataframe containing the variables groups : string Name of column in data containing the groups. raters : string Name of column in data containing the raters (scorers). scores : string Name of column in data containing the scores (ratings). ci : float Confidence interval Returns ------- icc : float Intraclass correlation coefficient ci : list Lower and upper confidence intervals Notes ----- The intraclass correlation (ICC) assesses the reliability of ratings by comparing the variability of different ratings of the same subject to the total variation across all ratings and all subjects. The ratings are quantitative (e.g. Likert scale). Inspired from: http://www.real-statistics.com/reliability/intraclass-correlation/ Examples -------- ICC of wine quality assessed by 4 judges. (0.727526, array([0.434, 0.927])) """ |
from pingouin import anova
# Check dataframe
if any(v is None for v in [data, groups, raters, scores]):
raise ValueError('Data, groups, raters and scores must be specified')
assert isinstance(data, pd.DataFrame), 'Data must be a pandas dataframe.'
# Check that scores is a numeric variable
assert data[scores].dtype.kind in 'fi', 'Scores must be numeric.'
# Check that data are fully balanced
if data.groupby(raters)[scores].count().nunique() > 1:
raise ValueError('Data must be balanced.')
# Extract sizes
k = data[raters].nunique()
# n = data[groups].nunique()
# ANOVA and ICC
aov = anova(dv=scores, data=data, between=groups, detailed=True)
icc = (aov.loc[0, 'MS'] - aov.loc[1, 'MS']) / \
(aov.loc[0, 'MS'] + (k - 1) * aov.loc[1, 'MS'])
# Confidence interval
alpha = 1 - ci
df_num, df_den = aov.loc[0, 'DF'], aov.loc[1, 'DF']
f_lower = aov.loc[0, 'F'] / f.isf(alpha / 2, df_num, df_den)
f_upper = aov.loc[0, 'F'] * f.isf(alpha / 2, df_den, df_num)
lower = (f_lower - 1) / (f_lower + k - 1)
upper = (f_upper - 1) / (f_upper + k - 1)
return round(icc, 6), np.round([lower, upper], 3) |
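The `(MS_between - MS_within) / (MS_between + (k-1) * MS_within)` formula above can be reproduced without pandas or the package's `anova` by computing the one-way mean squares directly. A sketch with a hypothetical name (`icc_oneway`), assuming a fully balanced table of ratings:

```python
def icc_oneway(ratings):
    """One-way ICC from a balanced table: one row per target (group),
    one column per rater."""
    n = len(ratings)      # number of targets
    k = len(ratings[0])   # number of raters
    grand = sum(x for row in ratings for x in row) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-targets and within-targets mean squares of the one-way ANOVA
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, row_means)
                    for x in row) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfect agreement between raters gives an ICC of 1, and maximal disagreement on identical row means gives -1, illustrating the range mentioned in the notes.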
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _func(a, p, r, v):
""" calculates f-hat for the coefficients in a, probability p, sample mean difference r, and degrees of freedom v. """ |
# eq. 2.3
f = a[0]*math.log(r-1.) + \
a[1]*math.log(r-1.)**2 + \
a[2]*math.log(r-1.)**3 + \
a[3]*math.log(r-1.)**4
# eq. 2.7 and 2.8 corrections
if r == 3:
f += -0.002 / (1. + 12. * _phi(p)**2)
    if v <= 4.364:
        # min(v, 1e38) caps an infinite df at a large finite value
        f += 1./517. - 1./(312.*min(v, 1e38))
    else:
        f += 1./(191.*min(v, 1e38))
return -f |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _select_ps(p):
# There are more generic ways of doing this but profiling # revealed that selecting these points is one of the slow # things that is easy to change. This is about 11 times # faster than the generic algorithm it is replacing. # # it is possible that different break points could yield # better estimates, but the function this is refactoring # just used linear distance. """returns the points to use for interpolating p""" |
if p >= .99:
return .990, .995, .999
elif p >= .975:
return .975, .990, .995
elif p >= .95:
return .950, .975, .990
elif p >= .9125:
return .900, .950, .975
elif p >= .875:
return .850, .900, .950
elif p >= .825:
return .800, .850, .900
elif p >= .7625:
return .750, .800, .850
elif p >= .675:
return .675, .750, .800
elif p >= .500:
return .500, .675, .750
else:
return .100, .500, .675 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _select_vs(v, p):
# This one is is about 30 times faster than # the generic algorithm it is replacing. """returns the points to use for interpolating v""" |
if v >= 120.:
return 60, 120, inf
elif v >= 60.:
return 40, 60, 120
elif v >= 40.:
return 30, 40, 60
elif v >= 30.:
return 24, 30, 40
elif v >= 24.:
return 20, 24, 30
elif v >= 19.5:
return 19, 20, 24
if p >= .9:
if v < 2.5:
return 1, 2, 3
else:
if v < 3.5:
return 2, 3, 4
vi = int(round(v))
return vi - 1, vi, vi + 1 |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _interpolate_v(p, r, v):
""" interpolates v based on the values in the A table for the scalar value of r and th """ |
# interpolate v (p should be in table)
# ordinate: y**2
# abcissa: 1./v
# find the 3 closest v values
# only p >= .9 have table values for 1 degree of freedom.
# The boolean is used to index the tuple and append 1 when
# p >= .9
v0, v1, v2 = _select_vs(v, p)
# y = f - 1.
y0_sq = (_func(A[(p,v0)], p, r, v0) + 1.)**2.
y1_sq = (_func(A[(p,v1)], p, r, v1) + 1.)**2.
y2_sq = (_func(A[(p,v2)], p, r, v2) + 1.)**2.
# if v2 is inf set to a big number so interpolation
# calculations will work
if v2 > 1e38: v2 = 1e38
# transform v
v_, v0_, v1_, v2_ = 1./v, 1./v0, 1./v1, 1./v2
# calculate derivatives for quadratic interpolation
d2 = 2.*((y2_sq-y1_sq)/(v2_-v1_) - \
(y0_sq-y1_sq)/(v0_-v1_)) / (v2_-v0_)
if (v2_ + v0_) >= (v1_ + v1_):
d1 = (y2_sq-y1_sq) / (v2_-v1_) - 0.5*d2*(v2_-v1_)
else:
d1 = (y1_sq-y0_sq) / (v1_-v0_) + 0.5*d2*(v1_-v0_)
d0 = y1_sq
# calculate y
y = math.sqrt((d2/2.)*(v_-v1_)**2. + d1*(v_-v1_)+ d0)
return y |
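The `d2`/`d1`/`d0` construction above is a three-point quadratic (Newton-style) interpolation centered on the middle abscissa; the branch merely picks the better-conditioned secant for the first derivative. Isolated with generic names, the same scheme reproduces any quadratic exactly:

```python
def quad_interp(x0, y0, x1, y1, x2, y2, x):
    """Three-point quadratic interpolation centered on (x1, y1),
    using the same d2/d1/d0 construction as _interpolate_v."""
    # second divided difference (estimate of the second derivative)
    d2 = 2. * ((y2 - y1) / (x2 - x1) - (y0 - y1) / (x0 - x1)) / (x2 - x0)
    # first derivative at x1, corrected from the wider-spaced secant
    if (x2 + x0) >= (x1 + x1):
        d1 = (y2 - y1) / (x2 - x1) - 0.5 * d2 * (x2 - x1)
    else:
        d1 = (y1 - y0) / (x1 - x0) + 0.5 * d2 * (x1 - x0)
    d0 = y1
    return (d2 / 2.) * (x - x1) ** 2 + d1 * (x - x1) + d0
```

For y = x**2 through (1, 1), (2, 4), (3, 9), the scheme gives exactly 6.25 at x = 2.5, and it stays exact for unevenly spaced abscissae, which is why `_interpolate_v` can apply it to the 1/v transform.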
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _qsturng(p, r, v):
"""scalar version of qsturng""" |
    # r is mapped through q to y here; we only need to account
    # for cases where p and/or v are not found in the table.
global A, p_keys, v_keys
if p < .1 or p > .999:
raise ValueError('p must be between .1 and .999')
    if p < .9:
        if v < 2:
            raise ValueError('v must be >= 2 when p < .9')
    else:
        if v < 1:
            raise ValueError('v must be >= 1 when p >= .9')
# The easy case. A tabled value is requested.
#numpy 1.4.1: TypeError: unhashable type: 'numpy.ndarray' :
p = float(p)
if isinstance(v, np.ndarray):
v = v.item()
if (p,v) in A:
y = _func(A[(p,v)], p, r, v) + 1.
elif p not in p_keys and v not in v_keys+([],[1])[p>=.90]:
# find the 3 closest v values
v0, v1, v2 = _select_vs(v, p)
# find the 3 closest p values
p0, p1, p2 = _select_ps(p)
# calculate r0, r1, and r2
r0_sq = _interpolate_p(p, r, v0)**2
r1_sq = _interpolate_p(p, r, v1)**2
r2_sq = _interpolate_p(p, r, v2)**2
# transform v
v_, v0_, v1_, v2_ = 1./v, 1./v0, 1./v1, 1./v2
# calculate derivatives for quadratic interpolation
d2 = 2.*((r2_sq-r1_sq)/(v2_-v1_) - \
(r0_sq-r1_sq)/(v0_-v1_)) / (v2_-v0_)
if (v2_ + v0_) >= (v1_ + v1_):
d1 = (r2_sq-r1_sq) / (v2_-v1_) - 0.5*d2*(v2_-v1_)
else:
d1 = (r1_sq-r0_sq) / (v1_-v0_) + 0.5*d2*(v1_-v0_)
d0 = r1_sq
# calculate y
y = math.sqrt((d2/2.)*(v_-v1_)**2. + d1*(v_-v1_)+ d0)
elif v not in v_keys+([],[1])[p>=.90]:
y = _interpolate_v(p, r, v)
elif p not in p_keys:
y = _interpolate_p(p, r, v)
    # Use a large finite df when v is infinite so t.isf stays defined;
    # max() here would wrongly replace every finite v with 1e38
    return math.sqrt(2) * -y * \
        scipy.stats.t.isf((1. + p) / 2., min(v, 1e38))
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def qsturng(p, r, v):
"""Approximates the quantile p for a studentized range distribution having v degrees of freedom and r samples for probability p. Parameters p : (scalar, array_like) The cumulative probability value p >= .1 and p <=.999 (values under .5 are not recommended) r : (scalar, array_like) The number of samples r >= 2 and r <= 200 (values over 200 are permitted but not recommended) v : (scalar, array_like) The sample degrees of freedom if p >= .9: v >=1 and v >= inf else: v >=2 and v >= inf Returns ------- q : (scalar, array_like) approximation of the Studentized Range """ |
if all(map(_isfloat, [p, r, v])):
return _qsturng(p, r, v)
return _vqsturng(p, r, v) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _psturng(q, r, v):
"""scalar version of psturng""" |
if q < 0.:
raise ValueError('q should be >= 0')
opt_func = lambda p, r, v : abs(_qsturng(p, r, v) - q)
if v == 1:
if q < _qsturng(.9, r, 1):
return .1
elif q > _qsturng(.999, r, 1):
return .001
return 1. - fminbound(opt_func, .9, .999, args=(r,v))
else:
if q < _qsturng(.1, r, v):
return .9
elif q > _qsturng(.999, r, v):
return .001
return 1. - fminbound(opt_func, .1, .999, args=(r,v)) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def psturng(q, r, v):
"""Evaluates the probability from 0 to q for a studentized range having v degrees of freedom and r samples. Parameters q : (scalar, array_like) quantile value of Studentized Range q >= 0. r : (scalar, array_like) The number of samples r >= 2 and r <= 200 (values over 200 are permitted but not recommended) v : (scalar, array_like) The sample degrees of freedom if p >= .9: v >=1 and v >= inf else: v >=2 and v >= inf Returns ------- p : (scalar, array_like) 1. - area from zero to q under the Studentized Range distribution. When v == 1, p is bound between .001 and .1, when v > 1, p is bound between .001 and .9. Values between .5 and .9 are 1st order appoximations. """ |
if all(map(_isfloat, [q, r, v])):
return _psturng(q, r, v)
return _vpsturng(q, r, v) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def power_anova(eta=None, k=None, n=None, power=None, alpha=0.05):
""" Evaluate power, sample size, effect size or significance level of a one-way balanced ANOVA. Parameters eta : float ANOVA effect size (eta-square == :math:`\\eta^2`). k : int Number of groups n : int Sample size per group. Groups are assumed to be balanced (i.e. same sample size). power : float Test power (= 1 - type II error). alpha : float Significance level (type I error probability). The default is 0.05. Notes ----- Exactly ONE of the parameters ``eta``, ``k``, ``n``, ``power`` and ``alpha`` must be passed as None, and that parameter is determined from the others. Notice that ``alpha`` has a default value of 0.05 so None must be explicitly passed if you want to compute it. This function is a mere Python translation of the original `pwr.anova.test` function implemented in the `pwr` package. All credit goes to the author, Stephane Champely. Statistical power is the likelihood that a study will detect an effect when there is an effect there to be detected. A high statistical power means that there is a low probability of concluding that there is no effect when there is one. Statistical power is mainly affected by the effect size and the sample size. For one-way ANOVA, eta-square is the same as partial eta-square. It can be evaluated from the f-value and degrees of freedom of the ANOVA using the following formula: .. math:: \\eta^2 = \\frac{v_1 F^*}{v_1 F^* + v_2} Using :math:`\\eta^2` and the total sample size :math:`N`, the non-centrality parameter is defined by: .. math:: \\delta = N * \\frac{\\eta^2}{1 - \\eta^2} Then the critical value of the non-central F-distribution is computed using the percentile point function of the F-distribution with: .. math:: q = 1 - alpha .. math:: v_1 = k - 1 .. math:: v_2 = N - k where :math:`k` is the number of groups. Finally, the power of the ANOVA is calculated using the survival function of the non-central F-distribution using the previously computed critical value, non-centrality parameter, and degrees of freedom. 
:py:func:`scipy.optimize.brenth` is used to solve power equations for other variables (i.e. sample size, effect size, or significance level). If the solving fails, a nan value is returned. Results have been tested against GPower and the R pwr package. References .. [1] Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale,NJ: Lawrence Erlbaum. .. [2] https://cran.r-project.org/web/packages/pwr/pwr.pdf Examples -------- 1. Compute achieved power power: 0.6082 2. Compute required number of groups k: 6.0944 3. Compute required sample size n: 29.9255 4. Compute achieved effect size eta: 0.1255 5. Compute achieved alpha (significance) alpha: 0.1085 """ |
# Check the number of arguments that are None
n_none = sum([v is None for v in [eta, k, n, power, alpha]])
if n_none != 1:
err = 'Exactly one of eta, k, n, power, and alpha must be None.'
raise ValueError(err)
# Safety checks
if eta is not None:
eta = abs(eta)
f_sq = eta / (1 - eta)
if alpha is not None:
assert 0 < alpha <= 1
if power is not None:
assert 0 < power <= 1
def func(f_sq, k, n, power, alpha):
nc = (n * k) * f_sq
dof1 = k - 1
dof2 = (n * k) - k
fcrit = stats.f.ppf(1 - alpha, dof1, dof2)
return stats.ncf.sf(fcrit, dof1, dof2, nc)
# Evaluate missing variable
if power is None:
# Compute achieved power
return func(f_sq, k, n, power, alpha)
elif k is None:
# Compute required number of groups
        def _eval_k(k, f_sq, n, power, alpha):
            return func(f_sq, k, n, power, alpha) - power
        try:
            return brenth(_eval_k, 2, 100, args=(f_sq, n, power, alpha))
except ValueError: # pragma: no cover
return np.nan
elif n is None:
# Compute required sample size
def _eval_n(n, f_sq, k, power, alpha):
return func(f_sq, k, n, power, alpha) - power
try:
return brenth(_eval_n, 2, 1e+07, args=(f_sq, k, power, alpha))
except ValueError: # pragma: no cover
return np.nan
elif eta is None:
# Compute achieved eta
def _eval_eta(f_sq, k, n, power, alpha):
return func(f_sq, k, n, power, alpha) - power
try:
f_sq = brenth(_eval_eta, 1e-10, 1 - 1e-10, args=(k, n, power,
alpha))
return f_sq / (f_sq + 1) # Return eta-square
except ValueError: # pragma: no cover
return np.nan
else:
# Compute achieved alpha
def _eval_alpha(alpha, f_sq, k, n, power):
return func(f_sq, k, n, power, alpha) - power
try:
return brenth(_eval_alpha, 1e-10, 1 - 1e-10, args=(f_sq, k, n,
power))
except ValueError: # pragma: no cover
return np.nan |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def consume(self, timeout=None, loop=None):
"""Start consuming the stream :param timeout: int: if it's given then it stops consumer after given number of seconds """ |
if self._consumer_fn is None:
raise ValueError('Consumer function is not defined yet')
logger.info('Start consuming the stream')
@asyncio.coroutine
def worker(conn_url):
extra_headers = {
'Connection': 'upgrade',
'Upgrade': 'websocket',
'Sec-Websocket-Version': 13,
}
ws = yield from websockets.connect(
conn_url, extra_headers=extra_headers)
if ws is None:
raise RuntimeError("Couldn't connect to the '%s'" % conn_url)
try:
while True:
message = yield from ws.recv()
yield from self._consumer_fn(message)
finally:
yield from ws.close()
if loop is None:
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
task = worker(conn_url=self._conn_url)
if timeout:
logger.info('Running task with timeout %s sec', timeout)
loop.run_until_complete(
asyncio.wait_for(task, timeout=timeout))
else:
loop.run_until_complete(task)
except asyncio.TimeoutError:
logger.info('Timeout is reached. Closing the loop')
loop.close()
except KeyboardInterrupt:
logger.info('Closing the loop')
loop.close() |
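The timeout handling above relies on `asyncio.wait_for` cancelling the worker and raising `asyncio.TimeoutError`. A minimal sketch of that mechanism in modern `async`/`await` style (the `yield from` coroutine style used above is long deprecated); the coroutine names are illustrative only:

```python
import asyncio

async def slow_consumer():
    # stands in for the websocket recv loop, which would otherwise run forever
    await asyncio.sleep(10)
    return 'done'

async def run_with_timeout(timeout):
    try:
        return await asyncio.wait_for(slow_consumer(), timeout=timeout)
    except asyncio.TimeoutError:
        return 'timed out'

result = asyncio.run(run_with_timeout(0.01))
```

Here `wait_for` cancels `slow_consumer` after 0.01 s, so the call returns almost immediately with `'timed out'` instead of blocking for 10 s.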
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def add_rule(self, value, tag):
"""Add a new rule :param value: str :param tag: str :return: dict of a json response """ |
resp = requests.post(url=self.REQUEST_URL.format(**self._params),
json={'rule': {'value': value, 'tag': tag}})
return resp.json() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def remove_rule(self, tag):
"""Remove a rule by tag """ |
resp = requests.delete(url=self.REQUEST_URL.format(**self._params),
json={'tag': tag})
return resp.json() |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def stringify_values(data):
"""Coerce iterable values to 'val1,val2,valN' Example: fields=['nickname', 'city', 'can_see_all_posts'] --> fields='nickname,city,can_see_all_posts' :param data: dict :return: converted values dict """ |
if not isinstance(data, dict):
raise ValueError('Data must be dict. %r is passed' % data)
values_dict = {}
for key, value in data.items():
items = []
if isinstance(value, six.string_types):
items.append(value)
elif isinstance(value, Iterable):
for v in value:
                # Convert int values to str
if isinstance(v, int):
v = str(v)
try:
item = six.u(v)
except TypeError:
item = v
items.append(item)
value = ','.join(items)
values_dict[key] = value
return values_dict |
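The same coercion can be sketched with the stdlib alone (no `six`). Note one deliberate assumption in this sketch: non-iterable scalars such as ints are passed through unchanged, whereas the code above would silently turn them into an empty string since they match neither branch:

```python
def stringify(data):
    """Pure-stdlib sketch of stringify_values: join iterable values
    into 'v1,v2,...' while leaving strings and scalars alone."""
    out = {}
    for key, value in data.items():
        if isinstance(value, str):
            out[key] = value                 # strings are already final
        elif hasattr(value, '__iter__'):
            out[key] = ','.join(str(v) for v in value)
        else:
            out[key] = value                 # pass scalars through
    return out
```

For example, `{'fields': ['nickname', 'city']}` becomes `{'fields': 'nickname,city'}`, matching the docstring above.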
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_url_query_params(url, fragment=True):
"""Parse url query params :param fragment: bool: flag is used for parsing oauth url :param url: str: url string :return: dict """ |
parsed_url = urlparse(url)
if fragment:
url_query = parse_qsl(parsed_url.fragment)
else:
url_query = parse_qsl(parsed_url.query)
    # login_response_url_query can have multiple keys
url_query = dict(url_query)
return url_query |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def parse_masked_phone_number(html, parser=None):
"""Get masked phone number from security check html :param html: str: raw html text :param parser: bs4.BeautifulSoup: html parser :return: tuple of phone prefix and suffix, for example: ('+1234', '89') :rtype : tuple """ |
if parser is None:
parser = bs4.BeautifulSoup(html, 'html.parser')
fields = parser.find_all('span', {'class': 'field_prefix'})
if not fields:
raise VkParseError(
'No <span class="field_prefix">...</span> in the \n%s' % html)
result = []
for f in fields:
value = f.get_text().replace(six.u('\xa0'), '')
result.append(value)
return tuple(result) |
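The bs4 lookup of `<span class="field_prefix">` can also be done with the stdlib `html.parser`, which is handy when avoiding the BeautifulSoup dependency. A sketch (class name hypothetical; it assumes the `class` attribute is exactly `field_prefix`, whereas bs4 also matches multi-class attributes):

```python
from html.parser import HTMLParser

class FieldPrefixParser(HTMLParser):
    """Collect the text of every <span class="field_prefix"> element."""
    def __init__(self):
        super().__init__()
        self._inside = False
        self.values = []

    def handle_starttag(self, tag, attrs):
        if tag == 'span' and ('class', 'field_prefix') in attrs:
            self._inside = True

    def handle_endtag(self, tag):
        if tag == 'span':
            self._inside = False

    def handle_data(self, data):
        if self._inside:
            # strip the non-breaking spaces VK pads the number with
            self.values.append(data.replace('\xa0', '').strip())

parser = FieldPrefixParser()
parser.feed('<span class="field_prefix">+1234</span>'
            '<span class="field_prefix">89</span>')
masked = tuple(parser.values)
```

Feeding the two spans above yields `('+1234', '89')`, the same prefix/suffix tuple the function returns.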
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def http_session(self):
"""HTTP Session property :return: vk_requests.utils.VerboseHTTPSession instance """ |
if self._http_session is None:
session = VerboseHTTPSession()
session.headers.update(self.DEFAULT_HTTP_HEADERS)
self._http_session = session
return self._http_session |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def do_login(self, http_session):
"""Do vk login :param http_session: vk_requests.utils.VerboseHTTPSession: http session """ |
response = http_session.get(self.LOGIN_URL)
action_url = parse_form_action_url(response.text)
    # Stop login if action url is not found
if not action_url:
logger.debug(response.text)
raise VkParseError("Can't parse form action url")
login_form_data = {'email': self._login, 'pass': self._password}
login_response = http_session.post(action_url, login_form_data)
logger.debug('Cookies: %s', http_session.cookies)
response_url_query = parse_url_query_params(
login_response.url, fragment=False)
logger.debug('response_url_query: %s', response_url_query)
act = response_url_query.get('act')
# Check response url query params firstly
if 'sid' in response_url_query:
self.require_auth_captcha(
response=login_response,
query_params=response_url_query,
login_form_data=login_form_data,
http_session=http_session)
elif act == 'authcheck':
self.require_2fa(html=login_response.text,
http_session=http_session)
elif act == 'security_check':
self.require_phone_number(html=login_response.text,
session=http_session)
session_cookies = ('remixsid' in http_session.cookies,
'remixsid6' in http_session.cookies)
if any(session_cookies):
logger.info('VK session is established')
return True
else:
message = 'Authorization error: incorrect password or ' \
'authentication code'
logger.error(message)
raise VkAuthError(message) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def require_auth_captcha(self, response, query_params, login_form_data, http_session):
"""Resolve auth captcha case :param response: http response :param query_params: dict: response query params, for example: {'s': '0', 'email': 'my@email', 'dif': '1', 'role': 'fast', 'sid': '1'} :param login_form_data: dict :param http_session: requests.Session :return: :raise VkAuthError: """ |
logger.info('Captcha is needed. Query params: %s', query_params)
form_text = response.text
action_url = parse_form_action_url(form_text)
logger.debug('form action url: %s', action_url)
if not action_url:
raise VkAuthError('Cannot find form action url')
captcha_sid, captcha_url = parse_captcha_html(
html=response.text, response_url=response.url)
logger.info('Captcha url %s', captcha_url)
login_form_data['captcha_sid'] = captcha_sid
login_form_data['captcha_key'] = self.get_captcha_key(captcha_url)
response = http_session.post(action_url, login_form_data)
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_captcha_key(self, captcha_image_url):
"""Read CAPTCHA key from user input""" |
if self.interactive:
print('Open CAPTCHA image url in your browser and enter it below: ',
captcha_image_url)
captcha_key = raw_input('Enter CAPTCHA key: ')
return captcha_key
else:
raise VkAuthError(
'Captcha is required. Use interactive mode to enter it '
'manually') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def make_request(self, request, captcha_response=None):
"""Make api request helper function :param request: vk_requests.api.Request instance :param captcha_response: None or dict, e.g {'sid': <sid>, 'key': <key>} :return: dict: json decoded http response """ |
logger.debug('Prepare API Method request %r', request)
response = self._send_api_request(request=request,
captcha_response=captcha_response)
response.raise_for_status()
response_or_error = json.loads(response.text)
logger.debug('response: %s', response_or_error)
if 'error' in response_or_error:
error_data = response_or_error['error']
vk_error = VkAPIError(error_data)
if vk_error.is_captcha_needed():
captcha_key = self.get_captcha_key(vk_error.captcha_img_url)
if not captcha_key:
raise vk_error
# Retry http request with captcha info attached
captcha_response = {
'sid': vk_error.captcha_sid,
'key': captcha_key,
}
return self.make_request(
request, captcha_response=captcha_response)
elif vk_error.is_access_token_incorrect():
logger.info(
'Authorization failed. Access token will be dropped')
self._access_token = None
return self.make_request(request)
else:
raise vk_error
elif 'execute_errors' in response_or_error:
# can take place while running .execute vk method
# See more: https://vk.com/dev/execute
raise VkAPIError(response_or_error['execute_errors'][0])
elif 'response' in response_or_error:
return response_or_error['response'] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _send_api_request(self, request, captcha_response=None):
"""Prepare and send HTTP API request :param request: vk_requests.api.Request instance :param captcha_response: None or dict :return: HTTP response """ |
url = self.API_URL + request.method_name
# Prepare request arguments
method_kwargs = {'v': self.api_version}
# Shape up the request data
for values in (request.method_args,):
method_kwargs.update(stringify_values(values))
if self.is_token_required() or self._service_token:
# Auth api call if access_token hadn't been gotten earlier
method_kwargs['access_token'] = self.access_token
if captcha_response:
method_kwargs['captcha_sid'] = captcha_response['sid']
method_kwargs['captcha_key'] = captcha_response['key']
http_params = dict(url=url,
data=method_kwargs,
**request.http_params)
logger.debug('send_api_request:http_params: %s', http_params)
response = self.http_session.post(**http_params)
return response |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def create_api(app_id=None, login=None, password=None, phone_number=None, scope='offline', api_version='5.92', http_params=None, interactive=False, service_token=None, client_secret=None, two_fa_supported=False, two_fa_force_sms=False):
"""Factory method to explicitly create API with app_id, login, password and phone_number parameters. If the app_id, login, password are not passed, then token-free session will be created automatically :param app_id: int: vk application id, more info: https://vk.com/dev/main :param login: str: vk login :param password: str: vk password :param phone_number: str: phone number with country code (+71234568990) :param scope: str or list of str: vk session scope :param api_version: str: vk api version, check https://vk.com/dev/versions :param interactive: bool: flag which indicates to use InteractiveVKSession :param service_token: str: new way of querying vk api, instead of getting oauth token :param http_params: dict: requests http parameters passed along :param client_secret: str: secure application key for Direct Authorization, more info: https://vk.com/dev/auth_direct :param two_fa_supported: bool: enable two-factor authentication for Direct Authorization, more info: https://vk.com/dev/auth_direct :param two_fa_force_sms: bool: force SMS two-factor authentication for Direct Authorization if two_fa_supported is True, more info: https://vk.com/dev/auth_direct :return: api instance :rtype : vk_requests.api.API """ |
session = VKSession(app_id=app_id,
user_login=login,
user_password=password,
phone_number=phone_number,
scope=scope,
service_token=service_token,
api_version=api_version,
interactive=interactive,
client_secret=client_secret,
two_fa_supported=two_fa_supported,
two_fa_force_sms=two_fa_force_sms)
return API(session=session, http_params=http_params) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def result(self, value):
"""The result of the command.""" |
if self._process_result:
self._result = self._process_result(value)
self._raw_result = value |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def url(self, host):
"""Generate url for coap client.""" |
path = '/'.join(str(v) for v in self._path)
return 'coaps://{}:5684/{}'.format(host, path) |
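As a standalone sketch (the host and path values are illustrative), the URL construction above reduces to joining stringified path segments under the standard DTLS CoAP port:

```python
def coap_url(host, path):
    """Build a DTLS CoAP (coaps, port 5684) URL from a list of path segments."""
    return 'coaps://{}:5684/{}'.format(host, '/'.join(str(v) for v in path))

url = coap_url('192.168.1.2', [15001, 65537])
```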
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _merge(self, a, b):
"""Merges a into b.""" |
for k, v in a.items():
if isinstance(v, dict):
item = b.setdefault(k, {})
self._merge(v, item)
elif isinstance(v, list):
item = b.setdefault(k, [{}])
if len(v) == 1 and isinstance(v[0], dict):
self._merge(v[0], item[0])
else:
b[k] = v
else:
b[k] = v
return b |
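The merge strategy above can be exercised in isolation. Note the asymmetry: `a` is merged *into* `b`, which is mutated and returned, and a one-element list of dicts is merged element-wise while any other list replaces the target:

```python
def merge(a, b):
    """Recursively merge dict ``a`` into ``b`` (``b`` is mutated and returned)."""
    for k, v in a.items():
        if isinstance(v, dict):
            merge(v, b.setdefault(k, {}))
        elif isinstance(v, list):
            item = b.setdefault(k, [{}])
            if len(v) == 1 and isinstance(v[0], dict):
                # Single-dict lists are merged element-wise.
                merge(v[0], item[0])
            else:
                # Any other list simply replaces the target value.
                b[k] = v
        else:
            b[k] = v
    return b

target = {'x': {'a': 1}, 'y': 2}
out = merge({'x': {'b': 2}, 'y': 3}, target)
listed = merge({'l': [{'p': 1}]}, {'l': [{'q': 2}]})
```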
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def combine_data(self, command2):
"""Combines the data for this command with another.""" |
if command2 is None:
return
self._data = self._merge(command2._data, self._data) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def load_json(filename: str) -> Union[List, Dict]: """Load JSON data from a file and return as dict or list. Defaults to returning empty dict if file is not found. """ |
try:
with open(filename, encoding='utf-8') as fdesc:
return json.loads(fdesc.read())
except FileNotFoundError:
# This is not a fatal error
_LOGGER.debug('JSON file not found: %s', filename)
except ValueError as error:
_LOGGER.exception('Could not parse JSON content: %s', filename)
raise PytradfriError(error)
except OSError as error:
_LOGGER.exception('JSON file reading failed: %s', filename)
raise PytradfriError(error)
return {} |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def save_json(filename: str, config: Union[List, Dict]):
"""Save JSON data to a file. Returns True on success. """ |
try:
data = json.dumps(config, sort_keys=True, indent=4)
with open(filename, 'w', encoding='utf-8') as fdesc:
fdesc.write(data)
return True
except TypeError as error:
_LOGGER.exception('Failed to serialize to JSON: %s',
filename)
raise PytradfriError(error)
except OSError as error:
_LOGGER.exception('Saving JSON file failed: %s',
filename)
raise PytradfriError(error) |
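The two helpers are designed as a round trip: `save_json` writes key-sorted, indented JSON, and `load_json` treats a missing file as a non-fatal empty result. A self-contained sketch (using plain exceptions instead of the library's `PytradfriError`):

```python
import json
import os
import tempfile

def save_json(filename, config):
    """Write config as pretty-printed, key-sorted JSON. Returns True."""
    data = json.dumps(config, sort_keys=True, indent=4)
    with open(filename, 'w', encoding='utf-8') as fdesc:
        fdesc.write(data)
    return True

def load_json(filename):
    """Return parsed JSON content, or {} when the file is missing."""
    try:
        with open(filename, encoding='utf-8') as fdesc:
            return json.loads(fdesc.read())
    except FileNotFoundError:
        # Missing file is not a fatal error.
        return {}

path = os.path.join(tempfile.mkdtemp(), 'psk.json')
missing = load_json(path)                            # file does not exist yet
saved = save_json(path, {'identity': 'abc', 'key': '123'})
loaded = load_json(path)
```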
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_selected_keys(self, selection):
"""Return a list of keys for the given selection.""" |
return [k for k, b in self._lookup.items() if b & selection] |
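The selection here is a bitmask test: each choice carries a power-of-two flag, and `b & selection` keeps exactly the entries whose bit is set. A minimal sketch (the flag names are assumptions, not the library's constants):

```python
# Hypothetical capability flags, one bit each.
CHOICES = [(1, 'dimmer'), (2, 'color_temp'), (4, 'hex_color')]

def get_selected_values(selection):
    """Return the values whose flag bit is set in ``selection``."""
    return [v for b, v in CHOICES if b & selection]

selected = get_selected_values(1 | 4)
```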
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_selected_values(self, selection):
"""Return a list of values for the given selection.""" |
return [v for b, v in self._choices if b & selection] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def retry_timeout(api, retries=3):
"""Retry API call when a timeout occurs.""" |
@wraps(api)
def retry_api(*args, **kwargs):
"""Retrying API."""
for i in range(1, retries + 1):
try:
return api(*args, **kwargs)
except RequestTimeout:
if i == retries:
raise
return retry_api |
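The decorator swallows up to `retries - 1` timeouts and re-raises the last one. A runnable sketch with a stand-in exception class (the real `RequestTimeout` comes from the library):

```python
from functools import wraps

class RequestTimeout(Exception):
    """Stand-in for the library's timeout error."""

def retry_timeout(api, retries=3):
    """Retry ``api`` on RequestTimeout, re-raising after ``retries`` attempts."""
    @wraps(api)
    def retry_api(*args, **kwargs):
        for i in range(1, retries + 1):
            try:
                return api(*args, **kwargs)
            except RequestTimeout:
                if i == retries:
                    raise
    return retry_api

calls = []

def flaky():
    """Fail twice, then succeed."""
    calls.append(1)
    if len(calls) < 3:
        raise RequestTimeout()
    return 'ok'

result = retry_timeout(flaky)()
```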
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def request(self, api_commands, *, timeout=None):
"""Make a request. Timeout is in seconds.""" |
if not isinstance(api_commands, list):
return self._execute(api_commands, timeout=timeout)
command_results = []
for api_command in api_commands:
result = self._execute(api_command, timeout=timeout)
command_results.append(result)
return command_results |
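The shape of the return value mirrors the input: a single command yields a single result, a list yields a list. The branching can be sketched independently of the transport:

```python
def request(execute, api_commands):
    """Run one command, or each command in a list, preserving the shape."""
    if not isinstance(api_commands, list):
        return execute(api_commands)
    return [execute(cmd) for cmd in api_commands]

single = request(str.upper, 'get')
batch = request(str.upper, ['get', 'put'])
```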
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def task_start_time(self):
"""Return the time the task starts. Time is set according to iso8601. """ |
return datetime.time(
self.task_start_parameters[
ATTR_SMART_TASK_TRIGGER_TIME_START_HOUR],
self.task_start_parameters[
ATTR_SMART_TASK_TRIGGER_TIME_START_MIN]) |
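The two trigger attributes are just an hour and a minute fed into `datetime.time`, which then formats per ISO 8601. A sketch with illustrative key names in place of the library's attribute constants:

```python
import datetime

# Key names below are illustrative stand-ins for the trigger-time attributes.
start_parameters = {'start_hour': 7, 'start_minute': 30}

start = datetime.time(start_parameters['start_hour'],
                      start_parameters['start_minute'])
iso = start.isoformat()
```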
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def tasks(self):
"""Return task objects of the task control.""" |
return [StartActionItem(
self._task,
i,
self.state,
self.path,
self.raw) for i in range(len(self.raw))] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_dimmer(self, dimmer):
"""Set final dimmer value for task.""" |
command = {
ATTR_START_ACTION: {
ATTR_DEVICE_STATE: self.state,
ROOT_START_ACTION: [{
ATTR_ID: self.raw[ATTR_ID],
ATTR_LIGHT_DIMMER: dimmer,
ATTR_TRANSITION_TIME: self.raw[ATTR_TRANSITION_TIME]
}, self.devices_dict]
}
}
return self.set_values(command) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def observe(self, callback, err_callback, duration=60):
"""Observe resource and call callback when updated.""" |
def observe_callback(value):
"""
Called when end point is updated.
Returns a Command.
"""
self.raw = value
callback(self)
return Command('get', self.path, process_result=observe_callback,
err_callback=err_callback,
observe=True,
observe_duration=duration) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def update(self):
""" Update the group. Returns a Command. """ |
def process_result(result):
self.raw = result
return Command('get', self.path, process_result=process_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def generate_psk(self, identity):
""" Generates the PRE_SHARED_KEY from the gateway. Returns a Command. """ |
def process_result(result):
return result[ATTR_PSK]
return Command('post', [ROOT_GATEWAY, ATTR_AUTH], {
ATTR_IDENTITY: identity
}, process_result=process_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_endpoints(self):
""" Return all available endpoints on the gateway. Returns a Command. """ |
def process_result(result):
return [line.split(';')[0][2:-1] for line in result.split(',')]
return Command('get', ['.well-known', 'core'], parse_json=False,
process_result=process_result) |
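The slicing in `process_result` is parsing a CoRE Link Format payload (RFC 6690): each comma-separated entry starts with a `</path>` target, and `[2:-1]` strips the leading `</` and trailing `>`. Standalone, with an example payload:

```python
def parse_core_links(payload):
    """Extract URI paths from a CoRE Link Format string (RFC 6690)."""
    # Each entry looks like '</15001>;ct=0;obs'; keep only the path inside <>.
    return [entry.split(';')[0][2:-1] for entry in payload.split(',')]

payload = '</15001>;ct=0;obs,</15004>;ct=0,</15005>;ct=0'
endpoints = parse_core_links(payload)
```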
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_devices(self):
""" Return the devices linked to the gateway. Returns a Command. """ |
def process_result(result):
return [self.get_device(dev) for dev in result]
return Command('get', [ROOT_DEVICES], process_result=process_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_device(self, device_id):
""" Return specified device. Returns a Command. """ |
def process_result(result):
return Device(result)
return Command('get', [ROOT_DEVICES, device_id],
process_result=process_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_groups(self):
""" Return the groups linked to the gateway. Returns a Command. """ |
def process_result(result):
return [self.get_group(group) for group in result]
return Command('get', [ROOT_GROUPS], process_result=process_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_group(self, group_id):
""" Return specified group. Returns a Command. """ |
def process_result(result):
return Group(self, result)
return Command('get', [ROOT_GROUPS, group_id],
process_result=process_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_gateway_info(self):
""" Return the gateway info. Returns a Command. """ |
def process_result(result):
return GatewayInfo(result)
return Command('get',
[ROOT_GATEWAY, ATTR_GATEWAY_INFO],
process_result=process_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_moods(self):
""" Return moods defined on the gateway. Returns a Command. """ |
mood_parent = self._get_mood_parent()
def process_result(result):
return [self.get_mood(mood, mood_parent=mood_parent) for mood in
result]
return Command('get', [ROOT_MOODS, mood_parent],
process_result=process_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_mood(self, mood_id, *, mood_parent=None):
""" Return a mood. Returns a Command. """ |
if mood_parent is None:
mood_parent = self._get_mood_parent()
def process_result(result):
return Mood(result, mood_parent)
return Command('get', [ROOT_MOODS, mood_parent, mood_id],
               process_result=process_result)
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_smart_tasks(self):
""" Return the transitions linked to the gateway. Returns a Command. """ |
def process_result(result):
return [self.get_smart_task(task) for task in result]
return Command('get', [ROOT_SMART_TASKS],
process_result=process_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def get_smart_task(self, task_id):
""" Return specified transition. Returns a Command. """ |
def process_result(result):
return SmartTask(self, result)
return Command('get', [ROOT_SMART_TASKS, task_id],
process_result=process_result) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def first_setup(self):
"""This is a guess of the meaning of this value.""" |
if ATTR_FIRST_SETUP not in self.raw:
return None
return datetime.utcfromtimestamp(self.raw[ATTR_FIRST_SETUP]) |
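The raw value is a Unix epoch timestamp. The library converts it with the naive `datetime.utcfromtimestamp`; the sketch below does the same conversion with a timezone-aware datetime instead (the dict key is an illustrative stand-in for `ATTR_FIRST_SETUP`):

```python
from datetime import datetime, timezone

def first_setup(raw, key='first_setup'):
    """Return the first-setup time as an aware UTC datetime, or None."""
    if key not in raw:
        return None
    return datetime.fromtimestamp(raw[key], tz=timezone.utc)

absent = first_setup({})
epoch = first_setup({'first_setup': 0})
```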
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def power_source_str(self):
"""String representation of current power source.""" |
if DeviceInfo.ATTR_POWER_SOURCE not in self.raw:
return None
return DeviceInfo.VALUE_POWER_SOURCES.get(self.power_source, 'Unknown') |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def lights(self):
"""Return light objects of the light control.""" |
return [Light(self._device, i) for i in range(len(self.raw))] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_state(self, state, *, index=0):
"""Set state of a light.""" |
return self.set_values({
ATTR_DEVICE_STATE: int(state)
}, index=index) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_color_temp(self, color_temp, *, index=0, transition_time=None):
"""Set color temp a light.""" |
self._value_validate(color_temp, RANGE_MIREDS, "Color temperature")
values = {
ATTR_LIGHT_MIREDS: color_temp
}
if transition_time is not None:
values[ATTR_TRANSITION_TIME] = transition_time
return self.set_values(values, index=index) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_hex_color(self, color, *, index=0, transition_time=None):
"""Set hex color of the light.""" |
values = {
ATTR_LIGHT_COLOR_HEX: color,
}
if transition_time is not None:
values[ATTR_TRANSITION_TIME] = transition_time
return self.set_values(values, index=index) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_xy_color(self, color_x, color_y, *, index=0, transition_time=None):
"""Set xy color of the light.""" |
self._value_validate(color_x, RANGE_X, "X color")
self._value_validate(color_y, RANGE_Y, "Y color")
values = {
ATTR_LIGHT_COLOR_X: color_x,
ATTR_LIGHT_COLOR_Y: color_y
}
if transition_time is not None:
values[ATTR_TRANSITION_TIME] = transition_time
return self.set_values(values, index=index) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_hsb(self, hue, saturation, brightness=None, *, index=0, transition_time=None):
"""Set HSB color settings of the light.""" |
self._value_validate(hue, RANGE_HUE, "Hue")
self._value_validate(saturation, RANGE_SATURATION, "Saturation")
values = {
ATTR_LIGHT_COLOR_SATURATION: saturation,
ATTR_LIGHT_COLOR_HUE: hue
}
if brightness is not None:
values[ATTR_LIGHT_DIMMER] = brightness
self._value_validate(brightness, RANGE_BRIGHTNESS, "Brightness")
if transition_time is not None:
values[ATTR_TRANSITION_TIME] = transition_time
return self.set_values(values, index=index) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def _value_validate(self, value, rnge, identifier="Given"):
""" Make sure a value is within a given range """ |
if value is not None and (value < rnge[0] or value > rnge[1]):
raise ValueError('%s value must be between %d and %d.'
% (identifier, rnge[0], rnge[1])) |
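Note that `None` deliberately passes validation, so optional parameters can be skipped. A runnable sketch (the mired bounds are an assumption for illustration, not necessarily the library's `RANGE_MIREDS`):

```python
RANGE_MIREDS = (250, 454)  # assumed bounds, for illustration only

def value_validate(value, rnge, identifier='Given'):
    """Raise ValueError when a non-None value falls outside [rnge[0], rnge[1]]."""
    if value is not None and (value < rnge[0] or value > rnge[1]):
        raise ValueError('%s value must be between %d and %d.'
                         % (identifier, rnge[0], rnge[1]))

value_validate(None, RANGE_MIREDS)   # None is always accepted
value_validate(300, RANGE_MIREDS)    # in range, no error
message = ''
try:
    value_validate(100, RANGE_MIREDS, 'Color temperature')
except ValueError as err:
    message = str(err)
```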
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def set_values(self, values, *, index=0):
""" Set values on light control. Returns a Command. """ |
assert len(self.raw) == 1, \
'Only devices with 1 light supported'
return Command('put', self._device.path, {
ATTR_LIGHT_CONTROL: [
values
]
}) |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
def sockets(self):
"""Return socket objects of the socket control.""" |
return [Socket(self._device, i) for i in range(len(self.raw))] |
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def _get_protocol(self):
"""Get the protocol for the request.""" |
if self._protocol is None:
self._protocol = asyncio.Task(Context.create_client_context(
loop=self._loop))
return (await self._protocol) |
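Wrapping the creation coroutine in a Task and caching it means concurrent callers all await the same in-flight creation instead of each opening their own context. The pattern, reduced to a self-contained sketch:

```python
import asyncio

class LazyProtocol:
    """Create an expensive async resource once and share it (a sketch)."""

    def __init__(self):
        self._task = None
        self.created = 0

    async def _create(self):
        self.created += 1
        return object()

    async def get(self):
        # Wrap creation in a Task so concurrent callers share one attempt.
        if self._task is None:
            self._task = asyncio.ensure_future(self._create())
        return await self._task

async def main():
    lazy = LazyProtocol()
    a, b = await asyncio.gather(lazy.get(), lazy.get())
    return lazy.created, a is b

created, same = asyncio.run(main())
```

Resetting after an error is then just clearing `self._task` so the next caller re-creates the resource.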
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
async def _reset_protocol(self, exc=None):
"""Reset the protocol if an error occurs.""" |
# Be responsible and clean up.
protocol = await self._get_protocol()
await protocol.shutdown()
self._protocol = None
# Let any observers know the protocol has been shutdown.
for ob_error in self._observations_err_callbacks:
ob_error(exc)
self._observations_err_callbacks.clear() |