Return inverse probability of censoring weights at given time points. :math:`\\omega_i = \\delta_i / \\hat{G}(y_i)` Parameters ---------- y : structured array, shape = (n_samples,) A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. Returns ------- ipcw : array, shape = (n_samples,) Inverse probability of censoring weights.
def predict_ipcw(self, y): """Return inverse probability of censoring weights at given time points. :math:`\\omega_i = \\delta_i / \\hat{G}(y_i)` Parameters ---------- y : structured array, shape = (n_samples,) A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. Returns ------- ipcw : array, shape = (n_samples,) Inverse probability of censoring weights. """ event, time = check_y_survival(y) Ghat = self.predict_proba(time[event]) if (Ghat == 0.0).any(): raise ValueError("censoring survival function is zero at one or more time points") weights = numpy.zeros(time.shape[0]) weights[event] = 1.0 / Ghat return weights
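A minimal sketch of how these weights behave. The method shown here belongs to `CensoringDistributionEstimator` (used further below in `concordance_index_ipcw`); the import path `sksurv.nonparametric` and the toy data values are assumptions for illustration:

```python
import numpy as np
from sksurv.nonparametric import CensoringDistributionEstimator

# toy right-censored outcomes: (event indicator, observed time)
y = np.array([(True, 3.0), (False, 4.0), (True, 6.0), (True, 8.0)],
             dtype=[('event', '?'), ('time', '<f8')])

cens = CensoringDistributionEstimator().fit(y)
weights = cens.predict_ipcw(y)
# censored samples get weight 0; events get 1 / G_hat(y_i), which grows as the
# probability of remaining uncensored shrinks
print(weights)
```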
Concordance index for right-censored data The concordance index is defined as the proportion of all comparable pairs in which the predictions and outcomes are concordant. Samples are comparable if for at least one of them an event occurred. If the estimated risk is larger for the sample with a higher time of event/censoring, the predictions of that pair are said to be concordant. If an event occurred for one sample and the other is known to be event-free at least until the time of event of the first, the second sample is assumed to *outlive* the first. When predicted risks are identical for a pair, 0.5 rather than 1 is added to the count of concordant pairs. A pair is not comparable if an event occurred for both of them at the same time or an event occurred for one of them but the time of censoring is smaller than the time of event of the first one. Parameters ---------- event_indicator : array-like, shape = (n_samples,) Boolean array denotes whether an event occurred event_time : array-like, shape = (n_samples,) Array containing the time of an event or time of censoring estimate : array-like, shape = (n_samples,) Estimated risk of experiencing an event tied_tol : float, optional, default: 1e-8 The tolerance value for considering ties. If the absolute difference between risk scores is smaller or equal than `tied_tol`, risk scores are considered tied. Returns ------- cindex : float Concordance index concordant : int Number of concordant pairs discordant : int Number of discordant pairs tied_risk : int Number of pairs having tied estimated risks tied_time : int Number of comparable pairs sharing the same time References ---------- .. [1] Harrell, F.E., Califf, R.M., Pryor, D.B., Lee, K.L., Rosati, R.A, "Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors", Statistics in Medicine, 15(4), 361-87, 1996.
def concordance_index_censored(event_indicator, event_time, estimate, tied_tol=1e-8): """Concordance index for right-censored data The concordance index is defined as the proportion of all comparable pairs in which the predictions and outcomes are concordant. Samples are comparable if for at least one of them an event occurred. If the estimated risk is larger for the sample with a higher time of event/censoring, the predictions of that pair are said to be concordant. If an event occurred for one sample and the other is known to be event-free at least until the time of event of the first, the second sample is assumed to *outlive* the first. When predicted risks are identical for a pair, 0.5 rather than 1 is added to the count of concordant pairs. A pair is not comparable if an event occurred for both of them at the same time or an event occurred for one of them but the time of censoring is smaller than the time of event of the first one. Parameters ---------- event_indicator : array-like, shape = (n_samples,) Boolean array denotes whether an event occurred event_time : array-like, shape = (n_samples,) Array containing the time of an event or time of censoring estimate : array-like, shape = (n_samples,) Estimated risk of experiencing an event tied_tol : float, optional, default: 1e-8 The tolerance value for considering ties. If the absolute difference between risk scores is smaller or equal than `tied_tol`, risk scores are considered tied. Returns ------- cindex : float Concordance index concordant : int Number of concordant pairs discordant : int Number of discordant pairs tied_risk : int Number of pairs having tied estimated risks tied_time : int Number of comparable pairs sharing the same time References ---------- .. [1] Harrell, F.E., Califf, R.M., Pryor, D.B., Lee, K.L., Rosati, R.A, "Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors", Statistics in Medicine, 15(4), 361-87, 1996. """ event_indicator, event_time, estimate = _check_inputs( event_indicator, event_time, estimate) w = numpy.ones_like(estimate) return _estimate_concordance_index(event_indicator, event_time, estimate, w, tied_tol)
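For illustration, a small hand-made call (the data values are arbitrary assumptions); samples with shorter survival should receive higher risk scores:

```python
import numpy as np
from sksurv.metrics import concordance_index_censored

event = np.array([True, False, True, True, False])
time = np.array([5.0, 8.0, 3.0, 6.0, 10.0])
risk = np.array([0.9, 0.3, 1.2, 0.8, 0.1])  # higher value = higher estimated risk

cindex, concordant, discordant, tied_risk, tied_time = \
    concordance_index_censored(event, time, risk)
print(cindex)
```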
Concordance index for right-censored data based on inverse probability of censoring weights. This is an alternative to the estimator in :func:`concordance_index_censored` that does not depend on the distribution of censoring times in the test data. Therefore, the estimate is unbiased and consistent for a population concordance measure that is free of censoring. It is based on inverse probability of censoring weights, thus requires access to survival times from the training data to estimate the censoring distribution. Note that this requires that survival times `survival_test` lie within the range of survival times `survival_train`. This can be achieved by specifying the truncation time `tau`. The resulting `cindex` tells how well the given prediction model works in predicting events that occur in the time range from 0 to `tau`. The estimator uses the Kaplan-Meier estimator to estimate the censoring survivor function. Therefore, it is restricted to situations where the random censoring assumption holds and censoring is independent of the features. Parameters ---------- survival_train : structured array, shape = (n_train_samples,) Survival times for training data to estimate the censoring distribution from. A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. survival_test : structured array, shape = (n_samples,) Survival times of test data. A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. estimate : array-like, shape = (n_samples,) Estimated risk of experiencing an event of test data. tau : float, optional Truncation time. The survival function for the underlying censoring time distribution :math:`D` needs to be positive at `tau`, i.e., `tau` should be chosen such that the probability of being censored after time `tau` is non-zero: :math:`P(D > \\tau) > 0`. If `None`, no truncation is performed. tied_tol : float, optional, default: 1e-8 The tolerance value for considering ties. If the absolute difference between risk scores is smaller or equal than `tied_tol`, risk scores are considered tied. Returns ------- cindex : float Concordance index concordant : int Number of concordant pairs discordant : int Number of discordant pairs tied_risk : int Number of pairs having tied estimated risks tied_time : int Number of comparable pairs sharing the same time References ---------- .. [1] Uno, H., Cai, T., Pencina, M. J., D’Agostino, R. B., & Wei, L. J. (2011). "On the C-statistics for evaluating overall adequacy of risk prediction procedures with censored survival data". Statistics in Medicine, 30(10), 1105–1117.
def concordance_index_ipcw(survival_train, survival_test, estimate, tau=None, tied_tol=1e-8): """Concordance index for right-censored data based on inverse probability of censoring weights. This is an alternative to the estimator in :func:`concordance_index_censored` that does not depend on the distribution of censoring times in the test data. Therefore, the estimate is unbiased and consistent for a population concordance measure that is free of censoring. It is based on inverse probability of censoring weights, thus requires access to survival times from the training data to estimate the censoring distribution. Note that this requires that survival times `survival_test` lie within the range of survival times `survival_train`. This can be achieved by specifying the truncation time `tau`. The resulting `cindex` tells how well the given prediction model works in predicting events that occur in the time range from 0 to `tau`. The estimator uses the Kaplan-Meier estimator to estimate the censoring survivor function. Therefore, it is restricted to situations where the random censoring assumption holds and censoring is independent of the features. Parameters ---------- survival_train : structured array, shape = (n_train_samples,) Survival times for training data to estimate the censoring distribution from. A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. survival_test : structured array, shape = (n_samples,) Survival times of test data. A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. estimate : array-like, shape = (n_samples,) Estimated risk of experiencing an event of test data. tau : float, optional Truncation time. The survival function for the underlying censoring time distribution :math:`D` needs to be positive at `tau`, i.e., `tau` should be chosen such that the probability of being censored after time `tau` is non-zero: :math:`P(D > \\tau) > 0`. If `None`, no truncation is performed. tied_tol : float, optional, default: 1e-8 The tolerance value for considering ties. If the absolute difference between risk scores is smaller or equal than `tied_tol`, risk scores are considered tied. Returns ------- cindex : float Concordance index concordant : int Number of concordant pairs discordant : int Number of discordant pairs tied_risk : int Number of pairs having tied estimated risks tied_time : int Number of comparable pairs sharing the same time References ---------- .. [1] Uno, H., Cai, T., Pencina, M. J., D’Agostino, R. B., & Wei, L. J. (2011). "On the C-statistics for evaluating overall adequacy of risk prediction procedures with censored survival data". Statistics in Medicine, 30(10), 1105–1117. """ test_event, test_time = check_y_survival(survival_test) if tau is not None: survival_test = survival_test[test_time < tau] estimate = check_array(estimate, ensure_2d=False) check_consistent_length(test_event, test_time, estimate) cens = CensoringDistributionEstimator() cens.fit(survival_train) ipcw = cens.predict_ipcw(survival_test) w = numpy.square(ipcw) return _estimate_concordance_index(test_event, test_time, estimate, w, tied_tol)
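A usage sketch with hand-made structured arrays (values are assumptions). The training outcomes are chosen so that the censoring survival function stays positive at all test event times, which the IPCW estimator requires:

```python
import numpy as np
from sksurv.metrics import concordance_index_ipcw

dt = [('event', '?'), ('time', '<f8')]
y_train = np.array([(True, 1.0), (False, 2.5), (True, 4.0), (False, 5.0),
                    (True, 6.0), (True, 8.0)], dtype=dt)
y_test = np.array([(True, 3.0), (False, 3.5), (True, 5.5), (False, 7.0)], dtype=dt)
risk = np.array([1.0, 0.4, 0.6, 0.1])

# censoring distribution is estimated from y_train, concordance from y_test
cindex = concordance_index_ipcw(y_train, y_test, risk)[0]
```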
Estimator of cumulative/dynamic AUC for right-censored time-to-event data. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) can be extended to survival data by defining sensitivity (true positive rate) and specificity (true negative rate) as time-dependent measures. *Cumulative cases* are all individuals that experienced an event prior to or at time :math:`t` (:math:`t_i \\leq t`), whereas *dynamic controls* are those with :math:`t_i > t`. The associated cumulative/dynamic AUC quantifies how well a model can distinguish subjects who fail by a given time (:math:`t_i \\leq t`) from subjects who fail after this time (:math:`t_i > t`). Given an estimator of the :math:`i`-th individual's risk score :math:`\\hat{f}(\\mathbf{x}_i)`, the cumulative/dynamic AUC at time :math:`t` is defined as .. math:: \\widehat{\\mathrm{AUC}}(t) = \\frac{\\sum_{i=1}^n \\sum_{j=1}^n I(y_j > t) I(y_i \\leq t) \\omega_i I(\\hat{f}(\\mathbf{x}_j) \\leq \\hat{f}(\\mathbf{x}_i))} {(\\sum_{i=1}^n I(y_i > t)) (\\sum_{i=1}^n I(y_i \\leq t) \\omega_i)} where :math:`\\omega_i` are inverse probability of censoring weights (IPCW). To estimate IPCW, access to survival times from the training data is required to estimate the censoring distribution. Note that this requires that survival times `survival_test` lie within the range of survival times `survival_train`. This can be achieved by specifying `times` accordingly, e.g. by setting `times[-1]` slightly below the maximum expected follow-up time. IPCW are computed using the Kaplan-Meier estimator, which is restricted to situations where the random censoring assumption holds and censoring is independent of the features. The function also provides a single summary measure that refers to the mean of the :math:`\\mathrm{AUC}(t)` over the time range :math:`(\\tau_1, \\tau_2)`. .. math:: \\overline{\\mathrm{AUC}}(\\tau_1, \\tau_2) = \\frac{1}{\\hat{S}(\\tau_1) - \\hat{S}(\\tau_2)} \\int_{\\tau_1}^{\\tau_2} \\widehat{\\mathrm{AUC}}(t)\\,d \\hat{S}(t) where :math:`\\hat{S}(t)` is the Kaplan–Meier estimator of the survival function. Parameters ---------- survival_train : structured array, shape = (n_train_samples,) Survival times for training data to estimate the censoring distribution from. A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. survival_test : structured array, shape = (n_samples,) Survival times of test data. A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. estimate : array-like, shape = (n_samples,) Estimated risk of experiencing an event of test data. times : array-like, shape = (n_times,) The time points for which the area under the time-dependent ROC curve is computed. Values must be within the range of follow-up times of the test data `survival_test`. tied_tol : float, optional, default: 1e-8 The tolerance value for considering ties. If the absolute difference between risk scores is smaller or equal than `tied_tol`, risk scores are considered tied. Returns ------- auc : array, shape = (n_times,) The cumulative/dynamic AUC estimates (evaluated at `times`). mean_auc : float Summary measure referring to the mean cumulative/dynamic AUC over the specified time range `(times[0], times[-1])`. References ---------- .. [1] H. Uno, T. Cai, L. Tian, and L. J. 
Wei, "Evaluating prediction rules for t-year survivors with censored regression models," Journal of the American Statistical Association, vol. 102, pp. 527–537, 2007. .. [2] H. Hung and C. T. Chiang, "Estimation methods for time-dependent AUC models with survival data," Canadian Journal of Statistics, vol. 38, no. 1, pp. 8–26, 2010. .. [3] J. Lambert and S. Chevret, "Summary measure of discrimination in survival models based on cumulative/dynamic time-dependent ROC curves," Statistical Methods in Medical Research, 2014.
def cumulative_dynamic_auc(survival_train, survival_test, estimate, times, tied_tol=1e-8): """Estimator of cumulative/dynamic AUC for right-censored time-to-event data. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) can be extended to survival data by defining sensitivity (true positive rate) and specificity (true negative rate) as time-dependent measures. *Cumulative cases* are all individuals that experienced an event prior to or at time :math:`t` (:math:`t_i \\leq t`), whereas *dynamic controls* are those with :math:`t_i > t`. The associated cumulative/dynamic AUC quantifies how well a model can distinguish subjects who fail by a given time (:math:`t_i \\leq t`) from subjects who fail after this time (:math:`t_i > t`). Given an estimator of the :math:`i`-th individual's risk score :math:`\\hat{f}(\\mathbf{x}_i)`, the cumulative/dynamic AUC at time :math:`t` is defined as .. math:: \\widehat{\\mathrm{AUC}}(t) = \\frac{\\sum_{i=1}^n \\sum_{j=1}^n I(y_j > t) I(y_i \\leq t) \\omega_i I(\\hat{f}(\\mathbf{x}_j) \\leq \\hat{f}(\\mathbf{x}_i))} {(\\sum_{i=1}^n I(y_i > t)) (\\sum_{i=1}^n I(y_i \\leq t) \\omega_i)} where :math:`\\omega_i` are inverse probability of censoring weights (IPCW). To estimate IPCW, access to survival times from the training data is required to estimate the censoring distribution. Note that this requires that survival times `survival_test` lie within the range of survival times `survival_train`. This can be achieved by specifying `times` accordingly, e.g. by setting `times[-1]` slightly below the maximum expected follow-up time. IPCW are computed using the Kaplan-Meier estimator, which is restricted to situations where the random censoring assumption holds and censoring is independent of the features. The function also provides a single summary measure that refers to the mean of the :math:`\\mathrm{AUC}(t)` over the time range :math:`(\\tau_1, \\tau_2)`. .. math:: \\overline{\\mathrm{AUC}}(\\tau_1, \\tau_2) = \\frac{1}{\\hat{S}(\\tau_1) - \\hat{S}(\\tau_2)} \\int_{\\tau_1}^{\\tau_2} \\widehat{\\mathrm{AUC}}(t)\\,d \\hat{S}(t) where :math:`\\hat{S}(t)` is the Kaplan–Meier estimator of the survival function. Parameters ---------- survival_train : structured array, shape = (n_train_samples,) Survival times for training data to estimate the censoring distribution from. A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. survival_test : structured array, shape = (n_samples,) Survival times of test data. A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. estimate : array-like, shape = (n_samples,) Estimated risk of experiencing an event of test data. times : array-like, shape = (n_times,) The time points for which the area under the time-dependent ROC curve is computed. Values must be within the range of follow-up times of the test data `survival_test`. tied_tol : float, optional, default: 1e-8 The tolerance value for considering ties. If the absolute difference between risk scores is smaller or equal than `tied_tol`, risk scores are considered tied. Returns ------- auc : array, shape = (n_times,) The cumulative/dynamic AUC estimates (evaluated at `times`). mean_auc : float Summary measure referring to the mean cumulative/dynamic AUC over the specified time range `(times[0], times[-1])`. References ---------- .. [1] H. Uno, T. Cai, L. Tian, and L. J. 
Wei, "Evaluating prediction rules for t-year survivors with censored regression models," Journal of the American Statistical Association, vol. 102, pp. 527–537, 2007. .. [2] H. Hung and C. T. Chiang, "Estimation methods for time-dependent AUC models with survival data," Canadian Journal of Statistics, vol. 38, no. 1, pp. 8–26, 2010. .. [3] J. Lambert and S. Chevret, "Summary measure of discrimination in survival models based on cumulative/dynamic time-dependent ROC curves," Statistical Methods in Medical Research, 2014. """ test_event, test_time = check_y_survival(survival_test) estimate = check_array(estimate, ensure_2d=False) check_consistent_length(test_event, test_time, estimate) times = check_array(numpy.atleast_1d(times), ensure_2d=False, dtype=test_time.dtype) times = numpy.unique(times) if times.max() >= test_time.max() or times.min() < test_time.min(): raise ValueError( 'all times must be within follow-up time of test data: [{}; {}['.format( test_time.min(), test_time.max())) # sort by risk score (descending) o = numpy.argsort(-estimate) test_time = test_time[o] test_event = test_event[o] estimate = estimate[o] survival_test = survival_test[o] cens = CensoringDistributionEstimator() cens.fit(survival_train) ipcw = cens.predict_ipcw(survival_test) n_samples = test_time.shape[0] scores = numpy.empty(times.shape[0], dtype=float) for k, t in enumerate(times): is_case = (test_time <= t) & test_event is_control = test_time > t n_controls = is_control.sum() true_pos = [] false_pos = [] tp_value = 0.0 fp_value = 0.0 est_prev = numpy.infty for i in range(n_samples): est = estimate[i] if numpy.absolute(est - est_prev) > tied_tol: true_pos.append(tp_value) false_pos.append(fp_value) est_prev = est if is_case[i]: tp_value += ipcw[i] elif is_control[i]: fp_value += 1 true_pos.append(tp_value) false_pos.append(fp_value) sens = numpy.array(true_pos) / ipcw[is_case].sum() fpr = numpy.array(false_pos) / n_controls scores[k] = trapz(sens, fpr) if times.shape[0] == 1: mean_auc = scores[0] else: surv = SurvivalFunctionEstimator() surv.fit(survival_test) s_times = surv.predict_proba(times) # compute integral of AUC over survival function d = -numpy.diff(numpy.concatenate(([1.0], s_times))) integral = (scores * d).sum() mean_auc = integral / (1.0 - s_times[-1]) return scores, mean_auc
Number of features that match exactly
def _nominal_kernel(x, y, out): """Number of features that match exactly""" for i in range(x.shape[0]): for j in range(y.shape[0]): out[i, j] += (x[i, :] == y[j, :]).sum() return out
Convert continuous and ordered categorical columns of a data frame into a numeric array

def _get_continuous_and_ordinal_array(x): """Convert array from continuous and ordered categorical columns""" nominal_columns = x.select_dtypes(include=['object', 'category']).columns ordinal_columns = pandas.Index([v for v in nominal_columns if x[v].cat.ordered]) continuous_columns = x.select_dtypes(include=[numpy.number]).columns x_num = x.loc[:, continuous_columns].astype(numpy.float64).values if len(ordinal_columns) > 0: x = _ordinal_as_numeric(x, ordinal_columns) nominal_columns = nominal_columns.difference(ordinal_columns) x_out = numpy.column_stack((x_num, x)) else: x_out = x_num return x_out, nominal_columns
Computes clinical kernel. The clinical kernel distinguishes between continuous, ordinal, and nominal variables. Parameters ---------- x : pandas.DataFrame, shape = (n_samples_x, n_features) Training data y : pandas.DataFrame, shape = (n_samples_y, n_features), optional Testing data. If not given, the kernel of x with itself is computed. Returns ------- kernel : array, shape = (n_samples_x, n_samples_y) Kernel matrix. Values are normalized to lie within [0, 1]. References ---------- .. [1] Daemen, A., De Moor, B., "Development of a kernel function for clinical data". Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 5913-7, 2009
def clinical_kernel(x, y=None): """Computes clinical kernel The clinical kernel distinguishes between continuous ordinal,and nominal variables. Parameters ---------- x : pandas.DataFrame, shape = (n_samples_x, n_features) Training data y : pandas.DataFrame, shape = (n_samples_y, n_features) Testing data Returns ------- kernel : array, shape = (n_samples_x, n_samples_y) Kernel matrix. Values are normalized to lie within [0, 1]. References ---------- .. [1] Daemen, A., De Moor, B., "Development of a kernel function for clinical data". Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 5913-7, 2009 """ if y is not None: if x.shape[1] != y.shape[1]: raise ValueError('x and y have different number of features') if not x.columns.equals(y.columns): raise ValueError('columns do not match') else: y = x mat = numpy.zeros((x.shape[0], y.shape[0]), dtype=float) x_numeric, nominal_columns = _get_continuous_and_ordinal_array(x) if id(x) != id(y): y_numeric, _ = _get_continuous_and_ordinal_array(y) else: y_numeric = x_numeric continuous_ordinal_kernel(x_numeric, y_numeric, mat) _nominal_kernel(x.loc[:, nominal_columns].values, y.loc[:, nominal_columns].values, mat) mat /= x.shape[1] return mat
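A small sketch with a mixed-type `pandas.DataFrame` (column names and values are illustrative assumptions). Ordered categories are treated as ordinal, unordered ones as nominal:

```python
import pandas as pd
from sksurv.kernels import clinical_kernel

df = pd.DataFrame({
    "age": [50.0, 63.0, 41.0],
    "stage": pd.Categorical(["I", "III", "II"],
                            categories=["I", "II", "III"], ordered=True),
    "sex": pd.Categorical(["m", "f", "f"]),
})
K = clinical_kernel(df)  # (3, 3) symmetric matrix with entries in [0, 1]
```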
Get distance functions for each column's dtype
def _prepare_by_column_dtype(self, X): """Get distance functions for each column's dtype""" if not isinstance(X, pandas.DataFrame): raise TypeError('X must be a pandas DataFrame') numeric_columns = [] nominal_columns = [] numeric_ranges = [] fit_data = numpy.empty_like(X) for i, dt in enumerate(X.dtypes): col = X.iloc[:, i] if is_categorical_dtype(dt): if col.cat.ordered: numeric_ranges.append(col.cat.codes.max() - col.cat.codes.min()) numeric_columns.append(i) else: nominal_columns.append(i) col = col.cat.codes elif is_numeric_dtype(dt): numeric_ranges.append(col.max() - col.min()) numeric_columns.append(i) else: raise TypeError('unsupported dtype: %r' % dt) fit_data[:, i] = col.values self._numeric_columns = numpy.asarray(numeric_columns) self._nominal_columns = numpy.asarray(nominal_columns) self._numeric_ranges = numpy.asarray(numeric_ranges, dtype=float) self.X_fit_ = fit_data
Determine transformation parameters from data in X. Subsequent calls to `transform(Y)` compute the pairwise distance to `X`. Parameters of the clinical kernel are only updated if `fit_once` is `False`, otherwise you have to explicitly call `prepare()` once. Parameters ---------- X: pandas.DataFrame, shape = (n_samples, n_features) Data to estimate parameters from. y : None Argument is ignored (included for compatibility reasons). kwargs : dict Argument is ignored (included for compatibility reasons). Returns ------- self : object Returns the instance itself.
def fit(self, X, y=None, **kwargs): """Determine transformation parameters from data in X. Subsequent calls to `transform(Y)` compute the pairwise distance to `X`. Parameters of the clinical kernel are only updated if `fit_once` is `False`, otherwise you have to explicitly call `prepare()` once. Parameters ---------- X: pandas.DataFrame, shape = (n_samples, n_features) Data to estimate parameters from. y : None Argument is ignored (included for compatibility reasons). kwargs : dict Argument is ignored (included for compatibility reasons). Returns ------- self : object Returns the instance itself. """ if X.ndim != 2: raise ValueError("expected 2d array, but got %d" % X.ndim) if self.fit_once: self.X_fit_ = X else: self._prepare_by_column_dtype(X) return self
r"""Compute all pairwise distances between `self.X_fit_` and `Y`. Parameters ---------- y : array-like, shape = (n_samples_y, n_features) Returns ------- kernel : ndarray, shape = (n_samples_y, n_samples_X_fit\_) Kernel matrix. Values are normalized to lie within [0, 1].
def transform(self, Y): r"""Compute all pairwise distances between `self.X_fit_` and `Y`. Parameters ---------- y : array-like, shape = (n_samples_y, n_features) Returns ------- kernel : ndarray, shape = (n_samples_y, n_samples_X_fit\_) Kernel matrix. Values are normalized to lie within [0, 1]. """ check_is_fitted(self, 'X_fit_') n_samples_x, n_features = self.X_fit_.shape Y = numpy.asarray(Y) if Y.shape[1] != n_features: raise ValueError('expected array with %d features, but got %d' % (n_features, Y.shape[1])) n_samples_y = Y.shape[0] mat = numpy.zeros((n_samples_y, n_samples_x), dtype=float) continuous_ordinal_kernel_with_ranges(Y[:, self._numeric_columns].astype(numpy.float64), self.X_fit_[:, self._numeric_columns].astype(numpy.float64), self._numeric_ranges, mat) if len(self._nominal_columns) > 0: _nominal_kernel(Y[:, self._nominal_columns], self.X_fit_[:, self._nominal_columns], mat) mat /= n_features return mat
Function to use with :func:`sklearn.metrics.pairwise.pairwise_kernels` Parameters ---------- X : array, shape = (n_features,) Y : array, shape = (n_features,) Returns ------- similarity : float Similarities are normalized to be within [0, 1]
def pairwise_kernel(self, X, Y): """Function to use with :func:`sklearn.metrics.pairwise.pairwise_kernels` Parameters ---------- X : array, shape = (n_features,) Y : array, shape = (n_features,) Returns ------- similarity : float Similarities are normalized to be within [0, 1] """ check_is_fitted(self, 'X_fit_') if X.shape[0] != Y.shape[0]: raise ValueError('X and Y have different number of features') val = pairwise_continuous_ordinal_kernel(X[self._numeric_columns], Y[self._numeric_columns], self._numeric_ranges) if len(self._nominal_columns) > 0: val += pairwise_nominal_kernel(X[self._nominal_columns].astype(numpy.int8), Y[self._nominal_columns].astype(numpy.int8)) val /= X.shape[0] return val
Fit estimator. Parameters ---------- X : array-like, shape = (n_samples, n_features) Data matrix y : structured array, shape = (n_samples,) A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. sample_weight : array-like, shape = (n_samples,), optional Weights given to each sample. If omitted, all samples have weight 1. Returns ------- self
def fit(self, X, y, sample_weight=None): """Fit estimator. Parameters ---------- X : array-like, shape = (n_samples, n_features) Data matrix y : structured array, shape = (n_samples,) A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. sample_weight : array-like, shape = (n_samples,), optional Weights given to each sample. If omitted, all samples have weight 1. Returns ------- self """ X, event, time = check_arrays_survival(X, y) n_samples, n_features = X.shape if sample_weight is None: sample_weight = numpy.ones(n_samples, dtype=numpy.float32) else: sample_weight = column_or_1d(sample_weight, warn=True) check_consistent_length(X, sample_weight) random_state = check_random_state(self.random_state) self._check_params() self.estimators_ = [] self.n_features_ = n_features self.loss_ = LOSS_FUNCTIONS[self.loss](1) if isinstance(self.loss_, (CensoredSquaredLoss, IPCWLeastSquaresError)): time = numpy.log(time) self.train_score_ = numpy.zeros((self.n_estimators,), dtype=numpy.float64) # do oob? if self.subsample < 1.0: self.oob_improvement_ = numpy.zeros(self.n_estimators, dtype=numpy.float64) self._fit(X, event, time, sample_weight, random_state) return self
Check validity of parameters and raise ValueError if not valid.
def _check_params(self): """Check validity of parameters and raise ValueError if not valid. """ if self.n_estimators <= 0: raise ValueError("n_estimators must be greater than 0 but " "was %r" % self.n_estimators) if not 0.0 < self.subsample <= 1.0: raise ValueError("subsample must be in ]0; 1] but " "was %r" % self.subsample) if not 0.0 < self.learning_rate <= 1.0: raise ValueError("learning_rate must be within ]0; 1] but " "was %r" % self.learning_rate) if not 0.0 <= self.dropout_rate < 1.0: raise ValueError("dropout_rate must be within [0; 1[, but " "was %r" % self.dropout_rate) if self.loss not in LOSS_FUNCTIONS: raise ValueError("Loss '{0:s}' not supported. ".format(self.loss))
Fit component-wise weighted least squares model
def _fit_stage_componentwise(X, residuals, sample_weight, **fit_params): """Fit component-wise weighted least squares model""" n_features = X.shape[1] base_learners = [] error = numpy.empty(n_features) for component in range(n_features): learner = ComponentwiseLeastSquares(component).fit(X, residuals, sample_weight) l_pred = learner.predict(X) error[component] = squared_norm(residuals - l_pred) base_learners.append(learner) # TODO: could use bottleneck.nanargmin for speed best_component = numpy.nanargmin(error) best_learner = base_learners[best_component] return best_learner
Predict risk scores. Parameters ---------- X : array-like, shape = (n_samples, n_features) Data matrix. Returns ------- risk_score : array, shape = (n_samples,) Predicted risk scores.
def predict(self, X): """Predict risk scores. Parameters ---------- X : array-like, shape = (n_samples, n_features) Data matrix. Returns ------- risk_score : array, shape = (n_samples,) Predicted risk scores. """ check_is_fitted(self, 'estimators_') if X.shape[1] != self.n_features_: raise ValueError('Dimensions of X are inconsistent with training data: ' 'expected %d features, but got %s' % (self.n_features_, X.shape[1])) n_samples = X.shape[0] Xi = numpy.column_stack((numpy.ones(n_samples), X)) pred = numpy.zeros(n_samples, dtype=float) for estimator in self.estimators_: pred += self.learning_rate * estimator.predict(Xi) if isinstance(self.loss_, (CensoredSquaredLoss, IPCWLeastSquaresError)): numpy.exp(pred, out=pred) return pred
Return the aggregated coefficients. Returns ------- coef_ : ndarray, shape = (n_features + 1,) Coefficients of features. The first element denotes the intercept.
def coef_(self): """Return the aggregated coefficients. Returns ------- coef_ : ndarray, shape = (n_features + 1,) Coefficients of features. The first element denotes the intercept. """ coef = numpy.zeros(self.n_features_ + 1, dtype=float) for estimator in self.estimators_: coef[estimator.component] += self.learning_rate * estimator.coef_ return coef
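As a usage sketch (assuming the surrounding class is `ComponentwiseGradientBoostingSurvivalAnalysis` from `sksurv.ensemble` and that `load_whas500` is available), the aggregated coefficients can be inspected after fitting:

```python
from sksurv.datasets import load_whas500
from sksurv.ensemble import ComponentwiseGradientBoostingSurvivalAnalysis

X, y = load_whas500()
X = X.select_dtypes(include="number")  # keep numeric columns only for this sketch

model = ComponentwiseGradientBoostingSurvivalAnalysis(n_estimators=100).fit(X, y)
# first entry is the intercept, the remaining ones line up with the columns of X
print(model.coef_)
```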
Check validity of parameters and raise ValueError if not valid.
def _check_params(self): """Check validity of parameters and raise ValueError if not valid. """ self.n_estimators = int(self.n_estimators) if self.n_estimators <= 0: raise ValueError("n_estimators must be greater than 0 but " "was %r" % self.n_estimators) if not 0.0 < self.learning_rate <= 1.0: raise ValueError("learning_rate must be within ]0; 1] but " "was %r" % self.learning_rate) if not 0.0 < self.subsample <= 1.0: raise ValueError("subsample must be in ]0; 1] but " "was %r" % self.subsample) if not 0.0 <= self.dropout_rate < 1.0: raise ValueError("dropout_rate must be within [0; 1[, but " "was %r" % self.dropout_rate) max_features = self._check_max_features() self.min_samples_split = int(self.min_samples_split) self.min_samples_leaf = int(self.min_samples_leaf) self.max_depth = int(self.max_depth) if self.max_leaf_nodes: self.max_leaf_nodes = int(self.max_leaf_nodes) self.max_features_ = max_features allowed_presort = ('auto', True, False) if self.presort not in allowed_presort: raise ValueError("'presort' should be in {}. Got {!r} instead." .format(allowed_presort, self.presort)) if self.loss not in LOSS_FUNCTIONS: raise ValueError("Loss '{0:s}' not supported. ".format(self.loss))
Fit another stage of ``n_classes_`` trees to the boosting model.
def _fit_stage(self, i, X, y, y_pred, sample_weight, sample_mask, random_state, scale, X_idx_sorted, X_csc=None, X_csr=None): """Fit another stage of ``n_classes_`` trees to the boosting model. """ assert sample_mask.dtype == numpy.bool loss = self.loss_ # whether to use dropout in next iteration do_dropout = self.dropout_rate > 0. and 0 < i < len(scale) - 1 for k in range(loss.K): residual = loss.negative_gradient(y, y_pred, k=k, sample_weight=sample_weight) # induce regression tree on residuals tree = DecisionTreeRegressor( criterion=self.criterion, splitter='best', max_depth=self.max_depth, min_samples_split=self.min_samples_split, min_samples_leaf=self.min_samples_leaf, min_weight_fraction_leaf=self.min_weight_fraction_leaf, min_impurity_split=self.min_impurity_split, min_impurity_decrease=self.min_impurity_decrease, max_features=self.max_features, max_leaf_nodes=self.max_leaf_nodes, random_state=random_state, presort=self.presort) if self.subsample < 1.0: # no inplace multiplication! sample_weight = sample_weight * sample_mask.astype(numpy.float64) X = X_csr if X_csr is not None else X tree.fit(X, residual, sample_weight=sample_weight, check_input=False, X_idx_sorted=X_idx_sorted) # add tree to ensemble self.estimators_[i, k] = tree # update tree leaves if do_dropout: # select base learners to be dropped for next iteration drop_model, n_dropped = _sample_binomial_plus_one(self.dropout_rate, i + 1, random_state) # adjust scaling factor of tree that is going to be trained in next iteration scale[i + 1] = 1. / (n_dropped + 1.) y_pred[:, k] = 0 for m in range(i + 1): if drop_model[m] == 1: # adjust scaling factor of dropped trees scale[m] *= n_dropped / (n_dropped + 1.) else: # pseudoresponse of next iteration (without contribution of dropped trees) y_pred[:, k] += self.learning_rate * scale[m] * self.estimators_[m, k].predict(X).ravel() else: # update tree leaves loss.update_terminal_regions(tree.tree_, X, y, residual, y_pred, sample_weight, sample_mask, self.learning_rate, k=k) return y_pred
Iteratively fits the stages. For each stage it computes the progress (OOB, train score) and delegates to ``_fit_stage``. Returns the number of stages fit; might differ from ``n_estimators`` due to early stopping.
def _fit_stages(self, X, y, y_pred, sample_weight, random_state, begin_at_stage=0, monitor=None, X_idx_sorted=None): """Iteratively fits the stages. For each stage it computes the progress (OOB, train score) and delegates to ``_fit_stage``. Returns the number of stages fit; might differ from ``n_estimators`` due to early stopping. """ n_samples = X.shape[0] do_oob = self.subsample < 1.0 sample_mask = numpy.ones((n_samples, ), dtype=numpy.bool) n_inbag = max(1, int(self.subsample * n_samples)) loss_ = self.loss_ if self.verbose: verbose_reporter = VerboseReporter(self.verbose) verbose_reporter.init(self, begin_at_stage) X_csc = csc_matrix(X) if issparse(X) else None X_csr = csr_matrix(X) if issparse(X) else None if self.dropout_rate > 0.: scale = numpy.ones(self.n_estimators, dtype=float) else: scale = None # perform boosting iterations i = begin_at_stage for i in range(begin_at_stage, self.n_estimators): # subsampling if do_oob: sample_mask = _random_sample_mask(n_samples, n_inbag, random_state) # OOB score before adding this stage y_oob_sample = y[~sample_mask] old_oob_score = loss_(y_oob_sample, y_pred[~sample_mask], sample_weight[~sample_mask]) # fit next stage of trees y_pred = self._fit_stage(i, X, y, y_pred, sample_weight, sample_mask, random_state, scale, X_idx_sorted, X_csc, X_csr) # track deviance (= loss) if do_oob: self.train_score_[i] = loss_(y[sample_mask], y_pred[sample_mask], sample_weight[sample_mask]) self.oob_improvement_[i] = (old_oob_score - loss_(y_oob_sample, y_pred[~sample_mask], sample_weight[~sample_mask])) else: # no need to fancy index w/ no subsampling self.train_score_[i] = loss_(y, y_pred, sample_weight) if self.verbose > 0: verbose_reporter.update(i, self) if monitor is not None: early_stopping = monitor(i, self, locals()) if early_stopping: break if self.dropout_rate > 0.: self.scale_ = scale return i + 1
Fit the gradient boosting model. Parameters ---------- X : array-like, shape = (n_samples, n_features) Data matrix y : structured array, shape = (n_samples,) A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. sample_weight : array-like, shape = (n_samples,), optional Weights given to each sample. If omitted, all samples have weight 1. monitor : callable, optional The monitor is called after each iteration with the current iteration, a reference to the estimator and the local variables of ``_fit_stages`` as keyword arguments ``callable(i, self, locals())``. If the callable returns ``True`` the fitting procedure is stopped. The monitor can be used for various things such as computing held-out estimates, early stopping, model introspection, and snapshotting. Returns ------- self : object Returns self.
def fit(self, X, y, sample_weight=None, monitor=None): """Fit the gradient boosting model. Parameters ---------- X : array-like, shape = (n_samples, n_features) Data matrix y : structured array, shape = (n_samples,) A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. sample_weight : array-like, shape = (n_samples,), optional Weights given to each sample. If omitted, all samples have weight 1. monitor : callable, optional The monitor is called after each iteration with the current iteration, a reference to the estimator and the local variables of ``_fit_stages`` as keyword arguments ``callable(i, self, locals())``. If the callable returns ``True`` the fitting procedure is stopped. The monitor can be used for various things such as computing held-out estimates, early stopping, model introspect, and snapshoting. Returns ------- self : object Returns self. """ random_state = check_random_state(self.random_state) X, event, time = check_arrays_survival(X, y, accept_sparse=['csr', 'csc', 'coo'], dtype=DTYPE) n_samples, self.n_features_ = X.shape X = X.astype(DTYPE) if sample_weight is None: sample_weight = numpy.ones(n_samples, dtype=numpy.float32) else: sample_weight = column_or_1d(sample_weight, warn=True) check_consistent_length(X, sample_weight) self._check_params() self.loss_ = LOSS_FUNCTIONS[self.loss](1) if isinstance(self.loss_, (CensoredSquaredLoss, IPCWLeastSquaresError)): time = numpy.log(time) self._init_state() self.init_.fit(X, (event, time), sample_weight) y_pred = self.init_.predict(X) begin_at_stage = 0 if self.presort is True and issparse(X): raise ValueError( "Presorting is not supported for sparse matrices.") presort = self.presort # Allow presort to be 'auto', which means True if the dataset is dense, # otherwise it will be False. if presort == 'auto': presort = not issparse(X) X_idx_sorted = None if presort: X_idx_sorted = numpy.asfortranarray(numpy.argsort(X, axis=0), dtype=numpy.int32) # fit the boosting stages y = numpy.fromiter(zip(event, time), dtype=[('event', numpy.bool), ('time', numpy.float64)]) n_stages = self._fit_stages(X, y, y_pred, sample_weight, random_state, begin_at_stage, monitor, X_idx_sorted) # change shape of arrays after fit (early-stopping or additional tests) if n_stages != self.estimators_.shape[0]: self.estimators_ = self.estimators_[:n_stages] self.train_score_ = self.train_score_[:n_stages] if hasattr(self, 'oob_improvement_'): self.oob_improvement_ = self.oob_improvement_[:n_stages] self.n_estimators_ = n_stages return self
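A usage sketch of the `monitor` callback, assuming the surrounding class is `GradientBoostingSurvivalAnalysis` from `sksurv.ensemble` and that `load_whas500` is available; the stopping rule here is deliberately simplistic:

```python
from sksurv.datasets import load_whas500
from sksurv.ensemble import GradientBoostingSurvivalAnalysis

X, y = load_whas500()
X = X.select_dtypes(include="number")  # keep numeric columns only for this sketch

def monitor(i, est, locals_):
    # stop after 50 boosting iterations; a real monitor might instead track a
    # held-out score or est.train_score_ via the passed locals_
    return i >= 49

model = GradientBoostingSurvivalAnalysis(n_estimators=500, subsample=0.8)
model.fit(X, y, monitor=monitor)
print(model.n_estimators_)  # number of stages actually fit
```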
Predict risk scores. Parameters ---------- X : array-like, shape = (n_samples, n_features) The input samples. Returns ------- y : ndarray, shape = (n_samples,) The risk scores.
def predict(self, X): """Predict risk scores. Parameters ---------- X : array-like, shape = (n_samples, n_features) The input samples. Returns ------- y : ndarray, shape = (n_samples,) The risk scores. """ check_is_fitted(self, 'estimators_') X = check_array(X, dtype=DTYPE, order="C") score = self._decision_function(X) if score.shape[1] == 1: score = score.ravel() return score
Predict hazard at each stage for X. This method allows monitoring (i.e., determining the error on a test set) after each stage. Parameters ---------- X : array-like, shape = (n_samples, n_features) The input samples. Returns ------- y : generator of array of shape = (n_samples,) The predicted value of the input samples.
def staged_predict(self, X): """Predict hazard at each stage for X. This method allows monitoring (i.e. determine error on testing set) after each stage. Parameters ---------- X : array-like, shape = (n_samples, n_features) The input samples. Returns ------- y : generator of array of shape = (n_samples,) The predicted value of the input samples. """ check_is_fitted(self, 'estimators_') # if dropout wasn't used during training, proceed as usual, # otherwise consider scaling factor of individual trees if not hasattr(self, "scale_"): for y in self._staged_decision_function(X): yield self._scale_prediction(y.ravel()) else: for y in self._dropout_staged_decision_function(X): yield self._scale_prediction(y.ravel())
Build a MINLIP survival model from training data. Parameters ---------- X : array-like, shape = (n_samples, n_features) Data matrix. y : structured array, shape = (n_samples,) A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. Returns ------- self
def fit(self, X, y): """Build a MINLIP survival model from training data. Parameters ---------- X : array-like, shape = (n_samples, n_features) Data matrix. y : structured array, shape = (n_samples,) A structured array containing the binary event indicator as first field, and time of event or time of censoring as second field. Returns ------- self """ X, event, time = check_arrays_survival(X, y) self._fit(X, event, time) return self
Predict risk score of experiencing an event. Higher scores indicate shorter survival (high risk), lower scores longer survival (low risk). Parameters ---------- X : array-like, shape = (n_samples, n_features) The input samples. Returns ------- y : ndarray, shape = (n_samples,) Predicted risk.
def predict(self, X): """Predict risk score of experiencing an event. Higher scores indicate shorter survival (high risk), lower scores longer survival (low risk). Parameters ---------- X : array-like, shape = (n_samples, n_features) The input samples. Returns ------- y : ndarray, shape = (n_samples,) Predicted risk. """ K = self._get_kernel(X, self.X_fit_) pred = -numpy.dot(self.coef_, K.T) return pred.ravel()
Split data frame into features and labels. Parameters ---------- data_frame : pandas.DataFrame, shape = (n_samples, n_columns) A data frame. attr_labels : sequence of str or None A list of one or more columns that are considered the label. If `survival` is `True`, then attr_labels has two elements: 1) the name of the column denoting the event indicator, and 2) the name of the column denoting the survival time. If the sequence contains `None`, then labels are not retrieved and only a data frame with features is returned. pos_label : any, optional Which value of the event indicator column denotes that a patient experienced an event. This value is ignored if `survival` is `False`. survival : bool, optional, default: True Whether to return `y` that can be used for survival analysis. Returns ------- X : pandas.DataFrame, shape = (n_samples, n_columns - len(attr_labels)) Data frame containing features. y : None or pandas.DataFrame, shape = (n_samples, len(attr_labels)) Data frame containing columns with supervised information. If `survival` was `True`, then the column denoting the event indicator will be boolean and survival times will be float. If `attr_labels` contains `None`, y is set to `None`.
def get_x_y(data_frame, attr_labels, pos_label=None, survival=True): """Split data frame into features and labels. Parameters ---------- data_frame : pandas.DataFrame, shape = (n_samples, n_columns) A data frame. attr_labels : sequence of str or None A list of one or more columns that are considered the label. If `survival` is `True`, then attr_labels has two elements: 1) the name of the column denoting the event indicator, and 2) the name of the column denoting the survival time. If the sequence contains `None`, then labels are not retrieved and only a data frame with features is returned. pos_label : any, optional Which value of the event indicator column denotes that a patient experienced an event. This value is ignored if `survival` is `False`. survival : bool, optional, default: True Whether to return `y` that can be used for survival analysis. Returns ------- X : pandas.DataFrame, shape = (n_samples, n_columns - len(attr_labels)) Data frame containing features. y : None or pandas.DataFrame, shape = (n_samples, len(attr_labels)) Data frame containing columns with supervised information. If `survival` was `True`, then the column denoting the event indicator will be boolean and survival times will be float. If `attr_labels` contains `None`, y is set to `None`. """ if survival: if len(attr_labels) != 2: raise ValueError("expected sequence of length two for attr_labels, but got %d" % len(attr_labels)) if pos_label is None: raise ValueError("pos_label needs to be specified if survival=True") return _get_x_y_survival(data_frame, attr_labels[0], attr_labels[1], pos_label) return _get_x_y_other(data_frame, attr_labels)
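A small sketch with a made-up data frame (column names and values are assumptions):

```python
import pandas as pd
from sksurv.datasets import get_x_y

df = pd.DataFrame({
    "age": [61.0, 52.0, 70.0],
    "status": [1, 0, 1],
    "survival_time": [12.0, 30.5, 4.2],
})
X, y = get_x_y(df, attr_labels=["status", "survival_time"], pos_label=1)
# X keeps only the feature column "age"; y bundles the boolean event indicator
# and the survival time as described in the docstring above
```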
Load dataset in ARFF format. Parameters ---------- path_training : str Path to ARFF file containing data. attr_labels : sequence of str Names of attributes denoting dependent variables. If ``survival`` is set, it must be a sequence with two items: the name of the event indicator and the name of the survival/censoring time. pos_label : any type, optional Value corresponding to an event in survival analysis. Only considered if ``survival`` is ``True``. path_testing : str, optional Path to ARFF file containing hold-out data. Only columns that are available in both training and testing are considered (excluding dependent variables). If ``standardize_numeric`` is set, data is normalized by considering both training and testing data. survival : bool, optional, default: True Whether the dependent variables denote event indicator and survival/censoring time. standardize_numeric : bool, optional, default: True Whether to standardize data to zero mean and unit variance. See :func:`sksurv.column.standardize`. to_numeric : bool, optional, default: True Whether to convert categorical variables to numeric values. See :func:`sksurv.column.categorical_to_numeric`. Returns ------- x_train : pandas.DataFrame, shape = (n_train, n_features) Training data. y_train : pandas.DataFrame, shape = (n_train, n_labels) Dependent variables of training data. x_test : None or pandas.DataFrame, shape = (n_test, n_features) Testing data if `path_testing` was provided. y_test : None or pandas.DataFrame, shape = (n_test, n_labels) Dependent variables of testing data if `path_testing` was provided.
def load_arff_files_standardized(path_training, attr_labels, pos_label=None, path_testing=None, survival=True, standardize_numeric=True, to_numeric=True): """Load dataset in ARFF format. Parameters ---------- path_training : str Path to ARFF file containing data. attr_labels : sequence of str Names of attributes denoting dependent variables. If ``survival`` is set, it must be a sequence with two items: the name of the event indicator and the name of the survival/censoring time. pos_label : any type, optional Value corresponding to an event in survival analysis. Only considered if ``survival`` is ``True``. path_testing : str, optional Path to ARFF file containing hold-out data. Only columns that are available in both training and testing are considered (excluding dependent variables). If ``standardize_numeric`` is set, data is normalized by considering both training and testing data. survival : bool, optional, default: True Whether the dependent variables denote event indicator and survival/censoring time. standardize_numeric : bool, optional, default: True Whether to standardize data to zero mean and unit variance. See :func:`sksurv.column.standardize`. to_numeric : boo, optional, default: True Whether to convert categorical variables to numeric values. See :func:`sksurv.column.categorical_to_numeric`. Returns ------- x_train : pandas.DataFrame, shape = (n_train, n_features) Training data. y_train : pandas.DataFrame, shape = (n_train, n_labels) Dependent variables of training data. x_test : None or pandas.DataFrame, shape = (n_train, n_features) Testing data if `path_testing` was provided. y_test : None or pandas.DataFrame, shape = (n_train, n_labels) Dependent variables of testing data if `path_testing` was provided. """ dataset = loadarff(path_training) if "index" in dataset.columns: dataset.index = dataset["index"].astype(object) dataset.drop("index", axis=1, inplace=True) x_train, y_train = get_x_y(dataset, attr_labels, pos_label, survival) if path_testing is not None: x_test, y_test = _load_arff_testing(path_testing, attr_labels, pos_label, survival) if len(x_train.columns.symmetric_difference(x_test.columns)) > 0: warnings.warn("Restricting columns to intersection between training and testing data", stacklevel=2) cols = x_train.columns.intersection(x_test.columns) if len(cols) == 0: raise ValueError("columns of training and test data do not intersect") x_train = x_train.loc[:, cols] x_test = x_test.loc[:, cols] x = safe_concat((x_train, x_test), axis=0) if standardize_numeric: x = standardize(x) if to_numeric: x = categorical_to_numeric(x) n_train = x_train.shape[0] x_train = x.iloc[:n_train, :] x_test = x.iloc[n_train:, :] else: if standardize_numeric: x_train = standardize(x_train) if to_numeric: x_train = categorical_to_numeric(x_train) x_test = None y_test = None return x_train, y_train, x_test, y_test
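A hypothetical call; the file paths and label names below are placeholders, not files shipped with the package:

```python
from sksurv.datasets import load_arff_files_standardized

# "train.arff"/"test.arff" and the label names are hypothetical placeholders
x_train, y_train, x_test, y_test = load_arff_files_standardized(
    "train.arff", attr_labels=["status", "time"], pos_label="1",
    path_testing="test.arff", standardize_numeric=True, to_numeric=True)
```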
Load and return the AIDS Clinical Trial dataset The dataset has 1,151 samples and 11 features. The dataset has 2 endpoints: 1. AIDS defining event, which occurred for 96 patients (8.3%) 2. Death, which occurred for 26 patients (2.3%) Parameters ---------- endpoint : aids|death The endpoint Returns ------- x : pandas.DataFrame The measurements for each patient. y : structured array with 2 fields *censor*: boolean indicating whether the endpoint has been reached or the event time is right censored. *time*: total length of follow-up If ``endpoint`` is death, the fields are named *censor_d* and *time_d*. References ---------- .. [1] http://www.umass.edu/statdata/statdata/data/ .. [2] Hosmer, D., Lemeshow, S., May, S.: "Applied Survival Analysis: Regression Modeling of Time to Event Data." John Wiley & Sons, Inc. (2008)
def load_aids(endpoint="aids"): """Load and return the AIDS Clinical Trial dataset The dataset has 1,151 samples and 11 features. The dataset has 2 endpoints: 1. AIDS defining event, which occurred for 96 patients (8.3%) 2. Death, which occurred for 26 patients (2.3%) Parameters ---------- endpoint : aids|death The endpoint Returns ------- x : pandas.DataFrame The measurements for each patient. y : structured array with 2 fields *censor*: boolean indicating whether the endpoint has been reached or the event time is right censored. *time*: total length of follow-up If ``endpoint`` is death, the fields are named *censor_d* and *time_d*. References ---------- .. [1] http://www.umass.edu/statdata/statdata/data/ .. [2] Hosmer, D., Lemeshow, S., May, S.: "Applied Survival Analysis: Regression Modeling of Time to Event Data." John Wiley & Sons, Inc. (2008) """ labels_aids = ['censor', 'time'] labels_death = ['censor_d', 'time_d'] if endpoint == "aids": attr_labels = labels_aids drop_columns = labels_death elif endpoint == "death": attr_labels = labels_death drop_columns = labels_aids else: raise ValueError("endpoint must be 'aids' or 'death'") fn = resource_filename(__name__, 'data/actg320.arff') x, y = get_x_y(loadarff(fn), attr_labels=attr_labels, pos_label='1') x.drop(drop_columns, axis=1, inplace=True) return x, y
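A short usage sketch based on the description above:

```python
from sksurv.datasets import load_aids

x, y = load_aids(endpoint="aids")
print(x.shape)        # (1151, 11) according to the description above
print(y.dtype.names)  # ('censor', 'time') for the AIDS endpoint
```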
Convert categorical columns to numeric values. Parameters ---------- X : pandas.DataFrame Data to encode. y : Ignored. For compatibility with TransformerMixin. fit_params : Ignored. For compatibility with TransformerMixin. Returns ------- Xt : pandas.DataFrame Encoded data.
def fit_transform(self, X, y=None, **fit_params): """Convert categorical columns to numeric values. Parameters ---------- X : pandas.DataFrame Data to encode. y : Ignored. For compatibility with TransformerMixin. fit_params : Ignored. For compatibility with TransformerMixin. Returns ------- Xt : pandas.DataFrame Encoded data. """ columns_to_encode = X.select_dtypes(include=["object", "category"]).columns x_dummy = self._encode(X, columns_to_encode) self.feature_names_ = columns_to_encode self.categories_ = {k: X[k].cat.categories for k in columns_to_encode} self.encoded_columns_ = x_dummy.columns return x_dummy
Convert categorical columns to numeric values. Parameters ---------- X : pandas.DataFrame Data to encode. Returns ------- Xt : pandas.DataFrame Encoded data.
def transform(self, X): """Convert categorical columns to numeric values. Parameters ---------- X : pandas.DataFrame Data to encode. Returns ------- Xt : pandas.DataFrame Encoded data. """ check_is_fitted(self, "encoded_columns_") check_columns_exist(X.columns, self.feature_names_) Xt = X.copy() for col, cat in self.categories_.items(): Xt[col].cat.set_categories(cat, inplace=True) new_data = self._encode(Xt, self.feature_names_) return new_data.loc[:, self.encoded_columns_]
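Assuming these two methods belong to `OneHotEncoder` in `sksurv.preprocessing` and a pandas version compatible with the in-place `set_categories` call above, a usage sketch with made-up data:

```python
import pandas as pd
from sksurv.preprocessing import OneHotEncoder

df = pd.DataFrame({
    "age": [50.0, 63.0, 41.0],
    "grade": pd.Categorical(["low", "high", "low"]),
})
enc = OneHotEncoder()
train_encoded = enc.fit_transform(df)
# new data is encoded with the categories learned during fit and re-indexed
# to the same dummy columns
new_encoded = enc.transform(df.iloc[:2])
```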
Internal method to streamline extracting data from the JSON response Args: json_inp (json): json input from our caller ndx (int): index where the data is located in the api Returns: If pandas is present: DataFrame (pandas.DataFrame): data set from ndx within the API's json else: A list of dictionaries mapping headers to values from the page
def _api_scrape(json_inp, ndx): """ Internal method to streamline the getting of data from the json Args: json_inp (json): json input from our caller ndx (int): index where the data is located in the api Returns: If pandas is present: DataFrame (pandas.DataFrame): data set from ndx within the API's json else: A dictionary of both headers and values from the page """ try: headers = json_inp['resultSets'][ndx]['headers'] values = json_inp['resultSets'][ndx]['rowSet'] except KeyError: # This is so ugly but this is what you get when your data comes out # in not a standard format try: headers = json_inp['resultSet'][ndx]['headers'] values = json_inp['resultSet'][ndx]['rowSet'] except KeyError: # Added for results that only include one set (ex. LeagueLeaders) headers = json_inp['resultSet']['headers'] values = json_inp['resultSet']['rowSet'] if HAS_PANDAS: return DataFrame(values, columns=headers) else: # Taken from www.github.com/bradleyfay/py-goldsberry return [dict(zip(headers, value)) for value in values]
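A synthetic illustration of the two payload layouts the helper handles; the keys mirror those referenced in the code, while the column names and numbers are made up:

```python
# multi-set layout: data lives under json['resultSets'][ndx]
multi = {"resultSets": [{"headers": ["PLAYER_ID", "PTS"],
                         "rowSet": [[1, 30], [2, 12]]}]}
# single-set layout (e.g. LeagueLeaders): data lives under json['resultSet']
single = {"resultSet": {"headers": ["PLAYER_ID", "PTS"],
                        "rowSet": [[1, 30]]}}

frame = _api_scrape(multi, 0)   # pandas.DataFrame when pandas is installed
rows = _api_scrape(single, 0)   # falls through to the single-set branch
```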
Internal method to streamline our requests / json getting Args: endpoint (str): endpoint to be called from the API params (dict): parameters to be passed to the API Raises: HTTPError: if requests hits a status code != 200 Returns: json (json): json object for selected API call
def _get_json(endpoint, params, referer='scores'): """ Internal method to streamline our requests / json getting Args: endpoint (str): endpoint to be called from the API params (dict): parameters to be passed to the API Raises: HTTPError: if requests hits a status code != 200 Returns: json (json): json object for selected API call """ h = dict(HEADERS) h['referer'] = 'http://stats.nba.com/{ref}/'.format(ref=referer) _get = get(BASE_URL.format(endpoint=endpoint), params=params, headers=h) # print _get.url _get.raise_for_status() return _get.json()
Calls our PlayerList class to get a full list of players and then returns just an id if specified or the full row of player information Args: :first_name: First name of the player :last_name: Last name of the player (None if the player only has a first name, e.g. Nene) :only_current: Restrict the lookup to the current list of players :just_id: Return only the id of the player Returns: Either the ID or the full row of information for the requested player Raises: :PlayerNotFoundException:
def get_player(first_name, last_name=None, season=constants.CURRENT_SEASON, only_current=0, just_id=True): """ Calls our PlayerList class to get a full list of players and then returns just an id if specified or the full row of player information Args: :first_name: First name of the player :last_name: Last name of the player (None if the player only has a first name, e.g. Nene) :only_current: Restrict the lookup to the current list of players :just_id: Return only the id of the player Returns: Either the ID or the full row of information for the requested player Raises: :PlayerNotFoundException: """ if last_name is None: name = first_name.lower() else: name = '{}, {}'.format(last_name, first_name).lower() pl = PlayerList(season=season, only_current=only_current).info() hdr = 'DISPLAY_LAST_COMMA_FIRST' if HAS_PANDAS: item = pl[pl.DISPLAY_LAST_COMMA_FIRST.str.lower() == name] else: item = next(plyr for plyr in pl if str(plyr[hdr]).lower() == name) if len(item) == 0: raise PlayerNotFoundException elif just_id: return item['PERSON_ID'] else: return item
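A short hedged usage sketch, assuming the surrounding nba_py-style module is importable and the stats endpoint is reachable at call time:

# Look up just the numeric id (the default), or the full information row.
lebron_id = get_player('LeBron', 'James')
nene_row = get_player('Nene', just_id=False)  # single-name player, so last_name stays None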
Called from Dealer when ask message received from RoundManager
def respond_to_ask(self, message): """Called from Dealer when ask message received from RoundManager""" valid_actions, hole_card, round_state = self.__parse_ask_message(message) return self.declare_action(valid_actions, hole_card, round_state)
Called from Dealer when notification received from RoundManager
def receive_notification(self, message): """Called from Dealer when notification received from RoundManager""" msg_type = message["message_type"] if msg_type == "game_start_message": info = self.__parse_game_start_message(message) self.receive_game_start_message(info) elif msg_type == "round_start_message": round_count, hole, seats = self.__parse_round_start_message(message) self.receive_round_start_message(round_count, hole, seats) elif msg_type == "street_start_message": street, state = self.__parse_street_start_message(message) self.receive_street_start_message(street, state) elif msg_type == "game_update_message": new_action, round_state = self.__parse_game_update_message(message) self.receive_game_update_message(new_action, round_state) elif msg_type == "round_result_message": winners, hand_info, state = self.__parse_round_result_message(message) self.receive_round_result_message(winners, hand_info, state)
A preliminary result processor we'll chain on to the original task This will get executed wherever the source task was executed, in this case one of the threads in the ThreadPoolExecutor
async def result_continuation(task): """A preliminary result processor we'll chain on to the original task This will get executed wherever the source task was executed, in this case one of the threads in the ThreadPoolExecutor""" await asyncio.sleep(0.1) num, res = task.result() return num, res * 2
An async result aggregator that combines all the results This gets executed in unsync.loop and unsync.thread
async def result_processor(tasks): """An async result aggregator that combines all the results This gets executed in unsync.loop and unsync.thread""" output = {} for task in tasks: num, res = await task output[num] = res return output
based on https://github.com/apache/avro/pull/82/
def _read_decimal(data, size, writer_schema): """ based on https://github.com/apache/avro/pull/82/ """ scale = writer_schema.get('scale', 0) precision = writer_schema['precision'] datum_byte = str2ints(data) unscaled_datum = 0 msb = fstint(data) leftmost_bit = (msb >> 7) & 1 if leftmost_bit == 1: modified_first_byte = datum_byte[0] ^ (1 << 7) datum_byte = [modified_first_byte] + datum_byte[1:] for offset in xrange(size): unscaled_datum <<= 8 unscaled_datum += datum_byte[offset] unscaled_datum += pow(-2, (size * 8) - 1) else: for offset in xrange(size): unscaled_datum <<= 8 unscaled_datum += (datum_byte[offset]) with localcontext() as ctx: ctx.prec = precision scaled_datum = Decimal(unscaled_datum).scaleb(-scale) return scaled_datum
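To make the byte layout concrete, here is a small self-contained sketch of the same arithmetic in plain Python 3; it mirrors the logic above rather than calling _read_decimal, which relies on fastavro's internal helpers.

from decimal import Decimal

# The unscaled integer 12345 stored big-endian in two bytes, with scale=2
# declared in the writer schema, decodes to 123.45.
raw = b'\x30\x39'                                   # 12345, two's complement, big-endian
unscaled = int.from_bytes(raw, 'big', signed=True)  # sign handling done for us here
value = Decimal(unscaled).scaleb(-2)                # shift the decimal point by `scale`
assert value == Decimal('123.45')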
int and long values are written using variable-length, zig-zag coding.
def read_long(fo, writer_schema=None, reader_schema=None): """int and long values are written using variable-length, zig-zag coding.""" c = fo.read(1) # We do EOF checking only here, since most readers start here if not c: raise StopIteration b = ord(c) n = b & 0x7F shift = 7 while (b & 0x80) != 0: b = ord(fo.read(1)) n |= (b & 0x7F) << shift shift += 7 return (n >> 1) ^ -(n & 1)
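A small sketch of the zig-zag/varint layout this reader expects, with the byte values worked out by hand from the encoding rule:

import io

# 3 zig-zags to 6 (one byte, 0x06), -3 to 5 (0x05), and 150 to 300,
# which needs two varint bytes: 0xAC 0x02.
assert read_long(io.BytesIO(b'\x06')) == 3
assert read_long(io.BytesIO(b'\x05')) == -3
assert read_long(io.BytesIO(b'\xac\x02')) == 150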
Bytes are encoded as a long followed by that many bytes of data.
def read_bytes(fo, writer_schema=None, reader_schema=None): """Bytes are encoded as a long followed by that many bytes of data.""" size = read_long(fo) return fo.read(size)
An enum is encoded by an int, representing the zero-based position of the symbol in the schema.
def read_enum(fo, writer_schema, reader_schema=None): """An enum is encoded by an int, representing the zero-based position of the symbol in the schema. """ index = read_long(fo) symbol = writer_schema['symbols'][index] if reader_schema and symbol not in reader_schema['symbols']: default = reader_schema.get("default") if default: return default else: symlist = reader_schema['symbols'] msg = '%s not found in reader symbol list %s' % (symbol, symlist) raise SchemaResolutionError(msg) return symbol
Arrays are encoded as a series of blocks. Each block consists of a long count value, followed by that many array items. A block with count zero indicates the end of the array. Each item is encoded per the array's item schema. If a block's count is negative, then the count is followed immediately by a long block size, indicating the number of bytes in the block. The actual count in this case is the absolute value of the count written.
def read_array(fo, writer_schema, reader_schema=None): """Arrays are encoded as a series of blocks. Each block consists of a long count value, followed by that many array items. A block with count zero indicates the end of the array. Each item is encoded per the array's item schema. If a block's count is negative, then the count is followed immediately by a long block size, indicating the number of bytes in the block. The actual count in this case is the absolute value of the count written. """ if reader_schema: def item_reader(fo, w_schema, r_schema): return read_data(fo, w_schema['items'], r_schema['items']) else: def item_reader(fo, w_schema, _): return read_data(fo, w_schema['items']) read_items = [] block_count = read_long(fo) while block_count != 0: if block_count < 0: block_count = -block_count # Read block size, unused read_long(fo) for i in xrange(block_count): read_items.append(item_reader(fo, writer_schema, reader_schema)) block_count = read_long(fo) return read_items
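A hedged decoding sketch for the block layout described above; it assumes the module's read_data dispatcher is wired up, as it is inside fastavro.

import io

# [7, 1] under {'type': 'array', 'items': 'long'}: block count 2 (zig-zag 0x04),
# the items 7 (0x0E) and 1 (0x02), then a terminating count of 0 (0x00).
buf = io.BytesIO(b'\x04\x0e\x02\x00')
assert read_array(buf, {'type': 'array', 'items': 'long'}) == [7, 1]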
Maps are encoded as a series of blocks. Each block consists of a long count value, followed by that many key/value pairs. A block with count zero indicates the end of the map. Each item is encoded per the map's value schema. If a block's count is negative, then the count is followed immediately by a long block size, indicating the number of bytes in the block. The actual count in this case is the absolute value of the count written.
def read_map(fo, writer_schema, reader_schema=None): """Maps are encoded as a series of blocks. Each block consists of a long count value, followed by that many key/value pairs. A block with count zero indicates the end of the map. Each item is encoded per the map's value schema. If a block's count is negative, then the count is followed immediately by a long block size, indicating the number of bytes in the block. The actual count in this case is the absolute value of the count written. """ if reader_schema: def item_reader(fo, w_schema, r_schema): return read_data(fo, w_schema['values'], r_schema['values']) else: def item_reader(fo, w_schema, _): return read_data(fo, w_schema['values']) read_items = {} block_count = read_long(fo) while block_count != 0: if block_count < 0: block_count = -block_count # Read block size, unused read_long(fo) for i in xrange(block_count): key = read_utf8(fo) read_items[key] = item_reader(fo, writer_schema, reader_schema) block_count = read_long(fo) return read_items
A union is encoded by first writing a long value indicating the zero-based position within the union of the schema of its value. The value is then encoded per the indicated schema within the union.
def read_union(fo, writer_schema, reader_schema=None): """A union is encoded by first writing a long value indicating the zero-based position within the union of the schema of its value. The value is then encoded per the indicated schema within the union. """ # schema resolution index = read_long(fo) if reader_schema: # Handle case where the reader schema is just a single type (not union) if not isinstance(reader_schema, list): if match_types(writer_schema[index], reader_schema): return read_data(fo, writer_schema[index], reader_schema) else: for schema in reader_schema: if match_types(writer_schema[index], schema): return read_data(fo, writer_schema[index], schema) msg = 'schema mismatch: %s not found in %s' % \ (writer_schema, reader_schema) raise SchemaResolutionError(msg) else: return read_data(fo, writer_schema[index])
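A hedged sketch of the union layout, again assuming the module's read_data dispatcher as inside fastavro:

import io

# For the union ['null', 'long'], a null is just the branch index 0 (0x00);
# the long 7 is branch index 1 (0x02) followed by the zig-zag value (0x0E).
assert read_union(io.BytesIO(b'\x00'), ['null', 'long']) is None
assert read_union(io.BytesIO(b'\x02\x0e'), ['null', 'long']) == 7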
A record is encoded by encoding the values of its fields in the order that they are declared. In other words, a record is encoded as just the concatenation of the encodings of its fields. Field values are encoded per their schema. Schema Resolution: * the ordering of fields may be different: fields are matched by name. * schemas for fields with the same name in both records are resolved recursively. * if the writer's record contains a field with a name not present in the reader's record, the writer's value for that field is ignored. * if the reader's record schema has a field that contains a default value, and writer's schema does not have a field with the same name, then the reader should use the default value from its field. * if the reader's record schema has a field with no default value, and writer's schema does not have a field with the same name, then the field's value is unset.
def read_record(fo, writer_schema, reader_schema=None): """A record is encoded by encoding the values of its fields in the order that they are declared. In other words, a record is encoded as just the concatenation of the encodings of its fields. Field values are encoded per their schema. Schema Resolution: * the ordering of fields may be different: fields are matched by name. * schemas for fields with the same name in both records are resolved recursively. * if the writer's record contains a field with a name not present in the reader's record, the writer's value for that field is ignored. * if the reader's record schema has a field that contains a default value, and writer's schema does not have a field with the same name, then the reader should use the default value from its field. * if the reader's record schema has a field with no default value, and writer's schema does not have a field with the same name, then the field's value is unset. """ record = {} if reader_schema is None: for field in writer_schema['fields']: record[field['name']] = read_data(fo, field['type']) else: readers_field_dict = {} aliases_field_dict = {} for f in reader_schema['fields']: readers_field_dict[f['name']] = f for alias in f.get('aliases', []): aliases_field_dict[alias] = f for field in writer_schema['fields']: readers_field = readers_field_dict.get( field['name'], aliases_field_dict.get(field['name']), ) if readers_field: record[readers_field['name']] = read_data( fo, field['type'], readers_field['type'], ) else: # should implement skip read_data(fo, field['type'], field['type']) # fill in default values if len(readers_field_dict) > len(record): writer_fields = [f['name'] for f in writer_schema['fields']] for f_name, field in iteritems(readers_field_dict): if f_name not in writer_fields and f_name not in record: if 'default' in field: record[field['name']] = field['default'] else: msg = 'No default value for %s' % field['name'] raise SchemaResolutionError(msg) return record
Read data from file object according to schema.
def read_data(fo, writer_schema, reader_schema=None): """Read data from file object according to schema.""" record_type = extract_record_type(writer_schema) logical_type = extract_logical_type(writer_schema) if reader_schema and record_type in AVRO_TYPES: # If the schemas are the same, set the reader schema to None so that no # schema resolution is done for this call or future recursive calls if writer_schema == reader_schema: reader_schema = None else: match_schemas(writer_schema, reader_schema) reader_fn = READERS.get(record_type) if reader_fn: try: data = reader_fn(fo, writer_schema, reader_schema) except StructError: raise EOFError('cannot read %s from %s' % (record_type, fo)) if 'logicalType' in writer_schema: fn = LOGICAL_READERS.get(logical_type) if fn: return fn(data, writer_schema, reader_schema) if reader_schema is not None: return maybe_promote( data, record_type, extract_record_type(reader_schema) ) else: return data else: return read_data( fo, SCHEMA_DEFS[record_type], SCHEMA_DEFS.get(reader_schema) )
Return iterator over avro records.
def _iter_avro_records(fo, header, codec, writer_schema, reader_schema): """Return iterator over avro records.""" sync_marker = header['sync'] read_block = BLOCK_READERS.get(codec) if not read_block: raise ValueError('Unrecognized codec: %r' % codec) block_count = 0 while True: try: block_count = read_long(fo) except StopIteration: return block_fo = read_block(fo) for i in xrange(block_count): yield read_data(block_fo, writer_schema, reader_schema) skip_sync(fo, sync_marker)
Return iterator over avro blocks.
def _iter_avro_blocks(fo, header, codec, writer_schema, reader_schema): """Return iterator over avro blocks.""" sync_marker = header['sync'] read_block = BLOCK_READERS.get(codec) if not read_block: raise ValueError('Unrecognized codec: %r' % codec) while True: offset = fo.tell() try: num_block_records = read_long(fo) except StopIteration: return block_bytes = read_block(fo) skip_sync(fo, sync_marker) size = fo.tell() - offset yield Block( block_bytes, num_block_records, codec, reader_schema, writer_schema, offset, size )
Reads a single record written using the :meth:`~fastavro._write_py.schemaless_writer` Parameters ---------- fo: file-like Input stream writer_schema: dict Schema used when calling schemaless_writer reader_schema: dict, optional If the schema has changed since being written then the new schema can be given to allow for schema migration Example:: parsed_schema = fastavro.parse_schema(schema) with open('file.avro', 'rb') as fp: record = fastavro.schemaless_reader(fp, parsed_schema) Note: The ``schemaless_reader`` can only read a single record.
def schemaless_reader(fo, writer_schema, reader_schema=None): """Reads a single record written using the :meth:`~fastavro._write_py.schemaless_writer` Parameters ---------- fo: file-like Input stream writer_schema: dict Schema used when calling schemaless_writer reader_schema: dict, optional If the schema has changed since being written then the new schema can be given to allow for schema migration Example:: parsed_schema = fastavro.parse_schema(schema) with open('file.avro', 'rb') as fp: record = fastavro.schemaless_reader(fp, parsed_schema) Note: The ``schemaless_reader`` can only read a single record. """ if writer_schema == reader_schema: # No need for the reader schema if they are the same reader_schema = None writer_schema = parse_schema(writer_schema) if reader_schema: reader_schema = parse_schema(reader_schema) return read_data(fo, writer_schema, reader_schema)
Converts datetime.datetime to int timestamp with microseconds
def prepare_timestamp_micros(data, schema): """Converts datetime.datetime to int timestamp with microseconds""" if isinstance(data, datetime.datetime): if data.tzinfo is not None: delta = (data - epoch) return int(delta.total_seconds() * MCS_PER_SECOND) t = int(time.mktime(data.timetuple())) * MCS_PER_SECOND + \ data.microsecond return t else: return data
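A hedged sketch, assuming the module-level epoch is the UTC Unix epoch and MCS_PER_SECOND is 1,000,000, as in fastavro:

import datetime

# An aware datetime 1.5 seconds after the epoch becomes 1,500,000 microseconds.
aware = datetime.datetime(1970, 1, 1, 0, 0, 1, 500000, tzinfo=datetime.timezone.utc)
assert prepare_timestamp_micros(aware, None) == 1500000

# Anything that is not a datetime is passed through unchanged.
assert prepare_timestamp_micros(1500000, None) == 1500000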
Return True if path (or buffer) points to an Avro file. Parameters ---------- path_or_buffer: path to file or file-like object Path to file
def is_avro(path_or_buffer): """Return True if path (or buffer) points to an Avro file. Parameters ---------- path_or_buffer: path to file or file-like object Path to file """ if is_str(path_or_buffer): fp = open(path_or_buffer, 'rb') close = True else: fp = path_or_buffer close = False try: header = fp.read(len(MAGIC)) return header == MAGIC finally: if close: fp.close()
Converts datetime.date to int timestamp
def prepare_date(data, schema): """Converts datetime.date to int timestamp""" if isinstance(data, datetime.date): return data.toordinal() - DAYS_SHIFT else: return data
Converts uuid.UUID to string formatted UUID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
def prepare_uuid(data, schema): """Converts uuid.UUID to string formatted UUID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx """ if isinstance(data, uuid.UUID): return str(data) else: return data
Converts datetime.datetime object to int timestamp with milliseconds
def prepare_timestamp_millis(data, schema): """Converts datetime.datetime object to int timestamp with milliseconds """ if isinstance(data, datetime.datetime): if data.tzinfo is not None: delta = (data - epoch) return int(delta.total_seconds() * MLS_PER_SECOND) t = int(time.mktime(data.timetuple())) * MLS_PER_SECOND + int( data.microsecond / 1000) return t else: return data
Convert datetime.time to int timestamp with milliseconds
def prepare_time_millis(data, schema): """Convert datetime.time to int timestamp with milliseconds""" if isinstance(data, datetime.time): return int( data.hour * MLS_PER_HOUR + data.minute * MLS_PER_MINUTE + data.second * MLS_PER_SECOND + int(data.microsecond / 1000)) else: return data
Convert datetime.time to int timestamp with microseconds
def prepare_time_micros(data, schema): """Convert datetime.time to int timestamp with microseconds""" if isinstance(data, datetime.time): return long(data.hour * MCS_PER_HOUR + data.minute * MCS_PER_MINUTE + data.second * MCS_PER_SECOND + data.microsecond) else: return data
Convert decimal.Decimal to bytes
def prepare_bytes_decimal(data, schema): """Convert decimal.Decimal to bytes""" if not isinstance(data, decimal.Decimal): return data scale = schema.get('scale', 0) # based on https://github.com/apache/avro/pull/82/ sign, digits, exp = data.as_tuple() if -exp > scale: raise ValueError( 'Scale provided in schema does not match the decimal') delta = exp + scale if delta > 0: digits = digits + (0,) * delta unscaled_datum = 0 for digit in digits: unscaled_datum = (unscaled_datum * 10) + digit bits_req = unscaled_datum.bit_length() + 1 if sign: unscaled_datum = (1 << bits_req) - unscaled_datum bytes_req = bits_req // 8 padding_bits = ~((1 << bits_req) - 1) if sign else 0 packed_bits = padding_bits | unscaled_datum bytes_req += 1 if (bytes_req << 3) < bits_req else 0 tmp = MemoryIO() for index in range(bytes_req - 1, -1, -1): bits_to_write = packed_bits >> (8 * index) tmp.write(mk_bits(bits_to_write & 0xff)) return tmp.getvalue()
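A hedged round-trip check for the bytes-decimal preparation above; it assumes fastavro's module helpers MemoryIO and mk_bits are available, as they are where this function lives. The expected bytes match the decoding sketch shown earlier for _read_decimal.

from decimal import Decimal

# Decimal('123.45') with scale=2 has unscaled value 12345, whose big-endian
# two's-complement representation is b'\x30\x39'.
assert prepare_bytes_decimal(Decimal('123.45'), {'scale': 2}) == b'\x30\x39'

# Non-Decimal data is passed through untouched.
assert prepare_bytes_decimal(b'raw', {'scale': 2}) == b'raw'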
Converts decimal.Decimal to fixed length bytes array
def prepare_fixed_decimal(data, schema): """Converts decimal.Decimal to fixed length bytes array""" if not isinstance(data, decimal.Decimal): return data scale = schema.get('scale', 0) size = schema['size'] # based on https://github.com/apache/avro/pull/82/ sign, digits, exp = data.as_tuple() if -exp > scale: raise ValueError( 'Scale provided in schema does not match the decimal') delta = exp + scale if delta > 0: digits = digits + (0,) * delta unscaled_datum = 0 for digit in digits: unscaled_datum = (unscaled_datum * 10) + digit bits_req = unscaled_datum.bit_length() + 1 size_in_bits = size * 8 offset_bits = size_in_bits - bits_req mask = 2 ** size_in_bits - 1 bit = 1 for i in range(bits_req): mask ^= bit bit <<= 1 if bits_req < 8: bytes_req = 1 else: bytes_req = bits_req // 8 if bits_req % 8 != 0: bytes_req += 1 tmp = MemoryIO() if sign: unscaled_datum = (1 << bits_req) - unscaled_datum unscaled_datum = mask | unscaled_datum for index in range(size - 1, -1, -1): bits_to_write = unscaled_datum >> (8 * index) tmp.write(mk_bits(bits_to_write & 0xff)) else: for i in range(offset_bits // 8): tmp.write(mk_bits(0)) for index in range(bytes_req - 1, -1, -1): bits_to_write = unscaled_datum >> (8 * index) tmp.write(mk_bits(bits_to_write & 0xff)) return tmp.getvalue()
int and long values are written using variable-length, zig-zag coding.
def write_int(fo, datum, schema=None): """int and long values are written using variable-length, zig-zag coding. """ datum = (datum << 1) ^ (datum >> 63) while (datum & ~0x7F) != 0: fo.write(pack('B', (datum & 0x7f) | 0x80)) datum >>= 7 fo.write(pack('B', datum))
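The writer-side counterpart of the zig-zag sketch shown for read_long, assuming the module's struct.pack import as in fastavro:

import io

# -3 zig-zags to 5 (one byte, 0x05); 150 zig-zags to 300, which needs the
# continuation byte 0xAC followed by 0x02.
buf = io.BytesIO()
write_int(buf, -3)
write_int(buf, 150)
assert buf.getvalue() == b'\x05\xac\x02'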
Bytes are encoded as a long followed by that many bytes of data.
def write_bytes(fo, datum, schema=None): """Bytes are encoded as a long followed by that many bytes of data.""" write_long(fo, len(datum)) fo.write(datum)
A 4-byte, big-endian CRC32 checksum
def write_crc32(fo, bytes): """A 4-byte, big-endian CRC32 checksum""" data = crc32(bytes) & 0xFFFFFFFF fo.write(pack('>I', data))
An enum is encoded by an int, representing the zero-based position of the symbol in the schema.
def write_enum(fo, datum, schema): """An enum is encoded by an int, representing the zero-based position of the symbol in the schema.""" index = schema['symbols'].index(datum) write_int(fo, index)
Arrays are encoded as a series of blocks. Each block consists of a long count value, followed by that many array items. A block with count zero indicates the end of the array. Each item is encoded per the array's item schema. If a block's count is negative, then the count is followed immediately by a long block size, indicating the number of bytes in the block. The actual count in this case is the absolute value of the count written.
def write_array(fo, datum, schema): """Arrays are encoded as a series of blocks. Each block consists of a long count value, followed by that many array items. A block with count zero indicates the end of the array. Each item is encoded per the array's item schema. If a block's count is negative, then the count is followed immediately by a long block size, indicating the number of bytes in the block. The actual count in this case is the absolute value of the count written. """ if len(datum) > 0: write_long(fo, len(datum)) dtype = schema['items'] for item in datum: write_data(fo, item, dtype) write_long(fo, 0)
Maps are encoded as a series of blocks. Each block consists of a long count value, followed by that many key/value pairs. A block with count zero indicates the end of the map. Each item is encoded per the map's value schema. If a block's count is negative, then the count is followed immediately by a long block size, indicating the number of bytes in the block. The actual count in this case is the absolute value of the count written.
def write_map(fo, datum, schema): """Maps are encoded as a series of blocks. Each block consists of a long count value, followed by that many key/value pairs. A block with count zero indicates the end of the map. Each item is encoded per the map's value schema. If a block's count is negative, then the count is followed immediately by a long block size, indicating the number of bytes in the block. The actual count in this case is the absolute value of the count written.""" if len(datum) > 0: write_long(fo, len(datum)) vtype = schema['values'] for key, val in iteritems(datum): write_utf8(fo, key) write_data(fo, val, vtype) write_long(fo, 0)
A union is encoded by first writing a long value indicating the zero-based position within the union of the schema of its value. The value is then encoded per the indicated schema within the union.
def write_union(fo, datum, schema): """A union is encoded by first writing a long value indicating the zero-based position within the union of the schema of its value. The value is then encoded per the indicated schema within the union.""" if isinstance(datum, tuple): (name, datum) = datum for index, candidate in enumerate(schema): if extract_record_type(candidate) == 'record': schema_name = candidate['name'] else: schema_name = candidate if name == schema_name: break else: msg = 'provided union type name %s not found in schema %s' \ % (name, schema) raise ValueError(msg) else: pytype = type(datum) best_match_index = -1 most_fields = -1 for index, candidate in enumerate(schema): if validate(datum, candidate, raise_errors=False): if extract_record_type(candidate) == 'record': fields = len(candidate['fields']) if fields > most_fields: best_match_index = index most_fields = fields else: best_match_index = index break if best_match_index < 0: msg = '%r (type %s) does not match %s' % (datum, pytype, schema) raise ValueError(msg) index = best_match_index # write data write_long(fo, index) write_data(fo, datum, schema[index])
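When two union branches could both accept a value, passing the datum as a (branch_name, value) tuple pins the branch explicitly instead of relying on the best-match search. A hedged sketch, assuming the module's writer registry as inside fastavro:

import io

# Both records have the same shape, so name-based disambiguation is needed.
schema = [
    {'type': 'record', 'name': 'Cat', 'fields': [{'name': 'name', 'type': 'string'}]},
    {'type': 'record', 'name': 'Dog', 'fields': [{'name': 'name', 'type': 'string'}]},
]
fo = io.BytesIO()
write_union(fo, ('Dog', {'name': 'Rex'}), schema)  # writes branch index 1, then the record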
A record is encoded by encoding the values of its fields in the order that they are declared. In other words, a record is encoded as just the concatenation of the encodings of its fields. Field values are encoded per their schema.
def write_record(fo, datum, schema): """A record is encoded by encoding the values of its fields in the order that they are declared. In other words, a record is encoded as just the concatenation of the encodings of its fields. Field values are encoded per their schema.""" for field in schema['fields']: name = field['name'] if name not in datum and 'default' not in field and \ 'null' not in field['type']: raise ValueError('no value and no default for %s' % name) write_data(fo, datum.get( name, field.get('default')), field['type'])
Write a datum of data to output stream. Parameters ---------- fo: file-like Output file datum: object Data to write schema: dict Schema to use
def write_data(fo, datum, schema): """Write a datum of data to output stream. Parameters ---------- fo: file-like Output file datum: object Data to write schema: dict Schema to use """ record_type = extract_record_type(schema) logical_type = extract_logical_type(schema) fn = WRITERS.get(record_type) if fn: if logical_type: prepare = LOGICAL_WRITERS.get(logical_type) if prepare: datum = prepare(datum, schema) return fn(fo, datum, schema) else: return write_data(fo, datum, SCHEMA_DEFS[record_type])
Write block in "null" codec.
def null_write_block(fo, block_bytes): """Write block in "null" codec.""" write_long(fo, len(block_bytes)) fo.write(block_bytes)
Write block in "deflate" codec.
def deflate_write_block(fo, block_bytes): """Write block in "deflate" codec.""" # The first two characters and last character are zlib # wrappers around deflate data. data = compress(block_bytes)[2:-1] write_long(fo, len(data)) fo.write(data)
Write records to fo (stream) according to schema Parameters ---------- fo: file-like Output stream records: iterable Records to write. This is commonly a list of the dictionary representation of the records, but it can be any iterable codec: string, optional Compression codec, can be 'null', 'deflate' or 'snappy' (if installed) sync_interval: int, optional Size of sync interval metadata: dict, optional Header metadata validator: None, True or a function Validator function. If None (the default) - no validation. If True then fastavro.validation.validate will be used. If it's a function, it should have the same signature as fastavro.writer.validate and raise an exception on error. sync_marker: bytes, optional A byte string used as the avro sync marker. If not provided, a random byte string will be used. Example:: from fastavro import writer, parse_schema schema = { 'doc': 'A weather reading.', 'name': 'Weather', 'namespace': 'test', 'type': 'record', 'fields': [ {'name': 'station', 'type': 'string'}, {'name': 'time', 'type': 'long'}, {'name': 'temp', 'type': 'int'}, ], } parsed_schema = parse_schema(schema) records = [ {u'station': u'011990-99999', u'temp': 0, u'time': 1433269388}, {u'station': u'011990-99999', u'temp': 22, u'time': 1433270389}, {u'station': u'011990-99999', u'temp': -11, u'time': 1433273379}, {u'station': u'012650-99999', u'temp': 111, u'time': 1433275478}, ] with open('weather.avro', 'wb') as out: writer(out, parsed_schema, records) Given an existing avro file, it's possible to append to it by re-opening the file in `a+b` mode. If the file is only opened in `ab` mode, we aren't able to read some of the existing header information and an error will be raised. For example:: # Write initial records with open('weather.avro', 'wb') as out: writer(out, parsed_schema, records) # Write some more records with open('weather.avro', 'a+b') as out: writer(out, parsed_schema, more_records)
def writer(fo, schema, records, codec='null', sync_interval=1000 * SYNC_SIZE, metadata=None, validator=None, sync_marker=None): """Write records to fo (stream) according to schema Parameters ---------- fo: file-like Output stream records: iterable Records to write. This is commonly a list of the dictionary representation of the records, but it can be any iterable codec: string, optional Compression codec, can be 'null', 'deflate' or 'snappy' (if installed) sync_interval: int, optional Size of sync interval metadata: dict, optional Header metadata validator: None, True or a function Validator function. If None (the default) - no validation. If True then fastavro.validation.validate will be used. If it's a function, it should have the same signature as fastavro.writer.validate and raise an exception on error. sync_marker: bytes, optional A byte string used as the avro sync marker. If not provided, a random byte string will be used. Example:: from fastavro import writer, parse_schema schema = { 'doc': 'A weather reading.', 'name': 'Weather', 'namespace': 'test', 'type': 'record', 'fields': [ {'name': 'station', 'type': 'string'}, {'name': 'time', 'type': 'long'}, {'name': 'temp', 'type': 'int'}, ], } parsed_schema = parse_schema(schema) records = [ {u'station': u'011990-99999', u'temp': 0, u'time': 1433269388}, {u'station': u'011990-99999', u'temp': 22, u'time': 1433270389}, {u'station': u'011990-99999', u'temp': -11, u'time': 1433273379}, {u'station': u'012650-99999', u'temp': 111, u'time': 1433275478}, ] with open('weather.avro', 'wb') as out: writer(out, parsed_schema, records) Given an existing avro file, it's possible to append to it by re-opening the file in `a+b` mode. If the file is only opened in `ab` mode, we aren't able to read some of the existing header information and an error will be raised. For example:: # Write initial records with open('weather.avro', 'wb') as out: writer(out, parsed_schema, records) # Write some more records with open('weather.avro', 'a+b') as out: writer(out, parsed_schema, more_records) """ # Sanity check that records is not a single dictionary (as that is a common # mistake and the exception that gets raised is not helpful) if isinstance(records, dict): raise ValueError('"records" argument should be an iterable, not dict') output = Writer( fo, schema, codec, sync_interval, metadata, validator, sync_marker, ) for record in records: output.write(record) output.flush()
Write a single record without the schema or header information Parameters ---------- fo: file-like Output file schema: dict Schema record: dict Record to write Example:: parsed_schema = fastavro.parse_schema(schema) with open('file.avro', 'wb') as fp: fastavro.schemaless_writer(fp, parsed_schema, record) Note: The ``schemaless_writer`` can only write a single record.
def schemaless_writer(fo, schema, record): """Write a single record without the schema or header information Parameters ---------- fo: file-like Output file schema: dict Schema record: dict Record to write Example:: parsed_schema = fastavro.parse_schema(schema) with open('file.avro', 'wb') as fp: fastavro.schemaless_writer(fp, parsed_schema, record) Note: The ``schemaless_writer`` can only write a single record. """ schema = parse_schema(schema) write_data(fo, record, schema)
A decorator that defines __unicode__ and __str__ methods under Python 2. Under Python 3 it does nothing. To support Python 2 and 3 with a single code base, define a __str__ method returning text and apply this decorator to the class.
def python_2_unicode_compatible(klass): """ A decorator that defines __unicode__ and __str__ methods under Python 2. Under Python 3 it does nothing. To support Python 2 and 3 with a single code base, define a __str__ method returning text and apply this decorator to the class. """ if sys.version_info[0] == 2: klass.__unicode__ = klass.__str__ klass.__str__ = lambda self: self.__unicode__().encode('utf-8') return klass
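A minimal usage sketch of the decorator: define only __str__ returning text, and let the decorator take care of the Python 2 bytes/unicode split.

@python_2_unicode_compatible
class Greeting(object):
    def __init__(self, name):
        self.name = name

    def __str__(self):
        # Always return text here; under Python 2 the decorator turns this
        # into __unicode__ and adds a UTF-8-encoding __str__.
        return u'hello, {0}'.format(self.name)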
Check that the data value is a non floating point number with size less than Int32. Also supports logicalType timestamp validation with datetime. Int32 = -2147483648<=datum<=2147483647 conditional python types (int, long, numbers.Integral, datetime.time, datetime.datetime, datetime.date) Parameters ---------- datum: Any Data being validated kwargs: Any Unused kwargs
def validate_int(datum, **kwargs): """ Check that the data value is a non floating point number with size less than Int32. Also supports logicalType timestamp validation with datetime. Int32 = -2147483648<=datum<=2147483647 conditional python types (int, long, numbers.Integral, datetime.time, datetime.datetime, datetime.date) Parameters ---------- datum: Any Data being validated kwargs: Any Unused kwargs """ return ( (isinstance(datum, (int, long, numbers.Integral)) and INT_MIN_VALUE <= datum <= INT_MAX_VALUE and not isinstance(datum, bool)) or isinstance( datum, (datetime.time, datetime.datetime, datetime.date) ) )
Check that the data value is a non floating point number with size less than Int64. Also supports logicalType timestamp validation with datetime. Int64 = -9223372036854775808 <= datum <= 9223372036854775807 conditional python types (int, long, numbers.Integral, datetime.time, datetime.datetime, datetime.date) Parameters ---------- datum: Any Data being validated kwargs: Any Unused kwargs
def validate_long(datum, **kwargs): """ Check that the data value is a non floating point number with size less than Int64. Also supports logicalType timestamp validation with datetime. Int64 = -9223372036854775808 <= datum <= 9223372036854775807 conditional python types (int, long, numbers.Integral, datetime.time, datetime.datetime, datetime.date) Parameters ---------- datum: Any Data being validated kwargs: Any Unused kwargs """ return ( (isinstance(datum, (int, long, numbers.Integral)) and LONG_MIN_VALUE <= datum <= LONG_MAX_VALUE and not isinstance(datum, bool)) or isinstance( datum, (datetime.time, datetime.datetime, datetime.date) ) )
Check that the data value is a floating point number or double precision. conditional python types (int, long, float, numbers.Real) Parameters ---------- datum: Any Data being validated kwargs: Any Unused kwargs
def validate_float(datum, **kwargs): """ Check that the data value is a floating point number or double precision. conditional python types (int, long, float, numbers.Real) Parameters ---------- datum: Any Data being validated kwargs: Any Unused kwargs """ return ( isinstance(datum, (int, long, float, numbers.Real)) and not isinstance(datum, bool) )
Check that the data value is fixed width bytes, matching the schema['size'] exactly! Parameters ---------- datum: Any Data being validated schema: dict Schema kwargs: Any Unused kwargs
def validate_fixed(datum, schema, **kwargs): """ Check that the data value is fixed width bytes, matching the schema['size'] exactly! Parameters ---------- datum: Any Data being validated schema: dict Schema kwargs: Any Unused kwargs """ return ( (isinstance(datum, bytes) and len(datum) == schema['size']) or (isinstance(datum, decimal.Decimal)) )
Check that the data list values all match schema['items']. Parameters ---------- datum: Any Data being validated schema: dict Schema parent_ns: str parent namespace raise_errors: bool If true, raises ValidationError on invalid data
def validate_array(datum, schema, parent_ns=None, raise_errors=True): """ Check that the data list values all match schema['items']. Parameters ---------- datum: Any Data being validated schema: dict Schema parent_ns: str parent namespace raise_errors: bool If true, raises ValidationError on invalid data """ return ( isinstance(datum, Sequence) and not is_str(datum) and all(validate(datum=d, schema=schema['items'], field=parent_ns, raise_errors=raise_errors) for d in datum) )
Check that the data is a Map(k,v) matching values to schema['values'] type. Parameters ---------- datum: Any Data being validated schema: dict Schema parent_ns: str parent namespace raise_errors: bool If true, raises ValidationError on invalid data
def validate_map(datum, schema, parent_ns=None, raise_errors=True): """ Check that the data is a Map(k,v) matching values to schema['values'] type. Parameters ---------- datum: Any Data being validated schema: dict Schema parent_ns: str parent namespace raise_errors: bool If true, raises ValidationError on invalid data """ return ( isinstance(datum, Mapping) and all(is_str(k) for k in iterkeys(datum)) and all(validate(datum=v, schema=schema['values'], field=parent_ns, raise_errors=raise_errors) for v in itervalues(datum)) )
Check that the data is a Mapping type with all schema defined fields validated as True. Parameters ---------- datum: Any Data being validated schema: dict Schema parent_ns: str parent namespace raise_errors: bool If true, raises ValidationError on invalid data
def validate_record(datum, schema, parent_ns=None, raise_errors=True): """ Check that the data is a Mapping type with all schema defined fields validated as True. Parameters ---------- datum: Any Data being validated schema: dict Schema parent_ns: str parent namespace raise_errors: bool If true, raises ValidationError on invalid data """ _, namespace = schema_name(schema, parent_ns) return ( isinstance(datum, Mapping) and all(validate(datum=datum.get(f['name'], f.get('default', no_value)), schema=f['type'], field='{}.{}'.format(namespace, f['name']), raise_errors=raise_errors) for f in schema['fields'] ) )
Check that the data matches at least one of the possible schemas in the union. Parameters ---------- datum: Any Data being validated schema: dict Schema parent_ns: str parent namespace raise_errors: bool If true, raises ValidationError on invalid data
def validate_union(datum, schema, parent_ns=None, raise_errors=True): """ Check that the data matches at least one of the possible schemas in the union. Parameters ---------- datum: Any Data being validated schema: dict Schema parent_ns: str parent namespace raise_errors: bool If true, raises ValidationError on invalid data """ if isinstance(datum, tuple): (name, datum) = datum for candidate in schema: if extract_record_type(candidate) == 'record': if name == candidate["name"]: return validate(datum, schema=candidate, field=parent_ns, raise_errors=raise_errors) else: return False errors = [] for s in schema: try: ret = validate(datum, schema=s, field=parent_ns, raise_errors=raise_errors) if ret: # We exit on the first passing type in Unions return True except ValidationError as e: errors.extend(e.errors) if raise_errors: raise ValidationError(*errors) return False
Determine if a python datum is an instance of a schema. Parameters ---------- datum: Any Data being validated schema: dict Schema field: str, optional Record field being validated raise_errors: bool, optional If true, errors are raised for invalid data. If false, a simple True (valid) or False (invalid) result is returned Example:: from fastavro.validation import validate schema = {...} record = {...} validate(record, schema)
def validate(datum, schema, field=None, raise_errors=True): """ Determine if a python datum is an instance of a schema. Parameters ---------- datum: Any Data being validated schema: dict Schema field: str, optional Record field being validated raise_errors: bool, optional If true, errors are raised for invalid data. If false, a simple True (valid) or False (invalid) result is returned Example:: from fastavro.validation import validate schema = {...} record = {...} validate(record, schema) """ record_type = extract_record_type(schema) result = None validator = VALIDATORS.get(record_type) if validator: result = validator(datum, schema=schema, parent_ns=field, raise_errors=raise_errors) elif record_type in SCHEMA_DEFS: result = validate(datum, schema=SCHEMA_DEFS[record_type], field=field, raise_errors=raise_errors) else: raise UnknownType(record_type) if raise_errors and result is False: raise ValidationError(ValidationErrorData(datum, schema, field)) return result
Validate a list of data! Parameters ---------- records: iterable List of records to validate schema: dict Schema raise_errors: bool, optional If true, errors are raised for invalid data. If false, a simple True (valid) or False (invalid) result is returned Example:: from fastavro.validation import validate_many schema = {...} records = [{...}, {...}, ...] validate_many(records, schema)
def validate_many(records, schema, raise_errors=True): """ Validate a list of data! Parameters ---------- records: iterable List of records to validate schema: dict Schema raise_errors: bool, optional If true, errors are raised for invalid data. If false, a simple True (valid) or False (invalid) result is returned Example:: from fastavro.validation import validate_many schema = {...} records = [{...}, {...}, ...] validate_many(records, schema) """ errors = [] results = [] for record in records: try: results.append(validate(record, schema, raise_errors=raise_errors)) except ValidationError as e: errors.extend(e.errors) if raise_errors and errors: raise ValidationError(*errors) return all(results)
Returns a parsed avro schema It is not necessary to call parse_schema but doing so and saving the parsed schema for use later will make future operations faster as the schema will not need to be reparsed. Parameters ---------- schema: dict Input schema _write_hint: bool Internal API argument specifying whether or not the __fastavro_parsed marker should be added to the schema _force: bool Internal API argument. If True, the schema will always be parsed even if it has been parsed and has the __fastavro_parsed marker Example:: from fastavro import parse_schema from fastavro import writer parsed_schema = parse_schema(original_schema) with open('weather.avro', 'wb') as out: writer(out, parsed_schema, records)
def parse_schema(schema, _write_hint=True, _force=False): """Returns a parsed avro schema It is not necessary to call parse_schema but doing so and saving the parsed schema for use later will make future operations faster as the schema will not need to be reparsed. Parameters ---------- schema: dict Input schema _write_hint: bool Internal API argument specifying whether or not the __fastavro_parsed marker should be added to the schema _force: bool Internal API argument. If True, the schema will always be parsed even if it has been parsed and has the __fastavro_parsed marker Example:: from fastavro import parse_schema from fastavro import writer parsed_schema = parse_schema(original_schema) with open('weather.avro', 'wb') as out: writer(out, parsed_schema, records) """ if _force: return _parse_schema(schema, "", _write_hint) elif isinstance(schema, dict) and "__fastavro_parsed" in schema: return schema else: return _parse_schema(schema, "", _write_hint)
Returns a schema loaded from the file at `schema_path`. Will recursively load referenced schemas assuming they can be found in files in the same directory and named with the convention `<type_name>.avsc`.
def load_schema(schema_path): ''' Returns a schema loaded from the file at `schema_path`. Will recursively load referenced schemas assuming they can be found in files in the same directory and named with the convention `<type_name>.avsc`. ''' with open(schema_path) as fd: schema = json.load(fd) schema_dir, schema_file = path.split(schema_path) return _load_schema(schema, schema_dir)
Display text in tooltip window
def showtip(self, text): "Display text in tooltip window" self.text = text if self.tipwindow or not self.text: return x, y, cx, cy = self.widget.bbox("insert") x = x + self.widget.winfo_rootx() + 27 y = y + cy + self.widget.winfo_rooty() +27 self.tipwindow = tw = tk.Toplevel(self.widget) tw.wm_overrideredirect(1) tw.wm_geometry("+%d+%d" % (x, y)) try: # For Mac OS tw.tk.call("::tk::unsupported::MacWindowStyle", "style", tw._w, "help", "noActivates") except tk.TclError: pass label = tk.Label(tw, text=self.text, justify=tk.LEFT, background="#ffffe0", foreground="black", relief=tk.SOLID, borderwidth=1, font=("tahoma", "8", "normal")) label.pack(ipadx=1)
Execute the main loop.
def run(self): """Execute the main loop.""" self.toplevel.protocol("WM_DELETE_WINDOW", self.__on_window_close) self.toplevel.mainloop()
Hide and show scrollbar as needed. Code from Joe English (JE) at http://wiki.tcl.tk/950
def _autoscroll(sbar, first, last): """Hide and show scrollbar as needed. Code from Joe English (JE) at http://wiki.tcl.tk/950""" first, last = float(first), float(last) if first <= 0 and last >= 1: sbar.grid_remove() else: sbar.grid() sbar.set(first, last)
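A small wiring sketch for the helper above: the widget reports its view fractions to _autoscroll, which hides the scrollbar via grid_remove whenever the whole content already fits.

import tkinter as tk
from tkinter import ttk

root = tk.Tk()
text = tk.Text(root)
sbar = ttk.Scrollbar(root, orient='vertical', command=text.yview)
# yscrollcommand delivers the (first, last) view fractions on every change.
text.configure(yscrollcommand=lambda first, last: _autoscroll(sbar, first, last))
text.grid(row=0, column=0, sticky='nsew')
sbar.grid(row=0, column=1, sticky='ns')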
Create a regular polygon
def create_regpoly(self, x0, y0, x1, y1, sides=0, start=90, extent=360, **kw): """Create a regular polygon""" coords = self.__regpoly_coords(x0, y0, x1, y1, sides, start, extent) return self.canvas.create_polygon(*coords, **kw)
Create the coordinates of the regular polygon specified
def __regpoly_coords(self, x0, y0, x1, y1, sides, start, extent): """Create the coordinates of the regular polygon specified""" coords = [] if extent == 0: return coords xm = (x0 + x1) / 2. ym = (y0 + y1) / 2. rx = xm - x0 ry = ym - y0 n = sides if n == 0: # 0 sides => circle n = round((rx + ry) * .5) if n < 2: n = 4 # Extent can be negative dirv = 1 if extent > 0 else -1 if abs(extent) > 360: extent = dirv * abs(extent) % 360 step = dirv * 360 / n numsteps = 1 + extent / float(step) numsteps_int = int(numsteps) i = 0 while i < numsteps_int: rad = (start - i * step) * DEG2RAD x = rx * math.cos(rad) y = ry * math.sin(rad) coords.append((xm+x, ym-y)) i += 1 # Figure out where the last segment should end if numsteps != numsteps_int: # Vector V1 is the last drawn vertex (x,y) from above # Vector V2 is the edge of the polygon rad2 = (start - numsteps_int * step) * DEG2RAD x2 = rx * math.cos(rad2) - x y2 = ry * math.sin(rad2) - y # Vector V3 is the unit vector in the direction we end at rad3 = (start - extent) * DEG2RAD x3 = math.cos(rad3) y3 = math.sin(rad3) # Find where V3 crosses V1+V2 => find j s.t. V1 + kV2 = jV3 j = (x*y2 - x2*y) / (x3*y2 - x2*y3) coords.append((xm + j * x3, ym - j * y3)) return coords
Deletes unused grid row/cols
def remove_unused_grid_rc(self): """Deletes unused grid row/cols""" if 'columns' in self['layout']: ckeys = tuple(self['layout']['columns'].keys()) for key in ckeys: value = int(key) if value > self.max_col: del self['layout']['columns'][key] if 'rows' in self['layout']: rkeys = tuple(self['layout']['rows'].keys()) for key in rkeys: value = int(key) if value > self.max_row: del self['layout']['rows'][key]
Return tk image corresponding to name which is taken from path.
def get_image(self, path): """Return tk image corresponding to name which is taken from path.""" image = '' name = os.path.basename(path) if not StockImage.is_registered(name): ipath = self.__find_image(path) if ipath is not None: StockImage.register(name, ipath) else: msg = "Image '{0}' not found in resource paths.".format(name) logger.warning(msg) try: image = StockImage.get(name) except StockImageException: # TODO: notify something here. pass return image
Helper method to avoid calling get_variable for every variable.
def import_variables(self, container, varnames=None): """Helper method to avoid calling get_variable for every variable.""" if varnames is None: for keyword in self.tkvariables: setattr(container, keyword, self.tkvariables[keyword]) else: for keyword in varnames: if keyword in self.tkvariables: setattr(container, keyword, self.tkvariables[keyword])
Create a tk variable. If the variable was created previously return that instance.
def create_variable(self, varname, vtype=None): """Create a tk variable. If the variable was created previously return that instance. """ var_types = ('string', 'int', 'boolean', 'double') vname = varname var = None type_from_name = 'string' # default type if ':' in varname: type_from_name, vname = varname.split(':') # Fix incorrect order bug #33 if type_from_name not in (var_types): # Swap order type_from_name, vname = vname, type_from_name if type_from_name not in (var_types): raise Exception('Undefined variable type in "{0}"'.format(varname)) if vname in self.tkvariables: var = self.tkvariables[vname] else: if vtype is None: # get type from name if type_from_name == 'int': var = tkinter.IntVar() elif type_from_name == 'boolean': var = tkinter.BooleanVar() elif type_from_name == 'double': var = tkinter.DoubleVar() else: var = tkinter.StringVar() else: var = vtype() self.tkvariables[vname] = var return var
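A hedged sketch of the 'type:name' convention handled above, assuming builder is an instance of the class this method belongs to (a pygubu-style Builder):

# 'int:counter' creates a tkinter.IntVar registered under the key 'counter';
# later lookups by plain name return the same instance.
var_a = builder.create_variable('int:counter')
var_b = builder.create_variable('counter')
assert var_a is var_b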
Load ui definition from file.
def add_from_file(self, fpath): """Load ui definition from file.""" if self.tree is None: base, name = os.path.split(fpath) self.add_resource_path(base) self.tree = tree = ET.parse(fpath) self.root = tree.getroot() self.objects = {} else: # TODO: append to current tree pass
Load ui definition from string.
def add_from_string(self, strdata): """Load ui definition from string.""" if self.tree is None: self.tree = tree = ET.ElementTree(ET.fromstring(strdata)) self.root = tree.getroot() self.objects = {} else: # TODO: append to current tree pass
Load ui definition from xml.etree.Element node.
def add_from_xmlnode(self, element): """Load ui definition from xml.etree.Element node.""" if self.tree is None: root = ET.Element('interface') root.append(element) self.tree = tree = ET.ElementTree(root) self.root = tree.getroot() self.objects = {} # ET.dump(tree) else: # TODO: append to current tree pass
Find and create the widget named name. Use master as parent. If widget was already created, return that instance.
def get_object(self, name, master=None): """Find and create the widget named name. Use master as parent. If widget was already created, return that instance.""" widget = None if name in self.objects: widget = self.objects[name].widget else: xpath = ".//object[@id='{0}']".format(name) node = self.tree.find(xpath) if node is not None: root = BuilderObject(self, dict()) root.widget = master bobject = self._realize(root, node) widget = bobject.widget if widget is None: msg = 'Widget "{0}" not defined.'.format(name) raise Exception(msg) return widget
Builds a widget from xml element using master as parent.
def _realize(self, master, element): """Builds a widget from xml element using master as parent.""" data = data_xmlnode_to_dict(element, self.translator) cname = data['class'] uniqueid = data['id'] if cname not in CLASS_MAP: self._import_class(cname) if cname in CLASS_MAP: self._pre_process_data(data) parent = CLASS_MAP[cname].builder.factory(self, data) widget = parent.realize(master) self.objects[uniqueid] = parent xpath = "./child" children = element.findall(xpath) for child in children: child_xml = child.find('./object') child = self._realize(parent, child_xml) parent.add_child(child) parent.configure() parent.layout() return parent else: raise Exception('Class "{0}" not mapped'.format(cname))
Connect callbacks specified in callbacks_bag with callbacks defined in the ui definition. Return a list with the names of the callbacks that could not be connected, or None if all were connected.
def connect_callbacks(self, callbacks_bag): """Connect callbacks specified in callbacks_bag with callbacks defined in the ui definition. Return a list with the names of the callbacks that could not be connected, or None if all were connected. """ notconnected = [] for wname, builderobj in self.objects.items(): missing = builderobj.connect_commands(callbacks_bag) if missing is not None: notconnected.extend(missing) missing = builderobj.connect_bindings(callbacks_bag) if missing is not None: notconnected.extend(missing) if notconnected: notconnected = list(set(notconnected)) msg = 'Missing callbacks for commands: {}'.format(notconnected) logger.warning(msg) return notconnected else: return None
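An end-to-end usage sketch of the builder flow these methods belong to (pygubu-style); the file name, widget id, and callback name below are hypothetical:

import tkinter as tk
import pygubu

root = tk.Tk()
builder = pygubu.Builder()
builder.add_from_file('app.ui')                      # hypothetical ui definition file
mainwindow = builder.get_object('mainwindow', root)  # hypothetical widget id
builder.connect_callbacks({'on_button_click': lambda: print('clicked')})
root.mainloop()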
Start the selection process.
def _start_selecting(self, event): """Start the selection process.""" self._selecting = True canvas = self._canvas x = canvas.canvasx(event.x) y = canvas.canvasy(event.y) self._sstart = (x, y) if not self._sobject: self._sobject = canvas.create_rectangle( self._sstart[0], self._sstart[1], x, y, dash=(3,5), outline='#0000ff' ) canvas.itemconfigure(self._sobject, state=tk.NORMAL)
Continue the selection process. Create or resize the selection box according to the mouse position.
def _keep_selecting(self, event): """Continue the selection process. Create or resize the selection box according to the mouse position.""" canvas = self._canvas x = canvas.canvasx(event.x) y = canvas.canvasy(event.y) canvas.coords(self._sobject, self._sstart[0], self._sstart[1], x, y)