<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _unused_label(self, label): """Generate an unused label."""
original = label
existing = self.labels
i = 2
while label in existing:
    label = '{}_{}'.format(original, i)
    i += 1
return label
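The renaming loop above can be exercised on its own; here is a minimal standalone sketch where the table's labels are replaced by a plain set (a hypothetical stand-in for `self.labels`):

```python
def unused_label(label, existing):
    """Append _2, _3, ... to `label` until it no longer collides."""
    original = label
    i = 2
    while label in existing:
        label = '{}_{}'.format(original, i)
        i += 1
    return label

# 'count' and 'count_2' are taken, so the next free label is 'count_3'
fresh = unused_label('count', {'count', 'count_2'})
```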
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_column(self, column_or_label): """Convert label to column and check column length."""
c = column_or_label
if isinstance(c, collections.abc.Hashable) and c in self.labels:
    return self[c]
elif isinstance(c, numbers.Integral):
    return self[c]
elif isinstance(c, str):
    raise ValueError('label "{}" not in labels {}'.format(c, self.labels))
else:
    assert len(c) == self.num_rows, 'column length mismatch'
    return c
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def percentile(self, p): """Return a new table with one row containing the pth percentile for each column. Assumes that each column only contains one type of value. Returns a new table with one row and the same column labels. The row contains the pth percentile of the original column, where the pth percentile of a column is the smallest value that is at least as large as p% of the numbers in the column. count | points 9 | 1 3 | 2 3 | 2 1 | 10 count | points 9 | 10 """
percentiles = [[_util.percentile(p, column)] for column in self.columns]
return self._with_columns(percentiles)
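One common reading of that percentile definition can be sketched without the table machinery; note this is an illustrative reimplementation, not necessarily the exact `_util.percentile` code:

```python
import math

def pth_percentile(p, values):
    """Smallest value that is at least as large as p% of the values."""
    ordered = sorted(values)
    # ceil ensures at least p% of the values fall at or below the result
    index = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[index]

median = pth_percentile(50, [1, 3, 5, 9])   # 3: half the values are <= 3
```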
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def sample(self, k=None, with_replacement=True, weights=None): """Return a new table where k rows are randomly sampled from the original table. Args: ``k`` -- specifies the number of rows (``int``) to be sampled from the table. Default is k equal to number of rows in the table. ``with_replacement`` -- (``bool``) By default True; Samples ``k`` rows with replacement from table, else samples ``k`` rows without replacement. ``weights`` -- Array specifying probability the ith row of the table is sampled. Defaults to None, which samples each row with equal probability. ``weights`` must be a valid probability distribution -- i.e. an array the length of the number of rows, summing to 1. Raises: ValueError -- if ``weights`` is not length equal to number of rows in the table; or, if ``weights`` does not sum to 1. Returns: A new instance of ``Table`` with ``k`` rows resampled. job | wage a | 10 b | 20 c | 15 d | 8 job | wage b | 20 b | 20 a | 10 d | 8 job | wage d | 8 b | 20 c | 15 a | 10 job | wage b | 20 c | 15 job | wage a | 10 a | 10 Traceback (most recent call last): ValueError: probabilities do not sum to 1 # Weights must be length of table. Traceback (most recent call last): ValueError: a and p must have same size """
n = self.num_rows
if k is None:
    k = n
index = np.random.choice(n, k, replace=with_replacement, p=weights)
columns = [[c[i] for i in index] for c in self.columns]
sample = self._with_columns(columns)
return sample
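The heart of `sample` is a single `np.random.choice` call over row indices; a standalone sketch with illustrative data:

```python
import numpy as np

wage = np.array([10, 20, 15, 8])            # one column of a 4-row table
weights = np.array([0.1, 0.5, 0.3, 0.1])    # must sum to 1
n, k = len(wage), 3
# sample k row indices, then gather the column values at those indices
index = np.random.choice(n, k, replace=True, p=weights)
resampled = wage[index]                      # same rows may repeat
```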
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def split(self, k): """Return a tuple of two tables where the first table contains ``k`` rows randomly sampled and the second contains the remaining rows. Args: ``k`` (int): The number of rows randomly sampled into the first table. ``k`` must be between 1 and ``num_rows - 1``. Raises: ``ValueError``: ``k`` is not between 1 and ``num_rows - 1``. Returns: A tuple containing two instances of ``Table``. job | wage a | 10 b | 20 c | 15 d | 8 job | wage c | 15 a | 10 b | 20 job | wage d | 8 """
if not 1 <= k <= self.num_rows - 1:
    raise ValueError("Invalid value of k. k must be between 1 and the "
                     "number of rows - 1")
rows = np.random.permutation(self.num_rows)
first = self.take(rows[:k])
rest = self.take(rows[k:])
for column_label in self._formats:
    first._formats[column_label] = self._formats[column_label]
    rest._formats[column_label] = self._formats[column_label]
return first, rest
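`split` relies on a single permutation of the row indices, so the two halves are disjoint by construction; in isolation:

```python
import numpy as np

n, k = 5, 3
rows = np.random.permutation(n)          # a shuffle of [0, 1, ..., n-1]
first_idx, rest_idx = rows[:k], rows[k:]
# together the two index sets cover every row exactly once
combined = sorted(np.concatenate([first_idx, rest_idx]))
```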
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def with_row(self, row): """Return a table with an additional row. Args: ``row`` (sequence): A value for each column. Raises: ``ValueError``: If the row length differs from the column count. letter | count | points c | 2 | 3 d | 4 | 2 """
self = self.copy()
self.append(row)
return self
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def with_rows(self, rows): """Return a table with additional rows. Args: ``rows`` (sequence of sequences): Each row has a value per column. If ``rows`` is a 2-d array, its shape must be (_, n) for n columns. Raises: ``ValueError``: If a row length differs from the column count. letter | count | points c | 2 | 3 d | 4 | 2 """
self = self.copy()
self.append(self._with_columns(zip(*rows)))
return self
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def with_column(self, label, values, *rest): """Return a new table with an additional or replaced column. Args: ``label`` (str): The column label. If an existing label is used, the existing column will be replaced in the new table. ``values`` (single value or sequence): If a single value, every value in the new column is ``values``. If sequence of values, new column takes on values in ``values``. ``rest``: An alternating list of labels and values describing additional columns. See with_columns for a full description. Raises: ``ValueError``: If - ``label`` is not a valid column name - if ``label`` is not of type (str) - ``values`` is a list/array that does not have the same length as the number of rows in the table. Returns: copy of original table with new or replaced column letter | count c | 2 d | 4 letter | count | permutes c | 2 | a d | 4 | g letter | count c | 2 d | 4 letter | count c | 1 d | 1 Traceback (most recent call last): ValueError: The column label must be a string, but a int was given Traceback (most recent call last): ValueError: Column length mismatch. New column does not have the same number of rows as table. """
# Ensure that if with_column is called instead of with_columns,
# no error is raised.
if rest:
    return self.with_columns(label, values, *rest)
new_table = self.copy()
new_table.append_column(label, values)
return new_table
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def with_columns(self, *labels_and_values): """Return a table with additional or replaced columns. Args: ``labels_and_values``: An alternating list of labels and values or a list of label-value pairs. If one of the labels is in existing table, then every value in the corresponding column is set to that value. If label has only a single value (``int``), every row of corresponding column takes on that value. Raises: ``ValueError``: If - any label in ``labels_and_values`` is not a valid column name, i.e if label is not of type (str). - if any value in ``labels_and_values`` is a list/array and does not have the same length as the number of rows in the table. ``AssertionError``: - 'incorrect columns format', if passed more than one sequence (iterables) for ``labels_and_values``. - 'even length sequence required' if missing a pair in label-value pairs. Returns: Copy of original table with new or replaced columns. Columns added in order of labels. Equivalent to ``with_column(label, value)`` when passed only one label-value pair. player_id | wOBA 110,234 | 0.354 110,235 | 0.236 player_id | wOBA | salaries | season 110,234 | 0.354 | N/A | 2,016 110,235 | 0.236 | N/A | 2,016 player_id | wOBA | salaries | season | years 110,234 | 0.354 | $500,000 | 2,016 | 6 110,235 | 0.236 | $15,500,000 | 2,016 | 1 Traceback (most recent call last): ValueError: The column label must be a string, but a int was given Traceback (most recent call last): ValueError: Column length mismatch. New column does not have the same number of rows as table. """
if len(labels_and_values) == 1:
    labels_and_values = labels_and_values[0]
if isinstance(labels_and_values, collections.abc.Mapping):
    labels_and_values = list(labels_and_values.items())
if not isinstance(labels_and_values, collections.abc.Sequence):
    labels_and_values = list(labels_and_values)
if not labels_and_values:
    return self
first = labels_and_values[0]
if not isinstance(first, str) and hasattr(first, '__iter__'):
    for pair in labels_and_values:
        assert len(pair) == 2, 'incorrect columns format'
    labels_and_values = [x for pair in labels_and_values for x in pair]
assert len(labels_and_values) % 2 == 0, 'Even length sequence required'
for i in range(0, len(labels_and_values), 2):
    label, values = labels_and_values[i], labels_and_values[i+1]
    self = self.with_column(label, values)
return self
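The pair-flattening step in the middle of `with_columns` is easy to miss; in isolation, with hypothetical labels:

```python
pairs = [('salaries', 'N/A'), ('season', 2016)]
first = pairs[0]
# a non-string iterable first element signals the pair form
if not isinstance(first, str) and hasattr(first, '__iter__'):
    flat = [x for pair in pairs for x in pair]
# the result alternates label, value, label, value, ...
```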
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def bin(self, *columns, **vargs): """Group values by bin and compute counts per bin by column. By default, bins are chosen to contain all values in all columns. The following named arguments from numpy.histogram can be applied to specialize bin widths: If the original table has n columns, the resulting binned table has n+1 columns, where column 0 contains the lower bound of each bin. Args: ``columns`` (str or int): Labels or indices of columns to be binned. If empty, all columns are binned. ``bins`` (int or sequence of scalars): If bins is an int, it defines the number of equal-width bins in the given range (10, by default). If bins is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin widths. ``range`` ((float, float)): The lower and upper range of the bins. If not provided, range contains all values in the table. Values outside the range are ignored. ``density`` (bool): If False, the result will contain the number of samples in each bin. If True, the result is the value of the probability density function at the bin, normalized such that the integral over the range is 1. Note that the sum of the histogram values will not be equal to 1 unless bins of unity width are chosen; it is not a probability mass function. """
if columns:
    self = self.select(*columns)
if 'normed' in vargs:
    vargs.setdefault('density', vargs.pop('normed'))
density = vargs.get('density', False)
tag = 'density' if density else 'count'
cols = list(self._columns.values())
_, bins = np.histogram(cols, **vargs)
binned = type(self)().with_column('bin', bins)
for label in self.labels:
    counts, _ = np.histogram(self[label], bins=bins, density=density)
    binned[label + ' ' + tag] = np.append(counts, 0)
return binned
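The trailing `np.append(counts, 0)` exists because `np.histogram` returns one more bin edge than it returns counts; a minimal demonstration:

```python
import numpy as np

values = np.array([1, 2, 2, 10])
counts, bins = np.histogram(values, bins=3, range=(0, 12))
# bins has len(counts) + 1 edges, so a 0 is appended to align columns
padded = np.append(counts, 0)
```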
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _use_html_if_available(format_fn): """Use the value's HTML rendering if available, overriding format_fn."""
def format_using_as_html(v, label=False):
    if not label and hasattr(v, 'as_html'):
        return v.as_html()
    else:
        return format_fn(v, label)
return format_using_as_html
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_column_formatters(self, max_rows, as_html): """Return one value formatting function per column. Each function has the signature f(value, label=False) -> str """
formats = {s: self._formats.get(s, self.formatter) for s in self.labels}
cols = self._columns.items()
fmts = [formats[k].format_column(k, v[:max_rows]) for k, v in cols]
if as_html:
    fmts = list(map(type(self)._use_html_if_available, fmts))
return fmts
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def as_text(self, max_rows=0, sep=" | "): """Format table as text."""
if not max_rows or max_rows > self.num_rows:
    max_rows = self.num_rows
omitted = max(0, self.num_rows - max_rows)
labels = self._columns.keys()
fmts = self._get_column_formatters(max_rows, False)
rows = [[fmt(label, label=True) for fmt, label in zip(fmts, labels)]]
for row in itertools.islice(self.rows, max_rows):
    rows.append([f(v, label=False) for v, f in zip(row, fmts)])
lines = [sep.join(row) for row in rows]
if omitted:
    lines.append('... ({} rows omitted)'.format(omitted))
return '\n'.join([line.rstrip() for line in lines])
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def as_html(self, max_rows=0): """Format table as HTML."""
if not max_rows or max_rows > self.num_rows:
    max_rows = self.num_rows
omitted = max(0, self.num_rows - max_rows)
labels = self.labels
lines = [
    (0, '<table border="1" class="dataframe">'),
    (1, '<thead>'),
    (2, '<tr>'),
    (3, ' '.join('<th>' + label + '</th>' for label in labels)),
    (2, '</tr>'),
    (1, '</thead>'),
    (1, '<tbody>'),
]
fmts = self._get_column_formatters(max_rows, True)
for row in itertools.islice(self.rows, max_rows):
    lines += [
        (2, '<tr>'),
        (3, ' '.join('<td>' + fmt(v, label=False) + '</td>'
                     for v, fmt in zip(row, fmts))),
        (2, '</tr>'),
    ]
lines.append((1, '</tbody>'))
lines.append((0, '</table>'))
if omitted:
    lines.append((0, '<p>... ({} rows omitted)</p>'.format(omitted)))
return '\n'.join(4 * indent * ' ' + text for indent, text in lines)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def index_by(self, column_or_label): """Return a dict keyed by values in a column that contains lists of rows corresponding to each value. """
column = self._get_column(column_or_label)
index = {}
for key, row in zip(column, self.rows):
    index.setdefault(key, []).append(row)
return index
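The grouping idiom in `index_by` is `dict.setdefault`; standalone, with illustrative rows:

```python
rows = [('a', 10), ('b', 20), ('a', 15)]
index = {}
for row in rows:
    # append to the list for this key, creating it on first sight
    index.setdefault(row[0], []).append(row)
```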
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def to_array(self): """Convert the table to a structured NumPy array."""
dt = np.dtype(list(zip(self.labels, (c.dtype for c in self.columns))))
arr = np.empty_like(self.columns[0], dt)
for label in self.labels:
    arr[label] = self[label]
return arr
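The structured-dtype construction can be checked without a Table; the labels and columns below are illustrative:

```python
import numpy as np

labels = ('letter', 'count')
columns = [np.array(['c', 'd']), np.array([2, 4])]
# one (name, dtype) pair per column
dt = np.dtype(list(zip(labels, (c.dtype for c in columns))))
arr = np.empty(len(columns[0]), dt)
for label, column in zip(labels, columns):
    arr[label] = column
```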
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot(self, column_for_xticks=None, select=None, overlay=True, width=6, height=4, **vargs): """Plot line charts for the table. Args: column_for_xticks (``str/array``): A column containing x-axis labels Kwargs: overlay (bool): create a chart with one color per data column; if False, each plot will be displayed separately. vargs: Additional arguments that get passed into `plt.plot`. See http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot for additional arguments that can be passed into vargs. Raises: ValueError -- Every selected column must be numerical. Returns: Returns a line plot (connected scatter). Each plot is labeled using the values in `column_for_xticks` and one plot is produced for all other columns in self (or for the columns designated by `select`). days | price | projection 0 | 90.5 | 90.75 1 | 90 | 82 2 | 83 | 82.5 3 | 95.5 | 82.5 4 | 82 | 83 5 | 82 | 82.5 <line graph with days as x-axis and lines for price and projection> <line graph with days as x-axis and line for price> <line graph with days as x-axis and line for projection> <line graph with days as x-axis and line for price> """
options = self.default_options.copy()
options.update(vargs)
if column_for_xticks is not None:
    x_data, y_labels = self._split_column_and_labels(column_for_xticks)
    x_label = self._as_label(column_for_xticks)
else:
    x_data, y_labels = None, self.labels
    x_label = None
if select is not None:
    y_labels = self._as_labels(select)
if x_data is not None:
    self = self.sort(x_data)
    x_data = np.sort(x_data)

def draw(axis, label, color):
    if x_data is None:
        axis.plot(self[label], color=color, **options)
    else:
        axis.plot(x_data, self[label], color=color, **options)

self._visualize(x_label, y_labels, None, overlay, draw, _vertical_x,
                width=width, height=height)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def bar(self, column_for_categories=None, select=None, overlay=True, width=6, height=4, **vargs): """Plot bar charts for the table. Each plot is labeled using the values in `column_for_categories` and one plot is produced for every other column (or for the columns designated by `select`). Every selected column except `column_for_categories` must be numerical. Args: column_for_categories (str): A column containing x-axis categories Kwargs: overlay (bool): create a chart with one color per data column; if False, each will be displayed separately. vargs: Additional arguments that get passed into `plt.bar`. See http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.bar for additional arguments that can be passed into vargs. """
options = self.default_options.copy()
# Matplotlib tries to center the labels, but we already handle that
# TODO consider changing the custom centering code and using
# matplotlib's default
vargs['align'] = 'edge'
options.update(vargs)
xticks, labels = self._split_column_and_labels(column_for_categories)
if select is not None:
    labels = self._as_labels(select)
index = np.arange(self.num_rows)

def draw(axis, label, color):
    axis.bar(index - 0.5, self[label], 1.0, color=color, **options)

def annotate(axis, ticks):
    if ticks is not None:
        tick_labels = [ticks[int(l)] if 0 <= l < len(ticks) else ''
                       for l in axis.get_xticks()]
        axis.set_xticklabels(tick_labels, stretch='ultra-condensed')

self._visualize(column_for_categories, labels, xticks, overlay, draw,
                annotate, width=width, height=height)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def group_bar(self, column_label, **vargs): """Plot a bar chart for the table. The values of the specified column are grouped and counted, and one bar is produced for each group. Note: This differs from ``bar`` in that there is no need to specify bar heights; the height of a category's bar is the number of copies of that category in the given column. This method behaves more like ``hist`` in that regard, while ``bar`` behaves more like ``plot`` or ``scatter`` (which require the height of each point to be specified). Args: ``column_label`` (str or int): The name or index of a column Kwargs: overlay (bool): create a chart with one color per data column; if False, each will be displayed separately. width (float): The width of the plot, in inches height (float): The height of the plot, in inches vargs: Additional arguments that get passed into `plt.bar`. See http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.bar for additional arguments that can be passed into vargs. """
self.group(column_label).bar(column_label, **vargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def barh(self, column_for_categories=None, select=None, overlay=True, width=6, **vargs): """Plot horizontal bar charts for the table. Args: ``column_for_categories`` (``str``): A column containing y-axis categories used to create buckets for bar chart. Kwargs: overlay (bool): create a chart with one color per data column; if False, each will be displayed separately. vargs: Additional arguments that get passed into `plt.barh`. See http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.barh for additional arguments that can be passed into vargs. Raises: ValueError -- Every selected except column for ``column_for_categories`` must be numerical. Returns: Horizontal bar graph with buckets specified by ``column_for_categories``. Each plot is labeled using the values in ``column_for_categories`` and one plot is produced for every other column (or for the columns designated by ``select``). Furniture | Count | Price chairs | 6 | 10 tables | 1 | 20 desks | 2 | 30 <bar graph with furniture as categories and bars for count and price> <bar graph with furniture as categories and bars for price> <bar graph with furniture as categories and bars for count and price> """
options = self.default_options.copy()
# Matplotlib tries to center the labels, but we already handle that
# TODO consider changing the custom centering code and using
# matplotlib's default
vargs['align'] = 'edge'
options.update(vargs)
yticks, labels = self._split_column_and_labels(column_for_categories)
if select is not None:
    labels = self._as_labels(select)
n = len(labels)
index = np.arange(self.num_rows)
margin = 0.1
bwidth = 1 - 2 * margin
if overlay:
    bwidth /= len(labels)
if 'height' in options:
    height = options.pop('height')
else:
    height = max(4, len(index) / 2)

def draw(axis, label, color):
    if overlay:
        ypos = index + margin + (1 - 2 * margin) * (n - 1 - labels.index(label)) / n
    else:
        ypos = index
    # barh plots entries in reverse order from bottom to top
    axis.barh(ypos, self[label][::-1], bwidth, color=color, **options)

ylabel = self._as_label(column_for_categories)

def annotate(axis, ticks):
    axis.set_yticks(index + 0.5)  # Center labels on bars
    # barh plots entries in reverse order from bottom to top
    axis.set_yticklabels(ticks[::-1], stretch='ultra-condensed')
    axis.set_xlabel(axis.get_ylabel())
    axis.set_ylabel(ylabel)

self._visualize('', labels, yticks, overlay, draw, annotate,
                width=width, height=height)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def group_barh(self, column_label, **vargs): """Plot a horizontal bar chart for the table. The values of the specified column are grouped and counted, and one bar is produced for each group. Note: This differs from ``barh`` in that there is no need to specify bar heights; the size of a category's bar is the number of copies of that category in the given column. This method behaves more like ``hist`` in that regard, while ``barh`` behaves more like ``plot`` or ``scatter`` (which require the second coordinate of each point to be specified in another column). Args: ``column_label`` (str or int): The name or index of a column Kwargs: overlay (bool): create a chart with one color per data column; if False, each will be displayed separately. width (float): The width of the plot, in inches height (float): The height of the plot, in inches vargs: Additional arguments that get passed into `plt.bar`. See http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.bar for additional arguments that can be passed into vargs. """
self.group(column_label).barh(column_label, **vargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _visualize(self, x_label, y_labels, ticks, overlay, draw, annotate, width=6, height=4): """Generic visualization that overlays or separates the draw function. Raises: ValueError: The Table contains non-numerical values in columns other than `column_for_categories` """
for label in y_labels:
    if not all(isinstance(x, numbers.Real) for x in self[label]):
        raise ValueError("The column '{0}' contains non-numerical "
                         "values. A plot cannot be drawn for this column."
                         .format(label))
n = len(y_labels)
colors = list(itertools.islice(itertools.cycle(self.chart_colors), n))
if overlay and n > 1:
    _, axis = plt.subplots(figsize=(width, height))
    if x_label is not None:
        axis.set_xlabel(x_label)
    for label, color in zip(y_labels, colors):
        draw(axis, label, color)
    if ticks is not None:
        annotate(axis, ticks)
    axis.legend(y_labels, loc=2, bbox_to_anchor=(1.05, 1))
    type(self).plots.append(axis)
else:
    fig, axes = plt.subplots(n, 1, figsize=(width, height * n))
    if not isinstance(axes, collections.abc.Iterable):
        axes = [axes]
    for axis, y_label, color in zip(axes, y_labels, colors):
        draw(axis, y_label, color)
        axis.set_ylabel(y_label, fontsize=16)
        if x_label is not None:
            axis.set_xlabel(x_label, fontsize=16)
        if ticks is not None:
            annotate(axis, ticks)
        type(self).plots.append(axis)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _split_column_and_labels(self, column_or_label): """Return the specified column and labels of other columns."""
column = None if column_or_label is None else self._get_column(column_or_label)
labels = [label for i, label in enumerate(self.labels)
          if column_or_label not in (i, label)]
return column, labels
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def pivot_hist(self, pivot_column_label, value_column_label, overlay=True, width=6, height=4, **vargs): """Draw histograms of each category in a column."""
warnings.warn("pivot_hist is deprecated; use "
              "hist(value_column_label, group=pivot_column_label), or "
              "with side_by_side=True if you really want side-by-side "
              "bars.")
pvt_labels = np.unique(self[pivot_column_label])
pvt_columns = [self[value_column_label][np.where(self[pivot_column_label] == pivot)]
               for pivot in pvt_labels]
n = len(pvt_labels)
colors = list(itertools.islice(itertools.cycle(self.chart_colors), n))
if overlay:
    plt.figure(figsize=(width, height))
    vals, bins, patches = plt.hist(pvt_columns, color=colors, **vargs)
    plt.legend(pvt_labels)
else:
    _, axes = plt.subplots(n, 1, figsize=(width, height * n))
    vals = []
    bins = None
    for axis, label, column, color in zip(axes, pvt_labels, pvt_columns, colors):
        if isinstance(bins, np.ndarray):
            avals, abins, patches = axis.hist(column, color=color, bins=bins, **vargs)
        else:
            avals, abins, patches = axis.hist(column, color=color, **vargs)
        axis.set_xlabel(label, fontsize=16)
        vals.append(avals)
        if not isinstance(bins, np.ndarray):
            bins = abins
        else:
            assert bins.all() == abins.all(), "Inconsistent bins in hist"
t = type(self)()
t['start'] = bins[0:-1]
t['end'] = bins[1:]
for label, column in zip(pvt_labels, vals):
    t[label] = column
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def hist_of_counts(self, *columns, overlay=True, bins=None, bin_column=None, group=None, side_by_side=False, width=6, height=4, **vargs): """ Plots one count-based histogram for each column in columns. The heights of each bar will represent the counts, and all the bins must be of equal size. If no column is specified, plot all columns. Kwargs: overlay (bool): If True, plots 1 chart with all the histograms overlaid on top of each other (instead of the default behavior of one histogram for each column in the table). Also adds a legend that matches each bar color to its column. Note that if the histograms are not overlaid, they are not forced to the same scale. bins (array or int): Lower bound for each bin in the histogram or number of bins. If None, bins will be chosen automatically. bin_column (column name or index): A column of bin lower bounds. All other columns are treated as counts of these bins. If None, each value in each row is assigned a count of 1. group (column name or index): A column of categories. The rows are grouped by the values in this column, and a separate histogram is generated for each group. The histograms are overlaid or plotted separately depending on the overlay argument. If None, no such grouping is done. side_by_side (bool): Whether histogram bins should be plotted side by side (instead of directly overlaid). Makes sense only when plotting multiple histograms, either by passing several columns or by using the group option. vargs: Additional arguments that get passed into :func:plt.hist. See http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.hist for additional arguments that can be passed into vargs. These include: `range`, `cumulative`, and `orientation`, to name a few.
count | points 9 | 1 3 | 2 3 | 2 1 | 10 <histogram of values in count with counts on y-axis> <histogram of values in points with counts on y-axis> <histogram of values weighted by corresponding counts> <two overlaid histograms of the data [1, 2, 3] and [2, 5]> """
if bin_column is not None and bins is None:
    bins = np.unique(self.column(bin_column))
# TODO ensure counts are integers even when `columns` is empty
for column in columns:
    if not _is_array_integer(self.column(column)):
        raise ValueError('The column {0} contains non-integer values. '
                         'When using hist_of_counts with bin_columns, '
                         'all columns should contain counts.'
                         .format(column))
if vargs.get('normed', False) or vargs.get('density', False):
    raise ValueError("hist_of_counts is for displaying counts only, "
                     "and should not be used with the normed or "
                     "density keyword arguments")
vargs['density'] = False
if bins is not None:
    if len(bins) < 2:
        raise ValueError("bins must have at least two items")
    diffs = np.diff(sorted(bins))
    # Diffs should all be equal (up to floating point error)
    normalized_diff_deviances = np.abs((diffs - diffs[0]) / diffs[0])
    if np.any(normalized_diff_deviances > 1e-11):
        raise ValueError("Bins of unequal size should not be used "
                         "with hist_of_counts. Please use hist() and "
                         "make sure to set normed=True")
return self.hist(*columns, overlay=overlay, bins=bins, bin_column=bin_column,
                 group=group, side_by_side=side_by_side, width=width,
                 height=height, **vargs)
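The equal-width check compares each bin width to the first with a small relative tolerance; extracted as a helper:

```python
import numpy as np

def bins_equal_width(bins, tol=1e-11):
    """True when all bin widths match the first, up to relative tol."""
    diffs = np.diff(sorted(bins))
    return not np.any(np.abs((diffs - diffs[0]) / diffs[0]) > tol)

ok = bins_equal_width([0, 10, 20, 30])   # uniform widths
bad = bins_equal_width([0, 10, 25])      # widths 10 and 15 differ
```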
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def boxplot(self, **vargs): """Plots a boxplot for the table. Every column must be numerical. Kwargs: vargs: Additional arguments that get passed into `plt.boxplot`. See http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.boxplot for additional arguments that can be passed into vargs. These include `vert` and `showmeans`. Returns: None Raises: ValueError: The Table contains columns with non-numerical values. test1 | test2 92.5 | 89 88 | 84 72 | 74 71 | 66 99 | 92 100 | 99 95 | 88 83 | 81 94 | 95 93 | 94 <boxplot of test1 and boxplot of test2 side-by-side on the same figure> """
# Check for non-numerical values and raise a ValueError if any are found
for col in self:
    if any(isinstance(cell, np.flexible) for cell in self[col]):
        raise ValueError("The column '{0}' contains non-numerical "
                         "values. A boxplot cannot be drawn for this table."
                         .format(col))
columns = self._columns.copy()
vargs['labels'] = columns.keys()
values = list(columns.values())
plt.boxplot(values, **vargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def plot_normal_cdf(rbound=None, lbound=None, mean=0, sd=1): """Plots a normal curve with specified parameters and area below curve shaded between ``lbound`` and ``rbound``. Args: ``rbound`` (numeric): right boundary of shaded region ``lbound`` (numeric): left boundary of shaded region; by default is negative infinity ``mean`` (numeric): mean/expectation of normal distribution ``sd`` (numeric): standard deviation of normal distribution """
shade = rbound is not None or lbound is not None shade_left = rbound is not None and lbound is not None inf = 3.5 * sd step = 0.1 rlabel = rbound llabel = lbound if rbound is None: rbound = inf + mean rlabel = "$\infty$" if lbound is None: lbound = -inf + mean llabel = "-$\infty$" pdf_range = np.arange(-inf + mean, inf + mean, step) plt.plot(pdf_range, stats.norm.pdf(pdf_range, loc=mean, scale=sd), color='k', lw=1) cdf_range = np.arange(lbound, rbound + step, step) if shade: plt.fill_between(cdf_range, stats.norm.pdf(cdf_range, loc=mean, scale=sd), color='gold') if shade_left: cdf_range = np.arange(-inf+mean, lbound + step, step) plt.fill_between(cdf_range, stats.norm.pdf(cdf_range, loc=mean, scale=sd), color='darkblue') plt.ylim(0, stats.norm.pdf(0, loc=0, scale=sd) * 1.25) plt.xlabel('z') plt.ylabel('$\phi$(z)', rotation=90) plt.title("Normal Curve ~ ($\mu$ = {0}, $\sigma$ = {1}) " "{2} < z < {3}".format(mean, sd, llabel, rlabel), fontsize=16) plt.show()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def proportions_from_distribution(table, label, sample_size, column_name='Random Sample'): """ Adds a column named ``column_name`` containing the proportions of a random draw using the distribution in ``label``. This method uses ``np.random.multinomial`` to draw ``sample_size`` samples from the distribution in ``table.column(label)``, then divides by ``sample_size`` to create the resulting column of proportions. Args: ``table``: An instance of ``Table``. ``label``: Label of column in ``table``. This column must contain a distribution (the values must sum to 1). ``sample_size``: The size of the sample to draw from the distribution. ``column_name``: The name of the new column that contains the sampled proportions. Defaults to ``'Random Sample'``. Returns: A copy of ``table`` with a column ``column_name`` containing the sampled proportions. The proportions will sum to 1. Throws: ``ValueError``: If the ``label`` is not in the table, or if ``table.column(label)`` does not sum to 1. """
proportions = sample_proportions(sample_size, table.column(label))
return table.with_column(column_name, proportions)
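The underlying sampling step described in the docstring can be sketched without the `Table` class. This assumes `sample_proportions` is a thin wrapper over `np.random.multinomial` (as the docstring states):

```python
import numpy as np

def sample_proportions(sample_size, probabilities):
    # Draw integer counts from a multinomial distribution,
    # then divide by the sample size to get proportions.
    counts = np.random.multinomial(sample_size, probabilities)
    return counts / sample_size

props = sample_proportions(100, [0.25, 0.25, 0.5])
# props is an array of 3 proportions that sum to 1
```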
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def table_apply(table, func, subset=None): """Applies a function to each column and returns a Table. Uses pandas `apply` under the hood, then converts back to a Table Args: table : instance of Table The table to apply your function to func : function Any function that will work with DataFrame.apply subset : list | None A list of columns to apply the function to. If None, function will be applied to all columns in table Returns ------- tab : instance of Table A table with the given function applied. It will either be the shape == shape(table), or shape (1, table.shape[1]) """
from . import Table df = table.to_df() if subset is not None: # Iterate through columns subset = np.atleast_1d(subset) if any([i not in df.columns for i in subset]): err = np.where([i not in df.columns for i in subset])[0] err = "Column mismatch: {0}".format( [subset[i] for i in err]) raise ValueError(err) for col in subset: df[col] = df[col].apply(func) else: df = df.apply(func) if isinstance(df, pd.Series): # Reshape it so that we can easily convert back df = pd.DataFrame(df).T tab = Table.from_df(df) return tab
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def minimize(f, start=None, smooth=False, log=None, array=False, **vargs): """Minimize a function f of one or more arguments. Args: f: A function that takes numbers and returns a number start: A starting value or list of starting values smooth: Whether to assume that f is smooth and use first-order info log: Logging function called on the result of optimization (e.g. print) vargs: Other named arguments passed to scipy.optimize.minimize Returns either: (a) the minimizing argument of a one-argument function (b) an array of minimizing arguments of a multi-argument function """
if start is None: assert not array, "Please pass starting values explicitly when array=True" arg_count = f.__code__.co_argcount assert arg_count > 0, "Please pass starting values explicitly for variadic functions" start = [0] * arg_count if not hasattr(start, '__len__'): start = [start] if array: objective = f else: @functools.wraps(f) def objective(args): return f(*args) if not smooth and 'method' not in vargs: vargs['method'] = 'Powell' result = optimize.minimize(objective, start, **vargs) if log is not None: log(result) if len(start) == 1: return result.x.item(0) else: return result.x
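The core trick in `minimize` is wrapping a multi-argument function so the optimizer can call it with a single sequence of arguments. A minimal standalone sketch of that wrapping pattern (the name `make_objective` is illustrative, not part of the source):

```python
import functools

def make_objective(f):
    # Adapt f(x, y, ...) into objective(args) for an optimizer
    # that passes all arguments as one sequence.
    @functools.wraps(f)
    def objective(args):
        return f(*args)
    return objective

def loss(x, y):
    return (x - 1) ** 2 + (y + 2) ** 2

objective = make_objective(loss)
result = objective([1, -2])  # 0 at the minimum
```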
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
    def _lat_lons_from_geojson(s):
        """Return a list of latitude-longitude pairs from nested GeoJSON coordinates. GeoJSON coordinates are always stored in (longitude, latitude) order. """
if len(s) >= 2 and isinstance(s[0], _number) and isinstance(s[1], _number):
    lat, lon = s[1], s[0]
    return [(lat, lon)]
else:
    return [lat_lon for sub in s for lat_lon in _lat_lons_from_geojson(sub)]
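A standalone sketch of this recursive flattening, using `numbers.Number` in place of the module-private `_number` alias:

```python
import numbers

def lat_lons_from_geojson(s):
    # GeoJSON stores positions as (longitude, latitude); this flattens
    # arbitrarily nested coordinate arrays into (lat, lon) pairs.
    if len(s) >= 2 and isinstance(s[0], numbers.Number) \
            and isinstance(s[1], numbers.Number):
        return [(s[1], s[0])]
    return [ll for sub in s for ll in lat_lons_from_geojson(sub)]

ring = [[[0, 1], [2, 3], [0, 1]]]  # one polygon ring in (lon, lat) order
pairs = lat_lons_from_geojson(ring)  # [(1, 0), (3, 2), (1, 0)]
```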
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def as_html(self): """Generate HTML to display map."""
if not self._folium_map: self.draw() return self._inline_map(self._folium_map, self._width, self._height)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def show(self): """Publish HTML."""
IPython.display.display(IPython.display.HTML(self.as_html()))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def copy(self): """ Copies the current Map into a new one and returns it. """
m = Map(features=self._features, width=self._width, height=self._height, **self._attrs) m._folium_map = self._folium_map return m
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _autozoom(self): """Calculate zoom and location."""
bounds = self._autobounds() attrs = {} midpoint = lambda a, b: (a + b)/2 attrs['location'] = ( midpoint(bounds['min_lat'], bounds['max_lat']), midpoint(bounds['min_lon'], bounds['max_lon']) ) # self._folium_map.fit_bounds( # [bounds['min_long'], bounds['min_lat']], # [bounds['max_long'], bounds['max_lat']] # ) # remove the following with new Folium release # rough approximation, assuming max_zoom is 18 import math try: lat_diff = bounds['max_lat'] - bounds['min_lat'] lon_diff = bounds['max_lon'] - bounds['min_lon'] area, max_area = lat_diff*lon_diff, 180*360 if area: factor = 1 + max(0, 1 - self._width/1000)/2 + max(0, 1-area**0.5)/2 zoom = math.log(area/max_area)/-factor else: zoom = self._default_zoom zoom = max(1, min(18, round(zoom))) attrs['zoom_start'] = zoom except ValueError as e: raise Exception('Check that your locations are lat-lon pairs', e) return attrs
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _autobounds(self): """Simple calculation for bounds."""
bounds = {} def check(prop, compare, extreme, val): opp = min if compare is max else max bounds.setdefault(prop, val) bounds[prop] = opp(compare(bounds[prop], val), extreme) def bound_check(lat_lon): lat, lon = lat_lon check('max_lat', max, 90, lat) check('min_lat', min, -90, lat) check('max_lon', max, 180, lon) check('min_lon', min, -180, lon) lat_lons = [lat_lon for feature in self._features.values() for lat_lon in feature.lat_lons] if not lat_lons: lat_lons.append(self._default_lat_lon) for lat_lon in lat_lons: bound_check(lat_lon) return bounds
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def geojson(self): """Render features as a FeatureCollection."""
return { "type": "FeatureCollection", "features": [f.geojson(i) for i, f in self._features.items()] }
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def color(self, values, ids=(), key_on='feature.id', palette='YlOrBr', **kwargs): """Color map features by binning values. values -- a sequence of values or a table of keys and values ids -- an ID for each value; if none are provided, indices are used key_on -- attribute of each feature to match to ids palette -- one of the following color brewer palettes: 'BuGn', 'BuPu', 'GnBu', 'OrRd', 'PuBu', 'PuBuGn', 'PuRd', 'RdPu', 'YlGn', 'YlGnBu', 'YlOrBr', and 'YlOrRd'. Defaults from Folium: threshold_scale: list, default None Data range for D3 threshold scale. Defaults to the following range of quantiles: [0, 0.5, 0.75, 0.85, 0.9], rounded to the nearest order-of-magnitude integer. Ex: 270 rounds to 200, 5600 to 6000. fill_opacity: float, default 0.6 Area fill opacity, range 0-1. line_color: string, default 'black' GeoJSON geopath line color. line_weight: int, default 1 GeoJSON geopath line weight. line_opacity: float, default 1 GeoJSON geopath line opacity, range 0-1. legend_name: string, default None Title for data legend. If not passed, defaults to columns[1]. """
# Set values and ids to both be simple sequences by inspecting values id_name, value_name = 'IDs', 'values' if isinstance(values, collections.abc.Mapping): assert not ids, 'IDs and a map cannot both be used together' if hasattr(values, 'columns') and len(values.columns) == 2: table = values ids, values = table.columns id_name, value_name = table.labels else: dictionary = values ids, values = list(dictionary.keys()), list(dictionary.values()) if len(ids) != len(values): assert len(ids) == 0 # Use indices as IDs ids = list(range(len(values))) m = self._create_map() data = pandas.DataFrame({id_name: ids, value_name: values}) attrs = { 'geo_str': json.dumps(self.geojson()), 'data': data, 'columns': [id_name, value_name], 'key_on': key_on, 'fill_color': palette, } kwargs.update(attrs) m.geo_json(**kwargs) colored = self.format() colored._folium_map = m return colored
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def overlay(self, feature, color='Blue', opacity=0.6): """ Overlays ``feature`` on the map. Returns a new Map. Args: ``feature``: a ``Table`` of map features, a list of map features, a Map, a Region, or a circle marker map table. The features will be overlayed on the Map with specified ``color``. ``color`` (``str``): Color of feature. Defaults to 'Blue' ``opacity`` (``float``): Opacity of overlain feature. Defaults to 0.6. Returns: A new ``Map`` with the overlain ``feature``. """
result = self.copy() if type(feature) == Table: # if table of features e.g. Table.from_records(taz_map.features) if 'feature' in feature: feature = feature['feature'] # if marker table e.g. table with columns: latitudes,longitudes,popup,color,radius else: feature = Circle.map_table(feature) if type(feature) in [list, np.ndarray]: for f in feature: f._attrs['fill_color'] = color f._attrs['fill_opacity'] = opacity f.draw_on(result._folium_map) elif type(feature) == Map: for i in range(len(feature._features)): f = feature._features[i] f._attrs['fill_color'] = color f._attrs['fill_opacity'] = opacity f.draw_on(result._folium_map) elif type(feature) == Region: feature._attrs['fill_color'] = color feature._attrs['fill_opacity'] = opacity feature.draw_on(result._folium_map) return result
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read_geojson(cls, path_or_json_or_string): """Read a geoJSON string, object, or file. Return a dict of features keyed by ID."""
assert path_or_json_or_string
data = None
if isinstance(path_or_json_or_string, (dict, list)):
    data = path_or_json_or_string
elif isinstance(path_or_json_or_string, str):
    try:
        data = json.loads(path_or_json_or_string)
    except ValueError:
        pass
    if data is None:
        try:
            path = path_or_json_or_string
            if path.endswith('.gz') or path.endswith('.gzip'):
                import gzip
                contents = gzip.open(path, 'r').read().decode('utf-8')
            else:
                contents = open(path, 'r').read()
            data = json.loads(contents)
        except FileNotFoundError:
            pass
# TODO web address
assert data, 'MapData accepts a valid geoJSON object, geoJSON string, or path to a geoJSON file'
return cls(cls._read_geojson_features(data))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _read_geojson_features(data, features=None, prefix=""): """Return a dict of features keyed by ID."""
if features is None: features = collections.OrderedDict() for i, feature in enumerate(data['features']): key = feature.get('id', prefix + str(i)) feature_type = feature['geometry']['type'] if feature_type == 'FeatureCollection': _read_geojson_features(feature, features, prefix + '.' + key) elif feature_type == 'Point': value = Circle._convert_point(feature) elif feature_type in ['Polygon', 'MultiPolygon']: value = Region(feature) else: # TODO Support all http://geojson.org/geojson-spec.html#geometry-objects value = None features[key] = value return features
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def draw_on(self, folium_map): """Add feature to Folium map object."""
f = getattr(folium_map, self._map_method_name) f(**self._folium_kwargs)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _set_folium_map(self): """A map containing only the feature."""
m = Map(features=[self], width=self._width, height=self._height) self._folium_map = m.draw()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def geojson(self, feature_id): """GeoJSON representation of the marker as a point."""
lat, lon = self.lat_lon return { 'type': 'Feature', 'id': feature_id, 'geometry': { 'type': 'Point', 'coordinates': (lon, lat), }, }
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _convert_point(cls, feature): """Convert a GeoJSON point to a Marker."""
lon, lat = feature['geometry']['coordinates']
popup = feature['properties'].get('name', '')
return cls(lat, lon, popup)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def map(cls, latitudes, longitudes, labels=None, colors=None, areas=None, **kwargs): """Return markers from columns of coordinates, labels, & colors. The areas column is not applicable to markers, but sets circle areas. """
assert len(latitudes) == len(longitudes) assert areas is None or hasattr(cls, '_has_radius'), "A " + cls.__name__ + " has no radius" inputs = [latitudes, longitudes] if labels is not None: assert len(labels) == len(latitudes) inputs.append(labels) else: inputs.append(("",) * len(latitudes)) if colors is not None: assert len(colors) == len(latitudes) inputs.append(colors) if areas is not None: assert len(areas) == len(latitudes) inputs.append(np.array(areas) ** 0.5 / math.pi) ms = [cls(*args, **kwargs) for args in zip(*inputs)] return Map(ms)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def polygons(self): """Return a list of polygons describing the region. - Each polygon is a list of linear rings, where the first describes the exterior and the rest describe interior holes. - Each linear ring is a list of positions where the last is a repeat of the first. - Each position is a (lat, lon) pair. """
if self.type == 'Polygon': polygons = [self._geojson['geometry']['coordinates']] elif self.type == 'MultiPolygon': polygons = self._geojson['geometry']['coordinates'] return [ [ [_lat_lons_from_geojson(s) for s in ring ] for ring in polygon] for polygon in polygons]
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def geojson(self, feature_id): """Return GeoJSON with ID substituted."""
if self._geojson.get('id', feature_id) == feature_id: return self._geojson else: geo = self._geojson.copy() geo['id'] = feature_id return geo
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def between(y, z): """Greater than or equal to y and less than z."""
return _combinable(lambda x: (y <= x < z) or _equal_or_float_equal(x, y))
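The half-open interval semantics (`y` inclusive, `z` exclusive) can be shown in a standalone sketch without the `_combinable` wrapper or the float-tolerance helper:

```python
def between(y, z):
    # True for values in the half-open interval [y, z).
    return lambda x: y <= x < z

in_range = between(2, 5)
results = [in_range(v) for v in (1, 2, 4, 5)]  # [False, True, True, False]
```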
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def between_or_equal_to(y, z): """Greater than or equal to y and less than or equal to z."""
return _combinable(lambda x: (y <= x <= z) or _equal_or_float_equal(x, y) or _equal_or_float_equal(x, z))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def format_column(self, label, column): """Return a formatting function that pads & truncates values."""
if len(column) == 0: val_width = 0 else: val_width = max(len(self.format_value(v)) for v in column) val_width = min(val_width, self.max_width) width = max(val_width, len(str(label)), self.min_width, len(self.etc)) def pad(value, label=False): if label: raw = value else: raw = self.format_value(value) if len(raw) > width: prefix = raw[:width-len(self.etc)] + self.etc else: prefix = raw return prefix.ljust(width) return pad
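The pad/truncate behavior can be isolated in a small sketch (the factory name `make_pad` and the fixed width are illustrative assumptions):

```python
def make_pad(width, etc='...'):
    # Pad short values to a fixed width; truncate long ones
    # and append an ellipsis marker so the width is preserved.
    def pad(raw):
        if len(raw) > width:
            raw = raw[:width - len(etc)] + etc
        return raw.ljust(width)
    return pad

pad = make_pad(8)
short = pad('hi')                 # 'hi      '
long_ = pad('a-very-long-value')  # 'a-ver...'
```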
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def format_value(value): """Pretty-print an arbitrary value."""
if isinstance(value, (bool, np.bool_)): return str(value) elif isinstance(value, (int, np.integer)): return '{:n}'.format(value) elif isinstance(value, (float, np.floating)): return '{:g}'.format(value) else: return str(value)
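Note the ordering: booleans are checked before integers because `bool` is a subclass of `int` in Python. A self-contained copy demonstrating the dispatch:

```python
import numpy as np

def format_value(value):
    # bool must come first, or True would format as '1'.
    if isinstance(value, (bool, np.bool_)):
        return str(value)
    elif isinstance(value, (int, np.integer)):
        return '{:n}'.format(value)
    elif isinstance(value, (float, np.floating)):
        return '{:g}'.format(value)
    return str(value)

examples = [format_value(v) for v in (True, 12, 0.5, 'label')]
# ['True', '12', '0.5', 'label']
```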
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def converts_values(self): """Whether this Formatter also converts values."""
return self.convert_value is not Formatter.convert_value or \ self.convert_column is not Formatter.convert_column
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def convert_value(self, value): """Convert string 93,000.00 to float 93000.0."""
if isinstance(value, str): value = value.replace(self.separator, '') if self.decimal_point not in value: return int(value) else: return float(value.replace(self.decimal_point, '.')) elif self.int_to_float: return float(value) else: return value
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def convert_value(self, value): """Convert value to float. If value is a string, ensure that the first character is the same as symbol ie. the value is in the currency this formatter is representing. """
if isinstance(value, str): assert value.startswith(self.symbol), "Currency does not start with " + self.symbol value = value.lstrip(self.symbol) return super().convert_value(value)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def convert_column(self, values): """Normalize values."""
assert all(values >= 0), 'Cannot normalize a column with negatives' total = sum(values) if total > 0: return values / total else: return values
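A self-contained sketch of this normalization (an all-zero column is returned unchanged to avoid division by zero):

```python
import numpy as np

def normalize(values):
    # Scale non-negative values so they sum to 1.
    values = np.asarray(values, dtype=float)
    assert np.all(values >= 0), 'Cannot normalize a column with negatives'
    total = values.sum()
    return values / total if total > 0 else values

props = normalize([2, 3, 5])  # array([0.2, 0.3, 0.5])
```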
<SYSTEM_TASK:>
Solve the following problem using Python, implementing the functions described below, one line at a time
<END_TASK>
<USER_TASK:>
Description:
    def decode(bstr):
        """ Decodes an ASCII encoded binary MAC address string into a number. """
bstr = bstr.replace(b':', b'') if len(bstr) != 12: raise ValueError('not a valid MAC address: {!r}'.format(bstr)) try: return int(bstr, 16) except ValueError: raise ValueError('not a valid MAC address: {!r}'.format(bstr))
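A simplified standalone copy showing the expected input formats (this sketch omits the inner try/except that re-wraps the `int()` error message):

```python
def decode(bstr):
    # Accepts both b'001122334455' and b'00:11:22:33:44:55'.
    bstr = bstr.replace(b':', b'')
    if len(bstr) != 12:
        raise ValueError('not a valid MAC address: {!r}'.format(bstr))
    return int(bstr, 16)

mac = decode(b'00:11:22:33:44:55')  # 0x001122334455 == 73588229205
```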
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def init(lib_name=None, bin_path=None, sdk_path=None): """ Initialize the Myo SDK by loading the libmyo shared library. With no arguments, libmyo must be on your `PATH` or `LD_LIBRARY_PATH`. You can specify the exact path to libmyo with *lib_name*. Alternatively, you can specify the binaries directory that contains libmyo with *bin_path*. Finally, you can also pass the path to the Myo SDK root directory and it will figure out the path to libmyo by itself. """
if sum(bool(x) for x in [lib_name, bin_path, sdk_path]) > 1: raise ValueError('expected zero or one arguments') if sdk_path: if sys.platform.startswith('win32'): bin_path = os.path.join(sdk_path, 'bin') elif sys.platform.startswith('darwin'): bin_path = os.path.join(sdk_path, 'myo.framework') else: raise RuntimeError('unsupported platform: {!r}'.format(sys.platform)) if bin_path: lib_name = os.path.join(bin_path, _getdlname()) if not lib_name: lib_name = _getdlname() global libmyo libmyo = ffi.dlopen(lib_name)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def reset(self, value=None): """ Resets the start time of the interval to now or the specified value. """
if value is None:
    # time.clock() was removed in Python 3.8; perf_counter() is the replacement.
    value = time.perf_counter()
self.start = value
if self.value_on_reset:
    self.value = self.value_on_reset
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def normalized(self): """ Returns a normalized copy of this vector. """
norm = self.magnitude() return Vector(self.x / norm, self.y / norm, self.z / norm)
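A self-contained sketch with a minimal `Vector` class, verifying that the result is a unit vector:

```python
import math

class Vector:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def magnitude(self):
        return math.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2)

    def normalized(self):
        # Divide each component by the magnitude (undefined for the zero vector).
        norm = self.magnitude()
        return Vector(self.x / norm, self.y / norm, self.z / norm)

unit = Vector(3.0, 0.0, 4.0).normalized()  # magnitude 5 -> (0.6, 0.0, 0.8)
```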
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def magnitude(self): """ Returns the magnitude of the quaternion. """
return math.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2 + self.w ** 2)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def normalized(self): """ Returns the unit quaternion corresponding to the same rotation as this one. """
magnitude = self.magnitude() return Quaternion( self.x / magnitude, self.y / magnitude, self.z / magnitude, self.w / magnitude)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def roll(self): """ Calculates the Roll of the Quaternion. """
x, y, z, w = self.x, self.y, self.z, self.w return math.atan2(2*y*w - 2*x*z, 1 - 2*y*y - 2*z*z)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def pitch(self): """ Calculates the Pitch of the Quaternion. """
x, y, z, w = self.x, self.y, self.z, self.w return math.atan2(2*x*w - 2*y*z, 1 - 2*x*x - 2*z*z)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def yaw(self): """ Calculates the Yaw of the Quaternion. """
x, y, z, w = self.x, self.y, self.z, self.w return math.asin(2*x*y + 2*z*w)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def rpy(self): """ Calculates the Roll, Pitch and Yaw of the Quaternion. """
x, y, z, w = self.x, self.y, self.z, self.w roll = math.atan2(2*y*w - 2*x*z, 1 - 2*y*y - 2*z*z) pitch = math.atan2(2*x*w - 2*y*z, 1 - 2*x*x - 2*z*z) yaw = math.asin(2*x*y + 2*z*w) return (roll, pitch, yaw)
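The three formulas can be checked as a standalone function; the identity quaternion (x, y, z, w) = (0, 0, 0, 1) should give zero roll, pitch, and yaw under these conventions:

```python
import math

def quaternion_rpy(x, y, z, w):
    # Same conventions as the roll/pitch/yaw methods above.
    roll = math.atan2(2*y*w - 2*x*z, 1 - 2*y*y - 2*z*z)
    pitch = math.atan2(2*x*w - 2*y*z, 1 - 2*x*x - 2*z*z)
    yaw = math.asin(2*x*y + 2*z*w)
    return (roll, pitch, yaw)

identity = quaternion_rpy(0.0, 0.0, 0.0, 1.0)  # (0.0, 0.0, 0.0)
```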
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_iso_packet_buffer_list(transfer_p): """ Python-specific helper extracting a list of iso packet buffers. """
transfer = transfer_p.contents offset = 0 result = [] append = result.append for iso_transfer in _get_iso_packet_list(transfer): length = iso_transfer.length append(_get_iso_packet_buffer(transfer, offset, length)) offset += length return result
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def get_extra(descriptor): """ Python-specific helper to access "extra" field of descriptors, because it's not as straight-forward as in C. Returns a list, where each entry is an individual extra descriptor. """
result = [] extra_length = descriptor.extra_length if extra_length: extra = buffer_at(descriptor.extra.value, extra_length) append = result.append while extra: length = _string_item_to_int(extra[0]) if not 0 < length <= len(extra): raise ValueError( 'Extra descriptor %i is incomplete/invalid' % ( len(result), ), ) append(extra[:length]) extra = extra[length:] return result
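The parsing loop relies on each sub-descriptor starting with its own total-length byte (`bLength`). A standalone sketch over plain `bytes` (the name `split_extra` is illustrative):

```python
def split_extra(extra):
    # Split a concatenated "extra" blob into individual descriptors,
    # each prefixed by its own length byte.
    result = []
    while extra:
        length = extra[0]
        if not 0 < length <= len(extra):
            raise ValueError(
                'Extra descriptor %i is incomplete/invalid' % len(result))
        result.append(extra[:length])
        extra = extra[length:]
    return result

blocks = split_extra(bytes([4, 0x01, 0x02, 0x03, 2, 0x05]))
# [b'\x04\x01\x02\x03', b'\x02\x05']
```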
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def create_binary_buffer(init_or_size): """ ctypes.create_string_buffer variant which does not add a trailing null when init_or_size is not a size. """
# As per ctypes.create_string_buffer, as of python 2.7.10 at least: # - int or long is a length # - str or unicode is an initialiser # Testing the latter confuses 2to3, so test the former. if isinstance(init_or_size, (int, long)): init_or_size = bytearray(init_or_size) return create_initialised_buffer(init_or_size)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def close(self): """ Break reference cycles to allow instance to be garbage-collected. Raises if called on a submitted transfer. """
if self.__submitted: raise ValueError('Cannot close a submitted transfer') self.doom() self.__initialized = False # Break possible external reference cycles self.__callback = None self.__user_data = None # Break libusb_transfer reference cycles self.__ctypesCallbackWrapper = None # For some reason, overwriting callback is not enough to remove this # reference cycle - though sometimes it works: # self -> self.__dict__ -> libusb_transfer -> dict[x] -> dict[x] -> # CThunkObject -> __callbackWrapper -> self # So free transfer altogether. if self.__transfer is not None: self.__libusb_free_transfer(self.__transfer) self.__transfer = None self.__transfer_buffer = None # Break USBDeviceHandle reference cycle self.__before_submit = None self.__after_completion = None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def setControl( self, request_type, request, value, index, buffer_or_len, callback=None, user_data=None, timeout=0): """ Setup transfer for control use. request_type, request, value, index See USBDeviceHandle.controlWrite. request_type defines transfer direction (see ENDPOINT_OUT and ENDPOINT_IN)). buffer_or_len Either a string (when sending data), or expected data length (when receiving data). callback Callback function to be invoked on transfer completion. Called with transfer as parameter, return value ignored. user_data User data to pass to callback function. timeout Transfer timeout in milliseconds. 0 to disable. """
if self.__submitted: raise ValueError('Cannot alter a submitted transfer') if self.__doomed: raise DoomedTransferError('Cannot reuse a doomed transfer') if isinstance(buffer_or_len, (int, long)): length = buffer_or_len # pylint: disable=undefined-variable string_buffer, transfer_py_buffer = create_binary_buffer( length + CONTROL_SETUP_SIZE, ) # pylint: enable=undefined-variable else: length = len(buffer_or_len) string_buffer, transfer_py_buffer = create_binary_buffer( CONTROL_SETUP + buffer_or_len, ) self.__initialized = False self.__transfer_buffer = string_buffer # pylint: disable=undefined-variable self.__transfer_py_buffer = integer_memoryview( transfer_py_buffer, )[CONTROL_SETUP_SIZE:] # pylint: enable=undefined-variable self.__user_data = user_data libusb1.libusb_fill_control_setup( string_buffer, request_type, request, value, index, length) libusb1.libusb_fill_control_transfer( self.__transfer, self.__handle, string_buffer, self.__ctypesCallbackWrapper, None, timeout) self.__callback = callback self.__initialized = True
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def setInterrupt( self, endpoint, buffer_or_len, callback=None, user_data=None, timeout=0): """ Setup transfer for interrupt use. endpoint Endpoint to submit transfer to. Defines transfer direction (see ENDPOINT_OUT and ENDPOINT_IN)). buffer_or_len Either a string (when sending data), or expected data length (when receiving data) To avoid memory copies, use an object implementing the writeable buffer interface (ex: bytearray). callback Callback function to be invoked on transfer completion. Called with transfer as parameter, return value ignored. user_data User data to pass to callback function. timeout Transfer timeout in milliseconds. 0 to disable. """
if self.__submitted: raise ValueError('Cannot alter a submitted transfer') if self.__doomed: raise DoomedTransferError('Cannot reuse a doomed transfer') string_buffer, self.__transfer_py_buffer = create_binary_buffer( buffer_or_len ) self.__initialized = False self.__transfer_buffer = string_buffer self.__user_data = user_data libusb1.libusb_fill_interrupt_transfer( self.__transfer, self.__handle, endpoint, string_buffer, sizeof(string_buffer), self.__ctypesCallbackWrapper, None, timeout) self.__callback = callback self.__initialized = True
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def setIsochronous( self, endpoint, buffer_or_len, callback=None, user_data=None, timeout=0, iso_transfer_length_list=None): """ Setup transfer for isochronous use. endpoint Endpoint to submit transfer to. Defines transfer direction (see ENDPOINT_OUT and ENDPOINT_IN)). buffer_or_len Either a string (when sending data), or expected data length (when receiving data) To avoid memory copies, use an object implementing the writeable buffer interface (ex: bytearray). callback Callback function to be invoked on transfer completion. Called with transfer as parameter, return value ignored. user_data User data to pass to callback function. timeout Transfer timeout in milliseconds. 0 to disable. iso_transfer_length_list List of individual transfer sizes. If not provided, buffer_or_len will be divided evenly among available transfers if possible, and raise ValueError otherwise. """
if self.__submitted:
    raise ValueError('Cannot alter a submitted transfer')
num_iso_packets = self.__num_iso_packets
if num_iso_packets == 0:
    raise TypeError(
        'This transfer cannot be used for isochronous I/O. '
        'You must get another one with a non-zero iso_packets '
        'parameter.'
    )
if self.__doomed:
    raise DoomedTransferError('Cannot reuse a doomed transfer')
string_buffer, transfer_py_buffer = create_binary_buffer(buffer_or_len)
buffer_length = sizeof(string_buffer)
if iso_transfer_length_list is None:
    iso_length, remainder = divmod(buffer_length, num_iso_packets)
    if remainder:
        raise ValueError(
            'Buffer size %i cannot be evenly distributed among %i '
            'transfers' % (
                buffer_length,
                num_iso_packets,
            )
        )
    iso_transfer_length_list = [iso_length] * num_iso_packets
configured_iso_packets = len(iso_transfer_length_list)
if configured_iso_packets > num_iso_packets:
    raise ValueError(
        'Too many ISO transfer lengths (%i), there are '
        'only %i ISO transfers available' % (
            configured_iso_packets,
            num_iso_packets,
        )
    )
if sum(iso_transfer_length_list) > buffer_length:
    raise ValueError(
        'ISO transfers too long (%i), there are only '
        '%i bytes available' % (
            sum(iso_transfer_length_list),
            buffer_length,
        )
    )
transfer_p = self.__transfer
self.__initialized = False
self.__transfer_buffer = string_buffer
self.__transfer_py_buffer = transfer_py_buffer
self.__user_data = user_data
libusb1.libusb_fill_iso_transfer(
    transfer_p, self.__handle, endpoint, string_buffer, buffer_length,
    configured_iso_packets, self.__ctypesCallbackWrapper, None, timeout)
for length, iso_packet_desc in zip(
        iso_transfer_length_list,
        libusb1.get_iso_packet_list(transfer_p)):
    if length <= 0:
        raise ValueError(
            'Negative/null length transfers are not possible.'
        )
    iso_packet_desc.length = length
self.__callback = callback
self.__initialized = True
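The even-distribution step above can be isolated as a small sketch. `distribute_iso_lengths` is an illustrative helper name, not part of the library; it mirrors only the `divmod` branch used when no explicit per-packet length list is supplied.

```python
def distribute_iso_lengths(buffer_length, num_iso_packets):
    # Split the buffer evenly among ISO packets; a remainder means the
    # caller must supply an explicit iso_transfer_length_list instead.
    iso_length, remainder = divmod(buffer_length, num_iso_packets)
    if remainder:
        raise ValueError(
            'Buffer size %i cannot be evenly distributed among %i '
            'transfers' % (buffer_length, num_iso_packets))
    return [iso_length] * num_iso_packets
```

For example, a 12-byte buffer over 3 packets yields three 4-byte transfers, while 10 bytes over 3 packets raises ValueError.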
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getISOBufferList(self): """ Get individual ISO transfer's buffer. Returns a list with one item per ISO transfer, with their individually-configured sizes. Returned list is consistent with getISOSetupList return value. Should not be called on a submitted transfer. See also iterISO. """
transfer_p = self.__transfer
transfer = transfer_p.contents
# pylint: disable=undefined-variable
if transfer.type != TRANSFER_TYPE_ISOCHRONOUS:
    # pylint: enable=undefined-variable
    raise TypeError(
        'This method cannot be called on non-iso transfers.'
    )
return libusb1.get_iso_packet_buffer_list(transfer_p)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def submit(self): """ Submit transfer for asynchronous handling. """
if self.__submitted:
    raise ValueError('Cannot submit a submitted transfer')
if not self.__initialized:
    raise ValueError(
        'Cannot submit a transfer until it has been initialized'
    )
if self.__doomed:
    raise DoomedTransferError('Cannot submit doomed transfer')
self.__before_submit(self)
self.__submitted = True
result = libusb1.libusb_submit_transfer(self.__transfer)
if result:
    self.__after_completion(self)
    self.__submitted = False
    raiseUSBError(result)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def register(self, fd, events): """ Register a USB-unrelated fd to poller. Convenience method. """
if fd in self.__fd_set:
    raise ValueError(
        'This fd is a special USB event fd, it cannot be polled.'
    )
self.__poller.register(fd, events)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def unregister(self, fd): """ Unregister a USB-unrelated fd from poller. Convenience method. """
if fd in self.__fd_set:
    raise ValueError(
        'This fd is a special USB event fd, it must stay registered.'
    )
self.__poller.unregister(fd)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def close(self): """ Close this handle. If not called explicitly, will be called by destructor. This method cancels any in-flight transfer when it is called. As cancellation is not immediate, this method needs to let libusb handle events until transfers are actually cancelled. In multi-threaded programs, this can lead to stalls. To avoid this, do not close nor let GC collect a USBDeviceHandle which has in-flight transfers. """
handle = self.__handle
if handle is None:
    return
# Build a strong set from weak self.__transfer_set so we can doom
# and close all contained transfers.
# Because of backward compatibility, self.__transfer_set might be a
# wrapper around WeakKeyDictionary. As it might be modified by gc,
# we must pop until there is no key left instead of iterating over
# it.
weak_transfer_set = self.__transfer_set
transfer_set = self.__set()
while True:
    try:
        transfer = weak_transfer_set.pop()
    except self.__KeyError:
        break
    transfer_set.add(transfer)
    transfer.doom()
inflight = self.__inflight
for transfer in inflight:
    try:
        transfer.cancel()
    except (self.__USBErrorNotFound, self.__USBErrorNoDevice):
        pass
while inflight:
    try:
        self.__context.handleEvents()
    except self.__USBErrorInterrupted:
        pass
for transfer in transfer_set:
    transfer.close()
self.__libusb_close(handle)
self.__handle = None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getConfiguration(self): """ Get the current configuration number for this device. """
configuration = c_int()
mayRaiseUSBError(libusb1.libusb_get_configuration(
    self.__handle, byref(configuration),
))
return configuration.value
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def kernelDriverActive(self, interface): """ Tell whether a kernel driver is active on given interface number. """
result = libusb1.libusb_kernel_driver_active(self.__handle, interface)
if result == 0:
    return False
elif result == 1:
    return True
raiseUSBError(result)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getStringDescriptor(self, descriptor, lang_id, errors='strict'): """ Fetch description string for given descriptor and in given language. Use getSupportedLanguageList to know which languages are available. Return value is a unicode string. Return None if there is no such descriptor on device. """
if descriptor == 0:
    return None
descriptor_string = bytearray(STRING_LENGTH)
try:
    received = mayRaiseUSBError(libusb1.libusb_get_string_descriptor(
        self.__handle, descriptor, lang_id,
        create_binary_buffer(descriptor_string)[0], STRING_LENGTH,
    ))
# pylint: disable=undefined-variable
except USBErrorNotFound:
# pylint: enable=undefined-variable
    return None
if received < 2 or descriptor_string[1] != DT_STRING:
    raise ValueError('Invalid string descriptor')
return descriptor_string[2:min(
    received,
    descriptor_string[0],
)].decode('UTF-16-LE', errors=errors)
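The final parsing step can be sketched on its own: a USB string descriptor carries its own length in byte 0, its type in byte 1 (DT_STRING is 3), and a UTF-16-LE payload after that. `parse_string_descriptor` is an illustrative name, not a library function, and it reproduces only the validation and decoding done after the control transfer returns.

```python
DT_STRING = 3  # bDescriptorType value for string descriptors

def parse_string_descriptor(descriptor_string, received, errors='strict'):
    # received is the byte count the device actually returned; byte 0
    # is the descriptor's declared length, byte 1 its type.
    if received < 2 or descriptor_string[1] != DT_STRING:
        raise ValueError('Invalid string descriptor')
    # Trust neither length alone: decode up to the smaller of the two.
    return descriptor_string[2:min(
        received,
        descriptor_string[0],
    )].decode('UTF-16-LE', errors=errors)
```

For example, the raw bytes `06 03 41 00 42 00` decode to the string "AB".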
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getASCIIStringDescriptor(self, descriptor, errors='strict'): """ Fetch description string for given descriptor in first available language. Return value is a unicode string. Return None if there is no such descriptor on device. """
if descriptor == 0:
    return None
descriptor_string = bytearray(STRING_LENGTH)
try:
    received = mayRaiseUSBError(
        libusb1.libusb_get_string_descriptor_ascii(
            self.__handle, descriptor,
            create_binary_buffer(descriptor_string)[0], STRING_LENGTH,
        ))
# pylint: disable=undefined-variable
except USBErrorNotFound:
# pylint: enable=undefined-variable
    return None
return descriptor_string[:received].decode('ASCII', errors=errors)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getPortNumberList(self): """ Get the port number of each hub toward device. """
port_list = (c_uint8 * PATH_MAX_DEPTH)()
result = libusb1.libusb_get_port_numbers(
    self.device_p, port_list, len(port_list))
mayRaiseUSBError(result)
return list(port_list[:result])
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getMaxPacketSize(self, endpoint): """ Get device's max packet size for given endpoint. Warning: this function will not always give you the expected result. See https://libusb.org/ticket/77 . You should instead consult the endpoint descriptor of current configuration and alternate setting. """
result = libusb1.libusb_get_max_packet_size(self.device_p, endpoint)
mayRaiseUSBError(result)
return result
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getMaxISOPacketSize(self, endpoint): """ Get the maximum size for a single isochronous packet for given endpoint. Warning: this function will not always give you the expected result. See https://libusb.org/ticket/77 . You should instead consult the endpoint descriptor of current configuration and alternate setting. """
result = libusb1.libusb_get_max_iso_packet_size(self.device_p, endpoint)
mayRaiseUSBError(result)
return result
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def open(self): """ Open device. Returns a USBDeviceHandle instance. """
handle = libusb1.libusb_device_handle_p()
mayRaiseUSBError(libusb1.libusb_open(self.device_p, byref(handle)))
result = USBDeviceHandle(self.__context, handle, self)
self.__close_set.add(result)
return result
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getDeviceIterator(self, skip_on_error=False): """ Return an iterator over all USB devices currently plugged in, as USBDevice instances. skip_on_error (bool) If True, ignore devices which raise USBError. """
device_p_p = libusb1.libusb_device_p_p()
libusb_device_p = libusb1.libusb_device_p
device_list_len = libusb1.libusb_get_device_list(
    self.__context_p, byref(device_p_p))
mayRaiseUSBError(device_list_len)
try:
    for device_p in device_p_p[:device_list_len]:
        try:
            # Instantiate our own libusb_device_p object so we can free
            # the libusb-provided device list. Is this a bug in ctypes
            # that it doesn't copy pointer value (=pointed memory
            # address)? At least, it's not so convenient and forces
            # using such weird code.
            device = USBDevice(self, libusb_device_p(device_p.contents))
        except USBError:
            if not skip_on_error:
                raise
        else:
            self.__close_set.add(device)
            yield device
finally:
    libusb1.libusb_free_device_list(device_p_p, 1)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getDeviceList(self, skip_on_access_error=False, skip_on_error=False): """ Return a list of all USB devices currently plugged in, as USBDevice instances. skip_on_error (bool) If True, ignore devices which raise USBError. skip_on_access_error (bool) DEPRECATED. Alias for skip_on_error. """
return list(
    self.getDeviceIterator(
        skip_on_error=skip_on_access_error or skip_on_error,
    ),
)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getPollFDList(self): """ Return file descriptors to be used to poll USB events. You should not have to call this method, unless you are integrating this class with a polling mechanism. """
pollfd_p_p = libusb1.libusb_get_pollfds(self.__context_p)
if not pollfd_p_p:
    errno = get_errno()
    if errno:
        raise OSError(errno)
    else:
        # Assume not implemented
        raise NotImplementedError(
            'Your libusb does not seem to implement pollable FDs')
try:
    result = []
    append = result.append
    fd_index = 0
    while pollfd_p_p[fd_index]:
        append((
            pollfd_p_p[fd_index].contents.fd,
            pollfd_p_p[fd_index].contents.events,
        ))
        fd_index += 1
finally:
    _free(pollfd_p_p)
return result
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def handleEventsTimeout(self, tv=0): """ Handle any pending event. If tv is 0, will return immediately after handling already-pending events. Otherwise, defines the maximum amount of time to wait for events, in seconds. """
if tv is None:
    tv = 0
tv_s = int(tv)
real_tv = libusb1.timeval(tv_s, int((tv - tv_s) * 1000000))
mayRaiseUSBError(
    libusb1.libusb_handle_events_timeout(
        self.__context_p, byref(real_tv),
    ),
)
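The float-to-timeval conversion above (also used by waitForEvent) can be shown standalone: whole seconds go into tv_sec and the fractional part becomes microseconds. `to_timeval` is an illustrative helper name, not part of the library.

```python
def to_timeval(tv):
    # Split a float number of seconds into the (seconds, microseconds)
    # pair that a C struct timeval expects.
    tv_s = int(tv)
    return tv_s, int((tv - tv_s) * 1000000)
```

For example, a 2.25-second timeout becomes tv_sec=2, tv_usec=250000.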
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def getNextTimeout(self): """ Returns the next internal timeout that libusb needs to handle, in seconds, or None if no timeout is needed. You should not have to call this method, unless you are integrating this class with a polling mechanism. """
timeval = libusb1.timeval()
result = libusb1.libusb_get_next_timeout(
    self.__context_p, byref(timeval))
if result == 0:
    return None
elif result == 1:
    return timeval.tv_sec + (timeval.tv_usec * 0.000001)
raiseUSBError(result)
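This is the inverse of the timeval split: seconds plus microseconds scaled back down to a float. `timeval_to_seconds` is an illustrative name for the expression used in the return statement, not a library function.

```python
def timeval_to_seconds(tv_sec, tv_usec):
    # Recombine a C struct timeval into a float number of seconds.
    return tv_sec + (tv_usec * 0.000001)
```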
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def waitForEvent(self, tv=0): """ See libusb_wait_for_event doc. """
if tv is None:
    tv = 0
tv_s = int(tv)
real_tv = libusb1.timeval(tv_s, int((tv - tv_s) * 1000000))
libusb1.libusb_wait_for_event(self.__context_p, byref(real_tv))
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read(address, length): """ Prepares an i2c read transaction. :param address: Slave address. :type: address: int :param length: Number of bytes to read. :type: length: int :return: New :py:class:`i2c_msg` instance for read operation. :rtype: :py:class:`i2c_msg` """
arr = create_string_buffer(length)
return i2c_msg(
    addr=address, flags=I2C_M_RD, len=length, buf=arr)
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write(address, buf): """ Prepares an i2c write transaction. :param address: Slave address. :type address: int :param buf: Bytes to write. Either list of values or str. :type buf: list :return: New :py:class:`i2c_msg` instance for write operation. :rtype: :py:class:`i2c_msg` """
if sys.version_info.major >= 3:
    if type(buf) is str:
        buf = bytes(map(ord, buf))
    else:
        buf = bytes(buf)
else:
    if type(buf) is not str:
        buf = ''.join([chr(x) for x in buf])
arr = create_string_buffer(buf, len(buf))
return i2c_msg(
    addr=address, flags=0, len=len(arr), buf=arr)
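The Python 3 branch of the buffer normalisation above can be sketched in isolation: a str is converted character by character with ord(), while anything else (e.g. a list of ints) is handed to bytes() directly. `normalize_buf` is an illustrative helper name, not part of the library.

```python
def normalize_buf(buf):
    # Mirror of write()'s Python-3 buffer handling: accept either a
    # str or an iterable of ints and return raw bytes.
    if type(buf) is str:
        return bytes(map(ord, buf))
    return bytes(buf)
```

So both `normalize_buf('ab')` and `normalize_buf([0x61, 0x62])` yield the same two-byte payload.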
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def open(self, bus): """ Open a given i2c bus. :param bus: i2c bus number (e.g. 0 or 1) :type bus: int """
self.fd = os.open("/dev/i2c-{}".format(bus), os.O_RDWR)
self.funcs = self._get_funcs()
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def close(self): """ Close the i2c connection. """
if self.fd:
    os.close(self.fd)
    self.fd = None
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _set_address(self, address, force=None): """ Set i2c slave address to use for subsequent calls. :param address: :type address: int :param force: :type force: Boolean """
force = force if force is not None else self.force
if self.address != address or self._force_last != force:
    if force is True:
        ioctl(self.fd, I2C_SLAVE_FORCE, address)
    else:
        ioctl(self.fd, I2C_SLAVE, address)
    self.address = address
    self._force_last = force
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def _get_funcs(self): """ Returns a 32-bit value stating supported I2C functions. :rtype: int """
f = c_uint32()
ioctl(self.fd, I2C_FUNCS, f)
return f.value
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def read_byte(self, i2c_addr, force=None): """ Read a single byte from a device. :rtype: int :param i2c_addr: i2c address :type i2c_addr: int :param force: :type force: Boolean :return: Read byte value """
self._set_address(i2c_addr, force=force)
msg = i2c_smbus_ioctl_data.create(
    read_write=I2C_SMBUS_READ, command=0, size=I2C_SMBUS_BYTE
)
ioctl(self.fd, I2C_SMBUS, msg)
return msg.data.contents.byte
<SYSTEM_TASK:> Solve the following problem using Python, implementing the functions described below, one line at a time <END_TASK> <USER_TASK:> Description: def write_byte(self, i2c_addr, value, force=None): """ Write a single byte to a device. :param i2c_addr: i2c address :type i2c_addr: int :param value: value to write :type value: int :param force: :type force: Boolean """
self._set_address(i2c_addr, force=force)
msg = i2c_smbus_ioctl_data.create(
    read_write=I2C_SMBUS_WRITE, command=value, size=I2C_SMBUS_BYTE
)
ioctl(self.fd, I2C_SMBUS, msg)