| Unnamed: 0 (int64, 0–2.93k) | code (string, 101–62.2k chars) | docs (string, 51–10.7k chars) | doc_len (int64, 4–1.74k) | words (int64, 4–4.82k) | lang (string, 1 class) | prompt (string, 320–71.2k chars) |
|---|---|---|---|---|---|---|
700 | def _clean_configuration_value(cls, item_type, new_value):
if (
item_type == ConfigurationTypeField.BOOLEAN
and new_value
and not isinstance(new_value, bool)
):
new_value = new_value.lower() == "true"
if item_type == ConfigurationTypeField... | Clean the value that is saved in plugin configuration.
Change the string provided as boolean into the bool value.
Return None for Output type, as it's read only field.
| 29 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _clean_configuration_value(cls, item_type, new_value):
if (
item_type == ConfigurationTypeField.BOOLEAN
and new_value
and not isinsta... |
701 | def coord_map_from_to(top_from, top_to):
# We need to find a common ancestor of top_from and top_to.
# We'll assume that all ancestors are equivalent here (otherwise the graph
# is an inconsistent state (which we could improve this to check for)).
# For now use a brute-force algorithm.
|
Determine the coordinate mapping between a top (from) and a top (to).
Walk the graph to find a common ancestor while composing the coord maps for
from and to until they meet. As a last step the from map is inverted.
| 41 | 47 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def coord_map_from_to(top_from, top_to):
# We need to find a common ancestor of top_from and top_to.
# We'll assume that all ancestors are equivalent here (otherwise the graph
... |
702 | def get_conditions_to_validate_future_sle(sl_entries):
warehouse_items_map = {}
for entry in sl_entries:
if entry.warehouse not in warehouse_items_map:
warehouse_items_map[entry.warehouse] = set()
warehouse_items_map[entry.warehouse].add(entry.item_code)
or_conditions = []
for warehouse, items in warehouse... | warehouse = {frappe.db.escape(warehouse)}
and item_code in ({', '.join(frappe.db.escape(item) for item in items)}) | 12 | 31 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_conditions_to_validate_future_sle(sl_entries):
warehouse_items_map = {}
for entry in sl_entries:
if entry.warehouse not in warehouse_items_map:
warehouse_items_map[entry.wareh... |
703 | def split_dataset(items, eval_split_max_size=None, eval_split_size=0.01):
speakers = [item["speaker_name"] for item in items]
is_multi_speaker = len(set(speakers)) > 1
if eval_split_size > 1:
eval_split_size = int(eval_split_size)
else:
if eval_split_max_size:
eval_split... | Split a dataset into train and eval. Consider speaker distribution in multi-speaker training.
Args:
items (List[List]):
A list of samples. Each sample is a list of `[audio_path, text, speaker_id]`.
eval_split_max_size (int):
Number maxim... | 101 | 118 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def split_dataset(items, eval_split_max_size=None, eval_split_size=0.01):
speakers = [item["speaker_name"] for item in items]
is_multi_speaker = len(set(speakers)) > 1
if ev... |
704 | def test_quarantine_media(self) -> None:
media_info = self.get_success(self.store.get_local_media(self.media_id))
assert media_info is not None
self.assertFalse(media_info["quarantined_by"])
# quarantining
channel = self.make_request(
"POST",
se... |
Tests that quarantining a media and removing it from quarantine succeed
| 11 | 67 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_quarantine_media(self) -> None:
media_info = self.get_success(self.store.get_local_media(self.media_id))
assert media_info is not None
self.assertF... |
705 | def naive_greedy_modularity_communities(G, resolution=1, weight=None):
r
# First create one community for each node
communities = list(frozenset([u]) for u in G.nodes())
# Track merges
merges = []
# Greedily merge communities until no improvement is possible
old_modularity = None
new_mod... | Find communities in G using greedy modularity maximization.
This implementation is O(n^4), much slower than alternatives, but it is
provided as an easy-to-understand reference implementation.
Greedy modularity maximization begins with each node in its own community
and joins the pair of communities th... | 199 | 250 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def naive_greedy_modularity_communities(G, resolution=1, weight=None):
r
# First create one community for each node
communities = list(frozenset([u]) for u in G.nodes())
# Tr... |
706 | def invoke(self) -> Generator[PowerShell, None, None]:
logger = copy(self.log)
logger.setLevel(self._logging_level)
local_context = self._conn is None
if local_context:
self.__enter__()
try:
assert self._conn is not None
ps = PowerShel... |
Context manager that yields a PowerShell object to which commands can be
added. Upon exit, the commands will be invoked.
| 20 | 167 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def invoke(self) -> Generator[PowerShell, None, None]:
logger = copy(self.log)
logger.setLevel(self._logging_level)
local_context = self._conn is None
... |
707 | def util_call_before_task_publish_handler(self, headers_to_use, body_to_use):
self.assertEqual(PaperlessTask.objects.all().count(), 0)
before_task_publish_handler(headers=headers_to_use, body=body_to_use)
self.assertEqual(PaperlessTask.objects.all().count(), 1)
|
Simple utility to call the pre-run handler and ensure it created a single task
instance
| 15 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def util_call_before_task_publish_handler(self, headers_to_use, body_to_use):
self.assertEqual(PaperlessTask.objects.all().count(), 0)
before_task_publish_handler(h... |
708 | def test_form(self):
form = self.EventPageForm(instance=self.event_page)
self.assertIn("comments", form.formsets)
comments_formset = form.formsets["comments"]
self.assertEqual(len(comments_formset.forms), 1)
self.assertEqual(comments_formset.forms[0].user, self.comment... |
Check that the form has the comments/replies formsets, and that the
user has been set on each CommentForm/CommentReplyForm subclass
| 19 | 21 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_form(self):
form = self.EventPageForm(instance=self.event_page)
self.assertIn("comments", form.formsets)
comments_formset = form.formsets["comment... |
709 | def _doc(self, doc_type, default, lang="eng"):
corpus = self._wordnet_corpus_reader
if lang not in corpus.langs():
return None
elif lang == "eng":
return default
else:
corpus._load_lang_data(lang)
of = corpus.ss2of(self)
... | Helper method for Synset.definition and Synset.examples | 6 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _doc(self, doc_type, default, lang="eng"):
corpus = self._wordnet_corpus_reader
if lang not in corpus.langs():
return None
elif lang == "eng"... |
710 | def map(self, mapper):
new_categories = self.categories.map(mapper)
try:
return self.from_codes(
self._codes.copy(), categories=new_categories, ordered=self.ordered
)
except ValueError:
# NA values are represented in self._codes with -... |
Map categories using an input mapping or function.
Maps the categories to new categories. If the mapping correspondence is
one-to-one the result is a :class:`~pandas.Categorical` which has the
same order property as the original, otherwise a :class:`~pandas.Index`
is returned. ... | 269 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def map(self, mapper):
new_categories = self.categories.map(mapper)
try:
return self.from_codes(
self._codes.copy(), categories=new_categ... |
711 | def to_pickle_distributed(cls, qc, **kwargs):
if not (
isinstance(kwargs["filepath_or_buffer"], str)
and "*" in kwargs["filepath_or_buffer"]
) or not isinstance(qc, PandasQueryCompiler):
warnings.warn("Defaulting to Modin core implementation")
ret... |
When `*` is in the filename, all partitions are written to their own separate files.
The filename is determined as follows:
- if `*` is in the filename, it is replaced by the increasing sequence 0, 1, 2, …
- if `*` is not in the filename, the default implementation is used.
... | 92 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def to_pickle_distributed(cls, qc, **kwargs):
if not (
isinstance(kwargs["filepath_or_buffer"], str)
and "*" in kwargs["filepath_or_buffer"]
... |
712 | def project_state(self, nodes=None, at_end=True):
return self.graph.make_state(
nodes=nodes, at_end=at_end, real_apps=self.unmigrated_apps
)
|
Return a ProjectState object representing the most recent state
that the loaded migrations represent.
See graph.make_state() for the meaning of "nodes" and "at_end".
| 23 | 10 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def project_state(self, nodes=None, at_end=True):
return self.graph.make_state(
nodes=nodes, at_end=at_end, real_apps=self.unmigrated_apps
)
```... |
713 | def center(self, frequency=1000):
equal_energy_fr = self.__class__(name='equal_energy', frequency=self.frequency.copy(), raw=self.raw.copy())
equal_energy_fr.interpolate()
interpolator = InterpolatedUnivariateSpline(np.log10(equal_energy_fr.frequency), equal_energy_fr.raw, k=1)
    ... | Removes bias from frequency response.
Args:
frequency: Frequency which is set to 0 dB. If this is a list with two values then an average between the two
frequencies is set to 0 dB.
Returns:
Gain shifted
| 37 | 125 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def center(self, frequency=1000):
equal_energy_fr = self.__class__(name='equal_energy', frequency=self.frequency.copy(), raw=self.raw.copy())
equal_energy_fr.interpo... |
714 | def _i18n_cache_key_suffix(request, cache_key):
if settings.USE_I18N:
# first check if LocaleMiddleware or another middleware added
# LANGUAGE_CODE to request, then fall back to the active language
# which in turn can also fall back to settings.LANGUAGE_CODE
cache_key += ".%s" %... | If necessary, add the current locale or time zone to the cache key. | 13 | 51 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _i18n_cache_key_suffix(request, cache_key):
if settings.USE_I18N:
# first check if LocaleMiddleware or another middleware added
# LANGUAGE_CODE to request, then ... |
715 | def test_pandas_arff_parser_strip_double_quotes(parser_func):
pd = pytest.importorskip("pandas")
arff_file = BytesIO(
textwrap.dedent(
| Check that we properly strip double quotes from the data. | 10 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_pandas_arff_parser_strip_double_quotes(parser_func):
pd = pytest.importorskip("pandas")
arff_file = BytesIO(
textwrap.dedent(
```
###A... |
716 | def get_tokens_unprocessed(self, text=None, context=None):
tokendefs = self._tokens
if not context:
ctx = LexerContext(text, 0)
statetokens = tokendefs['root']
else:
ctx = context
statetokens = tokendefs[ctx.stack[-1]]
text = c... |
Split ``text`` into (tokentype, text) pairs.
If ``context`` is given, use this lexer context instead.
| 15 | 193 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_tokens_unprocessed(self, text=None, context=None):
tokendefs = self._tokens
if not context:
ctx = LexerContext(text, 0)
statetokens =... |
717 | def get_policy_data_from_agent_data(agent_data, policy_map_fn):
policy_data = {}
for agent_id, data in agent_data.items():
policy_id = policy_map_fn(agent_id)
policy_data.setdefault(policy_id, {})
policy_data[policy_id].setdefault("agent_id", [])
if data["obs"].ndim == 1:
... | Utility function to get policy data from agent data and policy map function.
It also keeps track of agent_id for each row so that we can retrieve the agent
level information after the forward pass.
Returns:
dict of module_id to module data
| 42 | 67 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_policy_data_from_agent_data(agent_data, policy_map_fn):
policy_data = {}
for agent_id, data in agent_data.items():
policy_id = policy_map_fn(agent_id)
po... |
718 | def _update_dimensions(self) -> None:
total_width = sum(column.width for column in self.columns)
self.virtual_size = Size(
total_width,
len(self._y_offsets) + (self.header_height if self.show_header else 0),
)
| Called to recalculate the virtual (scrollable) size. | 7 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _update_dimensions(self) -> None:
total_width = sum(column.width for column in self.columns)
self.virtual_size = Size(
total_width,
len(s... |
719 | def _store(self, messages, response, *args, **kwargs):
raise NotImplementedError(
"subclasses of BaseStorage must provide a _store() method"
)
|
Store a list of messages and return a list of any messages which could
not be stored.
One type of object must be able to be stored, ``Message``.
**This method must be implemented by a subclass.**
| 36 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _store(self, messages, response, *args, **kwargs):
raise NotImplementedError(
"subclasses of BaseStorage must provide a _store() method"
)
`... |
720 | def test_from_is_negative(self) -> None:
channel = self.make_request(
"GET",
self.url + "?from=-5",
access_token=self.admin_user_tok,
)
self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.INVALID_PARAM, channel.... |
Testing that a negative from parameter returns a 400
| 9 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_from_is_negative(self) -> None:
channel = self.make_request(
"GET",
self.url + "?from=-5",
access_token=self.admin_user_tok,
... |
721 | async def test_focused_child_widget_no_inherit_empty_bindings_with_movement_bindings_on_screen() -> None:
| A focused child widget, that doesn't inherit bindings and sets BINDINGS empty, with movement bindings in the screen, should trigger screen actions. | 22 | 5 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_focused_child_widget_no_inherit_empty_bindings_with_movement_bindings_on_screen() -> None:
```
###Assistant : A focused child widget, that doesn't inherit bin... |
722 | def addtoken(self, type, value, context):
# Map from token to label
ilabel = self.classify(type, value, context)
# Loop until the token is shifted; may raise exceptions
while True:
dfa, state, node = self.stack[-1]
states, first = dfa
arcs = s... | Add a token; return True iff this is the end of the program. | 13 | 220 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def addtoken(self, type, value, context):
# Map from token to label
ilabel = self.classify(type, value, context)
# Loop until the token is shifted; may raise... |
723 | def import_local_settings():
try:
import airflow_local_settings
if hasattr(airflow_local_settings, "__all__"):
for i in airflow_local_settings.__all__:
globals()[i] = getattr(airflow_local_settings, i)
else:
for k, v in airflow_local_settings.__d... | Import airflow_local_settings.py files to allow overriding any configs in settings.py file | 11 | 115 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def import_local_settings():
try:
import airflow_local_settings
if hasattr(airflow_local_settings, "__all__"):
for i in airflow_local_settings.__all__:
... |
724 | def unpolarify(eq, subs=None, exponents_only=False):
if isinstance(eq, bool):
return eq
eq = sympify(eq)
if subs is not None:
return unpolarify(eq.subs(subs))
changed = True
pause = False
if exponents_only:
pause = True
while changed:
changed = False
... |
If `p` denotes the projection from the Riemann surface of the logarithm to
the complex line, return a simplified version `eq'` of `eq` such that
`p(eq') = p(eq)`.
Also apply the substitution subs in the end. (This is a convenience, since
``unpolarify``, in a certain sense, undoes :func:`polarify`.)... | 72 | 75 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def unpolarify(eq, subs=None, exponents_only=False):
if isinstance(eq, bool):
return eq
eq = sympify(eq)
if subs is not None:
return unpolarify(eq.subs(subs... |
725 | def check_and_raise_error(self) -> None:
for thread in self._threads:
thread.check_and_raise_error()
| Check all threads for errors
Exposed for :mod:`~plugins.extract.pipeline` to check plugin's threads for errors
| 14 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def check_and_raise_error(self) -> None:
for thread in self._threads:
thread.check_and_raise_error()
```
###Assistant : Check all threads for error... |
726 | def recorder_or_dbworker(self) -> bool:
thread_name = threading.current_thread().name
return bool(
thread_name == "Recorder" or thread_name.startswith(DB_WORKER_PREFIX)
)
| Check if the thread is a recorder or dbworker thread. | 10 | 15 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def recorder_or_dbworker(self) -> bool:
thread_name = threading.current_thread().name
return bool(
thread_name == "Recorder" or thread_name.startswith(DB... |
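The `recorder_or_dbworker` row above is a simple thread-name check. A self-contained sketch, assuming `DB_WORKER_PREFIX` is a plain string constant (the real constant lives in the source project, its value here is a placeholder):

```python
import threading

DB_WORKER_PREFIX = "DbWorker"  # placeholder; the real prefix comes from the source project

def recorder_or_dbworker() -> bool:
    """Return True if the current thread is the recorder or a DB worker thread."""
    thread_name = threading.current_thread().name
    return thread_name == "Recorder" or thread_name.startswith(DB_WORKER_PREFIX)
```

Calling it from the main thread returns `False`; from a thread named `"Recorder"` it returns `True`.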
727 | def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
mapping = []
for old_item in old_list:
new_item = old_item.replace('in_layers.0', 'norm1')
new_item = new_item.replace('in_layers.2', 'conv1')
new_item = new_item.replace('out_layers.0', 'norm2')
new_item = new_it... |
Updates paths inside resnets to the new naming scheme (local renaming)
| 11 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
mapping = []
for old_item in old_list:
new_item = old_item.replace('in_layers.0', 'norm1')
new_item ... |
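The `renew_resnet_paths` row above renames legacy parameter paths via string substitution. A minimal sketch covering only the four renames visible in the snippet, plus the prefix-shaving step the signature implies (the `out_layers.3` → `conv2` counterpart and the dict shape of the mapping entries are assumptions, not taken from the truncated cell):

```python
def renew_resnet_paths(old_list, n_shave_prefix_segments=0):
    """Map legacy resnet parameter names to the newer naming scheme (sketch)."""
    mapping = []
    for old_item in old_list:
        new_item = old_item.replace("in_layers.0", "norm1")
        new_item = new_item.replace("in_layers.2", "conv1")
        new_item = new_item.replace("out_layers.0", "norm2")
        new_item = new_item.replace("out_layers.3", "conv2")  # assumed counterpart rename
        # drop the first n segments of the dotted path (the "shave prefix" step)
        new_item = ".".join(new_item.split(".")[n_shave_prefix_segments:])
        mapping.append({"old": old_item, "new": new_item})
    return mapping
```

For example, `"block.in_layers.0.weight"` becomes `"block.norm1.weight"`, or `"norm1.weight"` with one prefix segment shaved.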
728 | def _assert_splits_match(nested_splits_lists):
error_msg = (
"Inputs must have identical ragged splits. "
f"Input received: {nested_splits_lists}"
)
for splits_list in nested_splits_lists:
if len(splits_list) != len(nested_splits_lists[0]):
raise ValueError(error_msg... | Checks that the given splits lists are identical.
Performs static tests to ensure that the given splits lists are identical,
and returns a list of control dependency op tensors that check that they are
fully identical.
Args:
nested_splits_lists: A list of nested_splits_lists, where each split_li... | 79 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _assert_splits_match(nested_splits_lists):
error_msg = (
"Inputs must have identical ragged splits. "
f"Input received: {nested_splits_lists}"
)
for spli... |
729 | def _iteration_limit_callback(self, *args) -> None:
try:
limit = self.vars["display_iterations"].get()
except tk.TclError:
# Don't update when there is no value in the variable
return
        logger.debug("Updating graph iteration limit: (new_value: %s, args:... | Limit the amount of data displayed in the live graph on an iteration slider
variable change. | 16 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _iteration_limit_callback(self, *args) -> None:
try:
limit = self.vars["display_iterations"].get()
except tk.TclError:
# Don't update whe... |
730 | def test_queued_dagruns_stops_creating_when_max_active_is_reached(self, dag_maker):
with dag_maker(max_active_runs=10) as dag:
EmptyOperator(task_id='mytask')
session = settings.Session()
self.scheduler_job = SchedulerJob(subdir=os.devnull)
self.scheduler_job.execut... | This tests that queued dagruns stops creating once max_active_runs is reached | 11 | 88 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_queued_dagruns_stops_creating_when_max_active_is_reached(self, dag_maker):
with dag_maker(max_active_runs=10) as dag:
EmptyOperator(task_id='mytask')
... |
731 | def test_escape_sequence_resulting_in_multiple_keypresses(parser):
events = list(parser.feed("\x1b[2;4~"))
assert len(events) == 2
assert events[0].key == "escape"
assert events[1].key == "shift+insert"
| Some sequences are interpreted as more than 1 keypress | 9 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_escape_sequence_resulting_in_multiple_keypresses(parser):
events = list(parser.feed("\x1b[2;4~"))
assert len(events) == 2
assert events[0].key == "escape"
asser... |
732 | def generate_altered_options(self):
models_to_check = self.kept_model_keys.union(
self.kept_proxy_keys,
self.kept_unmanaged_keys,
# unmanaged converted to managed
self.old_unmanaged_keys & self.new_model_keys,
# managed converted to unmanaged
... |
Work out if any non-schema-affecting options have changed and make an
operation to represent them in state changes (in case Python code in
migrations needs them).
| 26 | 85 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def generate_altered_options(self):
models_to_check = self.kept_model_keys.union(
self.kept_proxy_keys,
self.kept_unmanaged_keys,
# unman... |
733 | def _check_readonly_fields(self, obj):
if obj.readonly_fields == ():
return []
elif not isinstance(obj.readonly_fields, (list, tuple)):
return must_be(
"a list or tuple", option="readonly_fields", obj=obj, id="admin.E034"
)
else:
... | Check that readonly_fields refers to proper attribute or field. | 9 | 42 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _check_readonly_fields(self, obj):
if obj.readonly_fields == ():
return []
elif not isinstance(obj.readonly_fields, (list, tuple)):
retu... |
734 | def querystring(context, **kwargs):
request = context["request"]
querydict = request.GET.copy()
# Can't do querydict.update(kwargs), because QueryDict.update() appends to
# the list of values, instead of replacing the values.
for key, value in kwargs.items():
if value is None:
... |
Print out the current querystring. Any keyword arguments to this template
tag will be added to the querystring before it is printed out.
<a href="/page/{% querystring key='value' %}">
Will result in something like:
<a href="/page/?foo=bar&key=value">
| 35 | 61 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def querystring(context, **kwargs):
request = context["request"]
querydict = request.GET.copy()
# Can't do querydict.update(kwargs), because QueryDict.update() appends to
... |
735 | def _get_fingerprint_of_schema_without_irrelevant_keys(self) -> Text:
graph_schema = self._execution_context.graph_schema
schema_as_dict = graph_schema.as_dict()
for node_name, node_dict in schema_as_dict["nodes"].items():
config_copy = copy.deepcopy(node_dict["config"])
... | Returns a fingerprint of the given schema with certain items removed.
These items include specifications that do not influence actual training
results such as "eager" mode. The only configuration (in your config) that is
allowed to change is the number of `epochs`.
Returns:
... | 44 | 66 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_fingerprint_of_schema_without_irrelevant_keys(self) -> Text:
graph_schema = self._execution_context.graph_schema
schema_as_dict = graph_schema.as_dict()
... |
736 | def get_gi_typelibs(module, version):
module_info = GiModuleInfo(module, version)
return module_info.collect_typelib_data()
|
Return a tuple of (binaries, datas, hiddenimports) to be used by PyGObject related hooks. Searches for and adds
dependencies recursively.
:param module: GI module name, as passed to 'gi.require_version()'
:param version: GI module version, as passed to 'gi.require_version()'
| 38 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_gi_typelibs(module, version):
module_info = GiModuleInfo(module, version)
return module_info.collect_typelib_data()
```
###Assistant :
Return a tuple ... |
737 | def _sync_dag_view_permissions(self, dag_id, access_control):
dag_resource_name = permissions.resource_name_for_dag(dag_id)
|
Set the access policy on the given DAG's ViewModel.
:param dag_id: the ID of the DAG whose permissions should be updated
:param access_control: a dict where each key is a rolename and
each value is a set() of action names (e.g. {'can_read'})
| 42 | 7 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _sync_dag_view_permissions(self, dag_id, access_control):
dag_resource_name = permissions.resource_name_for_dag(dag_id)
```
###Assistant :
Set the... |
738 | def test_series_equal_datetime_values_mismatch(rtol):
msg =
s1 = Series(pd.date_range("2018-01-01", periods=3, freq="D"))
s2 = Series(pd.date_range("2019-02-02", periods=3, freq="D"))
with pytest.raises(AssertionError, match=msg):
tm.assert_series_equal(s1, s2, rtol=rtol)
| Series are different
Series values are different \\(100.0 %\\)
\\[index\\]: \\[0, 1, 2\\]
\\[left\\]: \\[1514764800000000000, 1514851200000000000, 1514937600000000000\\]
\\[right\\]: \\[1549065600000000000, 1549152000000000000, 1549238400000000000\\] | 21 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_series_equal_datetime_values_mismatch(rtol):
msg =
s1 = Series(pd.date_range("2018-01-01", periods=3, freq="D"))
s2 = Series(pd.date_range("2019-02-02", periods=3, fre... |
739 | def test_decrypt_pillar_invalid_renderer(salt_master, grains, pillar_homedir):
opts = salt_master.config.copy()
opts["decrypt_pillar"] = [{"secrets:vault": "gpg"}]
opts["decrypt_pillar_default"] = "foo"
opts["decrypt_pillar_renderers"] = ["foo", "bar"]
pillar_obj = salt.pillar.Pillar(opts, grai... |
Test decryption using a renderer which is not permitted. It should
fail, leaving the encrypted keys intact, and add an error to the pillar
dictionary.
decrypt_pillar_default: foo
decrypt_pillar_renderers:
- foo
- bar
decrypt_pillar:
- 'secrets:vault': ... | 36 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_decrypt_pillar_invalid_renderer(salt_master, grains, pillar_homedir):
opts = salt_master.config.copy()
opts["decrypt_pillar"] = [{"secrets:vault": "gpg"}]
opts["dec... |
740 | def get_trial_name():
warnings.warn(
_deprecation_msg,
DeprecationWarning,
)
_session = get_session()
if _session:
return _session.trial_name
@DeveloperAPI | Trial name for the corresponding trial.
For function API use only.
| 11 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_trial_name():
warnings.warn(
_deprecation_msg,
DeprecationWarning,
)
_session = get_session()
if _session:
return _session.trial_name
@... |
741 | def get_scheduler_lock(collection=None, scheduler=None):
from dask import multiprocessing
from dask.base import get_scheduler
actual_get = get_scheduler(collections=[collection], scheduler=scheduler)
if actual_get == multiprocessing.get:
return multiprocessing.get_context().Manager().Lock... | Get an instance of the appropriate lock for a certain situation based on
scheduler used. | 15 | 23 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_scheduler_lock(collection=None, scheduler=None):
from dask import multiprocessing
from dask.base import get_scheduler
actual_get = get_scheduler(collections=[collec... |
742 | def verify_dataset_shuffled(x):
assert isinstance(x, tf.data.Dataset)
graph_def = get_dataset_graph_def(x)
for node in graph_def.node:
if node.op.startswith("ShuffleDataset"):
return True
# Also check graph_def.library.function for ds.interleave or ds.flat_map
for function i... | Verifies that the dataset is shuffled.
Args:
x: Dataset passed as an input to the model.
Returns:
boolean, whether the input dataset is shuffled or not.
| 26 | 58 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def verify_dataset_shuffled(x):
assert isinstance(x, tf.data.Dataset)
graph_def = get_dataset_graph_def(x)
for node in graph_def.node:
if node.op.startswith("Shuffle... |
743 | def get_binance_available_quotes_for_each_coin() -> dict:
trading_pairs = _get_trading_pairs()
results = defaultdict(list)
for pair in trading_pairs:
results[pair["baseAsset"]].append(pair["quoteAsset"])
return results
@log_start_end(log=logger) | Helper method that, for every coin available on Binance, adds all quote assets. [Source: Binance]
Returns
-------
dict:
All quote assets for given coin
{'ETH' : ['BTC', 'USDT' ...], 'UNI' : ['ETH', 'BTC','BUSD', ...]
| 34 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_binance_available_quotes_for_each_coin() -> dict:
trading_pairs = _get_trading_pairs()
results = defaultdict(list)
for pair in trading_pairs:
results[pair["b... |
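The Binance row above groups quote assets by base asset with a `defaultdict`. A self-contained sketch with an inlined sample of trading-pair dicts standing in for the live `_get_trading_pairs()` call (the sample values are illustrative, not real API output):

```python
from collections import defaultdict

def quotes_for_each_base(trading_pairs):
    """Group quote assets by base asset from a list of pair dicts."""
    results = defaultdict(list)
    for pair in trading_pairs:
        results[pair["baseAsset"]].append(pair["quoteAsset"])
    return results

# Illustrative stand-in for the live exchange-info response
pairs = [
    {"baseAsset": "ETH", "quoteAsset": "BTC"},
    {"baseAsset": "ETH", "quoteAsset": "USDT"},
    {"baseAsset": "UNI", "quoteAsset": "ETH"},
]
print(dict(quotes_for_each_base(pairs)))  # {'ETH': ['BTC', 'USDT'], 'UNI': ['ETH']}
```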
744 | def update_qty_in_future_sle(args, allow_negative_stock=False):
datetime_limit_condition = ""
qty_shift = args.actual_qty
# find difference/shift in qty caused by stock reconciliation
if args.voucher_type == "Stock Reconciliation":
qty_shift = get_stock_reco_qty_shift(args)
# find the next nearest stock reco... | Recalculate Qty after Transaction in future SLEs based on current SLE.
update `tabStock Ledger Entry`
set qty_after_transaction = qty_after_transaction + {qty_shift}
where
item_code = %(item_code)s
and warehouse = %(warehouse)s
and voucher_no != %(voucher_no)s
and is_cancelled = 0
and (timestamp(po... | 57 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def update_qty_in_future_sle(args, allow_negative_stock=False):
datetime_limit_condition = ""
qty_shift = args.actual_qty
# find difference/shift in qty caused by stock reconciliation
... |
745 | def get_current_timezone_tag(parser, token):
# token.split_contents() isn't useful here because this tag doesn't accept variable as arguments
args = token.contents.split()
if len(args) != 3 or args[1] != 'as':
raise TemplateSyntaxError(
"'get_current_timezone' requires 'as variable'... |
Store the name of the current time zone in the context.
Usage::
{% get_current_timezone as TIME_ZONE %}
This will fetch the currently active time zone and put its name
into the ``TIME_ZONE`` context variable.
| 34 | 40 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_current_timezone_tag(parser, token):
# token.split_contents() isn't useful here because this tag doesn't accept variable as arguments
args = token.contents.split()
i... |
746 | def test_save_multiple_world_logs_mutator(self):
with testing_utils.tempdir() as tmpdir:
log_report = os.path.join(tmpdir, 'world_logs.jsonl')
multitask = 'integration_tests:mutators=flatten,integration_tests:ReverseTeacher:mutator=reverse'
valid, test = testing_util... |
Test that we can save multiple world_logs from train model on multiple tasks
with mutators present.
| 16 | 55 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_save_multiple_world_logs_mutator(self):
with testing_utils.tempdir() as tmpdir:
log_report = os.path.join(tmpdir, 'world_logs.jsonl')
multit... |
747 | def testDotsInLogdir(self):
local_dir_path = Path("/tmp/test_rel_dots")
local_dir = str(local_dir_path)
if local_dir_path.exists():
local_dir = tempfile.mkdtemp(prefix=str(local_dir_path) + "_")
trial = Trial(trainable_name="rel_logdir", local_dir=local_dir)
... | This should result in errors as dots in paths are not allowed. | 12 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def testDotsInLogdir(self):
local_dir_path = Path("/tmp/test_rel_dots")
local_dir = str(local_dir_path)
if local_dir_path.exists():
local_dir = t... |
748 | def test_delete_post(self):
# Send request
response = self.client.post(
reverse("wagtailimages:delete_multiple", args=(self.image.id,))
)
# Check response
self.assertEqual(response.status_code, 200)
self.assertEqual(response["Content-Type"], "applica... |
This tests that a POST request to the delete view deletes the image
| 13 | 40 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_delete_post(self):
# Send request
response = self.client.post(
reverse("wagtailimages:delete_multiple", args=(self.image.id,))
)
... |
749 | def _show_diff_helper(self, frame_data, expected_frame_data):
import matplotlib.gridspec as gridspec # type: ignore
import matplotlib.pyplot as plt
gs = gridspec.GridSpec(2, 2)
fig = plt.figure()
fig.suptitle(f"Test for {str(self.scene).replace('Test', '')}", fontsize=... | Will visually display with matplotlib differences between frame generated and the one expected. | 13 | 106 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _show_diff_helper(self, frame_data, expected_frame_data):
import matplotlib.gridspec as gridspec # type: ignore
import matplotlib.pyplot as plt
gs = gr... |
750 | def consume_capacity(self, task):
if self.is_container_group:
self.container_group_jobs += 1
self.container_group_consumed_forks += task.task_impact
else:
raise RuntimeError("We only track capacity for container groups at the instance group level. Otherwise, ... | We only consume capacity on an instance group level if it is a container group. Otherwise we consume capacity on an instance level. | 23 | 30 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def consume_capacity(self, task):
if self.is_container_group:
self.container_group_jobs += 1
self.container_group_consumed_forks += task.task_impact
... |
751 | def get_avail_mem_per_ray_worker_node(spark, object_store_memory_per_node):
num_cpus_per_spark_task = int(
spark.sparkContext.getConf().get("spark.task.cpus", "1")
)
|
Return the available heap memory and object store memory for each ray worker.
NB: We have one ray node per spark task.
| 22 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_avail_mem_per_ray_worker_node(spark, object_store_memory_per_node):
num_cpus_per_spark_task = int(
spark.sparkContext.getConf().get("spark.task.cpus", "1")
)
... |
752 | def _iter_egg_info_dependencies(self) -> Iterable[str]:
for entry in self._iter_requires_txt_entries():
if entry.extra and entry.marker:
marker = f'({entry.marker}) and extra == "{safe_extra(entry.extra)}"'
elif entry.extra:
marker = f'extra == "{... | Get distribution dependencies from the egg-info directory.
To ease parsing, this converts a legacy dependency entry into a PEP 508
requirement string. Like ``_iter_requires_txt_entries()``, there is code
in ``importlib.metadata`` that does mostly the same, but not do exactly
what we nee... | 81 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _iter_egg_info_dependencies(self) -> Iterable[str]:
for entry in self._iter_requires_txt_entries():
if entry.extra and entry.marker:
marker =... |
753 | def cache_from_source(path, debug_override=None):
with warnings.catch_warnings():
warnings.simplefilter('ignore')
return util.cache_from_source(path, debug_override)
| **DEPRECATED**
Given the path to a .py file, return the path to its .pyc file.
The .py file does not need to exist; this simply returns the path to the
.pyc file calculated as if the .py file were imported.
If debug_override is not None, then it must be a boolean and is used in
place of sys.flags... | 66 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def cache_from_source(path, debug_override=None):
with warnings.catch_warnings():
warnings.simplefilter('ignore')
return util.cache_from_source(path, debug_override)... |
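The deprecated wrapper above simply defers to the stdlib; a minimal usage sketch of `importlib.util.cache_from_source` (the source file need not exist):

```python
import importlib.util

# Map a source path to its would-be bytecode path; /tmp/example.py need not exist.
pyc = importlib.util.cache_from_source('/tmp/example.py')
print(pyc)  # .../tmp/__pycache__/example.<implementation-tag>.pyc
```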
754 | def _url_collapse_path(path):
# Query component should not be involved.
path, _, query = path.partition('?')
path = urllib.parse.unquote(path)
# Similar to os.path.split(os.path.normpath(path)) but specific to URL
# path semantics rather than local operating system semantics.
path_parts = ... |
Given a URL path, remove extra '/'s and '.' path elements and collapse
any '..' references and returns a collapsed path.
Implements something akin to RFC-2396 5.2 step 6 to parse relative paths.
The utility of this function is limited to is_cgi method and helps
preventing some security attacks.
... | 70 | 112 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _url_collapse_path(path):
# Query component should not be involved.
path, _, query = path.partition('?')
path = urllib.parse.unquote(path)
# Similar to os.path.spli... |
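A simplified, pure-Python sketch of the collapsing step; the real CPython helper additionally unquotes the path and preserves trailing-slash and head/tail semantics, which this sketch omits:

```python
def collapse_path(path):
    # Split off the query, then resolve '.' and '..' segments with a stack.
    path, _, query = path.partition('?')
    parts = []
    for segment in path.split('/'):
        if segment in ('', '.'):
            continue            # empty and '.' segments change nothing
        if segment == '..':
            if parts:
                parts.pop()     # '..' never climbs above the root
        else:
            parts.append(segment)
    collapsed = '/' + '/'.join(parts)
    return collapsed + ('?' + query if query else '')

print(collapse_path('/a/b/../c/./d?x=1'))  # /a/c/d?x=1
```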
755 | def to_coo(self, row_levels=(0,), column_levels=(1,), sort_labels: bool = False):
from pandas.core.arrays.sparse.scipy_sparse import sparse_series_to_coo
A, rows, columns = sparse_series_to_coo(
self._parent, row_levels, column_levels, sort_labels=sort_labels
)
retu... |
Create a scipy.sparse.coo_matrix from a Series with MultiIndex.
Use row_levels and column_levels to determine the row and column
coordinates respectively. row_levels and column_levels are the names
(labels) or numbers of the levels. {row_levels, column_levels} must be
a partiti... | 279 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def to_coo(self, row_levels=(0,), column_levels=(1,), sort_labels: bool = False):
from pandas.core.arrays.sparse.scipy_sparse import sparse_series_to_coo
A, rows, c... |
756 | def _get_memory_heuristic_values(self) -> Dict[str, Union[str, float, bool]]:
return {
'ignore_in_session_memories': self.opt.get(
'ignore_in_session_memories_mkm', False
),
'memory_overlap_threshold': self.opt.get('memory_overlap_threshold', 0.0),
... |
Extract heuristics from self.opt.
| 4 | 28 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _get_memory_heuristic_values(self) -> Dict[str, Union[str, float, bool]]:
return {
'ignore_in_session_memories': self.opt.get(
'ignore_in_ses... |
757 | def plugin_list_buttons(context, model):
return _get_registered_content(model, 'list_buttons', context)
|
Render all list buttons registered by plugins
| 7 | 7 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def plugin_list_buttons(context, model):
return _get_registered_content(model, 'list_buttons', context)
```
###Assistant :
Render all list buttons registered by pl... |
758 | def drop_path(self, inputs):
# if prob is 0 or eval mode, return original input
if self.drop_prob == 0. or not self.training:
return inputs
keep_prob = 1 - self.drop_prob
keep_prob = paddle.to_tensor(keep_prob, dtype='float32')
shape = (inputs.shape[0], ) + (... | drop path op
Args:
input: tensor with arbitrary shape
drop_prob: float number of drop path probability, default: 0.0
training: bool, if current mode is training, default: False
Returns:
output: output tensor after drop path
| 34 | 73 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def drop_path(self, inputs):
# if prob is 0 or eval mode, return original input
if self.drop_prob == 0. or not self.training:
return inputs
keep_... |
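The same stochastic-depth logic can be sketched without a tensor library: one Bernoulli draw per sample, survivors rescaled by `1/keep_prob`. The seeded `random.Random` is only there to keep the sketch reproducible:

```python
import random

def drop_path(batch, drop_prob=0.0, training=False, rng=None):
    # Stochastic depth over a list of samples (lists of floats); sketch only.
    if drop_prob == 0.0 or not training:
        return batch
    rng = rng or random.Random(0)
    keep_prob = 1.0 - drop_prob
    out = []
    for sample in batch:
        keep = rng.random() < keep_prob  # one coin flip per whole sample
        out.append([v / keep_prob if keep else 0.0 for v in sample])
    return out

batch = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]
out = drop_path(batch, drop_prob=0.5, training=True)
# each surviving row is scaled to all 2.0; each dropped row is all 0.0
```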
759 | def set_style(style=None, rc=None):
style_object = axes_style(style, rc)
mpl.rcParams.update(style_object)
|
Set the parameters that control the general style of the plots.
The style parameters control properties like the color of the background and
whether a grid is enabled by default. This is accomplished using the
matplotlib rcParams system.
The options are illustrated in the
:doc:`aesthetics tut... | 111 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def set_style(style=None, rc=None):
style_object = axes_style(style, rc)
mpl.rcParams.update(style_object)
```
###Assistant :
Set the parameters that control ... |
760 | def _resize(self, image, shorter=800, longer=1333, size_divisor=32, resample=Image.BICUBIC):
if not isinstance(image, Image.Image):
image = self.to_pil_image(image)
w, h = image.size
min_size = shorter
max_size = longer
scale = min_size / min(w, h)
i... |
Resizes the shorter edge of `image` to `shorter` and limits the longer edge to under `longer`, while preserving
the aspect ratio. Also makes sure that both the height and width can be divided by `size_divisor`.
Based on original implementation:
https://github.com/dandelin/ViLT/blob/3db... | 117 | 97 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _resize(self, image, shorter=800, longer=1333, size_divisor=32, resample=Image.BICUBIC):
if not isinstance(image, Image.Image):
image = self.to_pil_image(ima... |
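The size arithmetic the docstring describes can be isolated as a small pure function. Snapping both dimensions *down* to a multiple of `size_divisor` is an assumption about the rounding mode; implementations vary on this:

```python
def target_size(w, h, shorter=800, longer=1333, size_divisor=32):
    # Scale so the shorter edge hits `shorter`, unless that would push the
    # longer edge past `longer`, in which case cap by the longer edge.
    scale = shorter / min(w, h)
    if max(w, h) * scale > longer:
        scale = longer / max(w, h)
    new_w, new_h = int(w * scale), int(h * scale)
    # Snap both dimensions to the divisor grid (floor, by assumption).
    new_w = new_w // size_divisor * size_divisor
    new_h = new_h // size_divisor * size_divisor
    return new_w, new_h

print(target_size(640, 480))  # (1056, 800)
```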
761 | def extra_action_out_fn(self) -> Dict[str, TensorType]:
extra_action_fetches = super().extra_action_out_fn()
extra_action_fetches.update(self._policy_extra_action_fetches)
return extra_action_fetches
| Extra values to fetch and return from compute_actions().
Returns:
Dict[str, TensorType]: An extra fetch-dict to be passed to and
returned from the compute_actions() call.
| 24 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def extra_action_out_fn(self) -> Dict[str, TensorType]:
extra_action_fetches = super().extra_action_out_fn()
extra_action_fetches.update(self._policy_extra_action_fe... |
762 | def get_all_exported_dataset_infos(cls) -> dict:
dset_infos_file_path = os.path.join(cls.get_imported_module_dir(), config.DATASETDICT_INFOS_FILENAME)
if os.path.exists(dset_infos_file_path):
return DatasetInfosDict.from_directory(cls.get_imported_module_dir())
return {}
| Empty dict if doesn't exist
Example:
```py
>>> from datasets import load_dataset_builder
>>> ds_builder = load_dataset_builder('rotten_tomatoes')
>>> ds_builder.get_all_exported_dataset_infos()
{'default': DatasetInfo(description="Movie Review Dataset.\nThis is a datase... | 140 | 14 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_all_exported_dataset_infos(cls) -> dict:
dset_infos_file_path = os.path.join(cls.get_imported_module_dir(), config.DATASETDICT_INFOS_FILENAME)
if os.path.exi... |
763 | def to_sanitized_dict(self) -> Dict[str, Any]:
d = self.to_dict()
d = {
** d, ** {
"train_batch_size": self.train_batch_size,
"eval_batch_size": self.eval_batch_size
}
}
valid_types = [bool, int, float, str]
valid_... |
Sanitized serialization to use with TensorBoard’s hparams
| 7 | 44 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def to_sanitized_dict(self) -> Dict[str, Any]:
d = self.to_dict()
d = {
** d, ** {
"train_batch_size": self.train_batch_size,
... |
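The sanitizing step can be sketched in isolation: anything outside the types TensorBoard's hparams view can display is stringified:

```python
def sanitize(d):
    # Cast values TensorBoard cannot display (tuples, None, ...) to str.
    valid_types = (bool, int, float, str)
    return {k: (v if isinstance(v, valid_types) else str(v)) for k, v in d.items()}

print(sanitize({"lr": 0.1, "betas": (0.9, 0.999), "name": "adam"}))
# {'lr': 0.1, 'betas': '(0.9, 0.999)', 'name': 'adam'}
```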
764 | def readinto(self, b):
self._check_can_read()
return self._buffer.readinto(b)
| Read bytes into b.
Returns the number of bytes read (0 for EOF).
| 13 | 6 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def readinto(self, b):
self._check_can_read()
return self._buffer.readinto(b)
```
###Assistant : Read bytes into b.
Returns the number of bytes... |
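The buffered `readinto` contract (fill a pre-allocated buffer, return the byte count, 0 at EOF) is easy to exercise with an in-memory stream:

```python
import io

buf = io.BytesIO(b"hello world")
target = bytearray(5)          # pre-allocated destination buffer
n = buf.readinto(target)
print(n, bytes(target))        # 5 b'hello'
```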
765 | def _output_type_handler(cursor, name, defaultType, length, precision, scale):
if defaultType == Database.NUMBER:
if scale == -127:
if precision == 0:
# NUMBER column: decimal-precision floating point.
# This will normally be an intege... |
Called for each db column fetched from cursors. Return numbers as the
appropriate Python type.
| 15 | 126 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _output_type_handler(cursor, name, defaultType, length, precision, scale):
if defaultType == Database.NUMBER:
if scale == -127:
if precision ... |
766 | def decode_nested_example(schema, obj):
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, dict):
return {
k: decode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
}
elif isinstance(schema, (list,... | Decode a nested example.
This is used since some features (in particular Audio and Image) have some logic during decoding.
To avoid iterating over possibly long lists, it first checks (recursively) if the first element that is not None or empty (if it is a sequence) has to be decoded.
If the first element ... | 73 | 121 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def decode_nested_example(schema, obj):
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, dict):
return {
k: decode_nested_exa... |
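A stripped-down sketch of the recursion over dicts and lists; the `decoders` mapping is a stand-in for feature types (such as Audio and Image) that carry decode logic:

```python
def decode_nested(schema, obj, decoders):
    # Recursively apply per-type decoders through nested dicts and lists.
    if isinstance(schema, dict):
        return {k: decode_nested(schema[k], obj[k], decoders) for k in schema}
    if isinstance(schema, list):
        sub = schema[0]  # a one-element list means "sequence of sub"
        return [decode_nested(sub, item, decoders) for item in obj]
    decode = decoders.get(schema)
    return decode(obj) if decode else obj

schema = {"label": "int", "tokens": ["str"]}
obj = {"label": "3", "tokens": ["a", "b"]}
decoders = {"int": int, "str": str.upper}  # hypothetical decode functions
print(decode_nested(schema, obj, decoders))  # {'label': 3, 'tokens': ['A', 'B']}
```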
767 | def running_under_virtualenv() -> bool:
return _running_under_venv() or _running_under_legacy_virtualenv()
| True if we're running inside a virtual environment, False otherwise. | 10 | 8 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def running_under_virtualenv() -> bool:
return _running_under_venv() or _running_under_legacy_virtualenv()
```
###Assistant : True if we're running inside a virtual en... |
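Both detection paths the helper combines can be sketched with stdlib attributes only: `sys.real_prefix` is the legacy-virtualenv marker, while PEP 405 venvs make `sys.prefix` differ from `sys.base_prefix`:

```python
import sys

def in_virtualenv():
    # PEP 405 venvs: sys.prefix differs from sys.base_prefix.
    # Legacy virtualenv (<20) instead sets sys.real_prefix.
    return (sys.prefix != getattr(sys, "base_prefix", sys.prefix)
            or hasattr(sys, "real_prefix"))

print(in_virtualenv())
```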
768 | def forward_train(self, x, data_samples, proposal_cfg=None, **kwargs):
img_metas = [data_sample['meta'] for data_sample in data_samples]
outs = self(x)
gt_bboxes = [
data_sample.gt_instances.bboxes for data_sample in data_samples
]
if hasattr(data_samples[0]... |
Args:
x (list[Tensor]): Features from FPN.
data_samples (list[:obj:`GeneralData`]): Each item contains
the meta information of each image and corresponding
annotations.
proposal_cfg (mmcv.Config): Test / postprocessing configuration,
... | 147 | 97 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def forward_train(self, x, data_samples, proposal_cfg=None, **kwargs):
img_metas = [data_sample['meta'] for data_sample in data_samples]
outs = self(x)
gt_bb... |
769 | def cf(self):
return {
cf.name: cf.deserialize(self.custom_field_data.get(cf.name))
for cf in self.custom_fields
}
|
Return a dictionary mapping each custom field for this instance to its deserialized value.
```python
>>> tenant = Tenant.objects.first()
>>> tenant.cf
{'primary_site': <Site: DM-NYC>, 'cust_id': 'DMI01', 'is_active': True}
```
| 29 | 11 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def cf(self):
return {
cf.name: cf.deserialize(self.custom_field_data.get(cf.name))
for cf in self.custom_fields
}
```
###Assist... |
770 | def test_arf_layout_negative_a_check(self):
G = self.Gs
pytest.raises(ValueError, nx.arf_layout, G=G, a=-1)
|
Checks that invalid input parameters raise errors; for example, `a` should be larger than 1.
| 14 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_arf_layout_negative_a_check(self):
G = self.Gs
pytest.raises(ValueError, nx.arf_layout, G=G, a=-1)
```
###Assistant :
Checks input pa... |
771 | async def test_connected_device_registered(hass):
registry = mock_registry(hass)
dispatches = []
| Test dispatch on connected device being registered. | 7 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def test_connected_device_registered(hass):
registry = mock_registry(hass)
dispatches = []
```
###Assistant : Test dispatch on connected device being registe... |
772 | def test_ddp_sharded_strategy_checkpoint_multi_gpu(tmpdir):
model = BoringModel()
trainer = Trainer(gpus=2, strategy="ddp_sharded_spawn", fast_dev_run=True)
trainer.fit(model)
checkpoint_path = os.path.join(tmpdir, "model.pt")
trainer.save_checkpoint(checkpoint_path)
saved_model = BoringM... | Test to ensure that checkpoint is saved correctly when using multiple GPUs. | 12 | 39 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_ddp_sharded_strategy_checkpoint_multi_gpu(tmpdir):
model = BoringModel()
trainer = Trainer(gpus=2, strategy="ddp_sharded_spawn", fast_dev_run=True)
trainer.fit(mod... |
773 | def autoscale(self) -> None:
for deployment_name, (
deployment_info,
route_prefix,
) in self.list_deployments().items():
deployment_config = deployment_info.deployment_config
autoscaling_policy = deployment_info.autoscaling_policy
if ... | Updates autoscaling deployments with calculated num_replicas. | 6 | 85 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def autoscale(self) -> None:
for deployment_name, (
deployment_info,
route_prefix,
) in self.list_deployments().items():
deployme... |
774 | def match_state_dict(model_state_dict, weight_state_dict):
model_keys = sorted(model_state_dict.keys())
weight_keys = sorted(weight_state_dict.keys())
|
Match between the model state dict and pretrained weight state dict.
Return the matched state dict.
The method supposes that all the names in pretrained weight state dict are
subclass of the names in models`, if the prefix 'backbone.' in pretrained weight
keys is stripped. And we could get the can... | 99 | 9 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def match_state_dict(model_state_dict, weight_state_dict):
model_keys = sorted(model_state_dict.keys())
weight_keys = sorted(weight_state_dict.keys())
```
###Assis... |
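A minimal sketch of the suffix-matching idea the docstring describes; the `backbone.` prefix handling and the key names are illustrative only:

```python
def match_keys(model_keys, weight_keys):
    # Pair each pretrained key with a model key that ends with it,
    # after stripping an optional 'backbone.' prefix (simplified sketch).
    matched = {}
    for wk in weight_keys:
        stripped = wk[len("backbone."):] if wk.startswith("backbone.") else wk
        for mk in model_keys:
            if mk.endswith(stripped):
                matched[mk] = wk
                break
    return matched

model_keys = ["backbone.stem.conv.weight", "head.fc.weight"]   # hypothetical
weight_keys = ["stem.conv.weight"]                             # hypothetical
print(match_keys(model_keys, weight_keys))
# {'backbone.stem.conv.weight': 'stem.conv.weight'}
```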
775 | def local_node_connectivity(G, source, target, cutoff=None):
if target == source:
raise nx.NetworkXError("source and target have to be different nodes.")
# Maximum possible node independent paths
if G.is_directed():
possible = min(G.out_degree(source), G.in_degree(target))
else:
... | Compute node connectivity between source and target.
Pairwise or local node connectivity between two distinct and nonadjacent
nodes is the minimum number of nodes that must be removed (minimum
separating cutset) to disconnect them. By Menger's theorem, this is equal
to the number of node independent pa... | 314 | 74 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def local_node_connectivity(G, source, target, cutoff=None):
if target == source:
raise nx.NetworkXError("source and target have to be different nodes.")
# Maximum poss... |
776 | def _copartition(self, axis, other, how, sort, force_repartition=False):
if isinstance(other, type(self)):
other = [other]
self_index = self.axes[axis]
others_index = [o.axes[axis] for o in other]
joined_index, make_reindexer = self._join_index_objects(
... |
Copartition two Modin DataFrames.
Perform aligning of partitions, index and partition blocks.
Parameters
----------
axis : {0, 1}
Axis to copartition along (0 - rows, 1 - columns).
other : PandasDataframe
Other Modin DataFrame(s) to copartition ... | 161 | 304 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _copartition(self, axis, other, how, sort, force_repartition=False):
if isinstance(other, type(self)):
other = [other]
self_index = self.axes[axis]
... |
777 | def size(self) -> int:
# override Index.size to avoid materializing _values
return len(self)
# --------------------------------------------------------------------
# Levels Methods
|
Return the number of elements in the underlying data.
| 9 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def size(self) -> int:
# override Index.size to avoid materializing _values
return len(self)
# -----------------------------------------------------------------... |
778 | def groupby_agg(self, by, axis, agg, groupby_args, **kwargs):
# Currently we only expect 'by' to be a projection of the same frame.
# If 'by' holds a list of columns/series, then we create such projection
# to re-use code.
if not isinstance(by, DFAlgQueryCompiler):
i... |
Groupby with aggregation operation.
Parameters
----------
by : DFAlgQueryCompiler or list-like of str
Grouping keys.
axis : {0, 1}
Only rows groupby is supported, so should be 0.
agg : str or dict
Aggregates to compute.
groupb... | 55 | 278 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def groupby_agg(self, by, axis, agg, groupby_args, **kwargs):
# Currently we only expect 'by' to be a projection of the same frame.
# If 'by' holds a list of columns... |
779 | def delete_subscription_from_snuba(query_subscription_id, **kwargs):
try:
subscription = QuerySubscription.objects.get(id=query_subscription_id)
except QuerySubscription.DoesNotExist:
metrics.incr("snuba.subscriptions.delete.subscription_does_not_exist")
return
if subscription.... |
Task to delete a corresponding subscription in Snuba from a `QuerySubscription` in
Sentry.
If the local subscription is marked for deletion (as opposed to disabled),
then we delete the local subscription once we've successfully removed from Snuba.
| 37 | 48 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def delete_subscription_from_snuba(query_subscription_id, **kwargs):
try:
subscription = QuerySubscription.objects.get(id=query_subscription_id)
except QuerySubscription... |
780 | def test_changing_timer_with_messages_shown(qtbot, view, config_stub):
config_stub.val.messages.timeout = 900000 # 15s
view.show_message(message.MessageInfo(usertypes.MessageLevel.info, 'test'))
with qtbot.wait_signal(view._clear_timer.timeout):
config_stub.val.messages.timeout = 100
@pytest... | When we change messages.timeout, the timer should be restarted. | 9 | 26 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_changing_timer_with_messages_shown(qtbot, view, config_stub):
config_stub.val.messages.timeout = 900000 # 15s
view.show_message(message.MessageInfo(usertypes.MessageLe... |
781 | def get_events(start, end, filters=None):
from frappe.desk.calendar import get_event_conditions
events = []
event_color = {
"Pending": "#fff4f0",
"Under Review": "#d3e8fc",
"Cleared": "#eaf5ed",
"Rejected": "#fce7e7",
}
conditions = get_event_conditions("Interview", filters)
interviews = frappe.db.s... | Returns events for Gantt / Calendar view rendering.
:param start: Start date-time.
:param end: End date-time.
:param filters: Filters (JSON).
SELECT DISTINCT
`tabInterview`.name, `tabInterview`.job_applicant, `tabInterview`.interview_round,
`tabInterview`.scheduled_on, `tabInterview`.status, `tabInterv... | 46 | 96 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get_events(start, end, filters=None):
from frappe.desk.calendar import get_event_conditions
events = []
event_color = {
"Pending": "#fff4f0",
"Under Review": "#d3e8fc",
"Cle... |
782 | def testFuncTrainableCheckpointConfigValidation(self):
with self.assertRaises(ValueError):
Experiment(
name="foo",
run="f1", # Will point to a wrapped function trainable
checkpoint_config=CheckpointConfig(checkpoint_at_end=True),
... | Raise an error when trying to specify checkpoint_at_end/checkpoint_frequency
with a function trainable. | 12 | 33 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def testFuncTrainableCheckpointConfigValidation(self):
with self.assertRaises(ValueError):
Experiment(
name="foo",
run="f1", # W... |
783 | def create_perspective_transform(src, dst, round=False, splat_args=False):
try:
transform_matrix = create_perspective_transform_matrix(src, dst)
error = None
except np.linalg.LinAlgError as e:
transform_matrix = np.identity(3, dtype=np.float)
error = "invalid input quads (%s... | Returns a function which will transform points in quadrilateral
``src`` to the corresponding points on quadrilateral ``dst``::
>>> transform = create_perspective_transform(
... [(0, 0), (10, 0), (10, 10), (0, 10)],
... [(50, 50), (100, 50), (100, 100), (50, 100)],
... | 194 | 102 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def create_perspective_transform(src, dst, round=False, splat_args=False):
try:
transform_matrix = create_perspective_transform_matrix(src, dst)
error = None
exc... |
784 | def get(self, url, cache=True, **kwargs):
if not url.isValid():
urlutils.invalid_url_error(url, "start download")
return None
req = QNetworkRequest(url)
user_agent = websettings.user_agent(url)
req.setHeader(QNetworkRequest.KnownHeaders.UserAgentHeader, ... | Start a download with a link URL.
Args:
url: The URL to get, as QUrl
cache: If set to False, don't cache the response.
**kwargs: passed to get_request().
Return:
The created DownloadItem.
| 32 | 29 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def get(self, url, cache=True, **kwargs):
if not url.isValid():
urlutils.invalid_url_error(url, "start download")
return None
req = QNetwork... |
785 | def test_send_join_partial_state(self):
joining_user = "@misspiggy:" + self.OTHER_SERVER_NAME
join_result = self._make_join(joining_user)
join_event_dict = join_result["event"]
add_hashes_and_signatures(
KNOWN_ROOM_VERSIONS[DEFAULT_ROOM_VERSION],
join_ev... | When MSC3706 support is enabled, /send_join should return partial state | 10 | 106 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_send_join_partial_state(self):
joining_user = "@misspiggy:" + self.OTHER_SERVER_NAME
join_result = self._make_join(joining_user)
join_event_dict = ... |
786 | async def drain(self):
if self._reader is not None:
exc = self._reader.exception()
if exc is not None:
raise exc
if self._transport.is_closing():
# Wait for protocol.connection_lost() call
# Raise connection closing error if any,
... | Flush the write buffer.
The intended use is to write
w.write(data)
await w.drain()
| 13 | 87 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def drain(self):
if self._reader is not None:
exc = self._reader.exception()
if exc is not None:
raise exc
if self._tra... |
787 | def on_chord_header_start(self, chord, **header) -> dict:
if not isinstance(chord.tasks, group):
chord.tasks = group(chord.tasks)
return self.on_group_start(chord.tasks, **header)
| Method that is called on chord header stamping start.
Arguments:
chord (chord): chord that is stamped.
headers (Dict): Partial headers that could be merged with existing headers.
Returns:
Dict: headers to update.
| 32 | 16 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def on_chord_header_start(self, chord, **header) -> dict:
if not isinstance(chord.tasks, group):
chord.tasks = group(chord.tasks)
return self.on_group_st... |
788 | def load_historic_predictions_from_disk(self):
exists = self.historic_predictions_path.is_file()
if exists:
try:
with open(self.historic_predictions_path, "rb") as fp:
self.historic_predictions = cloudpickle.load(fp)
logger.info(
... |
Locate and load a previously saved historic predictions.
:return: bool - whether or not the drawer was located
| 18 | 105 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def load_historic_predictions_from_disk(self):
exists = self.historic_predictions_path.is_file()
if exists:
try:
with open(self.historic_... |
789 | def test_background(self):
css =
stylesheet = Stylesheet()
stylesheet.parse(css)
styles = stylesheet.rules[0].styles
assert styles.text_background == Color("red", type=ColorType.STANDARD, number=1)
| #some-widget {
text: on red;
}
| 6 | 17 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def test_background(self):
css =
stylesheet = Stylesheet()
stylesheet.parse(css)
styles = stylesheet.rules[0].styles
assert styles.text_background =... |
790 | def update(self) -> None:
try:
response = requests.get(self._url, timeout=5)
except (requests.exceptions.RequestException, ValueError):
_LOGGER.warning(
"Could not update status for DTE Energy Bridge (%s)", self._attr_name
)
return... | Get the energy usage data from the DTE energy bridge. | 10 | 146 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def update(self) -> None:
try:
response = requests.get(self._url, timeout=5)
except (requests.exceptions.RequestException, ValueError):
_LOGG... |
791 | def ndependencies(dependencies, dependents):
num_needed = {}
result = {}
for k, v in dependencies.items():
num_needed[k] = len(v)
if not v:
result[k] = 1
num_dependencies = num_needed.copy()
current = []
current_pop = current.pop
current_append = current.app... | Number of total data elements on which this key depends
For each key we return the number of tasks that must be run for us to run
this task.
Examples
--------
>>> inc = lambda x: x + 1
>>> dsk = {'a': 1, 'b': (inc, 'a'), 'c': (inc, 'b')}
>>> dependencies, dependents = get_deps(dsk)
>>>... | 77 | 78 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def ndependencies(dependencies, dependents):
num_needed = {}
result = {}
for k, v in dependencies.items():
num_needed[k] = len(v)
if not v:
resul... |
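The quantity the docstring defines (each task counts itself plus everything beneath it) can be sketched recursively; note that this simple version, like the counting rule it mirrors, double-counts dependencies shared through diamond-shaped graphs:

```python
def ndependencies(dependencies):
    # For each key, count the tasks (including itself) needed before it runs.
    result = {}

    def count(key):
        if key not in result:
            result[key] = 1 + sum(count(dep) for dep in dependencies[key])
        return result[key]

    for key in dependencies:
        count(key)
    return result

# a <- b <- c  (c depends on b, b depends on a)
deps = {"a": set(), "b": {"a"}, "c": {"b"}}
print(ndependencies(deps))  # {'a': 1, 'b': 2, 'c': 3}
```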
792 | def validate_attr(self, append) -> None:
if append:
existing_fields = getattr(self.attrs, self.kind_attr, None)
if existing_fields is not None and existing_fields != list(self.values):
raise ValueError("appended items do not match existing items in table!")
... | validate that we have the same order as the existing & same dtype | 13 | 59 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def validate_attr(self, append) -> None:
if append:
existing_fields = getattr(self.attrs, self.kind_attr, None)
if existing_fields is not None and ex... |
793 | def __getitem__(self, parameters):
item = typing._type_check(parameters,
f'{self._name} accepts only single type')
return typing._GenericAlias(self, (item,))
Final = _FinalForm('Final',
doc= | A special typing construct to indicate that a name
cannot be re-assigned or overridden in a subclass.
For example:
MAX_SIZE: Final = 9000
MAX_SIZE += 1 # Error reported by type checker | 32 | 18 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def __getitem__(self, parameters):
item = typing._type_check(parameters,
f'{self._name} accepts only single type')
return typing... |
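The docstring above shows how the `Final` form is meant to be used. A short usage sketch — note that `Final` is enforced by static type checkers such as mypy, not at runtime, so the commented-out reassignment would only be flagged during type checking:

```python
from typing import Final  # the backport above mirrors this standard form

MAX_SIZE: Final = 9000
# MAX_SIZE += 1  # a type checker reports an error here; Python itself would not
```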
794 | def type_spec_from_value(value):
if is_extension_type(value):
return value._type_spec # pylint: disable=protected-access
# Get a TensorSpec for array-like data without
# converting the data to a Tensor
if hasattr(value, "shape") and hasattr(value, "dtype"):
return tf.TensorSpec(val... | Grab type_spec without converting array-likes to tensors. | 7 | 36 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def type_spec_from_value(value):
if is_extension_type(value):
return value._type_spec # pylint: disable=protected-access
# Get a TensorSpec for array-like data without
... |
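The snippet above relies on duck typing: anything exposing `.shape` and `.dtype` can be described without materializing it as a tensor. A self-contained sketch of that idea, where `SimpleSpec` and `FakeArray` are hypothetical stand-ins for `tf.TensorSpec` and a real array-like:

```python
from collections import namedtuple

SimpleSpec = namedtuple("SimpleSpec", ["shape", "dtype"])


class FakeArray:
    """Stand-in for array-like data: exposes shape/dtype, no tensor machinery."""
    shape = (2, 3)
    dtype = "float32"


def spec_from_value(value):
    # Mirror the duck-typing check: describe the data instead of converting it.
    if hasattr(value, "shape") and hasattr(value, "dtype"):
        return SimpleSpec(tuple(value.shape), str(value.dtype))
    return None
```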
795 | def make_window():
sg.theme(settings.get('-theme-', 'DarkBlue2')) # set the theme
layout = [[sg.Text('Settings Window')],
[sg.Input(settings.get('-input-', ''), k='-IN-')],
[sg.Listbox(sg.theme_list(), default_values=[settings['-theme-'],], size=(15, 10), k='-LISTBOX-')],
... |
Creates a new window. The default values for some elements are pulled directly from the
"User Settings" without the use of temp variables.
Some get_entry calls don't have a default value, such as theme, because there was an initial call
that would have set the default value if the setting wasn't pres... | 145 | 49 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def make_window():
sg.theme(settings.get('-theme-', 'DarkBlue2')) # set the theme
layout = [[sg.Text('Settings Window')],
[sg.Input(settings.get('-input-', ''),... |
796 | async def _collect(self) -> CommonUsageMetrics:
dau_count = await self._store.count_daily_users()
return CommonUsageMetrics(
daily_active_users=dau_count,
)
| Collect the common metrics and either create the CommonUsageMetrics object to
use if it doesn't exist yet, or update it.
| 20 | 13 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
async def _collect(self) -> CommonUsageMetrics:
dau_count = await self._store.count_daily_users()
return CommonUsageMetrics(
daily_active_users=dau_coun... |
797 | def getPreprocessorSymbols(cls):
if cls.preprocessor_symbols is None:
cls.preprocessor_symbols = OrderedDict()
for plugin in getActivePlugins():
value = plugin.getPreprocessorSymbols()
if value is not None:
assert type(value... | Let plugins provide C defines to be used in compilation.
Notes:
The plugins can each contribute, but are hopefully using
a namespace for their defines.
Returns:
OrderedDict(), where None value indicates no define value,
i.e. "-Dkey=value" vs. "-Dkey"
... | 38 | 65 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def getPreprocessorSymbols(cls):
if cls.preprocessor_symbols is None:
cls.preprocessor_symbols = OrderedDict()
for plugin in getActivePlugins():
... |
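The docstring above notes the convention that a `None` value means a bare define (`-Dkey`) while any other value becomes `-Dkey=value`. A minimal sketch of turning such a mapping into compiler flags — `defines_to_flags` is an illustrative helper, not part of the plugin API:

```python
from collections import OrderedDict


def defines_to_flags(symbols):
    """Convert a symbol mapping into -D compiler flags.

    None value -> "-Dkey" (defined without a value),
    anything else -> "-Dkey=value".
    """
    flags = []
    for key, value in symbols.items():
        flags.append("-D%s" % key if value is None else "-D%s=%s" % (key, value))
    return flags
```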
798 | def _check_xy(self, renderer=None):
if renderer is None:
renderer = self.figure._get_renderer()
b = self.get_annotation_clip()
if b or (b is None and self.xycoords == "data"):
# check if self.xy is inside the axes.
xy_pixel = self._get_position_xy(ren... | Check whether the annotation at *xy_pixel* should be drawn. | 9 | 38 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _check_xy(self, renderer=None):
if renderer is None:
renderer = self.figure._get_renderer()
b = self.get_annotation_clip()
if b or (b is None... |
799 | def _split_ssh_args(argstring):
# In Python3, shlex.split doesn't work on a byte string.
return [to_text(x.strip()) for x in shlex.split(argstring) if x.strip()]
|
Takes a string like '-o Foo=1 -o Bar="foo bar"' and returns a
list ['-o', 'Foo=1', '-o', 'Bar=foo bar'] that can be added to
the argument list. The list will not contain any empty elements.
| 34 | 20 | Python |
###User : Below is a Python method which does a task. Create a documentation for the below code :
```Python
def _split_ssh_args(argstring):
# In Python3, shlex.split doesn't work on a byte string.
return [to_text(x.strip()) for x in shlex.split(argstring) if x.strip()]
... |
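The docstring above gives a concrete input/output pair for the splitter. A standalone sketch of the same behaviour using only the standard library (`to_text` from the original is an Ansible helper, so it is dropped here; on Python 3, `shlex.split` already returns text):

```python
import shlex


def split_ssh_args(argstring):
    # shlex.split honours quoting, so 'Bar="foo bar"' stays one token;
    # stripping and filtering drops any empty elements, as the docs promise.
    return [x.strip() for x in shlex.split(argstring) if x.strip()]
```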