Dataset schema (one row per function):

| body (string, 26–98.2k chars) | body_hash (int64) | docstring (string, 1–16.8k chars) | path (string, 5–230) | name (string, 1–96) | repository_name (string, 7–89) | lang (1 class: python) | body_without_docstring (string, 20–98.2k chars) |
|---|---|---|---|---|---|---|---|
name: kill_workers | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: 180,778,641,335,337,180

    def kill_workers(*args):
        """Syntax: [storm kill_workers]

        Kill the workers running on this supervisor. This command should be run
        on a supervisor node. If the cluster is running in secure mode, the user
        needs admin rights on the node to successfully kill all workers.
        """
        exe...
name: admin | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: 3,164,070,418,118,184,000

    def admin(*args):
        """Syntax: [storm admin cmd [options]]

        The storm admin command provides access to several operations that can help
        an administrator debug or fix a cluster.

        remove_corrupt_topologies - This command should be run on a nimbus node as
        the same user nimbus runs as. It will go directly to zookeeper + blobstore
        and find topol...
        """
        ...
name: shell | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: -4,195,633,902,917,097,500

    def shell(resourcesdir, command, *args):
        """Syntax: [storm shell resourcesdir command args]

        Archives resources into a jar, uploads the jar to Nimbus, and executes the
        following arguments on "local". Useful for non-JVM languages.
        eg: `storm shell resources/ python topology.py arg1 arg2`
        """
        tmpjarpath = (('s...
name: repl | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: -630,971,226,495,617,300

    def repl():
        """Syntax: [storm repl]

        Opens up a Clojure REPL with the storm jars and configuration
        on the classpath. Useful for debugging.
        """
        cppaths = [CLUSTER_CONF_DIR]
        exec_storm_class('clojure.main', jvmtype='-client', extrajars=cppaths)
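The `exec_storm_class` helper itself is not included in these rows, so the following is only a hypothetical sketch of what such a wrapper typically does: assemble a `java` command line from a main class, a JVM type flag, and extra classpath entries. The function name and defaults below are illustrative, not Storm's actual code.

```python
import os

def build_java_command(klass, jvmtype="-client", extrajars=(), jvmopts=()):
    # Join classpath entries with the platform path separator (':' on POSIX).
    classpath = os.pathsep.join(extrajars) if extrajars else "."
    # argv for subprocess-style execution: java <jvmtype> -cp <cp> <opts> <class>
    return ["java", jvmtype, "-cp", classpath, *jvmopts, klass]

cmd = build_java_command("clojure.main", extrajars=["/etc/storm/conf"])
```

A real launcher would additionally resolve the Storm lib directory into the classpath and `exec` the resulting argv.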
name: nimbus | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: -5,802,446,814,074,783,000

    def nimbus(klass='org.apache.storm.daemon.nimbus.Nimbus'):
        """Syntax: [storm nimbus]

        Launches the nimbus daemon. This command should be run under
        supervision with a tool like daemontools or monit.

        See Setting up a Storm cluster for more information.
        (http://storm.apache.org/documentation/Setting-up-a-Storm-cluster)
        """
        ...
name: pacemaker | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: -8,574,779,595,315,885,000

    def pacemaker(klass='org.apache.storm.pacemaker.Pacemaker'):
        """Syntax: [storm pacemaker]

        Launches the Pacemaker daemon. This command should be run under
        supervision with a tool like daemontools or monit.

        See Setting up a Storm cluster for more information.
        (http://storm.apache.org/documentation/Setting-up-a-Storm-cluster)
        """
        ...
name: supervisor | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: -6,424,705,986,325,982,000

    def supervisor(klass='org.apache.storm.daemon.supervisor.Supervisor'):
        """Syntax: [storm supervisor]

        Launches the supervisor daemon. This command should be run
        under supervision with a tool like daemontools or monit.

        See Setting up a Storm cluster for more information.
        (http://storm.apache.org/documentation/Setting-up-a-Storm-cluster)
        """
        ...
name: ui | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: 4,587,990,193,543,811,600

    def ui():
        """Syntax: [storm ui]

        Launches the UI daemon. The UI provides a web interface for a Storm
        cluster and shows detailed stats about running topologies. This command
        should be run under supervision with a tool like daemontools or monit.

        See Setting up a Storm cluster for more information.
        (http://storm.apache.org/documentat...
        """
        ...
name: logviewer | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: -3,782,745,241,320,201,000

    def logviewer():
        """Syntax: [storm logviewer]

        Launches the log viewer daemon. It provides a web interface for viewing
        storm log files. This command should be run under supervision with a
        tool like daemontools or monit.

        See Setting up a Storm cluster for more information.
        (http://storm.apache.org/documentation/Setting-up-a-Storm-cluster)
        """
        ...
name: drpcclient | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: 4,256,926,995,565,434,400

    def drpcclient(*args):
        """Syntax: [storm drpc-client [options] ([function argument]*)|(argument*)]

        Provides a very simple way to send DRPC requests.
        If a -f argument is supplied to set the function name, all of the arguments
        are treated as arguments to that function. If no function is given, the
        arguments must be pairs of function and argument.
        The...
        """
        ...
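The argument convention described in that docstring can be sketched as a small pure-Python helper: with an explicit function name every argument belongs to that function; without one, arguments are consumed as (function, argument) pairs. The helper name is hypothetical, not part of the storm client.

```python
def pair_drpc_requests(args, function=None):
    # -f given: every argument is a request to that one function.
    if function is not None:
        return [(function, a) for a in args]
    # No function: arguments must alternate function, argument, function, argument...
    if len(args) % 2 != 0:
        raise ValueError("arguments must be pairs of function and argument")
    return [(args[i], args[i + 1]) for i in range(0, len(args), 2)]
```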
name: drpc | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: 709,845,887,769,927,800

    def drpc():
        """Syntax: [storm drpc]

        Launches a DRPC daemon. This command should be run under supervision
        with a tool like daemontools or monit.

        See Distributed RPC for more information.
        (http://storm.apache.org/documentation/Distributed-RPC)
        """
        cppaths = [CLUSTER_CONF_DIR]
        jvmopts ...
name: dev_zookeeper | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: 3,622,109,075,034,233,000

    def dev_zookeeper():
        """Syntax: [storm dev-zookeeper]

        Launches a fresh Zookeeper server using "dev.zookeeper.path" as its local
        dir and "storm.zookeeper.port" as its port. This is only intended for
        development/testing; the Zookeeper instance launched is not configured
        to be used in production.
        """
        ...
name: version | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: 6,929,928,848,461,484,000

    def version():
        """Syntax: [storm version]

        Prints the version number of this Storm release.
        """
        cppaths = [CLUSTER_CONF_DIR]
        exec_storm_class('org.apache.storm.utils.VersionInfo', jvmtype='-client', extrajars=[CLUSTER_CONF_DIR])
name: print_classpath | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: -1,740,646,617,593,392,600

    def print_classpath():
        """Syntax: [storm classpath]

        Prints the classpath used by the storm client when running commands.
        """
        print(get_classpath([], client=True))
name: print_server_classpath | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: -5,675,609,904,092,449,000

    def print_server_classpath():
        """Syntax: [storm server_classpath]

        Prints the classpath used by the storm servers when running commands.
        """
        print(get_classpath([], daemon=True))
name: monitor | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: -4,058,287,528,590,285,300

    def monitor(*args):
        """Syntax: [storm monitor topology-name [-i interval-secs] [-m component-id] [-s stream-id] [-w [emitted | transferred]]]

        Monitor given topology's throughput interactively.
        One can specify poll-interval, component-id, stream-id, watch-item [emitted | transferred].
        By default,
            poll-interval is 4 seconds;
            all compone...
        """
        ...
name: print_commands | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: 6,484,770,233,767,060,000

    def print_commands():
        """Print all client commands and link to documentation"""
        print('Commands:\n\t' + '\n\t'.join(sorted(COMMANDS.keys())))
        print('\nHelp: \n\thelp \n\thelp <command>')
        print('\nDocumentation for the storm client can be found at http://storm.apache.org/documentation/Command-line-client.html\n')
        print('Configs can be overridden using on...
name: print_usage | path: bin/storm.py | repo: JamiesZhang/Storm | lang: python | body_hash: 7,656,778,314,449,597,000

    def print_usage(command=None):
        """Print one help message or list of available commands"""
        if command is not None:
            if command in COMMANDS:
                print(COMMANDS[command].__doc__ or ('No documentation provided for <%s>' % command))
            else:
                print('<%s> is not a valid command' % command)
        else:
            print_commands()
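The dispatch pattern behind `print_usage` and `print_commands` is a registry dict mapping command names to functions whose `__doc__` doubles as help text. A minimal self-contained sketch (the `usage` helper and sample command below are illustrative, not the storm client's code):

```python
def version():
    """Syntax: [storm version]"""

# Registry: command name -> handler function; __doc__ is the help text.
COMMANDS = {"version": version}

def usage(command=None):
    if command is None:
        return "Commands:\n\t" + "\n\t".join(sorted(COMMANDS))
    if command in COMMANDS:
        return COMMANDS[command].__doc__ or "No documentation provided for <%s>" % command
    return "<%s> is not a valid command" % command
```

Because help comes straight from docstrings, adding a command to the registry automatically adds its documentation.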
name: __init__ | path: custom_components/bureau_of_meteorology/PyBoM/collector.py | repo: QziP22/HomeAssistantConfig | lang: python | body_hash: 4,990,285,546,407,237,000

    def __init__(self, latitude, longitude):
        """Init BOM data collector."""
        self.observations_data = None
        self.daily_forecasts_data = None
        self.geohash = self.geohash_encode(latitude, longitude)
        _LOGGER.debug(f'geohash: {self.geohash}')
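The collector's `geohash_encode` implementation is not shown in these rows; the sketch below is a standard geohash encoder (base-32 alphabet, alternating longitude/latitude bisection bits), included only to illustrate what such a method computes — it is not the integration's actual code.

```python
def geohash_encode(latitude, longitude, precision=6):
    base32 = "0123456789bcdefghjkmnpqrstuvwxyz"
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    bits, ch, even = 0, 0, True  # even bits refine longitude, odd bits latitude
    out = []
    while len(out) < precision:
        rng, val = (lon_range, longitude) if even else (lat_range, latitude)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bits += 1
        if bits == 5:  # five bits per base-32 character
            out.append(base32[ch])
            bits, ch = 0, 0
    return "".join(out)
```

The BOM API addresses locations by geohash, which is why the collector derives one from the configured latitude/longitude.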
name: get_location_name | path: custom_components/bureau_of_meteorology/PyBoM/collector.py | repo: QziP22/HomeAssistantConfig | lang: python | body_hash: -6,402,569,652,663,228,000

    async def get_location_name(self):
        """Get JSON location name from BOM API endpoint."""
        url = BASE_URL + LOCATIONS_URL.format(self.geohash)
        async with aiohttp.ClientSession() as session:
            response = await session.get(url)
            if response is not None and response.status == 200:
                locations_data = await response.json()
                self.loc...
name: get_observations_data | path: custom_components/bureau_of_meteorology/PyBoM/collector.py | repo: QziP22/HomeAssistantConfig | lang: python | body_hash: -483,857,492,838,944,900

    async def get_observations_data(self):
        """Get JSON observations data from BOM API endpoint."""
        url = OBSERVATIONS_URL.format(self.geohash)
        async with aiohttp.ClientSession() as session:
            response = await session.get(url)
            if response is not None and response.status == 200:
                self.observations_data = await response.json()
                (await...
name: format_observations_data | path: custom_components/bureau_of_meteorology/PyBoM/collector.py | repo: QziP22/HomeAssistantConfig | lang: python | body_hash: -3,654,381,207,656,843,300

    async def format_observations_data(self):
        """Flatten out wind and gust data."""
        flattened = {}
        wind = self.observations_data['data']['wind']
        flattened['wind_speed_kilometre'] = wind['speed_kilometre']
        flattened['wind_speed_knot'] = wind['speed_knot']
        flattened['wind_direction'] = wind['direction']
        if (self.observations_data['da...
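The flattening step above is pure dictionary reshaping and can be shown standalone. The sample payload below is invented; only the wind keys mirror the code in the record.

```python
def flatten_wind(observations_data):
    # Copy nested wind fields into prefixed top-level keys.
    flattened = {}
    wind = observations_data["data"]["wind"]
    flattened["wind_speed_kilometre"] = wind["speed_kilometre"]
    flattened["wind_speed_knot"] = wind["speed_knot"]
    flattened["wind_direction"] = wind["direction"]
    return flattened

sample = {"data": {"wind": {"speed_kilometre": 13, "speed_knot": 7, "direction": "NNE"}}}
flat = flatten_wind(sample)
```

Flat keys like `wind_speed_kilometre` are what downstream sensor entities read, which is why the nested API response is reshaped at all.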
name: get_daily_forecasts_data | path: custom_components/bureau_of_meteorology/PyBoM/collector.py | repo: QziP22/HomeAssistantConfig | lang: python | body_hash: 8,678,075,092,967,983,000

    async def get_daily_forecasts_data(self):
        """Get JSON daily forecasts data from BOM API endpoint."""
        url = BASE_URL + DAILY_FORECASTS_URL.format(self.geohash)
        async with aiohttp.ClientSession() as session:
            response = await session.get(url)
            if response is not None and response.status == 200:
                self.daily_forecasts_data = (await response....
name: format_forecast_data | path: custom_components/bureau_of_meteorology/PyBoM/collector.py | repo: QziP22/HomeAssistantConfig | lang: python | body_hash: 5,247,102,597,364,240,000

    async def format_forecast_data(self):
        """Flatten out forecast data."""
        flattened = {}
        days = len(self.daily_forecasts_data['data'])
        for day in range(0, days):
            icon = self.daily_forecasts_data['data'][day]['icon_descriptor']
            flattened['mdi_icon'] = MDI_ICON_MAP[icon]
            uv = self.daily_f...
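A hedged standalone sketch of the per-day forecast flattening: each day's `icon_descriptor` is mapped through an icon table. The `MDI_ICON_MAP` contents and the per-day key suffix below are invented placeholders, not the integration's real mapping (the record's loop body is truncated before its key scheme is visible).

```python
# Illustrative mapping only; the real MDI_ICON_MAP is defined elsewhere in the repo.
MDI_ICON_MAP = {"sunny": "mdi:weather-sunny", "rain": "mdi:weather-rainy"}

def flatten_forecasts(daily_forecasts_data):
    flattened = {}
    for day, entry in enumerate(daily_forecasts_data["data"]):
        # Suffix with the day index so each forecast day gets its own key.
        flattened[f"mdi_icon_{day}"] = MDI_ICON_MAP[entry["icon_descriptor"]]
    return flattened
```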
name: async_update | path: custom_components/bureau_of_meteorology/PyBoM/collector.py | repo: QziP22/HomeAssistantConfig | lang: python | body_hash: 581,590,499,631,114,500

    @Throttle(MIN_TIME_BETWEEN_UPDATES)
    async def async_update(self):
        """Refresh the data on the collector object."""
        await self.get_observations_data()
        await self.get_daily_forecasts_data()
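The `Throttle` decorator rate-limits refreshes so repeated polling does not hammer the API. A minimal synchronous sketch of that pattern (this is an illustration, not Home Assistant's actual `Throttle` implementation):

```python
import time

def throttle(min_interval):
    """Skip calls that arrive within min_interval seconds of the last run."""
    def wrap(func):
        last = [None]  # closure cell holding the last-run timestamp
        def inner(*args, **kwargs):
            now = time.monotonic()
            if last[0] is not None and now - last[0] < min_interval:
                return None  # inside the window: skip the refresh
            last[0] = now
            return func(*args, **kwargs)
        return inner
    return wrap
```

Returning `None` for skipped calls mirrors the decorator's behavior of silently dropping throttled updates rather than raising.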
name: basic_auth | path: server/actor_libs/auth/base.py | repo: Mateus-dang/ActorCloud | lang: python | body_hash: 351,060,006,020,399,360

    def basic_auth(username, password) -> bool:
        """HTTP basic authorization"""
        query_result = Application.query \
            .join(User, User.id == Application.userIntID) \
            .with_entities(Application, User) \
            .filter(Application.appStatus == 1, User.enable == 1,
                    Application.appID == username) \
            .first()
        if not query_result:
            raise AuthFailed(...
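Before a verifier like `basic_auth` ever sees the username/password pair, the client encodes them into an HTTP Basic `Authorization` header (RFC 7617): the credentials are joined with `:` and base64-encoded. A small sketch of that encoding side:

```python
import base64

def basic_auth_header(username, password):
    # RFC 7617: "Basic " + base64("user:password")
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

header = basic_auth_header("appID-123", "secret")
```

The server reverses this (base64-decode, split on the first `:`) to recover the `appID` that the query above matches against `Application.appID`.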
name: token_auth | path: server/actor_libs/auth/base.py | repo: Mateus-dang/ActorCloud | lang: python | body_hash: 7,266,384,097,824,410,000

    def token_auth(token) -> bool:
        """HTTP bearer token authorization"""
        jwt = JWT(current_app.config['SECRET_KEY'])
        try:
            data = jwt.loads(token)
        except Exception:
            raise AuthFailed(field='token')
        if data.get('consumer_id'):
            ...
        else:
            # Note: ('user_id' or 'role_id') evaluates to 'user_id', so this
            # test never actually checks for 'role_id'.
            if ('user_id' or 'role_id') not in data:
                raise AuthFa...
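The membership test in that last branch is a classic Python pitfall: `or` returns its first truthy operand, so `('user_id' or 'role_id')` is simply the string `'user_id'` and `'role_id'` is never checked. A short demonstration, with the intended all-keys check alongside:

```python
data = {"role_id": 3}  # token payload missing 'user_id' but carrying 'role_id'

# The original expression only ever checks 'user_id':
buggy_missing = ('user_id' or 'role_id') not in data

# What the code presumably meant: reject unless both keys are present.
intended_missing = not all(k in data for k in ('user_id', 'role_id'))
```

With a payload that has `'user_id'` but lacks `'role_id'`, the original check passes while the intended one rejects — exactly the silent gap this pattern creates.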
name: execute_gremlin | path: awswrangler/neptune/neptune.py | repo: minwook-shin/aws-data-wrangler | lang: python | body_hash: -4,956,243,994,760,486,000

    def execute_gremlin(client: NeptuneClient, query: str) -> pd.DataFrame:
        """Return results of a Gremlin traversal as pandas dataframe.

        Parameters
        ----------
        client : neptune.Client
            instance of the neptune client to use
        query : str
            The gremlin traversal to execute

        Returns
        -------
        Union[pandas.DataFrame, Iterator[pandas.DataFrame]]
            Results as Pandas DataFrame

        Examples
        --------...
        """
        ...
name: execute_opencypher | path: awswrangler/neptune/neptune.py | repo: minwook-shin/aws-data-wrangler | lang: python | body_hash: -6,708,623,386,071,468,000

    def execute_opencypher(client: NeptuneClient, query: str) -> pd.DataFrame:
        """Return results of a openCypher traversal as pandas dataframe.

        Parameters
        ----------
        client : NeptuneClient
            instance of the neptune client to use
        query : str
            The openCypher query to execute

        Returns
        -------
        Union[pandas.DataFrame, Iterator[pandas.DataFrame]]
            Results as Pandas DataFrame

        Examples
        --------
        Ru...
        """
        ...
name: execute_sparql | path: awswrangler/neptune/neptune.py | repo: minwook-shin/aws-data-wrangler | lang: python | body_hash: -8,020,320,469,512,584,000

    def execute_sparql(client: NeptuneClient, query: str) -> pd.DataFrame:
        """Return results of a SPARQL query as pandas dataframe.

        Parameters
        ----------
        client : NeptuneClient
            instance of the neptune client to use
        query : str
            The SPARQL traversal to execute

        Returns
        -------
        Union[pandas.DataFrame, Iterator[pandas.DataFrame]]
            Results as Pandas DataFrame

        Examples
        --------
        Run a SPAR...
        """
        ...
name: to_property_graph | path: awswrangler/neptune/neptune.py | repo: minwook-shin/aws-data-wrangler | lang: python | body_hash: 2,575,334,211,846,941,700

    def to_property_graph(client: NeptuneClient, df: pd.DataFrame, batch_size: int=50, use_header_cardinality: bool=True) -> bool:
        """Write records stored in a DataFrame into Amazon Neptune.

        If writing to a property graph then DataFrames for vertices and edges must be written separately.
        DataFrames for vertices must have a ~label column with the label and a ~id column for the vertex id.
        If the ~id column does not exist, the specified id does not exi...
        """
        ...
name: to_rdf_graph | path: awswrangler/neptune/neptune.py | repo: minwook-shin/aws-data-wrangler | lang: python | body_hash: 3,725,097,353,831,719,000

    def to_rdf_graph(client: NeptuneClient, df: pd.DataFrame, batch_size: int=50, subject_column: str='s', predicate_column: str='p', object_column: str='o', graph_column: str='g') -> bool:
        """Write records stored in a DataFrame into Amazon Neptune.

        The DataFrame must consist of triples with column names for the subject, predicate, and object specified.
        If you want to add data into a named graph then you will also need the graph column.

        Parameters
        ----------
        client (NeptuneClient) :
            instance of the nep...
        """
        ...
name: connect | path: awswrangler/neptune/neptune.py | repo: minwook-shin/aws-data-wrangler | lang: python | body_hash: -8,125,250,883,127,492,000

    def connect(host: str, port: int, iam_enabled: bool=False, **kwargs: Any) -> NeptuneClient:
        """Create a connection to a Neptune cluster.

        Parameters
        ----------
        host : str
            The host endpoint to connect to
        port : int
            The port endpoint to connect to
        iam_enabled : bool, optional
            True if IAM is enabled on the cluster. Defaults to False.

        Returns
        -------
        NeptuneClient
            [description]
        """
        ...
name: flatten_nested_df | path: awswrangler/neptune/neptune.py | repo: minwook-shin/aws-data-wrangler | lang: python | body_hash: -7,279,316,436,020,046,000

    def flatten_nested_df(df: pd.DataFrame, include_prefix: bool=True, seperator: str='_', recursive: bool=True) -> pd.DataFrame:
        """Flatten the lists and dictionaries of the input data frame.

        Parameters
        ----------
        df : pd.DataFrame
            The input data frame
        include_prefix : bool, optional
            If True, then it will prefix the new column name with the original column name.
            Defaults to True.
        seperator : str, optional
            The seperator to use betwe...
        """
        ...
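The body of `flatten_nested_df` is not shown, so the following is a pure-Python sketch of the idea behind it for a single record: nested dicts become prefixed keys joined with a separator, recursing into deeper levels. The helper name is hypothetical and works on plain dicts rather than a DataFrame (the `seperator` spelling follows the function's actual parameter name).

```python
def flatten_record(record, include_prefix=True, seperator="_", prefix=""):
    flat = {}
    for key, value in record.items():
        # Prefix nested keys with their parent's name, e.g. {"a": {"b": 1}} -> "a_b".
        name = f"{prefix}{seperator}{key}" if (include_prefix and prefix) else key
        if isinstance(value, dict):
            flat.update(flatten_record(value, include_prefix, seperator, name))
        else:
            flat[name] = value
    return flat
```

Applied column-wise, this is how nested JSON-like structures become flat DataFrame columns suitable for writing to Neptune.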
name: calc_bbox_overlap_union_iou | path: deepext_with_lightning/metrics/object_detection.py | repo: pei223/deepext-with-lightning | lang: python | body_hash: 1,256,112,090,592,343,300

    def calc_bbox_overlap_union_iou(pred: (np.ndarray or None), teacher: np.ndarray) -> Tuple[(float, float, float)]:
        """
        :param pred: ndarray (4, )
        :param teacher: ndarray (4, )
        :return: overlap, union, iou
        """
        teacher_area = ((teacher[2] - teacher[0]) * (teacher[3] - teacher[1]))
        if (pred is N...
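The truncated body can be illustrated with a self-contained overlap/union/IoU computation matching the signature above, written with plain tuples instead of ndarrays so it runs without NumPy. Boxes are `(x_min, y_min, x_max, y_max)`; this is a sketch of the standard formula, not the repo's exact code.

```python
def bbox_overlap_union_iou(pred, teacher):
    teacher_area = (teacher[2] - teacher[0]) * (teacher[3] - teacher[1])
    if pred is None:
        # No prediction: zero overlap, union is just the ground-truth area.
        return 0.0, teacher_area, 0.0
    pred_area = (pred[2] - pred[0]) * (pred[3] - pred[1])
    # Intersection extents; negative means the boxes do not overlap on that axis.
    ix = min(pred[2], teacher[2]) - max(pred[0], teacher[0])
    iy = min(pred[3], teacher[3]) - max(pred[1], teacher[1])
    overlap = max(ix, 0.0) * max(iy, 0.0)
    union = pred_area + teacher_area - overlap
    return overlap, union, (overlap / union if union > 0 else 0.0)
```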
Three consecutive rows (body_hash −6,679,091,060,328,572,000; 446,419,670,556,090,200; 5,742,389,805,774,645,000) share the same name, path, signature, and docstring:

name: update | path: deepext_with_lightning/metrics/object_detection.py | repo: pei223/deepext-with-lightning | lang: python

    def update(self, preds: List[np.ndarray], targets: Union[(np.ndarray, torch.Tensor)]) -> None:
        """
        :param preds: Sorted by score. (Batch size, bounding boxes by batch, 5(x_min, y_min, x_max, y_max, label))
        :param targets: (batch size, bounding box count, 5(x_min, y_min, x_max, y_max, label))
        :return:
        """
        ...
def _update_tp_fp_score(self, pred_bboxes: np.ndarray, target_bboxes: np.ndarray):
'\n :param pred_bboxes: (N, 6(xmin, ymin, xmax, ymax, class, score))\n :param target_bboxes: (N, 5(xmin, ymin, xmax, ymax, class))\n '
detected_indices = []
for i in range(pred_bboxes.shape[0]):
(... | 6,164,836,586,989,498,000 | :param pred_bboxes: (N, 6(xmin, ymin, xmax, ymax, class, score))
:param target_bboxes: (N, 5(xmin, ymin, xmax, ymax, class)) | deepext_with_lightning/metrics/object_detection.py | _update_tp_fp_score | pei223/deepext-with-lightning | python | def _update_tp_fp_score(self, pred_bboxes: np.ndarray, target_bboxes: np.ndarray):
'\n :param pred_bboxes: (N, 6(xmin, ymin, xmax, ymax, class, score))\n :param target_bboxes: (N, 5(xmin, ymin, xmax, ymax, class))\n '
detected_indices = []
for i in range(pred_bboxes.shape[0]):
(... |
def _update_num_annotations(self, target_bboxes: np.ndarray):
'\n :param target_bboxes: (N, 5(xmin, ymin, xmax, ymax, class))\n '
counts = list(map((lambda i: np.count_nonzero((target_bboxes[:, 4] == i))), range(self._n_classes)))
self.num_annotations_by_classes = list(map((lambda i: (counts[i... | -2,288,792,466,958,422,300 | :param target_bboxes: (N, 5(xmin, ymin, xmax, ymax, class)) | deepext_with_lightning/metrics/object_detection.py | _update_num_annotations | pei223/deepext-with-lightning | python | def _update_num_annotations(self, target_bboxes: np.ndarray):
'\n \n '
counts = list(map((lambda i: np.count_nonzero((target_bboxes[:, 4] == i))), range(self._n_classes)))
self.num_annotations_by_classes = list(map((lambda i: (counts[i] + self.num_annotations_by_classes[i])), range(self._n_cla... |
@click.group()
def cli():
'This script showcases different terminal UI helpers in Click.'
pass | -6,101,637,174,138,122,000 | This script showcases different terminal UI helpers in Click. | examples/termui/termui.py | cli | D4N/asyncclick | python | @click.group()
def cli():
pass |
@cli.command()
def colordemo():
'Demonstrates ANSI color support.'
for color in ('red', 'green', 'blue'):
click.echo(click.style('I am colored {}'.format(color), fg=color))
click.echo(click.style('I am background colored {}'.format(color), bg=color)) | -6,081,257,435,468,193,000 | Demonstrates ANSI color support. | examples/termui/termui.py | colordemo | D4N/asyncclick | python | @cli.command()
def colordemo():
for color in ('red', 'green', 'blue'):
click.echo(click.style('I am colored {}'.format(color), fg=color))
click.echo(click.style('I am background colored {}'.format(color), bg=color)) |
@cli.command()
def pager():
'Demonstrates using the pager.'
lines = []
for x in range(200):
lines.append('{}. Hello World!'.format(click.style(str(x), fg='green')))
click.echo_via_pager('\n'.join(lines)) | -7,169,205,609,182,572,000 | Demonstrates using the pager. | examples/termui/termui.py | pager | D4N/asyncclick | python | @cli.command()
def pager():
lines = []
for x in range(200):
lines.append('{}. Hello World!'.format(click.style(str(x), fg='green')))
click.echo_via_pager('\n'.join(lines)) |
@cli.command()
@click.option('--count', default=8000, type=click.IntRange(1, 100000), help='The number of items to process.')
def progress(count):
'Demonstrates the progress bar.'
items = range(count)
def process_slowly(item):
time.sleep((0.002 * random.random()))
def filter(items):
fo... | 6,746,375,855,419,562,000 | Demonstrates the progress bar. | examples/termui/termui.py | progress | D4N/asyncclick | python | @cli.command()
@click.option('--count', default=8000, type=click.IntRange(1, 100000), help='The number of items to process.')
def progress(count):
items = range(count)
def process_slowly(item):
time.sleep((0.002 * random.random()))
def filter(items):
for item in items:
if ... |
@cli.command()
@click.argument('url')
def open(url):
'Opens a file or URL In the default application.'
click.launch(url) | -104,038,030,430,769,630 | Opens a file or URL In the default application. | examples/termui/termui.py | open | D4N/asyncclick | python | @cli.command()
@click.argument('url')
def open(url):
click.launch(url) |
@cli.command()
@click.argument('url')
def locate(url):
'Opens a file or URL In the default application.'
click.launch(url, locate=True) | 1,854,477,687,427,131,400 | Opens a file or URL In the default application. | examples/termui/termui.py | locate | D4N/asyncclick | python | @cli.command()
@click.argument('url')
def locate(url):
click.launch(url, locate=True) |
@cli.command()
def edit():
'Opens an editor with some text in it.'
MARKER = '# Everything below is ignored\n'
message = click.edit('\n\n{}'.format(MARKER))
if (message is not None):
msg = message.split(MARKER, 1)[0].rstrip('\n')
if (not msg):
click.echo('Empty message!')
... | -2,586,215,052,840,120,000 | Opens an editor with some text in it. | examples/termui/termui.py | edit | D4N/asyncclick | python | @cli.command()
def edit():
MARKER = '# Everything below is ignored\n'
message = click.edit('\n\n{}'.format(MARKER))
if (message is not None):
msg = message.split(MARKER, 1)[0].rstrip('\n')
if (not msg):
click.echo('Empty message!')
else:
click.echo('Messa... |
@cli.command()
def clear():
'Clears the entire screen.'
click.clear() | -3,175,494,085,147,564,500 | Clears the entire screen. | examples/termui/termui.py | clear | D4N/asyncclick | python | @cli.command()
def clear():
click.clear() |
@cli.command()
def pause():
'Waits for the user to press a button.'
click.pause() | 2,847,341,040,750,745,000 | Waits for the user to press a button. | examples/termui/termui.py | pause | D4N/asyncclick | python | @cli.command()
def pause():
click.pause() |
@cli.command()
def menu():
'Shows a simple menu.'
menu = 'main'
while 1:
if (menu == 'main'):
click.echo('Main menu:')
click.echo(' d: debug menu')
click.echo(' q: quit')
char = click.getchar()
if (char == 'd'):
menu = 'de... | 5,626,892,119,203,902,000 | Shows a simple menu. | examples/termui/termui.py | menu | D4N/asyncclick | python | @cli.command()
def menu():
menu = 'main'
while 1:
if (menu == 'main'):
click.echo('Main menu:')
click.echo(' d: debug menu')
click.echo(' q: quit')
char = click.getchar()
if (char == 'd'):
menu = 'debug'
elif ... |
def sort(a, axis=(- 1), kind=None, order=None):
"\n Return a sorted copy of an array.\n\n Parameters\n ----------\n a : array_like\n Array to be sorted.\n axis : int or None, optional\n Axis along which to sort. If None, the array is flattened before\n sorting. The default is -1,... | -6,831,761,510,882,574,000 | Return a sorted copy of an array.
Parameters
----------
a : array_like
Array to be sorted.
axis : int or None, optional
Axis along which to sort. If None, the array is flattened before
sorting. The default is -1, which sorts along the last axis.
kind : {'quicksort', 'mergesort', 'heapsort', 'stable'}, opti... | src/pnumpy/sort.py | sort | Quansight/numpy-threading-extensions | python | def sort(a, axis=(- 1), kind=None, order=None):
"\n Return a sorted copy of an array.\n\n Parameters\n ----------\n a : array_like\n Array to be sorted.\n axis : int or None, optional\n Axis along which to sort. If None, the array is flattened before\n sorting. The default is -1,... |
def lexsort(*args, **kwargs):
'\n Perform an indirect stable sort using a sequence of keys.\n\n Given multiple sorting keys, which can be interpreted as columns in a\n spreadsheet, lexsort returns an array of integer indices that describes\n the sort order by multiple columns. The last key in the sequen... | -7,031,114,629,765,578,000 | Perform an indirect stable sort using a sequence of keys.
Given multiple sorting keys, which can be interpreted as columns in a
spreadsheet, lexsort returns an array of integer indices that describes
the sort order by multiple columns. The last key in the sequence is used
for the primary sort order, the second-to-last... | src/pnumpy/sort.py | lexsort | Quansight/numpy-threading-extensions | python | def lexsort(*args, **kwargs):
'\n Perform an indirect stable sort using a sequence of keys.\n\n Given multiple sorting keys, which can be interpreted as columns in a\n spreadsheet, lexsort returns an array of integer indices that describes\n the sort order by multiple columns. The last key in the sequen... |
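The `lexsort` record wraps NumPy's indirect multi-key sort; as its docstring notes, the *last* key is the primary one. A small usage sketch with plain `numpy` (not the pnumpy wrapper):

```python
import numpy as np

a = [1, 5, 1, 4, 3, 4, 4]  # primary key (passed last)
b = [9, 4, 0, 4, 0, 2, 1]  # secondary key, breaks ties in a
ind = np.lexsort((b, a))   # sort by a, then by b
pairs = [(a[i], b[i]) for i in ind]  # pairs come out in lexicographic order
```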
def argsort(a, axis=(- 1), kind=None, order=None):
"\n Returns the indices that would sort an array.\n\n Perform an indirect sort along the given axis using the algorithm specified\n by the `kind` keyword. It returns an array of indices of the same shape as\n `a` that index data along the given axis in ... | -5,738,350,829,677,030,000 | Returns the indices that would sort an array.
Perform an indirect sort along the given axis using the algorithm specified
by the `kind` keyword. It returns an array of indices of the same shape as
`a` that index data along the given axis in sorted order.
Parameters
----------
a : array_like
Array to sort.
axis : ... | src/pnumpy/sort.py | argsort | Quansight/numpy-threading-extensions | python | def argsort(a, axis=(- 1), kind=None, order=None):
"\n Returns the indices that would sort an array.\n\n Perform an indirect sort along the given axis using the algorithm specified\n by the `kind` keyword. It returns an array of indices of the same shape as\n `a` that index data along the given axis in ... |
def argmax(a, axis=None, out=None):
'\n Returns the indices of the maximum values along an axis.\n\n Parameters\n ----------\n a : array_like\n Input array.\n axis : int, optional\n By default, the index is into the flattened array, otherwise\n along the specified axis.\n out ... | -8,006,752,523,648,650,000 | Returns the indices of the maximum values along an axis.
Parameters
----------
a : array_like
Input array.
axis : int, optional
By default, the index is into the flattened array, otherwise
along the specified axis.
out : array, optional
If provided, the result will be inserted into this array. It shoul... | src/pnumpy/sort.py | argmax | Quansight/numpy-threading-extensions | python | def argmax(a, axis=None, out=None):
'\n Returns the indices of the maximum values along an axis.\n\n Parameters\n ----------\n a : array_like\n Input array.\n axis : int, optional\n By default, the index is into the flattened array, otherwise\n along the specified axis.\n out ... |
def argmin(a, axis=None, out=None):
'\n Returns the indices of the minimum values along an axis.\n\n Parameters\n ----------\n a : array_like\n Input array.\n axis : int, optional\n By default, the index is into the flattened array, otherwise\n along the specified axis.\n out ... | -7,225,755,640,550,826,000 | Returns the indices of the minimum values along an axis.
Parameters
----------
a : array_like
Input array.
axis : int, optional
By default, the index is into the flattened array, otherwise
along the specified axis.
out : array, optional
If provided, the result will be inserted into this array. It shoul... | src/pnumpy/sort.py | argmin | Quansight/numpy-threading-extensions | python | def argmin(a, axis=None, out=None):
'\n Returns the indices of the minimum values along an axis.\n\n Parameters\n ----------\n a : array_like\n Input array.\n axis : int, optional\n By default, the index is into the flattened array, otherwise\n along the specified axis.\n out ... |
def searchsorted(a, v, side='left', sorter=None):
"\n Find indices where elements should be inserted to maintain order.\n\n Find the indices into a sorted array `a` such that, if the\n corresponding elements in `v` were inserted before the indices, the\n order of `a` would be preserved.\n\n Assuming ... | 6,932,750,288,715,982,000 | Find indices where elements should be inserted to maintain order.
Find the indices into a sorted array `a` such that, if the
corresponding elements in `v` were inserted before the indices, the
order of `a` would be preserved.
Assuming that `a` is sorted:
====== ============================
`side` returned index `i... | src/pnumpy/sort.py | searchsorted | Quansight/numpy-threading-extensions | python | def searchsorted(a, v, side='left', sorter=None):
"\n Find indices where elements should be inserted to maintain order.\n\n Find the indices into a sorted array `a` such that, if the\n corresponding elements in `v` were inserted before the indices, the\n order of `a` would be preserved.\n\n Assuming ... |
def onlywhite(line):
'Return true if the line does only consist of whitespace characters.'
for c in line:
if ((c != ' ') and (c != '\t')):
return (c == ' ')
return line | 2,643,177,320,319,262,000 | Return true if the line does only consist of whitespace characters. | dev/html2text.py | onlywhite | awenz-uw/arlo | python | def onlywhite(line):
for c in line:
if ((c != ' ') and (c != '\t')):
return (c == ' ')
return line |
def dumb_property_dict(style):
'returns a hash of css attributes'
return dict([(x.strip(), y.strip()) for (x, y) in [z.split(':', 1) for z in style.split(';') if (':' in z)]]) | -1,786,496,490,863,415,000 | returns a hash of css attributes | dev/html2text.py | dumb_property_dict | awenz-uw/arlo | python | def dumb_property_dict(style):
return dict([(x.strip(), y.strip()) for (x, y) in [z.split(':', 1) for z in style.split(';') if (':' in z)]]) |
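The nested comprehension in `dumb_property_dict` is dense; an equivalent, slightly expanded sketch of the same parsing:

```python
def dumb_property_dict(style):
    # "color: red; font-weight: bold" -> {'color': 'red', 'font-weight': 'bold'}
    pairs = (z.split(':', 1) for z in style.split(';') if ':' in z)
    return {key.strip(): value.strip() for key, value in pairs}

parsed = dumb_property_dict('color: red; font-weight: bold;')
```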
def dumb_css_parser(data):
'returns a hash of css selectors, each of which contains a hash of css attributes'
data += ';'
importIndex = data.find('@import')
while (importIndex != (- 1)):
data = (data[0:importIndex] + data[(data.find(';', importIndex) + 1):])
importIndex = data.find('@imp... | -4,408,751,051,728,895,500 | returns a hash of css selectors, each of which contains a hash of css attributes | dev/html2text.py | dumb_css_parser | awenz-uw/arlo | python | def dumb_css_parser(data):
data += ';'
importIndex = data.find('@import')
while (importIndex != (- 1)):
data = (data[0:importIndex] + data[(data.find(';', importIndex) + 1):])
importIndex = data.find('@import')
elements = [x.split('{') for x in data.split('}') if ('{' in x.strip())]... |
def element_style(attrs, style_def, parent_style):
"returns a hash of the 'final' style attributes of the element"
style = parent_style.copy()
if ('class' in attrs):
for css_class in attrs['class'].split():
css_style = style_def[('.' + css_class)]
style.update(css_style)
... | -1,413,663,789,123,905,300 | returns a hash of the 'final' style attributes of the element | dev/html2text.py | element_style | awenz-uw/arlo | python | def element_style(attrs, style_def, parent_style):
style = parent_style.copy()
if ('class' in attrs):
for css_class in attrs['class'].split():
css_style = style_def[('.' + css_class)]
style.update(css_style)
if ('style' in attrs):
immediate_style = dumb_property_... |
def google_list_style(style):
'finds out whether this is an ordered or unordered list'
if ('list-style-type' in style):
list_style = style['list-style-type']
if (list_style in ['disc', 'circle', 'square', 'none']):
return 'ul'
return 'ol' | 6,299,580,701,757,265,000 | finds out whether this is an ordered or unordered list | dev/html2text.py | google_list_style | awenz-uw/arlo | python | def google_list_style(style):
if ('list-style-type' in style):
list_style = style['list-style-type']
if (list_style in ['disc', 'circle', 'square', 'none']):
return 'ul'
return 'ol' |
def google_has_height(style):
"check if the style of the element has the 'height' attribute explicitly defined"
if ('height' in style):
return True
return False | 640,041,204,446,125,700 | check if the style of the element has the 'height' attribute explicitly defined | dev/html2text.py | google_has_height | awenz-uw/arlo | python | def google_has_height(style):
if ('height' in style):
return True
return False |
def google_text_emphasis(style):
'return a list of all emphasis modifiers of the element'
emphasis = []
if ('text-decoration' in style):
emphasis.append(style['text-decoration'])
if ('font-style' in style):
emphasis.append(style['font-style'])
if ('font-weight' in style):
emp... | 3,806,007,217,956,230,700 | return a list of all emphasis modifiers of the element | dev/html2text.py | google_text_emphasis | awenz-uw/arlo | python | def google_text_emphasis(style):
emphasis = []
if ('text-decoration' in style):
emphasis.append(style['text-decoration'])
if ('font-style' in style):
emphasis.append(style['font-style'])
if ('font-weight' in style):
emphasis.append(style['font-weight'])
return emphasis |
def google_fixed_width_font(style):
'check if the css of the current element defines a fixed width font'
font_family = ''
if ('font-family' in style):
font_family = style['font-family']
if (('Courier New' == font_family) or ('Consolas' == font_family)):
return True
return False | -2,883,019,796,638,176,000 | check if the css of the current element defines a fixed width font | dev/html2text.py | google_fixed_width_font | awenz-uw/arlo | python | def google_fixed_width_font(style):
font_family = ''
if ('font-family' in style):
font_family = style['font-family']
if (('Courier New' == font_family) or ('Consolas' == font_family)):
return True
return False |
def list_numbering_start(attrs):
'extract numbering from list element attributes'
if ('start' in attrs):
return (int(attrs['start']) - 1)
else:
return 0 | 1,401,048,153,577,154,300 | extract numbering from list element attributes | dev/html2text.py | list_numbering_start | awenz-uw/arlo | python | def list_numbering_start(attrs):
if ('start' in attrs):
return (int(attrs['start']) - 1)
else:
return 0 |
def escape_md(text):
'Escapes markdown-sensitive characters within other markdown constructs.'
return md_chars_matcher.sub('\\\\\\1', text) | -5,401,994,510,614,652,000 | Escapes markdown-sensitive characters within other markdown constructs. | dev/html2text.py | escape_md | awenz-uw/arlo | python | def escape_md(text):
return md_chars_matcher.sub('\\\\\\1', text) |
def escape_md_section(text, snob=False):
'Escapes markdown-sensitive characters across whole document sections.'
text = md_backslash_matcher.sub('\\\\\\1', text)
if snob:
text = md_chars_matcher_all.sub('\\\\\\1', text)
text = md_dot_matcher.sub('\\1\\\\\\2', text)
text = md_plus_matcher.sub... | -1,093,320,531,801,034,600 | Escapes markdown-sensitive characters across whole document sections. | dev/html2text.py | escape_md_section | awenz-uw/arlo | python | def escape_md_section(text, snob=False):
text = md_backslash_matcher.sub('\\\\\\1', text)
if snob:
text = md_chars_matcher_all.sub('\\\\\\1', text)
text = md_dot_matcher.sub('\\1\\\\\\2', text)
text = md_plus_matcher.sub('\\1\\\\\\2', text)
text = md_dash_matcher.sub('\\1\\\\\\2', text)... |
def previousIndex(self, attrs):
' returns the index of certain set of attributes (of a link) in the\n self.a list\n\n If the set of attributes is not found, returns None\n '
if (not has_key(attrs, 'href')):
return None
i = (- 1)
for a in self.a:
i += 1
... | 6,450,246,326,084,345,000 | returns the index of certain set of attributes (of a link) in the
self.a list
If the set of attributes is not found, returns None | dev/html2text.py | previousIndex | awenz-uw/arlo | python | def previousIndex(self, attrs):
' returns the index of certain set of attributes (of a link) in the\n self.a list\n\n If the set of attributes is not found, returns None\n '
if (not has_key(attrs, 'href')):
return None
i = (- 1)
for a in self.a:
i += 1
... |
def handle_emphasis(self, start, tag_style, parent_style):
'handles various text emphases'
tag_emphasis = google_text_emphasis(tag_style)
parent_emphasis = google_text_emphasis(parent_style)
strikethrough = (('line-through' in tag_emphasis) and self.hide_strikethrough)
bold = (('bold' in tag_emphasi... | 7,691,076,359,448,883,000 | handles various text emphases | dev/html2text.py | handle_emphasis | awenz-uw/arlo | python | def handle_emphasis(self, start, tag_style, parent_style):
tag_emphasis = google_text_emphasis(tag_style)
parent_emphasis = google_text_emphasis(parent_style)
strikethrough = (('line-through' in tag_emphasis) and self.hide_strikethrough)
bold = (('bold' in tag_emphasis) and (not ('bold' in parent_e... |
def google_nest_count(self, style):
'calculate the nesting count of google doc lists'
nest_count = 0
if ('margin-left' in style):
nest_count = (int(style['margin-left'][:(- 2)]) / self.google_list_indent)
return nest_count | 5,612,216,284,702,896,000 | calculate the nesting count of google doc lists | dev/html2text.py | google_nest_count | awenz-uw/arlo | python | def google_nest_count(self, style):
nest_count = 0
if ('margin-left' in style):
nest_count = (int(style['margin-left'][:(- 2)]) / self.google_list_indent)
return nest_count |
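`google_nest_count` divides the element's `margin-left` (in px, hence the `[:-2]` slice) by the per-level indent; a standalone sketch (the 36 px default is an assumption, mirroring html2text's usual `google_list_indent`):

```python
def google_nest_count(style, google_list_indent=36):
    # 'margin-left: 72px' with a 36px step means nesting depth 2.
    nest_count = 0
    if 'margin-left' in style:
        nest_count = int(style['margin-left'][:-2]) / google_list_indent
    return nest_count

depth = google_nest_count({'margin-left': '72px'})
```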
def optwrap(self, text):
'Wrap all paragraphs in the provided text.'
if (not self.body_width):
return text
assert wrap, 'Requires Python 2.3.'
result = ''
newlines = 0
for para in text.split('\n'):
if (len(para) > 0):
if (not skipwrap(para)):
result +=... | -4,554,985,554,149,714,400 | Wrap all paragraphs in the provided text. | dev/html2text.py | optwrap | awenz-uw/arlo | python | def optwrap(self, text):
if (not self.body_width):
return text
assert wrap, 'Requires Python 2.3.'
result = ''
newlines = 0
for para in text.split('\n'):
if (len(para) > 0):
if (not skipwrap(para)):
result += '\n'.join(wrap(para, self.body_width))
... |
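`optwrap` delegates the actual wrapping to `textwrap.wrap`, one paragraph at a time; the core behaviour in isolation:

```python
from textwrap import wrap

para = "The quick brown fox jumps over the lazy dog."
lines = wrap(para, 20)  # list of lines, each at most 20 characters
# optwrap rejoins these with '\n' and preserves blank lines between paragraphs.
```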
def weight_variable(shape):
'weight_variable generates a weight variable of a given shape.'
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial, name='W') | 1,714,315,251,192,376,300 | weight_variable generates a weight variable of a given shape. | cnn_phi_psi.py | weight_variable | Graveheart/ProteinSSPrediction | python | def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial, name='W') |
def bias_variable(shape):
'bias_variable generates a bias variable of a given shape.'
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial, name='B') | -9,042,792,790,202,244,000 | bias_variable generates a bias variable of a given shape. | cnn_phi_psi.py | bias_variable | Graveheart/ProteinSSPrediction | python | def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial, name='B') |
def conv1d(x, W):
'conv1d returns a 1d convolution layer.'
return tf.nn.conv1d(x, W, 1, 'SAME') | -6,424,299,776,053,027,000 | conv1d returns a 1d convolution layer. | cnn_phi_psi.py | conv1d | Graveheart/ProteinSSPrediction | python | def conv1d(x, W):
return tf.nn.conv1d(x, W, 1, 'SAME') |
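`conv1d` calls `tf.nn.conv1d(x, W, 1, 'SAME')`: stride 1, output length equal to the input length. The 'SAME' length behaviour can be mimicked with `np.convolve` (an analogy only: TF actually computes a cross-correlation over batched, multi-channel tensors):

```python
import numpy as np

signal = np.array([1.0, 2.0, 3.0])
kernel = np.array([1.0, 1.0, 1.0])
out = np.convolve(signal, kernel, mode='same')  # zero-padded, same length as input
```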
def convert_to_degrees(arr):
'Convert all phi and psi angles to degrees'
arr[0] = math.degrees(arr[0])
arr[1] = math.degrees(arr[1])
return arr | -3,319,070,818,183,693,000 | Convert all phi and psi angles to degrees | cnn_phi_psi.py | convert_to_degrees | Graveheart/ProteinSSPrediction | python | def convert_to_degrees(arr):
arr[0] = math.degrees(arr[0])
arr[1] = math.degrees(arr[1])
return arr |
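A quick usage check of the helper above (it mutates and returns the same two-element sequence):

```python
import math

def convert_to_degrees(arr):
    # arr holds [phi, psi] in radians; converted in place to degrees.
    arr[0] = math.degrees(arr[0])
    arr[1] = math.degrees(arr[1])
    return arr

angles = convert_to_degrees([math.pi, math.pi / 2])
```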
def __init__(self, feed):
'\n Constructor\n '
self.feed = feed
self.cache = []
if os.path.isfile(CACHE_FILE):
self.cache = [line.strip() for line in codecs.open(CACHE_FILE, 'r', 'utf-8').readlines()] | 5,810,451,100,013,958,000 | Constructor | feedputter.py | __init__ | amake/puttools-py | python | def __init__(self, feed):
'\n \n '
self.feed = feed
self.cache = []
if os.path.isfile(CACHE_FILE):
self.cache = [line.strip() for line in codecs.open(CACHE_FILE, 'r', 'utf-8').readlines()] |
def get_to(self, target, method):
'\n Fetch linked torrents and save to the specified output folder.\n '
for item in self.__get_items():
title = item.find('title').text.strip()
link = item.find('link').text
log(('Found ' + title))
if (title in self.cache):
... | 642,667,914,975,769,600 | Fetch linked torrents and save to the specified output folder. | feedputter.py | get_to | amake/puttools-py | python | def get_to(self, target, method):
'\n \n '
for item in self.__get_items():
title = item.find('title').text.strip()
link = item.find('link').text
log(('Found ' + title))
if (title in self.cache):
log('Already gotten. Skipping.')
continue
... |
def __init__(self, index, previous_hash, timestamp=None, forger=None, transactions: List[Transaction]=None, signature=None, **kwargs):
'\n Create block\n :param index: the block index at the chain (0 for the genesis block and so on)\n :param previous_hash: hash of previous block\n :param... | -2,504,936,648,075,513,300 | Create block
:param index: the block index at the chain (0 for the genesis block and so on)
:param previous_hash: hash of previous block
:param timestamp: block creation time
:param forger: public_address of forger wallet
:param transactions: list of transactions
:param signature: signature of the block hash by the for... | src/blockchain/block.py | __init__ | thewh1teagle/yoyocoin | python | def __init__(self, index, previous_hash, timestamp=None, forger=None, transactions: List[Transaction]=None, signature=None, **kwargs):
'\n Create block\n :param index: the block index at the chain (0 for the genesis block and so on)\n :param previous_hash: hash of previous block\n :param... |
def hash(self):
'\n Calculate the block hash (block number, previous hash, transactions)\n :return: String hash of block data (hex)\n '
block_dict = self._raw_data()
block_string = json.dumps(block_dict, sort_keys=True).encode()
return hashlib.sha256(block_string).hexdigest() | 6,995,041,676,680,614,000 | Calculate the block hash (block number, previous hash, transactions)
:return: String hash of block data (hex) | src/blockchain/block.py | hash | thewh1teagle/yoyocoin | python | def hash(self):
'\n Calculate the block hash (block number, previous hash, transactions)\n :return: String hash of block data (hex)\n '
block_dict = self._raw_data()
block_string = json.dumps(block_dict, sort_keys=True).encode()
return hashlib.sha256(block_string).hexdigest() |
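The `hash` method's determinism hinges on `sort_keys=True`: identical fields serialize identically regardless of insertion order before being fed to SHA-256. A minimal standalone sketch (the field names are illustrative, not the class's full `_raw_data()`):

```python
import hashlib
import json

def block_hash(block_dict):
    # Canonical serialization first, then SHA-256 over the bytes.
    block_string = json.dumps(block_dict, sort_keys=True).encode()
    return hashlib.sha256(block_string).hexdigest()

h1 = block_hash({'index': 1, 'previous_hash': 'abc'})
h2 = block_hash({'previous_hash': 'abc', 'index': 1})  # same fields, other order
```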
def add_transaction(self, transaction: Transaction):
"\n Add transaction to block\n :param transaction: Transaction object (see transaction.py)\n :raise Validation error if transaction isn't valid.\n :return: None\n "
self.transactions.append(transaction) | -7,499,446,428,048,659,000 | Add transaction to block
:param transaction: Transaction object (see transaction.py)
:raise Validation error if transaction isn't valid.
:return: None | src/blockchain/block.py | add_transaction | thewh1teagle/yoyocoin | python | def add_transaction(self, transaction: Transaction):
"\n Add transaction to block\n :param transaction: Transaction object (see transaction.py)\n :raise Validation error if transaction isn't valid.\n :return: None\n "
self.transactions.append(transaction) |
def is_signature_verified(self) -> bool:
'\n Check if block signature is valid\n :return: bool\n '
try:
return self.forger_public_key.verify(self.signature, self.hash().encode())
except ecdsa.BadSignatureError:
return False | 1,621,767,926,656,757,800 | Check if block signature is valid
:return: bool | src/blockchain/block.py | is_signature_verified | thewh1teagle/yoyocoin | python | def is_signature_verified(self) -> bool:
'\n Check if block signature is valid\n :return: bool\n '
try:
return self.forger_public_key.verify(self.signature, self.hash().encode())
except ecdsa.BadSignatureError:
return False |
def create_signature(self, forger_private_address: str):
'\n Create block signature for this block\n :param forger_private_address: base64(wallet private address)\n :return: None\n '
forger_private_key_string = bytes.fromhex(forger_private_address)
forger_private_key = ecdsa.Sign... | -4,406,126,929,190,984,000 | Create block signature for this block
:param forger_private_address: base64(wallet private address)
:return: None | src/blockchain/block.py | create_signature | thewh1teagle/yoyocoin | python | def create_signature(self, forger_private_address: str):
'\n Create block signature for this block\n :param forger_private_address: base64(wallet private address)\n :return: None\n '
forger_private_key_string = bytes.fromhex(forger_private_address)
forger_private_key = ecdsa.Sign... |
def validate(self, blockchain_state, is_test_net=False):
'\n Validate block\n 1. check block index (is the next block in the blockchain state)\n 2. check previous hash (is the hash of the previous block)\n 3. check forger wallet (is lottery member?)\n 4. check block signature\n ... | -8,639,253,438,226,391,000 | Validate block
1. check block index (is the next block in the blockchain state)
2. check previous hash (is the hash of the previous block)
3. check forger wallet (is lottery member?)
4. check block signature
5. validate transactions
:param is_test_net: if True ignore InsufficientBalanceError and NonLotteryMemberError
... | src/blockchain/block.py | validate | thewh1teagle/yoyocoin | python | def validate(self, blockchain_state, is_test_net=False):
'\n Validate block\n 1. check block index (is the next block in the blockchain state)\n 2. check previous hash (is the hash of the previous block)\n 3. check forger wallet (is lottery member?)\n 4. check block signature\n ... |
def transform_X(self, X):
'\n transforms X\n\n :param\n X: Input X\n :return\n transformed X\n '
raise NotImplementedError() | 5,377,569,359,033,843,000 | transforms X
:param
X: Input X
:return
transformed X | GP/data_transformation.py | transform_X | VirgiAgl/V_savigp | python | def transform_X(self, X):
'\n transforms X\n\n :param\n X: Input X\n :return\n transformed X\n '
raise NotImplementedError() |
def transform_Y(self, Y):
'\n transforms Y\n\n :param\n Y: Input Y\n :return\n transformed Y\n '
raise NotImplementedError() | 4,951,803,811,033,387,000 | transforms Y
:param
Y: Input Y
:return
transformed Y | GP/data_transformation.py | transform_Y | VirgiAgl/V_savigp | python | def transform_Y(self, Y):
'\n transforms Y\n\n :param\n Y: Input Y\n :return\n transformed Y\n '
raise NotImplementedError() |
def untransform_X(self, X):
'\n Untransforms X to its original values\n\n :param\n X: transformed X\n :return\n untransformed X\n '
raise NotImplementedError() | -280,706,843,099,893,820 | Untransforms X to its original values
:param
X: transformed X
:return
untransformed X | GP/data_transformation.py | untransform_X | VirgiAgl/V_savigp | python | def untransform_X(self, X):
'\n Untransforms X to its original values\n\n :param\n X: transformed X\n :return\n untransformed X\n '
raise NotImplementedError() |
def untransform_Y(self, Y):
'\n Untransforms Y\n :param\n Y: transformed Y\n :return\n untransformed Y\n '
raise NotImplementedError() | -6,146,962,964,687,816,000 | Untransforms Y
:param
Y: transformed Y
:return
untransformed Y | GP/data_transformation.py | untransform_Y | VirgiAgl/V_savigp | python | def untransform_Y(self, Y):
'\n Untransforms Y\n :param\n Y: transformed Y\n :return\n untransformed Y\n '
raise NotImplementedError() |
def untransform_NLPD(self, NLPD):
'\n Untransforms NLPD to the original Y space\n\n :param\n NLPD: transformed NLPD\n :return\n untransformed NLPD\n '
raise NotImplementedError() | -1,423,142,593,506,293,000 | Untransforms NLPD to the original Y space
:param
NLPD: transformed NLPD
:return
untransformed NLPD | GP/data_transformation.py | untransform_NLPD | VirgiAgl/V_savigp | python | def untransform_NLPD(self, NLPD):
'\n Untransforms NLPD to the original Y space\n\n :param\n NLPD: transformed NLPD\n :return\n untransformed NLPD\n '
raise NotImplementedError() |
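The five rows above all come from the same abstract interface in GP/data_transformation.py, where every method simply raises NotImplementedError. As a hedged illustration of how a concrete subclass might satisfy that interface, here is a minimal mean-centering transformation; the class name `MeanCenterTransformation` and its internals are assumptions for this sketch and do not appear in the VirgiAgl/V_savigp repository.

```python
# Illustrative sketch only. DataTransformation mirrors the abstract methods in
# the dataset rows above; MeanCenterTransformation is a hypothetical subclass.

class DataTransformation:
    """Abstract interface: subclasses define how Y is (un)transformed."""

    def transform_Y(self, Y):
        raise NotImplementedError()

    def untransform_Y(self, Y):
        raise NotImplementedError()


class MeanCenterTransformation(DataTransformation):
    """Centers Y on the training mean; untransform_Y restores original units."""

    def __init__(self, Y_train):
        self.mean = sum(Y_train) / len(Y_train)

    def transform_Y(self, Y):
        return [y - self.mean for y in Y]

    def untransform_Y(self, Y):
        # Exact inverse of transform_Y, as the paired docstrings require.
        return [y + self.mean for y in Y]
```

The implied contract is that `untransform_Y(transform_Y(Y))` returns `Y`, which is what the paired transform/untransform docstrings in the rows above describe.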
def _args_useful_check(self):
'\n Needs the SQL that maps the target features to the arguments.\n :return:\n '
arg_msg_list = FeatureFieldRel.objects.filter(feature_name__in=self.target_features, is_delete=False)
for arg_msg in arg_msg_list:
if (arg_msg.raw_field_name in self.argumen... | 2,747,303,079,021,362,700 | Needs the SQL that maps the target features to the arguments.
:return: | procuratorate/dataocean_judger.py | _args_useful_check | diudiu/featurefactory | python | def _args_useful_check(self):
'\n Needs the SQL that maps the target features to the arguments.\n :return:\n '
arg_msg_list = FeatureFieldRel.objects.filter(feature_name__in=self.target_features, is_delete=False)
for arg_msg in arg_msg_list:
if (arg_msg.raw_field_name in self.argumen... |
@pytest.fixture
def j1713_profile():
'\n Numpy array of J1713+0747 profile.\n '
path = 'psrsigsim/data/J1713+0747_profile.npy'
return np.load(path) | 959,887,131,043,089,500 | Numpy array of J1713+0747 profile. | tests/test_simulate.py | j1713_profile | bshapiroalbert/PsrSigSim | python | @pytest.fixture
def j1713_profile():
'\n \n '
path = 'psrsigsim/data/J1713+0747_profile.npy'
return np.load(path) |
@pytest.fixture
def PSRfits():
'\n Fixture psrfits class\n '
fitspath = 'data/test.fits'
tempfits = 'data/B1855+09.L-wide.PUPPI.11y.x.sum.sm'
return PSRFITS(path=fitspath, template=tempfits, fits_mode='copy') | 6,057,200,503,390,064,000 | Fixture psrfits class | tests/test_simulate.py | PSRfits | bshapiroalbert/PsrSigSim | python | @pytest.fixture
def PSRfits():
'\n \n '
fitspath = 'data/test.fits'
tempfits = 'data/B1855+09.L-wide.PUPPI.11y.x.sum.sm'
return PSRFITS(path=fitspath, template=tempfits, fits_mode='copy') |
@pytest.fixture
def param_dict():
'\n Fixture parameter dictionary.\n '
pdict = {'fcent': 430, 'bandwidth': 100, 'sample_rate': 1.5625, 'dtype': np.float32, 'Npols': 1, 'Nchan': 64, 'sublen': 2.0, 'fold': True, 'period': 1.0, 'Smean': 1.0, 'profiles': [0.5, 0.5, 1.0], 'tobs': 4.0, 'name': 'J0000+0000', 'd... | 3,766,590,244,466,666,000 | Fixture parameter dictionary. | tests/test_simulate.py | param_dict | bshapiroalbert/PsrSigSim | python | @pytest.fixture
def param_dict():
'\n \n '
pdict = {'fcent': 430, 'bandwidth': 100, 'sample_rate': 1.5625, 'dtype': np.float32, 'Npols': 1, 'Nchan': 64, 'sublen': 2.0, 'fold': True, 'period': 1.0, 'Smean': 1.0, 'profiles': [0.5, 0.5, 1.0], 'tobs': 4.0, 'name': 'J0000+0000', 'dm': 10.0, 'tau_d': 5e-08, 'ta... |
@pytest.fixture
def simulation():
'\n Fixture Simulation class. Cannot be the only simulation tested.\n '
sim = Simulation(fcent=430, bandwidth=100, sample_rate=((1.0 * 2048) * (10 ** (- 6))), dtype=np.float32, Npols=1, Nchan=64, sublen=2.0, fold=True, period=1.0, Smean=1.0, profiles=None, tobs=4.0, name=... | -6,312,856,719,583,736,000 | Fixture Simulation class. Cannot be the only simulation tested. | tests/test_simulate.py | simulation | bshapiroalbert/PsrSigSim | python | @pytest.fixture
def simulation():
'\n \n '
sim = Simulation(fcent=430, bandwidth=100, sample_rate=((1.0 * 2048) * (10 ** (- 6))), dtype=np.float32, Npols=1, Nchan=64, sublen=2.0, fold=True, period=1.0, Smean=1.0, profiles=None, tobs=4.0, name='J0000+0000', dm=10.0, tau_d=5e-08, tau_d_ref_f=1500.0, apertur... |
def test_initsim(param_dict):
'\n Test initializing the simulation from dictionary, parfile\n '
sim = Simulation(psrdict=param_dict)
with pytest.raises(NotImplementedError):
sim2 = Simulation(parfile='testpar.par') | 5,675,763,485,965,984,000 | Test initializing the simulation from dictionary, parfile | tests/test_simulate.py | test_initsim | bshapiroalbert/PsrSigSim | python | def test_initsim(param_dict):
'\n \n '
sim = Simulation(psrdict=param_dict)
with pytest.raises(NotImplementedError):
sim2 = Simulation(parfile='testpar.par') |
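test_initsim exercises both the happy path (a Simulation built from a parameter dictionary) and the error path, using pytest.raises to assert that par-file input raises NotImplementedError. Below is a stdlib-only sketch of that exception-assertion pattern; the `Simulation` class here is a stand-in stub written for this example, not PsrSigSim's real class.

```python
# Stand-in stub mimicking the behavior test_initsim checks (an assumption,
# not the real psrsigsim Simulation).
class Simulation:
    def __init__(self, psrdict=None, parfile=None):
        if parfile is not None:
            # Par-file loading is unimplemented, as the test expects.
            raise NotImplementedError('par-file input not supported')
        self.params = dict(psrdict or {})

def raises(exc_type, fn, *args, **kwargs):
    """Return True iff fn(*args, **kwargs) raises exc_type (cf. pytest.raises)."""
    try:
        fn(*args, **kwargs)
    except exc_type:
        return True
    return False

sim = Simulation(psrdict={'fcent': 430, 'bandwidth': 100})
ok = raises(NotImplementedError, Simulation, parfile='testpar.par')
```

pytest.raises does the same check as a context manager and additionally fails the test if no exception (or the wrong one) is raised.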
def test_initsig(simulation):
'\n Test init_signal function.\n '
simulation.init_signal()
simulation.init_signal(from_template=True) | 8,913,096,984,652,106,000 | Test init_signal function. | tests/test_simulate.py | test_initsig | bshapiroalbert/PsrSigSim | python | def test_initsig(simulation):
'\n \n '
simulation.init_signal()
simulation.init_signal(from_template=True) |
def test_initprof(simulation, j1713_profile):
'\n Test init_profile function.\n '
simulation.init_profile()
with pytest.raises(NotImplementedError):
def gprof(x, p0):
return (p0[0] * np.exp(((- 0.5) * (((x - p0[1]) / p0[2]) ** 2))))
simulation._profiles = gprof
sim... | 6,492,907,136,872,744,000 | Test init_profile function. | tests/test_simulate.py | test_initprof | bshapiroalbert/PsrSigSim | python | def test_initprof(simulation, j1713_profile):
'\n \n '
simulation.init_profile()
with pytest.raises(NotImplementedError):
def gprof(x, p0):
return (p0[0] * np.exp(((- 0.5) * (((x - p0[1]) / p0[2]) ** 2))))
simulation._profiles = gprof
simulation.init_profile()
... |
def test_initpsr(simulation):
'\n Test init_pulsar function.\n '
simulation.init_pulsar() | 5,682,775,826,932,213,000 | Test init_pulsar function. | tests/test_simulate.py | test_initpsr | bshapiroalbert/PsrSigSim | python | def test_initpsr(simulation):
'\n \n '
simulation.init_pulsar() |
def test_initism(simulation):
'\n Test init_ism function.\n '
simulation.init_ism() | -1,547,126,899,779,636,200 | Test init_ism function. | tests/test_simulate.py | test_initism | bshapiroalbert/PsrSigSim | python | def test_initism(simulation):
'\n \n '
simulation.init_ism() |
def test_inittscope(simulation):
'\n Test init_telescope function.\n '
simulation._tscope_name = 'GBT'
simulation.init_telescope()
simulation._tscope_name = 'Arecibo'
simulation.init_telescope()
simulation._tscope_name = 'TestScope'
simulation.init_telescope()
simulation._system_na... | 4,775,317,322,399,384,000 | Test init_telescope function. | tests/test_simulate.py | test_inittscope | bshapiroalbert/PsrSigSim | python | def test_inittscope(simulation):
'\n \n '
simulation._tscope_name = 'GBT'
simulation.init_telescope()
simulation._tscope_name = 'Arecibo'
simulation.init_telescope()
simulation._tscope_name = 'TestScope'
simulation.init_telescope()
simulation._system_name = ['Sys1', 'Sys2']
sim... |
def test_simulate(simulation):
'\n Test simulate function.\n '
simulation.simulate() | 1,285,313,123,298,872,600 | Test simulate function. | tests/test_simulate.py | test_simulate | bshapiroalbert/PsrSigSim | python | def test_simulate(simulation):
'\n \n '
simulation.simulate() |