Columns: code (string, 4–4.48k chars) · docstring (string, 1–6.45k chars) · _id (string, 24 chars)
def get_instance_dict_from_attrs(obj, attr_names): <NEW_LINE> <INDENT> d = {} <NEW_LINE> for attr_name in attr_names: <NEW_LINE> <INDENT> nested_attrs = attr_name.split('.') <NEW_LINE> attr = obj <NEW_LINE> for nested_attr in nested_attrs: <NEW_LINE> <INDENT> attr = getattr(attr, nested_attr) <NEW_LINE> <DEDENT> if callable(attr): <NEW_LINE> <INDENT> v = attr() <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> v = attr <NEW_LINE> <DEDENT> d[attr_name] = v <NEW_LINE> <DEDENT> return d
Given a model instance and a list of attributes, returns a dict
625941c257b8e32f52483433
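A standalone sketch of the record above, with a hypothetical `Person`/`Address` class pair for illustration: the helper walks dotted attribute paths with `getattr` and calls any callable it lands on.

```python
def get_instance_dict_from_attrs(obj, attr_names):
    """Resolve dotted attribute paths on obj into a dict, calling callables."""
    d = {}
    for attr_name in attr_names:
        attr = obj
        for nested_attr in attr_name.split('.'):
            attr = getattr(attr, nested_attr)
        d[attr_name] = attr() if callable(attr) else attr
    return d

class Address:
    city = "Oslo"

class Person:
    name = "Ada"
    address = Address()
    def greeting(self):
        return "hi"

print(get_instance_dict_from_attrs(Person(), ["name", "address.city", "greeting"]))
# {'name': 'Ada', 'address.city': 'Oslo', 'greeting': 'hi'}
```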
def p_principal(p): <NEW_LINE> <INDENT> p[0] = Tree('principal', [p[6]])
principal : VAZIO PRINCIPAL ABRE_PAR FECHA_PAR NOVA_LINHA sequencia_decl FIM
625941c28e71fb1e9831d744
def debconf_progress_stop(self): <NEW_LINE> <INDENT> return
Stop the current progress bar.
625941c2a8ecb033257d3067
def get_coordinate_systems_for_filters(self, galaxy_name, filters): <NEW_LINE> <INDENT> coordinate_systems = [] <NEW_LINE> for fltr in filters: coordinate_systems.append(self.get_coordinate_system_for_filter(galaxy_name, fltr)) <NEW_LINE> return coordinate_systems
Returns the coordinate system for each of the given filters. :param galaxy_name: name of the galaxy :param filters: the filters to look up :return: list of coordinate systems
625941c27047854f462a13a5
def test_display_current_time_at_current_time(self): <NEW_LINE> <INDENT> production_code_time_provider = ProductionCodeTimeProvider() <NEW_LINE> class_under_test = TimeDisplay() <NEW_LINE> current_time = datetime.datetime.now() <NEW_LINE> expected_time = "<span class=\"tinyBoldText\">{}:{}</span>".format(current_time.hour, current_time.minute) <NEW_LINE> self.assertEqual(class_under_test.get_current_time_as_html_fragment(production_code_time_provider), expected_time)
Serves as a working example of the time provider used in production. (Will always pass.)
625941c271ff763f4b549621
def getState(self): <NEW_LINE> <INDENT> return self.__state
@types: -> AvailabilityZone.State
625941c232920d7e50b28168
@register.tag <NEW_LINE> def is_registered_user(parser, token): <NEW_LINE> <INDENT> bits = token.split_contents() <NEW_LINE> if len(bits) != 5: <NEW_LINE> <INDENT> message = '%s tag requires 5 arguments' % bits[0] <NEW_LINE> raise TemplateSyntaxError(_(message)) <NEW_LINE> <DEDENT> user = bits[1] <NEW_LINE> event = bits[2] <NEW_LINE> context_var = bits[4] <NEW_LINE> return IsRegisteredUserNode(user, event, context_var)
Example: {% is_registered_user user event as registered_user %}
625941c20a50d4780f666e2a
def get_path(self): <NEW_LINE> <INDENT> return super(DynamicStorageDownloadView, self).get_path().upper()
Return uppercase path.
625941c2cc0a2c11143dce2a
def create_feed_dict(self, image=None): <NEW_LINE> <INDENT> image = np.expand_dims(image, axis=0) <NEW_LINE> feed_dict = {self.tensor_name_input_image: image} <NEW_LINE> return feed_dict
Create and return a feed-dict with an image. :param image: The input image is a 3-dim array which is already decoded. The pixels MUST be values between 0 and 255 (float or int). :return: Dict for feeding to the Inception graph in TensorFlow.
625941c24e4d5625662d4374
def has_no_title(self, title, **kwargs): <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> self.assert_no_title(title, **kwargs) <NEW_LINE> return True <NEW_LINE> <DEDENT> except ExpectationNotMet: <NEW_LINE> <INDENT> return False
Checks if the page doesn't have the given title. Args: title (str | RegexObject): The string that the title should include. **kwargs: Arbitrary keyword arguments for :class:`TitleQuery`. Returns: bool: Whether it doesn't match.
625941c2bde94217f3682d8c
def from_str( self, text: str, *, interpolate: bool = True, overrides: Dict[str, Any] = {} ) -> "Config": <NEW_LINE> <INDENT> config = get_configparser(interpolate=interpolate) <NEW_LINE> if overrides: <NEW_LINE> <INDENT> config = get_configparser(interpolate=False) <NEW_LINE> <DEDENT> try: <NEW_LINE> <INDENT> config.read_string(text) <NEW_LINE> <DEDENT> except ParsingError as e: <NEW_LINE> <INDENT> desc = f"Make sure the sections and values are formatted correctly.\n\n{e}" <NEW_LINE> raise ConfigValidationError(desc=desc) from None <NEW_LINE> <DEDENT> config._sections = self._sort(config._sections) <NEW_LINE> self._set_overrides(config, overrides) <NEW_LINE> self.clear() <NEW_LINE> self.interpret_config(config) <NEW_LINE> if overrides and interpolate: <NEW_LINE> <INDENT> self = self.interpolate() <NEW_LINE> <DEDENT> self.is_interpolated = interpolate <NEW_LINE> return self
Load the config from a string.
625941c245492302aab5e25b
def write_deps(self, filename, fileids=None): <NEW_LINE> <INDENT> f = open(filename, 'w') <NEW_LINE> for t in self.parsed_sents(fileids): <NEW_LINE> <INDENT> f.write(' '.join(str(d[1]) for d in t.depset.deps)+'\n') <NEW_LINE> <DEDENT> f.close()
Writes the dependencies to a text file, one sentence per line.
625941c2f548e778e58cd516
def walk(self, root=None): <NEW_LINE> <INDENT> if root: <NEW_LINE> <INDENT> yield root <NEW_LINE> for item in root.next: <NEW_LINE> <INDENT> yield from self.walk(item) <NEW_LINE> <DEDENT> <DEDENT> else: <NEW_LINE> <INDENT> for item in self.firsts: <NEW_LINE> <INDENT> yield from self.walk(item)
If root is None, walks the whole tree; otherwise starts at the given root. Recursive depth-first tree traversal.
625941c2adb09d7d5db6c72a
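The recursive calls in the original record discard their results; a working depth-first generator needs `yield from`. A minimal sketch with hypothetical `Node`/`Tree` classes:

```python
class Node:
    def __init__(self, value, next=()):
        self.value = value
        self.next = list(next)

class Tree:
    def __init__(self, firsts):
        self.firsts = firsts

    def walk(self, root=None):
        if root:
            yield root
            for item in root.next:
                yield from self.walk(item)   # delegate, don't discard
        else:
            for item in self.firsts:
                yield from self.walk(item)

t = Tree([Node('a', [Node('b'), Node('c', [Node('d')])])])
print([n.value for n in t.walk()])  # ['a', 'b', 'c', 'd']
```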
def format_weekday(self, char='E', num=4): <NEW_LINE> <INDENT> if num < 3: <NEW_LINE> <INDENT> if char.islower(): <NEW_LINE> <INDENT> value = 7 - self.locale.first_week_day + self.value.weekday() <NEW_LINE> return self.format(value % 7 + 1, num) <NEW_LINE> <DEDENT> num = 3 <NEW_LINE> <DEDENT> weekday = self.value.weekday() <NEW_LINE> width = {3: 'abbreviated', 4: 'wide', 5: 'narrow', 6: 'short'}[num] <NEW_LINE> if char == 'c': <NEW_LINE> <INDENT> context = 'stand-alone' <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> context = 'format' <NEW_LINE> <DEDENT> return get_day_names(width, context, self.locale)[weekday]
Return weekday from parsed datetime according to format pattern. >>> format = DateTimeFormat(date(2016, 2, 28), Locale.parse('en_US')) >>> format.format_weekday() u'Sunday' 'E': Day of week - Use one through three letters for the abbreviated day name, four for the full (wide) name, five for the narrow name, or six for the short name. >>> format.format_weekday('E',2) u'Sun' 'e': Local day of week. Same as E except adds a numeric value that will depend on the local starting day of the week, using one or two letters. For this example, Monday is the first day of the week. >>> format.format_weekday('e',2) '01' 'c': Stand-Alone local day of week - Use one letter for the local numeric value (same as 'e'), three for the abbreviated day name, four for the full (wide) name, five for the narrow name, or six for the short name. >>> format.format_weekday('c',1) '1' :param char: pattern format character ('e','E','c') :param num: count of format character
625941c2a79ad161976cc0df
def get_duplicated_rows(df): <NEW_LINE> <INDENT> grouped = df.groupby(['Date', 'Trap', 'Species']) <NEW_LINE> num=grouped.count().Latitude.to_dict() <NEW_LINE> df['N_Dupl']=-999 <NEW_LINE> for idx in df.index: <NEW_LINE> <INDENT> d = df.loc[idx, 'Date'] <NEW_LINE> t = df.loc[idx, 'Trap'] <NEW_LINE> s = df.loc[idx, 'Species'] <NEW_LINE> df.loc[idx, 'N_Dupl'] = num[(d, t, s)] <NEW_LINE> <DEDENT> return df
Calculates number of duplicated rows by Date, Trap, Species
625941c2bde94217f3682d8d
def test_add_staff_successfully(self): <NEW_LINE> <INDENT> self.dojo.create_room("office", ["Blue", "Green", "Pink"]) <NEW_LINE> self.dojo.create_room("living_space", ["A", "B", "C"]) <NEW_LINE> initial_staff_count = len(self.dojo.all_staff) <NEW_LINE> staff = self.dojo.add_staff("Donna") <NEW_LINE> self.assertTrue(staff) <NEW_LINE> new_staff_count = len(self.dojo.all_staff) <NEW_LINE> self.assertEqual(new_staff_count - initial_staff_count, 1)
Tests that a staff member is created successfully
625941c24f88993c3716c003
def extractConceptSet(rdfOntology): <NEW_LINE> <INDENT> concepts = set() <NEW_LINE> concepts = concepts.union( [ s for s, p, o in extractSPO(rdfOntology) ] ) <NEW_LINE> concepts = concepts.union( [ o for s, p, o in extractSPO(rdfOntology) ] ) <NEW_LINE> return concepts
extracts a set of all concepts present in the given ontology @param[in] rdfOntology the rdflib.Graph object representing the ontology @returns a set of all concepts present in the given ontology
625941c2293b9510aa2c3232
def t_ID(self, token): <NEW_LINE> <INDENT> global reserved <NEW_LINE> token.type = reserved.get(token.value, 'IDENTIFIER') <NEW_LINE> return token
[a-zA-Z_][-a-zA-Z_0-9]*
625941c292d797404e304123
def check_page_content(fullpage): <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> if len(parse_page(fullpage)) == 6: <NEW_LINE> <INDENT> return True <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> <DEDENT> except IndexError: <NEW_LINE> <INDENT> return False
Takes a string containing a full page. Returns True if it parses into 6 parts, False otherwise.
625941c2e76e3b2f99f3a7a8
def find_node(self, **kwargs): <NEW_LINE> <INDENT> for child in kwargs['parent'].children(): <NEW_LINE> <INDENT> if child.type().name() == kwargs['node_type']: <NEW_LINE> <INDENT> if kwargs['node_type'] == 'geo': <NEW_LINE> <INDENT> shop_geometrypath_parm = child.evalParm('shop_geometrypath') <NEW_LINE> if shop_geometrypath_parm == kwargs['archive'].path(): <NEW_LINE> <INDENT> return child <NEW_LINE> <DEDENT> <DEDENT> elif kwargs['node_type'] == 'pxrdelayedreadarchive::22': <NEW_LINE> <INDENT> file_parm_rvalue = child.parm('file').rawValue() <NEW_LINE> if self.saveto_parm.rawValue() == file_parm_rvalue: <NEW_LINE> <INDENT> return child
Find a specific node in the node graph.
625941c24f6381625f1149d6
def img_code_overdue_decode(token): <NEW_LINE> <INDENT> obj = decrypt(token) <NEW_LINE> if int(obj['_time']) + obj['ex'] <= time.time(): <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> return obj
Decode the image captcha token; returns the decrypted object, or False if it has expired.
625941c263b5f9789fde707f
def nodes(self): <NEW_LINE> <INDENT> nodes = [] <NEW_LINE> for vmware_vm in self._vmware_vms.values(): <NEW_LINE> <INDENT> nodes.append( {"class": VMwareVM.__name__, "name": vmware_vm["name"], "server": vmware_vm["server"], "symbol": vmware_vm["symbol"], "categories": [vmware_vm["category"]]} ) <NEW_LINE> <DEDENT> return nodes
Returns all the node data necessary to represent a node in the nodes view and create a node on the scene.
625941c28a349b6b435e810d
def Geman_McClure(x, a=1.0): <NEW_LINE> <INDENT> return (x**2) / (1 + (x**2 / a**2))
a = outlier threshold
625941c2baa26c4b54cb10bb
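A pure-Python sketch of the robust loss in the record above: it behaves like a quadratic near zero and saturates toward `a**2` for large residuals, which is what makes outliers cheap.

```python
def geman_mcclure(x, a=1.0):
    # x**2 / (1 + x**2 / a**2): quadratic near 0, saturates at a**2
    return x * x / (1.0 + x * x / (a * a))

print(geman_mcclure(0.0))    # 0.0
print(geman_mcclure(1.0))    # 0.5  (half the saturation value at x == a)
print(geman_mcclure(100.0))  # close to the saturation value a**2 == 1.0
```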
def check_router_transport(transport): <NEW_LINE> <INDENT> if not isinstance(transport, dict): <NEW_LINE> <INDENT> raise InvalidConfigException("'transport' items must be dictionaries ({} encountered)\n\n{}".format(type(transport), pformat(transport))) <NEW_LINE> <DEDENT> if 'type' not in transport: <NEW_LINE> <INDENT> raise InvalidConfigException("missing mandatory attribute 'type' in component") <NEW_LINE> <DEDENT> ttype = transport['type'] <NEW_LINE> if ttype not in [ 'web', 'websocket', 'rawsocket', 'flashpolicy', 'websocket.testee', 'stream.testee' ]: <NEW_LINE> <INDENT> raise InvalidConfigException("invalid attribute value '{}' for attribute 'type' in transport item\n\n{}".format(ttype, pformat(transport))) <NEW_LINE> <DEDENT> if ttype == 'websocket': <NEW_LINE> <INDENT> check_listening_transport_websocket(transport) <NEW_LINE> <DEDENT> elif ttype == 'rawsocket': <NEW_LINE> <INDENT> check_listening_transport_rawsocket(transport) <NEW_LINE> <DEDENT> elif ttype == 'web': <NEW_LINE> <INDENT> check_listening_transport_web(transport) <NEW_LINE> <DEDENT> elif ttype == 'flashpolicy': <NEW_LINE> <INDENT> check_listening_transport_flashpolicy(transport) <NEW_LINE> <DEDENT> elif ttype == 'websocket.testee': <NEW_LINE> <INDENT> check_listening_transport_websocket_testee(transport) <NEW_LINE> <DEDENT> elif ttype == 'stream.testee': <NEW_LINE> <INDENT> check_listening_transport_stream_testee(transport) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> raise InvalidConfigException("logic error")
Check router transports. https://github.com/crossbario/crossbardocs/blob/master/pages/docs/administration/router/Router-Transports.md :param transport: Router transport item to check. :type transport: dict
625941c25f7d997b87174a30
def test_get_recognizer_multiplier(self): <NEW_LINE> <INDENT> url = self.get_server_url() + "/settings/recognizer_multiplier" <NEW_LINE> headers = {"Content-Type": "application/json"} <NEW_LINE> self.settings.options.recognizer_multiplier = 3000 <NEW_LINE> result = self.client.get(url, headers=headers) <NEW_LINE> expected_content = { "recognizer_multiplier": 3000 } <NEW_LINE> self.assertEqual(json.dumps(expected_content, sort_keys=True), json.dumps(json.loads(result.get_data().decode('utf-8')), sort_keys=True)) <NEW_LINE> self.assertEqual(result.status_code, 200)
Test for api get recognizer_multiplier.
625941c276d4e153a657eaca
def delete(self, obj_id): <NEW_LINE> <INDENT> return self._get_wrapper( obj_id ).deleteModel()
Remove a dataset series identified by the ``obj_id`` parameter. :param obj_id: the EOID of the object to be deleted :rtype: no output returned
625941c267a9b606de4a7e55
def is_empty(self): <NEW_LINE> <INDENT> return self.num_items == 0
returns True if the heap is empty, False otherwise
625941c285dfad0860c3adf4
def increment_key(dictionary, key): <NEW_LINE> <INDENT> if key in dictionary: <NEW_LINE> <INDENT> dictionary[key] += 1 <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> dictionary[key] = 1
If the dictionary has the key, increment the value, otherwise initialize the key with value 1.
625941c201c39578d7e74dd5
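In Python 3 the same counting pattern is usually written with `dict.get` or `collections.Counter`; a minimal sketch:

```python
from collections import Counter

def increment_key(dictionary, key):
    # Python 3 idiom for the has_key() version in the record above
    dictionary[key] = dictionary.get(key, 0) + 1

counts = {}
for word in ["a", "b", "a"]:
    increment_key(counts, word)
print(counts)                    # {'a': 2, 'b': 1}
print(Counter(["a", "b", "a"]))  # Counter({'a': 2, 'b': 1}) -- same tallies
```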
def create_app(config="src.config.DevConfig", test_config=None): <NEW_LINE> <INDENT> from src.general import general_bp <NEW_LINE> from src.cli import admin_bp <NEW_LINE> from src.auth.blueprint import auth_bp <NEW_LINE> from src.links.blueprint import link_bp <NEW_LINE> from src.importer.blueprint import importer_bp <NEW_LINE> from src.collections.blueprint import collection_bp <NEW_LINE> app = Flask(__name__) <NEW_LINE> app.config.from_object(config) <NEW_LINE> celery.config_from_object("src.config.CeleryConfig") <NEW_LINE> if test_config: <NEW_LINE> <INDENT> app.config.from_mapping(test_config) <NEW_LINE> <DEDENT> from src.model import db, User, Link <NEW_LINE> db.init_app(app) <NEW_LINE> migrate.init_app(app, db, directory="alembic") <NEW_LINE> limiter = Limiter( app=app, key_func=get_remote_address, default_limits=["5 per second", "150 per day"], ) <NEW_LINE> limiter.limit(link_bp) <NEW_LINE> limiter.limit(auth_bp) <NEW_LINE> limiter.limit(general_bp) <NEW_LINE> CORS(app) <NEW_LINE> app.register_blueprint(general_bp, url_prefix="/v1") <NEW_LINE> app.register_blueprint(auth_bp, url_prefix="/v1/auth") <NEW_LINE> app.register_blueprint(link_bp, url_prefix="/v1/links") <NEW_LINE> app.register_blueprint(importer_bp, url_prefix="/v1/import") <NEW_LINE> app.register_blueprint(collection_bp, url_prefix="/v1/collections") <NEW_LINE> app.register_blueprint(admin_bp) <NEW_LINE> app.teardown_appcontext(teardown_handler) <NEW_LINE> app.register_error_handler(404, handlers.handle_not_found) <NEW_LINE> app.register_error_handler(500, handlers.handle_server_error) <NEW_LINE> app.register_error_handler(InvalidUsage, handlers.handle_invalid_data) <NEW_LINE> app.register_error_handler(SQLAlchemyError, handlers.handle_sqa_general) <NEW_LINE> app.register_error_handler(ValidationError, handlers.handle_validation_error) <NEW_LINE> app.register_error_handler(AuthError, handlers.handle_auth_error) <NEW_LINE> return app
The application factory for Espresso. Sets up configuration parameters, sets up the database connection and hooks up the view blueprints for all the API routes. Arguments: - config: The class object to configure Flask from - test_config: Key-value mappings to override common configuration (i.e. for running unit tests and overriding the database URI)
625941c291f36d47f21ac48a
@implicit_stochastic <NEW_LINE> @scope.define <NEW_LINE> def GMM1(weights, mus, sigmas, low=None, high=None, q=None, rng=None, size=()): <NEW_LINE> <INDENT> weights, mus, sigmas = list(map(np.asarray, (weights, mus, sigmas))) <NEW_LINE> assert len(weights) == len(mus) == len(sigmas) <NEW_LINE> n_samples = np.prod(size) <NEW_LINE> if low is None and high is None: <NEW_LINE> <INDENT> active = np.argmax(rng.multinomial(1, weights, (n_samples,)), axis=1) <NEW_LINE> samples = rng.normal(loc=mus[active], scale=sigmas[active]) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> low = float(low) <NEW_LINE> high = float(high) <NEW_LINE> if low >= high: <NEW_LINE> <INDENT> raise ValueError('low >= high', (low, high)) <NEW_LINE> <DEDENT> samples = [] <NEW_LINE> while len(samples) < n_samples: <NEW_LINE> <INDENT> active = np.argmax(rng.multinomial(1, weights)) <NEW_LINE> draw = rng.normal(loc=mus[active], scale=sigmas[active]) <NEW_LINE> if low <= draw < high: <NEW_LINE> <INDENT> samples.append(draw) <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> samples = np.reshape(np.asarray(samples), size) <NEW_LINE> if q is None: <NEW_LINE> <INDENT> return samples <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> return np.round(old_div(samples, q)) * q
Sample from truncated 1-D Gaussian Mixture Model
625941c296565a6dacc8f666
def select_menu() -> Menu: <NEW_LINE> <INDENT> s = [f'({m.value}){m.name}' for m in Menu] <NEW_LINE> while True: <NEW_LINE> <INDENT> print(*s, sep=' ', end='') <NEW_LINE> n = int(input(':')) <NEW_LINE> if 1 <= n <= len(Menu): <NEW_LINE> <INDENT> return Menu(n)
Menu selection.
625941c266673b3332b9202b
def do_exploit(self): <NEW_LINE> <INDENT> if self.current_plugin: <NEW_LINE> <INDENT> rn = self.exec_plugin() <NEW_LINE> if rn == 'Cookie is required!': <NEW_LINE> <INDENT> return ["", 'Cookie is required!', ""] <NEW_LINE> <DEDENT> if not rn[0]: <NEW_LINE> <INDENT> logger.error(rn[1]) <NEW_LINE> <DEDENT> return rn <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> logger.error("Select a POC plugin first")
Execute the plugin. :return:
625941c2d99f1b3c44c6752e
def test_edit_state_same(self): <NEW_LINE> <INDENT> self.assertEditValueNoop( "state", IncidentState.on_scene, IncidentState.on_scene )
Edit incident state to same value is a no-op.
625941c2287bf620b61d39ff
def _satisfied_by(t, o): <NEW_LINE> <INDENT> return t.satisfied_by(o)
Pickleable type check function.
625941c238b623060ff0ad88
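The point of `_satisfied_by` being a module-level function is that `pickle` serializes functions by qualified name; a lambda or nested function with the same body would not pickle. A sketch (the equality check is a hypothetical stand-in for the real `t.satisfied_by(o)`):

```python
import pickle

def _satisfied_by(t, o):
    return t == o  # stand-in predicate for illustration

# Module-level functions round-trip through pickle by name.
restored = pickle.loads(pickle.dumps(_satisfied_by))
print(restored(3, 3))  # True

# A lambda with the same body cannot be pickled.
try:
    pickle.dumps(lambda t, o: t == o)
except (pickle.PicklingError, AttributeError, TypeError) as e:
    print("lambda not pickleable:", type(e).__name__)
```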
def update(self): <NEW_LINE> <INDENT> self.y += (self.random_rain_drop_rate) <NEW_LINE> self.rect.y = self.y
Move the raindrops down the screen
625941c24527f215b584c3f3
@cli.command() <NEW_LINE> @click.option("--force", "-f", is_flag=True, help="Drop existing database and user.") <NEW_LINE> def init_test_db(force=False): <NEW_LINE> <INDENT> db.init_db_connection(config.POSTGRES_ADMIN_URI) <NEW_LINE> if force: <NEW_LINE> <INDENT> res = db.run_sql_script_without_transaction(os.path.join(ADMIN_SQL_DIR, 'drop_test_db.sql')) <NEW_LINE> if not res: <NEW_LINE> <INDENT> raise Exception('Failed to drop existing database and user! Exit code: %i' % res) <NEW_LINE> <DEDENT> <DEDENT> print('Creating user and a database for testing...') <NEW_LINE> res = db.run_sql_script_without_transaction(os.path.join(ADMIN_SQL_DIR, 'create_test_db.sql')) <NEW_LINE> if not res: <NEW_LINE> <INDENT> raise Exception('Failed to create test user and database! Exit code: %i' % res) <NEW_LINE> <DEDENT> res = db.run_sql_script_without_transaction(os.path.join(ADMIN_SQL_DIR, 'create_extensions.sql')) <NEW_LINE> db.engine.dispose() <NEW_LINE> print("Done!")
Same as `init_db` command, but creates a database that will be used to run tests and doesn't import data (no need to do that). The `PG_CONNECT_TEST` variable must be defined in the config file.
625941c2c432627299f04bdf
def complist(self): <NEW_LINE> <INDENT> ans = [] <NEW_LINE> for comp in (self.component1, self.component2): <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> ans.extend(comp.complist()) <NEW_LINE> <DEDENT> except AttributeError: <NEW_LINE> <INDENT> ans.append(comp) <NEW_LINE> <DEDENT> <DEDENT> return ans
Return a list of all components and sub-components.
625941c21f5feb6acb0c4aed
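A minimal sketch of the duck-typed flattening in the record above, with a hypothetical `Pair` composite: composites recurse via `complist()`, and leaves (which lack the method) are caught by the `AttributeError` branch.

```python
class Pair:
    def __init__(self, component1, component2):
        self.component1 = component1
        self.component2 = component2

    def complist(self):
        ans = []
        for comp in (self.component1, self.component2):
            try:
                ans.extend(comp.complist())  # composite: recurse
            except AttributeError:
                ans.append(comp)             # leaf: no complist() method
        return ans

tree = Pair(Pair("a", "b"), "c")
print(tree.complist())  # ['a', 'b', 'c']
```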
def extract_hashtags(entities): <NEW_LINE> <INDENT> tags = [t['text'] for t in entities["hashtags"]] <NEW_LINE> tags = cat(tags) <NEW_LINE> return tags if is_ascii(tags) else ""
Gets tags tweet by tweet, returning their names separated by spaces.
625941c2956e5f7376d70e08
def process_file(f): <NEW_LINE> <INDENT> data = [] <NEW_LINE> info = {} <NEW_LINE> with open("{}/{}".format(datadir, f), "r") as html: <NEW_LINE> <INDENT> soup = BeautifulSoup(html, "lxml") <NEW_LINE> t = soup.find(id="DataGrid1").find_all("tr") <NEW_LINE> for i in range(1, len(t)): <NEW_LINE> <INDENT> info["courier"], info["airport"] = f[:6].split("-") <NEW_LINE> d = t[i].find_all("td") <NEW_LINE> if d[1].get_text() != "TOTAL": <NEW_LINE> <INDENT> info["year"] = int(d[0].get_text()) <NEW_LINE> info["month"] = int(d[1].get_text()) <NEW_LINE> flights = {} <NEW_LINE> flights["domestic"] = int(d[2].get_text().replace(',', '')) <NEW_LINE> flights["international"] = int(d[3].get_text().replace(',', '')) <NEW_LINE> info["flights"] = flights <NEW_LINE> data.append(info) <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> return data
This function extracts data from the file given as the function argument into a list of dictionaries. This is an example of the data structure you should return: data = [{"courier": "FL", "airport": "ATL", "year": 2012, "month": 12, "flights": {"domestic": 100, "international": 100} }, {"courier": "..."} ] Note - year, month, and the flight data should be integers. You should skip the rows that contain the TOTAL data for a year.
625941c2442bda511e8be3b5
def quit(self): <NEW_LINE> <INDENT> self.running = False
Exits the UI and the run wrapper.
625941c2d6c5a10208143fe3
def __update_service_query(self): <NEW_LINE> <INDENT> if self.service_query['status'] == u'OK': <NEW_LINE> <INDENT> self.__status_update(u"Updating") <NEW_LINE> <DEDENT> try: <NEW_LINE> <INDENT> status_no = self.status_dict[self.service_query['status']] <NEW_LINE> if not status_no == 15: <NEW_LINE> <INDENT> update_query_url = self.update_query_uri.format(url=self._update_service_url, fid=self.service_query['id'], durum=status_no, comment=self.__clear_text(self.service_query['comment'])) <NEW_LINE> _response = urlopen(update_query_url).read() <NEW_LINE> if _response == '0': <NEW_LINE> <INDENT> self.log.info(u'Servis URL: %s' % repr(update_query_url)) <NEW_LINE> error_text = u'Update Service Response: %s' % _response <NEW_LINE> self.log.info(error_text) <NEW_LINE> self.__status_update(u'!Unable To Update!', comment=error_text) <NEW_LINE> <DEDENT> elif self.service_query['status'] == u"Updating": <NEW_LINE> <INDENT> self.__status_update(u"Updated") <NEW_LINE> <DEDENT> <DEDENT> else: <NEW_LINE> <INDENT> self.__status_update(self.service_query['status'], comment=u'Not Updated.') <NEW_LINE> <DEDENT> <DEDENT> except Exception as e: <NEW_LINE> <INDENT> self.log.exception(e.message) <NEW_LINE> self.stop() <NEW_LINE> error_text = u'Status not reported to Update Service.' <NEW_LINE> self.__status_update(u'!Not Updated!', comment=error_text) <NEW_LINE> raise Exception(error_text) <NEW_LINE> <DEDENT> finally: <NEW_LINE> <INDENT> self.__write_csv() <NEW_LINE> if not self.parent.process_started: <NEW_LINE> <INDENT> self.parent.info_flow.AppendText(u"\r\n> Warning: Process STOPPED.") <NEW_LINE> self.parent.stat_gauge.SetValue(0)
Update Service Query
625941c23539df3088e2e2e5
def run_algorithm(self, generations): <NEW_LINE> <INDENT> for _ in range(generations): <NEW_LINE> <INDENT> self.selection() <NEW_LINE> self.crossover() <NEW_LINE> self.mutate() <NEW_LINE> self.compare_best_gene() <NEW_LINE> <DEDENT> print("Final best value: {}".format(self.best_dist))
Runs the genetic algorithm for the given number of generations.
625941c29b70327d1c4e0d6e
def merge(self, other_range): <NEW_LINE> <INDENT> logging.info("Merging time ranges:\n\t%s\n\t%s" % (self, other_range)) <NEW_LINE> self.length = self.length + other_range.length <NEW_LINE> other_range.length = 0 <NEW_LINE> logging.info("Resulting time range: %s" % (self,))
Merge the two ranges; once this method returns, the other range has a length of zero.
625941c2be383301e01b5424
def __init__(self, *args, **kwds): <NEW_LINE> <INDENT> if args or kwds: <NEW_LINE> <INDENT> super(DetectObjectRequest, self).__init__(*args, **kwds)
Constructor. Any message fields that are implicitly/explicitly set to None will be assigned a default value. The recommended use is keyword arguments as this is more robust to future message changes. You cannot mix in-order arguments and keyword arguments. The available fields are: :param args: complete set of field values, in .msg order :param kwds: use keyword arguments corresponding to message field names to set specific fields.
625941c2187af65679ca50b8
def rewrite_url(self, matches): <NEW_LINE> <INDENT> text_before = matches.groups()[0] <NEW_LINE> url = matches.groups()[1] <NEW_LINE> text_after = matches.groups()[2] <NEW_LINE> quotes_used = '' <NEW_LINE> if url[:1] in '"\'': <NEW_LINE> <INDENT> quotes_used = url[:1] <NEW_LINE> url = url[1:] <NEW_LINE> <DEDENT> if url[-1:] in '"\'': <NEW_LINE> <INDENT> url = url[:-1] <NEW_LINE> <DEDENT> url = self.replace_url(url) or url <NEW_LINE> result = 'url({before}{quotes}{url}{quotes}{after})'.format( before=text_before, quotes=quotes_used, url=url, after=text_after ) <NEW_LINE> return result
Rewrite found URL pattern.
625941c273bcbd0ca4b2c011
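The quote handling in the record above can be sketched end-to-end with `re.sub`, where the substitution callback receives the match object. The regex and `replace_url` rule here are hypothetical stand-ins for the instance's own pattern and `self.replace_url`.

```python
import re

URL_RE = re.compile(r'url\((\s*)([^)]+?)(\s*)\)')

def replace_url(url):
    # Hypothetical rewrite rule: prefix relative URLs with a CDN host.
    return url if url.startswith('http') else 'https://cdn.example.com/' + url

def rewrite_url(matches):
    before, url, after = matches.groups()
    quotes = ''
    if url[:1] in '"\'':          # remember and strip a leading quote
        quotes = url[:1]
        url = url[1:]
    if url[-1:] in '"\'':         # strip a trailing quote
        url = url[:-1]
    url = replace_url(url) or url
    return 'url({before}{q}{url}{q}{after})'.format(
        before=before, q=quotes, url=url, after=after)

css = 'body { background: url("img/bg.png"); }'
print(URL_RE.sub(rewrite_url, css))
# body { background: url("https://cdn.example.com/img/bg.png"); }
```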
def __plot_corrected_maps__(self): <NEW_LINE> <INDENT> if self.verbose == True: print('... plotting corrected maps') <NEW_LINE> for dsName in self.dsNames: <NEW_LINE> <INDENT> fig, axes = plt.subplots(figsize=(12,7), ncols=3) <NEW_LINE> plot_raster(self.velMaps[dsName], extent=self.extent, mask=self.masks[dsName], cmap='jet', cbarOrient='horizontal', fig=fig, ax=axes[0]) <NEW_LINE> axes[0].set_title(dsName+' original data') <NEW_LINE> plot_raster(self.diffPlanes[dsName], extent=self.extent, mask=self.masks[dsName], cmap='jet', cbarOrient='horizontal', fig=fig, ax=axes[1]) <NEW_LINE> axes[1].set_title('Difference plane') <NEW_LINE> plot_raster(self.corrVelocities[dsName], extent=self.extent, mask=self.masks[dsName], vmin=-4, vmax=-2.3, cmap='jet', cbarOrient='horizontal', fig=fig, ax=axes[2]) <NEW_LINE> axes[2].set_title('Corrected image') <NEW_LINE> axes[2].scatter(self.GPSsamples[dsName].lon, self.GPSsamples[dsName].lat, 16, c=self.GPSsamples[dsName].GPSlos, vmin=-4, vmax=-2.3, cmap='jet')
Plot corrected maps.
625941c23539df3088e2e2e6
def start(self, source): <NEW_LINE> <INDENT> vote = self.VOTE <NEW_LINE> minLineLength = self.MIN_LINE_LENGTH <NEW_LINE> maxLineGap = self.MAX_LINE_GAP <NEW_LINE> lines = cv2.HoughLinesP(source, 1, np.pi / 180., vote, minLineLength, maxLineGap) <NEW_LINE> return lines
@return: lines
625941c23317a56b86939bf7
def to_pixel(self, wcs): <NEW_LINE> <INDENT> return PixelPolygonRegion.from_sky(self, wcs)
This function returns the pixel version of this region. :param wcs: the WCS to use for the conversion :return: the pixel region
625941c21b99ca400220aa4b
def realms(self, region=None): <NEW_LINE> <INDENT> url = f'{self.base}/realms/{region}.json' <NEW_LINE> r = requests.get(url) <NEW_LINE> return r
Get realm data. Newest version endpoints for different static data. Parameters ---------- region : str na, euw, jp, kr... Returns ------- Response
625941c2bd1bec0571d905c9
def write_control_point(self, C, filename): <NEW_LINE> <INDENT> with open(filename, 'wb') as f: <NEW_LINE> <INDENT> pickle.dump(C, f)
Output control point Parameters ---------- C : dict control point filename : str(file path and name) write data file name Returns ---------- None
625941c2a8370b771705283b
def __init__(self): <NEW_LINE> <INDENT> self.swagger_types = { 'soc_parent_type': 'str', 'parent_controller': 'str', 'parent_esm': 'str', 'parent_drawer': 'str' } <NEW_LINE> self.attribute_map = { 'soc_parent_type': 'socParentType', 'parent_controller': 'parentController', 'parent_esm': 'parentEsm', 'parent_drawer': 'parentDrawer' } <NEW_LINE> self._soc_parent_type = None <NEW_LINE> self._parent_controller = None <NEW_LINE> self._parent_esm = None <NEW_LINE> self._parent_drawer = None
SocParent - a model defined in Swagger :param dict swaggerTypes: The key is attribute name and the value is attribute type. :param dict attributeMap: The key is attribute name and the value is json key in definition.
625941c2a934411ee375162e
def update_log(username, op): <NEW_LINE> <INDENT> log_file = os.path.join(server.app.root_path, 'uploads', '.admin.log') <NEW_LINE> with open(log_file, 'a') as log: <NEW_LINE> <INDENT> log.write('%s\t%s\t%s\n' % (username, ' ', op))
Updates the admin log
625941c2091ae35668666efc
def phinorm2v(self, phinorm, t, **kwargs): <NEW_LINE> <INDENT> return self._phinorm2Quan(self._getVSpline, phinorm, t, **kwargs)
Calculates the flux surface volume corresponding to the passed phinorm (normalized toroidal flux) values. By default, EFIT only computes this inside the LCFS. Args: phinorm (Array-like or scalar float): Values of the normalized toroidal flux to map to v. t (Array-like or scalar float): Times to perform the conversion at. If `t` is a single value, it is used for all of the elements of `phinorm`. If the `each_t` keyword is True, then `t` must be scalar or have exactly one dimension. If the `each_t` keyword is False, `t` must have the same shape as `phinorm`. Keyword Args: sqrt (Boolean): Set to True to return the square root of v. Only the square root of positive values is taken. Negative values are replaced with zeros, consistent with Steve Wolfe's IDL implementation efit_rz2rho.pro. Default is False. each_t (Boolean): When True, the elements in `phinorm` are evaluated at each value in `t`. If True, `t` must have only one dimension (or be a scalar). If False, `t` must match the shape of `phinorm` or be a scalar. Default is True (evaluate ALL `phinorm` at EACH element in `t`). k (positive int): The degree of polynomial spline interpolation to use in converting coordinates. return_t (Boolean): Set to True to return a tuple of (`v`, `time_idxs`), where `time_idxs` is the array of time indices actually used in evaluating `v` with nearest-neighbor interpolation. (This is mostly present as an internal helper.) Default is False (only return `v`). Returns: `v` or (`v`, `time_idxs`) * **v** (`Array or scalar float`) - The flux surface volume. If all of the input arguments are scalar, then a scalar is returned. Otherwise, a scipy Array is returned. * **time_idxs** (Array with same shape as `v`) - The indices (in :py:meth:`self.getTimeBase`) that were used for nearest-neighbor interpolation. Only returned if `return_t` is True. Examples: All assume that `Eq_instance` is a valid instance of the appropriate extension of the :py:class:`Equilibrium` abstract class. Find single v value for phinorm=0.7, t=0.26s:: v_val = Eq_instance.phinorm2v(0.7, 0.26) Find v values at phinorm values of 0.5 and 0.7 at the single time t=0.26s:: v_arr = Eq_instance.phinorm2v([0.5, 0.7], 0.26) Find v values at phinorm=0.5 at times t=[0.2s, 0.3s]:: v_arr = Eq_instance.phinorm2v(0.5, [0.2, 0.3]) Find v values at (phinorm, t) points (0.6, 0.2s) and (0.5, 0.3s):: v_arr = Eq_instance.phinorm2v([0.6, 0.5], [0.2, 0.3], each_t=False)
625941c2d8ef3951e32434d8
def test_title_oneline(): <NEW_LINE> <INDENT> t = h.Title("PythonClass - Session 6 example blah blah I'm so long blaaaaaaah") <NEW_LINE> f = cStringIO.StringIO() <NEW_LINE> t.render(f) <NEW_LINE> f.reset() <NEW_LINE> assert f.read() == "<title>PythonClass - Session 6 example blah blah I'm so long blaaaaaaah</title>\n"
Test that the title tag displays on one line (implicitly tests inline performance)
625941c2d18da76e2353246e
def check(self): <NEW_LINE> <INDENT> with self.mutex: <NEW_LINE> <INDENT> enabled = self.safetyEnabled <NEW_LINE> stopTime = self.safetyStopTime <NEW_LINE> <DEDENT> if not enabled or RobotState.isDisabled() or RobotState.isTest(): <NEW_LINE> <INDENT> return <NEW_LINE> <DEDENT> if stopTime < Timer.getFPGATimestamp(): <NEW_LINE> <INDENT> logger.warning("%s... Output not updated often enough." % self.getDescription()) <NEW_LINE> self.stopMotor()
Check if this motor has exceeded its timeout. This method is called periodically to determine if this motor has exceeded its timeout value. If it has, the stop method is called, and the motor is shut down until its value is updated again.
625941c2379a373c97cfaadf
def isSwear(self, swearlist): <NEW_LINE> <INDENT> for word in self.contents.split(): <NEW_LINE> <INDENT> if word.lower() in swearlist: <NEW_LINE> <INDENT> return True <NEW_LINE> <DEDENT> <DEDENT> return False
checks if the message contains a swear
625941c26e29344779a625ae
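The containment check in `isSwear` can be sketched as a standalone function (hypothetical name `is_swear`, taking the message text directly instead of `self.contents`):

```python
def is_swear(contents, swearlist):
    # True if any whitespace-separated word, lowercased, is in the list
    return any(word.lower() in swearlist for word in contents.split())
```

Note it splits on whitespace only, so a word with trailing punctuation such as `"banana!"` would not match the bare entry `"banana"`.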
def tearDown(self): <NEW_LINE> <INDENT> db.session.remove() <NEW_LINE> db.drop_all()
Remove all test suite utilities.
625941c2187af65679ca50b9
def test_empty_repo(): <NEW_LINE> <INDENT> pass
Empty repos have no pristine-tar branch Methods tested: - L{gbp.deb.git.DebianGitRepository.has_pristine_tar_branch} - L{gbp.deb.pristinetar.DebianPristineTar.has_commit} >>> import gbp.deb.git >>> repo = gbp.deb.git.DebianGitRepository(repo_dir) >>> repo.has_pristine_tar_branch() False >>> repo.pristine_tar.has_commit('upstream', '1.0', 'gzip') False
625941c22ae34c7f2600d0cc
def __download_single(self, item_id, request): <NEW_LINE> <INDENT> query = '/'.join([request, str(item_id)]) <NEW_LINE> res = requests.get(query) <NEW_LINE> res = res.json() <NEW_LINE> self.attr_json[res.get('id')] = res
Downloads one item's worth of detail and loads it into a dict
625941c2aad79263cf3909d9
@get.command(name='members') <NEW_LINE> def get_members(): <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> results = api.get_members() <NEW_LINE> <DEDENT> except DkronException as ex: <NEW_LINE> <INDENT> print('Error while fetching: %s' % str(ex)) <NEW_LINE> exit(1) <NEW_LINE> <DEDENT> print(json.dumps(results))
Get system members
625941c255399d3f0558864e
def grant_request(self): <NEW_LINE> <INDENT> path = "/oauth/v2/auth" <NEW_LINE> params = { "response_type": "code", "client_id": self.client_id, "scope": self.scope, "redirect_uri": self.redirect, } <NEW_LINE> query = urllib.parse.urlencode(params, True) <NEW_LINE> grant_access_url = urllib.parse.urlunsplit( (self.SCHEME, self.BASE, path, query, "") ) <NEW_LINE> webbrowser.open_new(grant_access_url) <NEW_LINE> httpServer = http.server.HTTPServer(("localhost", 8080), HTTPAuthHandler) <NEW_LINE> httpServer.handle_request() <NEW_LINE> if hasattr(httpServer, "grant_token"): <NEW_LINE> <INDENT> self.grant_token = httpServer.grant_token[0]
Opens a Web Browser to log in and grant access. This first step creates a grant token to use in requesting the access token.
625941c25e10d32532c5eec2
def invert(self): <NEW_LINE> <INDENT> return _({self[k]: k for k in self._})
Invert the dict's keys and values @param : none @return : _(dict) e.g. _({"k1": "v1", "k2": "v2"}).invert()._ {"v1": "k1", "v2": "k2"}
625941c263d6d428bbe4448a
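The same inversion works as a plain-dict sketch without the `_` wrapper; it assumes all values are hashable, and later duplicates of a value overwrite earlier ones:

```python
def invert(d):
    # swap keys and values of a dict
    return {v: k for k, v in d.items()}
```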
def ui_acct_list(browser_session, ui_loginpage, ui_user): <NEW_LINE> <INDENT> selenium = browser_session <NEW_LINE> if 'Welcome to Cloud Meter' in selenium.page_source: <NEW_LINE> <INDENT> browser = Browser(selenium) <NEW_LINE> return browser, LoginView(browser) <NEW_LINE> <DEDENT> elif 'Login to Your Account' in selenium.page_source: <NEW_LINE> <INDENT> browser, login = ui_loginpage() <NEW_LINE> <DEDENT> elif not selenium.current_url.startswith('http'): <NEW_LINE> <INDENT> browser, login = ui_loginpage() <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> browser = Browser(selenium) <NEW_LINE> login = None <NEW_LINE> <DEDENT> if login: <NEW_LINE> <INDENT> wait = WebDriverWait(selenium, 10) <NEW_LINE> login.password.fill(ui_user['password']) <NEW_LINE> login.login.click() <NEW_LINE> wait.until(wait_for_page_text('Accounts')) <NEW_LINE> <DEDENT> return browser, login
Tool to navigate to the account list by logging in.
625941c25510c4643540f384
def merge(a, b): <NEW_LINE> <INDENT> from copy import deepcopy <NEW_LINE> if not isinstance(b, dict): <NEW_LINE> <INDENT> return b <NEW_LINE> <DEDENT> result = deepcopy(a) <NEW_LINE> for k, v in b.iteritems(): <NEW_LINE> <INDENT> if k in result and isinstance(result[k], dict): <NEW_LINE> <INDENT> result[k] = merge(result[k], v) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> result[k] = deepcopy(v) <NEW_LINE> <DEDENT> <DEDENT> return result
Recursively merges dicts, not just a simple a['key'] = b['key']: if both a and b have a key whose value is a dict, then merge is called on both values and the result is stored in the returned dictionary.
625941c2f8510a7c17cf9696
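The entry above is Python 2 (`iteritems`); a Python 3 sketch of the same recursive merge, leaving both inputs unmodified:

```python
from copy import deepcopy

def merge(a, b):
    # a non-dict b simply replaces a
    if not isinstance(b, dict):
        return b
    result = deepcopy(a)
    for k, v in b.items():
        if k in result and isinstance(result[k], dict):
            result[k] = merge(result[k], v)  # recurse into nested dicts
        else:
            result[k] = deepcopy(v)
    return result
```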
def parse_datetime(s): <NEW_LINE> <INDENT> dt, tm = _split_datetime(s) <NEW_LINE> return parse_date(dt) + parse_time(tm)
Parses ISO-8601 compliant timestamp and returns a tuple (year, month, day, hour, minute, second). Formats accepted are those listed in the descriptions of parse_date() and parse_time() with ' ' or 'T' used to separate date and time parts.
625941c2097d151d1a222df6
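Since `parse_date` and `parse_time` are not shown, here is a hypothetical minimal sketch of the combined behavior using `strptime`; it accepts only the plain `YYYY-MM-DD HH:MM:SS` form with `' '` or `'T'` as separator, whereas the real helpers accept more ISO-8601 variants:

```python
from datetime import datetime

def parse_datetime(s):
    # normalize the 'T' separator to a space, then parse one fixed format
    dt = datetime.strptime(s.replace('T', ' '), '%Y-%m-%d %H:%M:%S')
    return (dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second)
```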
def _json_routing_tables(self): <NEW_LINE> <INDENT> with FecTimer(MAPPING, "Json routing tables") as timer: <NEW_LINE> <INDENT> if timer.skip_if_cfg_false( "Reports", "write_json_routing_tables"): <NEW_LINE> <INDENT> return <NEW_LINE> <DEDENT> write_json_routing_tables(self._router_tables, self._json_folder)
Write, time and log the routing tables as json if requested
625941c2046cf37aa974cce4
def suffix_replace(original, old, new): <NEW_LINE> <INDENT> return original[:-len(old)] + new
Replaces the old suffix of the original string by a new suffix
625941c2dc8b845886cb54cf
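Usage sketch of `suffix_replace`; note the slice breaks for an empty `old` suffix, since `original[:-0]` is the empty string:

```python
def suffix_replace(original, old, new):
    # drop len(old) characters from the end, then append the new suffix
    return original[:-len(old)] + new
```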
def request_all_artist_toptags(self): <NEW_LINE> <INDENT> for track in self.tracks: <NEW_LINE> <INDENT> artist = track.metadata["artist"] <NEW_LINE> if settings.ENABLE_IGNORE_FEAT_ARTISTS: <NEW_LINE> <INDENT> artist = strip_feat_artist(artist) <NEW_LINE> <DEDENT> params = dict( method="artist.gettoptags", artist=artist, api_key=settings.LASTFM_KEY) <NEW_LINE> self.dispatch("all_artist", params)
request toptags of all artists in the album (via artist)
625941c232920d7e50b28169
def __eq__(self, conversion_electrode): <NEW_LINE> <INDENT> if len(self) != len(conversion_electrode): <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> for pair1 in conversion_electrode: <NEW_LINE> <INDENT> found = False <NEW_LINE> rxn1 = pair1.rxn <NEW_LINE> all_formulas1 = set([rxn1.all_comp[i].reduced_formula for i in xrange(len(rxn1.all_comp)) if abs(rxn1.coeffs[i]) > 1e-5]) <NEW_LINE> for pair2 in self: <NEW_LINE> <INDENT> rxn2 = pair2.rxn <NEW_LINE> all_formulas2 = set([rxn2.all_comp[i].reduced_formula for i in xrange(len(rxn2.all_comp)) if abs(rxn2.coeffs[i]) > 1e-5]) <NEW_LINE> if all_formulas1 == all_formulas2: <NEW_LINE> <INDENT> found = True <NEW_LINE> break <NEW_LINE> <DEDENT> <DEDENT> if not found: <NEW_LINE> <INDENT> return False <NEW_LINE> <DEDENT> <DEDENT> return True
Check if two electrodes are exactly the same:
625941c215fb5d323cde0aa8
def create_report_header(self): <NEW_LINE> <INDENT> header = '{:20}|{:^15}|{:^15}|{:>15}'.format("Donor Name", "Total Given", "Num Gifts", "Average Gift") + '\n' <NEW_LINE> header += ("-" * len(header)) <NEW_LINE> return header
Generate formatted header for report
625941c22ae34c7f2600d0cd
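The header formatting above can be sketched as a free function. The column widths come straight from the format spec (20 + three 15-wide fields plus three separators = 68 characters); the underline is one character longer because `len(header)` is taken after the trailing newline is appended:

```python
def create_report_header():
    header = '{:20}|{:^15}|{:^15}|{:>15}'.format(
        "Donor Name", "Total Given", "Num Gifts", "Average Gift") + '\n'
    # the underline length includes the newline character above
    header += ("-" * len(header))
    return header
```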
def _load_predaily(daily_path: str, abs_path: str, ps_cache_path: str, identifier: Identifier, current: MutableMapping[Identifier, int], first: MutableMapping[Identifier, date], cache_path: Optional[str] = None, cf_cache: Optional[_CF_PersistentIndex] = None) -> List[Event]: <NEW_LINE> <INDENT> events: List[Event] = [] <NEW_LINE> abs_for_this_ident = sorted(abs.parse_versions(abs_path, identifier), key=lambda a: a.identifier.version) <NEW_LINE> N_versions = len(abs_for_this_ident) <NEW_LINE> events_for_this_ident = sorted(daily.scan(daily_path, identifier, cache_path=cache_path), key=lambda d: d.event_date) <NEW_LINE> replacements = [e for e in events_for_this_ident if e.event_type == EventType.REPLACED] <NEW_LINE> crosslists = [e for e in events_for_this_ident if e.event_type == EventType.CROSSLIST] <NEW_LINE> assert len(replacements) < len(abs_for_this_ident) <NEW_LINE> repl_map = {} <NEW_LINE> for i, event in enumerate(replacements[::-1]): <NEW_LINE> <INDENT> repl_map[abs_for_this_ident[-(i + 1)].identifier.version] = event <NEW_LINE> <DEDENT> for i, abs_datum in enumerate(abs_for_this_ident): <NEW_LINE> <INDENT> if abs_datum.identifier.version in repl_map: <NEW_LINE> <INDENT> event_date = _datetime_from_date( repl_map[abs_datum.identifier.version].event_date, identifier ) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> event_date = _datetime_from_date(abs_datum.submitted_date, identifier) <NEW_LINE> <DEDENT> while crosslists and crosslists[0].event_date < event_date.date(): <NEW_LINE> <INDENT> cross = crosslists.pop(0) <NEW_LINE> last = events[-1] <NEW_LINE> last.version.metadata.secondary_classification = [ c for c in last.version.metadata.secondary_classification if c not in cross.categories ] <NEW_LINE> <DEDENT> if abs_datum.identifier.version not in repl_map: <NEW_LINE> <INDENT> events.append(_event_from_abs(abs_path, ps_cache_path, abs_datum, event_date, cf_cache=cf_cache)) <NEW_LINE> current[events[-1].identifier.arxiv_id] = events[-1].identifier.version <NEW_LINE> if events[-1].identifier.version == 1: <NEW_LINE> <INDENT> first[events[-1].identifier.arxiv_id] = event_date <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> return events
Generate inferred events prior to daily.log based on abs files. Approach: - v1 announced date is the v1 submission date - if there are multiple versions: - scan the daily.log for all replacements of that e-print - align from the most recent version, backward - if there are any remaining versions between v1 and the lowest v from the previous step, use the submission date for that v from the abs file as the announced date. - if we have explicit cross-list events, exclude those crosses from any events that we generate here.
625941c245492302aab5e25c
def log_process_ngr_line(self, log): <NEW_LINE> <INDENT> count = 0 <NEW_LINE> if log != '': <NEW_LINE> <INDENT> if log[0] == '[': <NEW_LINE> <INDENT> if log[0:12] == '[DoS attack:': <NEW_LINE> <INDENT> network_log = self.log_process_parse_ngr_log(self.dos_inc_type, log) <NEW_LINE> count = self.sql_server.sql_execute(network_log.sql_insert_network_log_string()) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> self.warning_logs.append(log) <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> return count
process a single log record.
625941c2e5267d203edcdc3a
def test_cim_element_contract(test_case): <NEW_LINE> <INDENT> for required in test_case.valid_metadata: <NEW_LINE> <INDENT> invalid = dict(test_case.valid_metadata) <NEW_LINE> del invalid[required] <NEW_LINE> test_case.assertRaises( DaoContractException, test_case.element.cim_element, invalid, [])
Standard method to test that the contract is enforced.
625941c2046cf37aa974cce5
def read(handle): <NEW_LINE> <INDENT> record = Record(handle) <NEW_LINE> record.comment_line = str(handle.readline()).rstrip() <NEW_LINE> sample_loci_line = str(handle.readline()).rstrip().replace(',', '') <NEW_LINE> all_loci = sample_loci_line.split(' ') <NEW_LINE> record.loci_list.extend(all_loci) <NEW_LINE> line = handle.readline() <NEW_LINE> while line!="": <NEW_LINE> <INDENT> line = line.rstrip() <NEW_LINE> if line.upper()=="POP": <NEW_LINE> <INDENT> record.stack.append("POP") <NEW_LINE> break <NEW_LINE> <DEDENT> record.loci_list.append(line) <NEW_LINE> line = handle.readline() <NEW_LINE> <DEDENT> next_line = handle.readline().rstrip() <NEW_LINE> indiv_name, allele_list, record.marker_len = get_indiv(next_line) <NEW_LINE> record.stack.append(next_line) <NEW_LINE> return record
Parses a handle containing a GenePop file. handle is a file-like object that contains a GenePop record.
625941c28c3a873295158353
def predict(self, testFeatures): <NEW_LINE> <INDENT> if(not self._fitCalled): <NEW_LINE> <INDENT> print('The fit method has not been called yet') <NEW_LINE> return None <NEW_LINE> <DEDENT> preProcTestFeatures = self.pp.preProc(testFeatures) <NEW_LINE> w = self.w <NEW_LINE> X= add_ones(preProcTestFeatures) <NEW_LINE> b = X.dot(w) <NEW_LINE> return b
Method that calculates the predicted outputs given the input features. testFeatures: l x d 2D numpy array where d is the data dimension and l is the number of points to make predictions about returns an l dimensional 1D numpy array composed of the predictions
625941c267a9b606de4a7e56
def valid_configfile(s): <NEW_LINE> <INDENT> try: <NEW_LINE> <INDENT> with open(s, 'r') as f: <NEW_LINE> <INDENT> pass <NEW_LINE> <DEDENT> <DEDENT> except Exception as e: <NEW_LINE> <INDENT> raise argparse.ArgumentTypeError('{} ({})'.format(e.strerror, s)) <NEW_LINE> <DEDENT> return s
Validate that specified argument is a file path that can be opened
625941c25166f23b2e1a50f4
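A Python 3 sketch of the validator that narrows the bare `except Exception` to `OSError` (the only case that actually carries a `.strerror`), demonstrated against a real temporary file:

```python
import argparse
import tempfile

def valid_configfile(s):
    # succeed only if the path can be opened for reading
    try:
        with open(s, 'r'):
            pass
    except OSError as e:
        raise argparse.ArgumentTypeError('{} ({})'.format(e.strerror, s))
    return s

# demo: a file that exists passes through unchanged
with tempfile.NamedTemporaryFile('w', delete=False) as tmp:
    path = tmp.name
```

In an `argparse` setup this would be wired in as `parser.add_argument('config', type=valid_configfile)`.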
def __init__(self, action_id, next_state_symbol): <NEW_LINE> <INDENT> super(ATPAction, self).__init__(action_id) <NEW_LINE> self._next_state_symbol = next_state_symbol
Deterministic actions for activity-travel MDP. Args: action_id (): next_state_symbol ():
625941c2eab8aa0e5d26daf3
def test_connection(self): <NEW_LINE> <INDENT> return [x for x in self.current_session.query(Site)]
Returns all sites.
625941c2bd1bec0571d905ca
def Z_Tensor_1D(resistivities, thicknesses, frequencies): <NEW_LINE> <INDENT> if len(resistivities) != len(thicknesses) + 1: <NEW_LINE> <INDENT> print("Length of inputs incorrect!") <NEW_LINE> return <NEW_LINE> <DEDENT> mu = 4*np.pi*1E-7; <NEW_LINE> n = len(resistivities); <NEW_LINE> master_Z, master_absZ, master_phase = [], [], [] <NEW_LINE> for frequency in frequencies: <NEW_LINE> <INDENT> w = 2*np.pi*frequency; <NEW_LINE> impedances = list(range(n)); <NEW_LINE> impedances[n-1] = np.sqrt(w*mu*resistivities[n-1]*1j); <NEW_LINE> for j in range(n-2,-1,-1): <NEW_LINE> <INDENT> resistivity = resistivities[j]; <NEW_LINE> thickness = thicknesses[j]; <NEW_LINE> dj = np.sqrt((w * mu * (1.0/resistivity))*1j); <NEW_LINE> wj = dj * resistivity; <NEW_LINE> ej = np.exp(-2*thickness*dj); <NEW_LINE> belowImpedance = impedances[j + 1]; <NEW_LINE> rj = (wj - belowImpedance)/(wj + belowImpedance); <NEW_LINE> re = rj*ej; <NEW_LINE> Zj = wj * ((1 - re)/(1 + re)); <NEW_LINE> impedances[j] = Zj; <NEW_LINE> <DEDENT> Z = impedances[0]; <NEW_LINE> phase = math.atan2(Z.imag, Z.real) <NEW_LINE> master_Z.append(Z) <NEW_LINE> master_absZ.append(abs(Z)) <NEW_LINE> master_phase.append(phase) <NEW_LINE> <DEDENT> return np.array(master_Z)
Calculate 1D Z-Tensor for given ground resistivity profile. Parameters ----------- resistivities = array or list of resistivity values in Ohm.m thicknesses = array or list of thicknesses in m. **len(resistivities) must be len(thicknesses) + 1** frequencies = array or list of frequencies to get response of Returns ----------- Z = complex array of Z tensor values Taken from: http://www.digitalearthlab.com/tutorial/tutorial-1d-mt-forward/
625941c263b5f9789fde7080
def testEventMissingEventWithSnmpTrapVersionV1(self): <NEW_LINE> <INDENT> self.log.info('\n\n\n ***** Test case : testEventMissingEventWithSnmpTrapVersionV1 *****') <NEW_LINE> self.log.info('Set the EventType value as snmp-trap using snmpset for EventMissingEvent') <NEW_LINE> self.event_table.set_event('polatisEventType', '10', 3, 'INTEGER') <NEW_LINE> self.verify_trap_and_log('missing', '10', 'snmp_trapv1')
Test case that verifies that logs are not returned in the polatisLogTable and that a V1 trap is received in the configured trap receiver for the event - 'EventMissing'
625941c2be7bc26dc91cd59f
def _toggle_help(history): <NEW_LINE> <INDENT> help_buffer_control = history.history_layout.help_buffer_control <NEW_LINE> if history.app.layout.current_control == help_buffer_control: <NEW_LINE> <INDENT> history.app.layout.focus_previous() <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> history.app.layout.current_control = help_buffer_control
Display/hide help.
625941c210dbd63aa1bd2b3f
def playerStandings(): <NEW_LINE> <INDENT> DB = connect() <NEW_LINE> c = DB.cursor() <NEW_LINE> c.execute("SELECT * FROM standings") <NEW_LINE> players_standings = c.fetchall() <NEW_LINE> DB.close() <NEW_LINE> return players_standings
Returns a list of the players and their win records, sorted by wins. The first entry in the list should be the player in first place, or a player tied for first place if there is currently a tie. Returns: A list of tuples, each of which contains (id, name, wins, matches): id: the player's unique id (assigned by the database) name: the player's full name (as registered) wins: the number of matches the player has won matches: the number of matches the player has played
625941c2435de62698dfdbe7
def test_required_agreement_submit(self): <NEW_LINE> <INDENT> kwargs = { "initial": {"person": self.neville}, "widgets": {"person": HiddenInput()}, } <NEW_LINE> terms = RequiredConsentsForm(**kwargs).get_terms() <NEW_LINE> data = { term.slug: term.options[0].pk for term in terms.exclude(required_type=Term.PROFILE_REQUIRE_TYPE) } <NEW_LINE> data["person"] = self.neville.pk <NEW_LINE> form = RequiredConsentsForm(data, initial={"person": self.neville}) <NEW_LINE> self.assertFalse(form.is_valid()) <NEW_LINE> for term in terms.filter(required_type=Term.PROFILE_REQUIRE_TYPE): <NEW_LINE> <INDENT> data[term.slug] = term.options[0].pk <NEW_LINE> <DEDENT> form = RequiredConsentsForm(data, initial={"person": self.neville}) <NEW_LINE> self.assertTrue(form.is_valid())
Make sure the form passes only when required terms are set.
625941c256ac1b37e626416e
def show_rankings(y_true, y_score=None): <NEW_LINE> <INDENT> if y_score is not None: <NEW_LINE> <INDENT> y_true = y_true[np.argsort(y_score)[::-1]] <NEW_LINE> <DEDENT> BLOCK_W, BLOCK_H = (10, 10) <NEW_LINE> NEG_BLOCK = np.zeros((BLOCK_W, BLOCK_H, 3), 'uint8') <NEW_LINE> NEG_BLOCK[2:-2, 2:-2, :] = 100 <NEW_LINE> POS_BLOCK = NEG_BLOCK.copy() <NEW_LINE> POS_BLOCK[:, :, 1] = 0 <NEW_LINE> POS_BLOCK[:, :, 2] = 0 <NEW_LINE> POS_BLOCK = Image.fromarray(POS_BLOCK) <NEW_LINE> NEG_BLOCK = Image.fromarray(NEG_BLOCK) <NEW_LINE> n_samples = len(y_true) <NEW_LINE> n_samples_per_row = min(100, n_samples) <NEW_LINE> img_width = int(n_samples_per_row * BLOCK_W) <NEW_LINE> img_height = int(np.ceil(float(n_samples) / n_samples_per_row) * BLOCK_H) <NEW_LINE> whole_img = Image.new('RGB', (img_width, img_height)) <NEW_LINE> for sample_i in range(n_samples): <NEW_LINE> <INDENT> x_coord = sample_i % n_samples_per_row * BLOCK_W <NEW_LINE> y_coord = sample_i / n_samples_per_row * BLOCK_H <NEW_LINE> if y_true[sample_i]: <NEW_LINE> <INDENT> whole_img.paste(POS_BLOCK, (x_coord, y_coord)) <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> whole_img.paste(NEG_BLOCK, (x_coord, y_coord)) <NEW_LINE> <DEDENT> <DEDENT> return whole_img
Input: `y_true` - binary labels `y_score` - the scores used to rank the samples of X; optional, if not given, assumes that `y_true` is already sorted in order from highest ranked to lowest
625941c263f4b57ef00010b9
def __str__(self): <NEW_LINE> <INDENT> return repr(self.colname)
String representation.
625941c2d99f1b3c44c6752f
def _get_button_boundaries(button_list): <NEW_LINE> <INDENT> button_boundaries = [] <NEW_LINE> for button in button_list: <NEW_LINE> <INDENT> if button.active or button.number > 199: <NEW_LINE> <INDENT> button_coords = button.field_coords <NEW_LINE> boundary_x_small, boundary_y_small = button_coords[0] <NEW_LINE> boundary_x_big, boundary_y_big = button_coords[-1] <NEW_LINE> button_boundaries.append([[boundary_x_small, boundary_x_big], [boundary_y_small, boundary_y_big]][:]) <NEW_LINE> <DEDENT> <DEDENT> return button_boundaries
returns the button boundaries of the given button list's buttons :param button_list: list[Button, ...]; list that holds buttons :return: list[list[list[int, int], list[int, int]], list, ...]; list that holds the button boundaries
625941c2b545ff76a8913db2
def update_vehicle_profile(spawn_profile, new_vehicle): <NEW_LINE> <INDENT> return spawn_profile.update_vehicle_profile(new_vehicle)
Updates the vehicle profile of the spawning profile :param new_vehicle: new vehicle profile :param spawn_profile: spawning profile :type spawn_profile: SpawningProfile :type new_vehicle: VehicleProfile :return: updated spawning profile
625941c2b57a9660fec3381d
def get_city(): <NEW_LINE> <INDENT> url = 'http://map.amap.com/subway/index.html?&1100' <NEW_LINE> response = requests.get(url=url, headers=headers) <NEW_LINE> html = response.text <NEW_LINE> html = html.encode('ISO-8859-1') <NEW_LINE> html = html.decode('utf-8') <NEW_LINE> soup = BeautifulSoup(html, 'lxml') <NEW_LINE> res1 = soup.find_all(class_="city-list fl")[0] <NEW_LINE> res2 = soup.find_all(class_="more-city-list")[0] <NEW_LINE> with open('stations.csv', 'a+') as f: <NEW_LINE> <INDENT> print('城市行政区划代码', '城市名', '地铁站ID', '地铁站名', '经度', '纬度', '所属线路', sep=',', file=f) <NEW_LINE> <DEDENT> for i in res1.find_all('a'): <NEW_LINE> <INDENT> ID = i['id'] <NEW_LINE> cityname = i['cityname'] <NEW_LINE> name = i.get_text() <NEW_LINE> get_message(ID, cityname, name) <NEW_LINE> <DEDENT> for i in res2.find_all('a'): <NEW_LINE> <INDENT> ID = i['id'] <NEW_LINE> cityname = i['cityname'] <NEW_LINE> name = i.get_text() <NEW_LINE> get_message(ID, cityname, name)
城市信息获取
625941c28e71fb1e9831d745
def ride_details(self, ride_id): <NEW_LINE> <INDENT> sql = "SELECT origin, meet_point, contribution, free_spots, start_date, " "finish_date, driver_id, destination, terms FROM carpool_rides WHERE id=%s" % ride_id <NEW_LINE> self.cursor.execute(sql) <NEW_LINE> result = self.cursor.fetchall() <NEW_LINE> if not result: <NEW_LINE> <INDENT> return jsonify( {"message": "The ride offer with ride_id {} does not exist".format(ride_id)} ), 404 <NEW_LINE> <DEDENT> ride_info_detail = {} <NEW_LINE> for info in result: <NEW_LINE> <INDENT> driver_id = info[6] <NEW_LINE> driver_info = self.get_user_info(driver_id) <NEW_LINE> ride_info_detail['Driver details'] = driver_info <NEW_LINE> ride_info_detail['origin'] = info[0] <NEW_LINE> ride_info_detail['meet_point'] = info[1] <NEW_LINE> ride_info_detail['contribution'] = info[2] <NEW_LINE> ride_info_detail['free_spots'] = info[3] <NEW_LINE> ride_info_detail['start_date'] = info[4] <NEW_LINE> ride_info_detail['finish_date'] = info[5] <NEW_LINE> ride_info_detail['destination'] = info[7] <NEW_LINE> ride_info_detail['terms'] = info[8] <NEW_LINE> <DEDENT> return jsonify({"Ride details": ride_info_detail})
Returns the details of a ride offer with the ride_id provided. Also contains the driver information
625941c2004d5f362079a2d0
def servicios_cliente_get_data(request): <NEW_LINE> <INDENT> datos = {'data':[]} <NEW_LINE> servid = request.GET.get('servicio_id', None) <NEW_LINE> if(servid): <NEW_LINE> <INDENT> for serv in ServicioCliente.objects.filter(servicio_id=servid).all(): <NEW_LINE> <INDENT> lista = [] <NEW_LINE> lista.append(serv.servicio.nombre_servicio) <NEW_LINE> lista.append(serv.cliente.nombre) <NEW_LINE> lista.append(serv.cliente.pais.nombre) <NEW_LINE> lista.append(serv.cliente.rif) <NEW_LINE> lista.append(serv.precio) <NEW_LINE> lista.append(serv.tipo_moneda) <NEW_LINE> lista.append(serv.monto_facturado) <NEW_LINE> lista.append(serv.servicio_prestado) <NEW_LINE> datos['data'].append(lista) <NEW_LINE> <DEDENT> return JsonResponse(datos,safe=False) <NEW_LINE> <DEDENT> return JsonResponse("No se envío el id del servicio",safe=False)
! Metodo que extrae los datos de los clientes relacionados con el servicio y la muestra en una url ajax como json @author Rodrigo Boet (rboet at cenditel.gob.ve) @copyright GNU/GPLv2 @date 25-10-2016 @param request <b>{object}</b> Recibe la peticion @return Retorna el json con las subunidades que consiguió
625941c294891a1f4081ba44
def test_entropy_is_zero_for_unimodal_function(): <NEW_LINE> <INDENT> def func_one_min(x): <NEW_LINE> <INDENT> return x**2 <NEW_LINE> <DEDENT> initial_models = 2*random_sample(100) - 1 <NEW_LINE> entropy = estimate_entropy(func_one_min, initial_models, 1e-8, 1e5) <NEW_LINE> assert entropy == 0
Test that the entropy of a function with one extremum is zero.
625941c263f4b57ef00010ba
@bp.route('/<int:competition_id>/next-stage-teams', methods=['GET']) <NEW_LINE> def next_stage_teams(competition_id): <NEW_LINE> <INDENT> teams_per_group = int(request.args.get('teams_per_group', '2')) <NEW_LINE> stage = int(request.args.get('stage')) <NEW_LINE> competition = models.Competition.from_cache_by_id(competition_id) <NEW_LINE> if not competition: <NEW_LINE> <INDENT> raise AppError(error_code=errors.competition_id_noexistent) <NEW_LINE> <DEDENT> c_teams = competition.competings_by_stage(stage - 1) <NEW_LINE> competition_individual = competition.options.get('individual', None) == 'true' <NEW_LINE> rank_ids = [c_team.current_rank.id for c_team in c_teams] <NEW_LINE> rank_additions = models.CompetitionTeamRankAddition.query.filter(models.CompetitionTeamRankAddition.rank_id.in_(rank_ids)).all() <NEW_LINE> rank_additions = dict([(rank_addition.rank_id, rank_addition) for rank_addition in rank_additions]) <NEW_LINE> c_teams_data = [] <NEW_LINE> for c_team in c_teams: <NEW_LINE> <INDENT> c_team_data = c_team.__json__(include_keys=['current_rank']) <NEW_LINE> team = c_team.team <NEW_LINE> if competition_individual: <NEW_LINE> <INDENT> team_data = {'id': team.id, 'name': team.creator.fullname, 'logo': team.creator.user_profile} <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> team_data = {'id': team.id, 'name': team.name, 'logo': team.logo} <NEW_LINE> <DEDENT> c_team_data['team'] = team_data <NEW_LINE> c_team_data['rank_addition'] = rank_additions.get(c_team.current_rank.id, None) <NEW_LINE> c_teams_data.append(c_team_data) <NEW_LINE> <DEDENT> c_teams_data = sorted(c_teams_data, key=lambda e: e['current_rank'].group) <NEW_LINE> results = {} <NEW_LINE> for key, value in itertools.groupby(c_teams_data, key=lambda t: t['current_rank'].group): <NEW_LINE> <INDENT> results[key] = list(v for v in value) <NEW_LINE> <DEDENT> results = dict( [ (group, sorted(group_teams_data, key=lambda d: ( d['current_rank'].pts, (d['rank_addition'].goals_for - d['rank_addition'].goals_against) if d.get('rank_addition') else 0, d['rank_addition'].goals_for if d.get('rank_addition') else 0), reverse=True)[:teams_per_group] ) for (group, group_teams_data) in results.items()]) <NEW_LINE> return json_response(results=results)
Get the teams advancing to the next stage of the competition
625941c2a8ecb033257d3069
def __init__(self, *args, **kwds): <NEW_LINE> <INDENT> if args or kwds: <NEW_LINE> <INDENT> super(Object, self).__init__(*args, **kwds) <NEW_LINE> if self.n is None: <NEW_LINE> <INDENT> self.n = 0 <NEW_LINE> <DEDENT> <DEDENT> else: <NEW_LINE> <INDENT> self.n = 0
Constructor. Any message fields that are implicitly/explicitly set to None will be assigned a default value. The recommend use is keyword arguments as this is more robust to future message changes. You cannot mix in-order arguments and keyword arguments. The available fields are: n :param args: complete set of field values, in .msg order :param kwds: use keyword arguments corresponding to message field names to set specific fields.
625941c26aa9bd52df036d3e
def items(self): <NEW_LINE> <INDENT> return Organisation.objects.all()
Return published entries.
625941c2627d3e7fe0d68dea
def urlize(text, trim_url_limit=None, nofollow=False, target=None): <NEW_LINE> <INDENT> trim_url = lambda x, limit=trim_url_limit: limit is not None and (x[:limit] + (len(x) >=limit and '...' or '')) or x <NEW_LINE> words = _word_split_re.split(text_type(escape(text))) <NEW_LINE> nofollow_attr = nofollow and ' rel="nofollow"' or '' <NEW_LINE> if target is not None and isinstance(target, string_types): <NEW_LINE> <INDENT> target_attr = ' target="%s"' % target <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> target_attr = '' <NEW_LINE> <DEDENT> for i, word in enumerate(words): <NEW_LINE> <INDENT> match = _punctuation_re.match(word) <NEW_LINE> if match: <NEW_LINE> <INDENT> lead, middle, trail = match.groups() <NEW_LINE> if middle.startswith('www.') or ( '@' not in middle and not middle.startswith('http://') and not middle.startswith('https://') and len(middle) > 0 and middle[0] in _letters + _digits and ( middle.endswith('.org') or middle.endswith('.net') or middle.endswith('.com') )): <NEW_LINE> <INDENT> middle = '<a href="http://%s"%s%s>%s</a>' % (middle, nofollow_attr, target_attr, trim_url(middle)) <NEW_LINE> <DEDENT> if middle.startswith('http://') or middle.startswith('https://'): <NEW_LINE> <INDENT> middle = '<a href="%s"%s%s>%s</a>' % (middle, nofollow_attr, target_attr, trim_url(middle)) <NEW_LINE> <DEDENT> if '@' in middle and not middle.startswith('www.') and not ':' in middle and _simple_email_re.match(middle): <NEW_LINE> <INDENT> middle = '<a href="mailto:%s">%s</a>' % (middle, middle) <NEW_LINE> <DEDENT> if lead + middle + trail != word: <NEW_LINE> <INDENT> words[i] = lead + middle + trail <NEW_LINE> <DEDENT> <DEDENT> <DEDENT> return u''.join(words)
Converts any URLs in text into clickable links. Works on http://, https:// and www. links. Links can have trailing punctuation (periods, commas, close-parens) and leading punctuation (opening parens) and it'll still do the right thing. If trim_url_limit is not None, the URLs in link text will be limited to trim_url_limit characters. If nofollow is True, the URLs in link text will get a rel="nofollow" attribute. If target is not None, a target attribute will be added to the link.
625941c2ff9c53063f47c190
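The core linkifying idea above can be sketched with a much smaller regex-only function (hypothetical `urlize_simple`; the real function additionally handles `www.` prefixes, bare `.com`/`.net`/`.org` domains, email addresses, surrounding punctuation, escaping, and URL trimming):

```python
import re

def urlize_simple(text, nofollow=False):
    attr = ' rel="nofollow"' if nofollow else ''
    # wrap every http(s) URL in an anchor tag; \1 is the matched URL
    return re.sub(r'(https?://\S+)',
                  r'<a href="\1"{}>\1</a>'.format(attr), text)
```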
def requires_auth(endpoint_class): <NEW_LINE> <INDENT> def fdec(f): <NEW_LINE> <INDENT> @wraps(f) <NEW_LINE> def decorated(*args, **kwargs): <NEW_LINE> <INDENT> if args: <NEW_LINE> <INDENT> resource_name = args[0] <NEW_LINE> resource = app.config['DOMAIN'][args[0]] <NEW_LINE> <DEDENT> else: <NEW_LINE> <INDENT> resource_name = resource = None <NEW_LINE> <DEDENT> if endpoint_class == 'resource': <NEW_LINE> <INDENT> public = resource['public_methods'] <NEW_LINE> roles = resource['allowed_roles'] <NEW_LINE> auth = resource['authentication'] <NEW_LINE> <DEDENT> elif endpoint_class == 'item': <NEW_LINE> <INDENT> public = resource['public_item_methods'] <NEW_LINE> roles = resource['allowed_item_roles'] <NEW_LINE> auth = resource['authentication'] <NEW_LINE> <DEDENT> elif endpoint_class == 'home': <NEW_LINE> <INDENT> public = app.config['PUBLIC_METHODS'] + ['OPTIONS'] <NEW_LINE> roles = app.config['ALLOWED_ROLES'] <NEW_LINE> auth = app.auth <NEW_LINE> <DEDENT> if auth and request.method not in public: <NEW_LINE> <INDENT> auth = auth() <NEW_LINE> if not auth.authorized(roles, resource_name, request.method): <NEW_LINE> <INDENT> return auth.authenticate() <NEW_LINE> <DEDENT> <DEDENT> return f(*args, **kwargs) <NEW_LINE> <DEDENT> return decorated <NEW_LINE> <DEDENT> return fdec
Enables Authorization logic for decorated functions. :param endpoint_class: the 'class' to which the decorated endpoint belongs to. Can be 'resource' (resource endpoint), 'item' (item endpoint) and 'home' for the API entry point. .. versionchanged:: 0.0.7 Passing the 'resource' argument when inoking auth.authenticate() .. versionchanged:: 0.0.5 Support for Cross-Origin Resource Sharing (CORS): 'OPTIONS' request method is now public by default. The actual method ('GET', etc.) will still be protected if so configured. .. versionadded:: 0.0.4
625941c2cc40096d615958ed
def snapshot_get(repository, snapshot, ignore_unavailable=False, hosts=None, profile=None): <NEW_LINE> <INDENT> es = _get_instance(hosts, profile) <NEW_LINE> try: <NEW_LINE> <INDENT> return es.snapshot.get(repository=repository, snapshot=snapshot, ignore_unavailable=ignore_unavailable) <NEW_LINE> <DEDENT> except elasticsearch.TransportError as e: <NEW_LINE> <INDENT> raise CommandExecutionError("Cannot obtain details of snapshot {0} in repository {1}, server returned code {2} with message {3}".format(snapshot, repository, e.status_code, e.error))
.. versionadded:: 2017.7.0 Obtain snapshot residing in specified repository. repository Repository name snapshot Snapshot name, use _all to obtain all snapshots in specified repository ignore_unavailable Ignore unavailable snapshots CLI example:: salt myminion elasticsearch.snapshot_get testrepo testsnapshot
625941c20a50d4780f666e2c
def lstm_step_backward(dnext_h, dnext_c, cache): <NEW_LINE> <INDENT> f, g, i, o, prev_h, prev_c, next_c, x, Wh, Wx = cache <NEW_LINE> N, H = dnext_h.shape <NEW_LINE> do = dnext_h * np.tanh(next_c) <NEW_LINE> dnext_c += dnext_h * o * (1 - np.tanh(next_c)**2) <NEW_LINE> dprev_c = dnext_c * f <NEW_LINE> di = dnext_c * g <NEW_LINE> dg = dnext_c * i <NEW_LINE> df = dnext_c * prev_c <NEW_LINE> dg0 = (1 - g**2) * dg <NEW_LINE> df0 = df * (f*(1-f)) <NEW_LINE> di0 = di * (i*(1-i)) <NEW_LINE> do0 = do * (o*(1-o)) <NEW_LINE> dmiddle_state = np.hstack((di0, df0, do0, dg0)) <NEW_LINE> dWx = x.T.dot(dmiddle_state) <NEW_LINE> dWh = prev_h.T.dot(dmiddle_state) <NEW_LINE> dprev_h = dmiddle_state.dot(Wh.T) <NEW_LINE> dx = dmiddle_state.dot(Wx.T) <NEW_LINE> db = np.ones(N).dot(dmiddle_state) <NEW_LINE> return dx, dprev_h, dprev_c, dWx, dWh, db
Backward pass for a single timestep of an LSTM. Inputs: - dnext_h: Gradients of next hidden state, of shape (N, H) - dnext_c: Gradients of next cell state, of shape (N, H) - cache: Values from the forward pass Returns a tuple of: - dx: Gradient of input data, of shape (N, D) - dprev_h: Gradient of previous hidden state, of shape (N, H) - dprev_c: Gradient of previous cell state, of shape (N, H) - dWx: Gradient of input-to-hidden weights, of shape (D, 4H) - dWh: Gradient of hidden-to-hidden weights, of shape (H, 4H) - db: Gradient of biases, of shape (4H,)
625941c23c8af77a43ae373a
def get_best_model(dataset): <NEW_LINE> <INDENT> best_model = 0 <NEW_LINE> rd.shuffle(dataset) <NEW_LINE> features = [] <NEW_LINE> for i in dataset: <NEW_LINE> <INDENT> i = i[:-1] <NEW_LINE> features.append(i) <NEW_LINE> <DEDENT> labels = [] <NEW_LINE> for i in dataset: <NEW_LINE> <INDENT> labels.append(i[-1]) <NEW_LINE> <DEDENT> scaler = MinMaxScaler() <NEW_LINE> features = list(scaler.fit_transform(features)) <NEW_LINE> X = np.array(features, dtype=np.float32) <NEW_LINE> y = np.array(labels, dtype=np.float32) <NEW_LINE> X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.3) <NEW_LINE> best_model = get_optimal_decision_tree(X_train, X_test, y_train, y_test) <NEW_LINE> accuracy = metrics.accuracy_score( y_test, best_model.predict(X_test) ) <NEW_LINE> return best_model, accuracy
Creates the dataset and trains the model, optimises it with the genetic algorithm and returns the model and its accuracy
625941c26fece00bbac2d6d9
def spotify_scope(scope_name: str) -> Spotify: <NEW_LINE> <INDENT> scope = Spotify(auth_manager=SpotifyOAuth(scope=scope_name)) <NEW_LINE> scope.trace = False <NEW_LINE> return scope
Create a Spotify object with a particular scope. :param scope_name: name of the scope :return: Spotify object at a specific scope
625941c2656771135c3eb808