In labour law, unfair dismissal is an act of employment termination made without good reason or contrary to the country's specific legislation.
Situation per country
Australia
Australia has long-standing protection for employees in relation to dismissal. Most of that protection, however, was confined in one of two ways. An employer could not dismiss an employee for a prohibited reason, most typically membership of a union. An individual, however, could not challenge their own dismissal as unfair and instead had to rely upon a union challenging the fairness of the dismissal. This remedy was generally only available in the state tribunals. A similar definition existed at the Commonwealth level, but it was considerably limited by the constitutional requirement to establish an inter-state dispute. The ability for an individual to seek relief from unfair dismissal was first established in a statutory scheme in South Australia in 1972, followed by Western Australia, Queensland, New South Wales and Victoria in the early 1990s.
Protection from unfair dismissal at the Commonwealth level was enhanced in 1984 by the Commonwealth Conciliation and Arbitration Commission with its ruling in the Termination, Change and Redundancy Case, that awards should contain a provision that dismissal "shall not be harsh, unjust or unreasonable" and subsequent awards following it were upheld by the High Court of Australia. The Parliament of Australia later extended the reach of protection from unfair dismissal with the passage of the Industrial Relations Reform Act 1993, which was based on the external affairs power and the ILO Termination of Employment Convention, 1982.
In current Australian law, unfair dismissal occurs where the Fair Work Commission, acting under section 385 of the Fair Work Act 2009, determines that:
a person has been dismissed;
the dismissal was harsh, unjust or unreasonable;
it was not consistent with the Small Business Fair Dismissal Code; and
it was not a case of genuine redundancy.
If the Fair Work Commission determines that a dismissal was unfair, the Commission must decide whether to order reinstatement or compensation. The Commission is required to first consider whether reinstatement is appropriate and can only order compensation (capped at six months' pay) if it is satisfied that reinstatement is inappropriate.
Canada
Labour law in Canada falls within both federal and provincial jurisdiction, depending on the sector affected. Complaints relating to unjust dismissal (where "the employee has been dismissed and considers the dismissal to be unjust," which in certain cases also includes constructive dismissal) can be made under the Canada Labour Code, as well as under similar provisions in effect in Quebec and Nova Scotia, all of which were introduced in the late 1970s.
Under the federal Code, non-unionized employees with more than twelve months of continuous employment, other than managers, have the ability to file complaints for unjust dismissal within 90 days of being so dismissed. In making the complaint, the employee has the right to "make a request in writing to the employer to provide a written statement giving the reasons for the dismissal," which must be supplied within 15 days of the request. Complaints are initially investigated by an inspector, who will then work towards a settlement within a reasonable time, failing which the Minister of Labour may refer the matter to an adjudicator in cases other than where "that person has been laid off because of lack of work or because of the discontinuance of a function" or "a procedure for redress has been provided elsewhere in or under this or any other Act of Parliament." Where the dismissal is determined to be unjust, the adjudicator has broad remedial authority, including ordering the payment of compensation and reinstatement to employment.
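The threshold conditions described above for filing a federal complaint can be sketched as a simple check. This is a minimal illustration of the stated rules only (the function name and signature are hypothetical, not any real API, and it ignores the further exclusions for layoffs and alternative redress procedures):

```python
def can_file_unjust_dismissal_complaint(months_of_continuous_service: int,
                                        is_manager: bool,
                                        days_since_dismissal: int) -> bool:
    """Sketch of the eligibility rules for a non-unionized employee's
    unjust dismissal complaint under the federal Code, as described
    above. Illustrative only, not legal advice."""
    return (months_of_continuous_service > 12   # more than twelve months' service
            and not is_manager                  # managers are excluded
            and days_since_dismissal <= 90)     # 90-day filing window
```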
While many employers have attempted to contract out of these provisions through the payment of a severance package together with a signed release from pursuing any claims under the Code, the Supreme Court of Canada ruled in 2016 that the Code's provisions effectively ousted such common law remedies.
France
Unfair dismissal became part of French labour law in 1973, but certain other protections had been previously instituted as far back as 1892.
The Labour Code governs the procedure under which dismissal may occur, as well as specifying the grounds on which it is valid or not. Dismissal may occur on grounds of personal performance or for economic reasons.
Where the employer believes that there is a valid reason for dismissal on personal grounds, it must give five working days' notice to the employee that a meeting must take place, and a decision to dismiss (made in writing and sent by registered mail) can only be taken no less than two days afterwards.
Where dismissal occurs on economic grounds, the employee has the right to be notified of the employer's obligation during the following 12 months to inform him of any position that becomes available that calls for his qualifications. Failure to give prior notice, as well as failure to advise of any open position, will be causes for unfair dismissal.
An employee may challenge a dismissal by making a complaint to the Labour Court.
Where an employee has at least two years' service, the employer faces several claims:
Failure to follow procedural requirements may result in compensation of one month's pay being awarded to the employee.
Where unfair dismissal has been determined to have occurred, the Court may order reinstatement of employment. If either party refuses to accept that remedy, compensation of not less than six months' pay will be awarded instead. The employer will also be ordered to repay any unemployment benefits the employee may have received, up to a maximum of six months' benefits.
Where unfair dismissal occurs because of the failure to observe the notification obligations for recall rights, the court may award:
where the employee has at least two years' service and the workforce consists of at least 11 workers, a minimum of two months' pay
in all other cases, an amount in line with the existence and extent of any detriment the employee faced.
Where an employee has less than two years' service, or where the workforce has fewer than 11 employees, neither recall rights nor the normal remedies for unfair dismissal are available. The remedy of one month's pay is still available in cases involving failure to follow procedural requirements, and an appropriate amount of compensation may still be ordered in cases where dismissal was improperly executed.
Where an employee has had at least one year's service, the employer also faces a separate claim for severance pay. The amount is equal to 20% of base monthly pay for each of the first ten years of service, plus 2/15 of base monthly pay for each year of service beyond ten.
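The severance formula above can be expressed directly (a minimal sketch of the stated arithmetic; the function name is illustrative and this is not legal advice):

```python
def severance_pay(base_monthly_pay: float, years_of_service: float) -> float:
    """Statutory severance as described above: 20% of base monthly pay
    per year for the first ten years, plus 2/15 of base monthly pay
    per year of service beyond ten."""
    first_ten = min(years_of_service, 10)       # years counted at the 20% rate
    beyond_ten = max(years_of_service - 10, 0)  # years counted at the 2/15 rate
    return base_monthly_pay * (0.20 * first_ten + (2 / 15) * beyond_ten)
```

For example, an employee with 12 years' service and a base monthly pay of 3,000 would receive 3,000 × (0.20 × 10 + 2/15 × 2) = 6,800.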
Namibia
Unfair dismissal in Namibia is defined by the Labour Act, 2007, under which the employer has the burden of proving that a dismissal was fair. Explicitly listed as cases of unfair dismissal are those due to discrimination in terms of race, religion, political opinion, marital or socio-economic status, as well as dismissals that arise from trade union activities. Any termination of employment without a valid and fair reason is automatically deemed unfair.
Poland
The rules and grounds for employment termination in Poland are regulated by the Polish Labour Code.
Unjustified dismissal of an employee includes:
failure to comply with the appropriate form of termination notice,
failure to inform the employee of the legal remedies available to him in this situation,
shortening the notice period,
dismissing a person covered by special protection against dismissal,
dismissing without justified reason.
Each employee has the right to file an appeal against termination with the court. The available remedies are:
reinstatement to work under the previous conditions, or
compensation to be paid by the former employer.
The amount of compensation depends on how long the employee has remained unemployed. The employee is entitled to compensation equal to remuneration for a period of between two weeks and three months, and no lower than what he would have received had he worked during the notice period. If the employee is reinstated, the employer must pay him compensation for the period of unemployment; as in the above case, it amounts to remuneration for a period of between two weeks and three months, no lower than what he would have earned while working out the notice period. Specially protected persons (pregnant women, trade union members, and persons in the protected period due to age) are entitled to compensation for the entire period of unemployment.
United Kingdom
After the release of the Donovan Report in 1968, the British Parliament passed the Industrial Relations Act 1971 which introduced the concept of unfair dismissal into UK law and its enforcement by the National Industrial Relations Court. The Trade Union and Labour Relations Act 1974 abolished the court and replaced it with a network of industrial tribunals (later renamed employment tribunals). The scheme is currently governed by Part X of the Employment Rights Act 1996.
Employees have the right not to be unfairly dismissed (with the exception of a number of exclusions). Following discussions with an employer, an employee can agree not to pursue a claim for unfair dismissal if they reach a settlement agreement (historically a compromise agreement). For a settlement agreement to be binding the employee must have taken advice as to the effect of the agreement from a relevant independent adviser, that is a qualified lawyer; a Trade Union certified and authorised officer, official, employee or member; or a certified advice centre worker.
In 2011, Aikens LJ summarized the jurisprudence on what constitutes an unfair dismissal:
The reason for the dismissal of an employee is a set of facts known to an employer, or it may be a set of beliefs held by him, which causes him to dismiss an employee.
An employer cannot rely on facts of which he did not know at the time of the dismissal of an employee to establish that the "real reason" for dismissing the employee was one of those set out in the statute or was of a kind that justified the dismissal of the employee holding the position he did.
Once the employer has established before a tribunal that the "real reason" for dismissing the employee is one within s. 98(1)(b), i.e. that it was a "valid reason", the Employment Tribunal has to decide whether the dismissal was fair or unfair. That requires, first and foremost, the application of the statutory test set out in s. 98(4)(a).
In applying that sub-section, the tribunal must decide on the reasonableness of the employer's decision to dismiss for the "real reason". That involves a consideration, at least in misconduct cases, of three aspects of the employer's conduct. First, did the employer carry out an investigation into the matter that was reasonable in the circumstances of the case; secondly, did the employer believe that the employee was guilty of the misconduct complained of and, thirdly, did the employer have reasonable grounds for that belief. If the answer to each of those questions is "yes", the tribunal must then decide on the reasonableness of the response of the employer.
In doing the exercise set out above, the tribunal must consider, by the objective standards of the hypothetical reasonable employer, rather than by reference to its own subjective views, whether the employer has acted within a "band or range of reasonable responses" to the particular misconduct found of the particular employee. If it has, then the employer's decision to dismiss will be reasonable. But that is not the same thing as saying that a decision of an employer to dismiss will only be regarded as unreasonable if it is shown to be perverse.
The tribunal must not simply consider whether they think that the dismissal was fair and thereby substitute their decision as to what was the right course to adopt for that of the employer. It must determine whether the decision of the employer to dismiss the employee fell within the band of reasonable responses which "a reasonable employer might have adopted".
A tribunal may not substitute its own evaluation of a witness for that of the employer at the time of its investigation and dismissal, save in exceptional circumstances.
A tribunal must focus its attention on the fairness of the conduct of the employer at the time of the investigation and dismissal (or any appeal process) and not on whether in fact the employee has suffered an injustice.
See also
Wrongful dismissal
Further reading
Notes
References
Labour law
|
DigiLocker is a digitization service provided by the Indian Ministry of Electronics and Information Technology (MeitY) under its Digital India initiative. DigiLocker allows access to digital versions of various documents, including driving licences, vehicle registration certificates and academic mark sheets. It also provides 1 GB of storage space to each account to upload scanned copies of legacy documents.
Users need to possess an Aadhaar number to use DigiLocker. During registration, user identity is verified using a one-time password sent to the linked mobile number.
The beta version of the service was rolled out in February 2015, and was launched to the public by Prime Minister Narendra Modi on 1 July 2015. Storage space for uploaded legacy documents was initially 100 MB. Individual files are limited to 10 MB.
In July 2016, DigiLocker recorded 2.013 million users with a repository of 2.413 million documents. The number of users saw a large jump of 753,000 new users in April when the central government urged municipal bodies to use DigiLocker to make their administration paperless.
From 2017, the facility was extended to allow students of the ICSE board to store their class X and XII certificates in DigiLocker and share them as required. In February 2017, Kotak Mahindra Bank started providing access to documents in DigiLocker from within its net-banking application, allowing users to electronically sign and share them. In May 2017, over 108 hospitals, including the Tata Memorial Hospital were planning to launch the use of DigiLocker for storing cancer patients' medical documents and test reports. According to a UIDAI architect, patients would be provided a number key, which they could share with other hospitals to grant them access to their test reports.
As of December 2019, DigiLocker provides access to over 372 crore authentic documents from 149 issuers. Over 3.3 crore users are registered on the platform and 43 requester organisations are accepting documents from DigiLocker. In 2023, Government of India integrated Passport Application Form with Digilocker.
There is also an associated facility for e-signing documents. The service is intended to minimise the use of physical documents and reduce administrative expense, while proving the authenticity of the documents, providing secure access to government-issued documents and making it easy for the residents to receive services.
Structure of DigiLocker
Each user's digital locker has the following sections.
My Certificates: This section has two subsections:
Digital Documents: This contains the URIs of the documents issued to the user by government departments or other agencies.
Uploaded Documents: This subsection lists all the documents uploaded by the user. Each file must not exceed 10 MB in size. Only pdf, jpg, jpeg, png, bmp and gif file types can be uploaded.
My Profile: This section displays the complete profile of the user as available in the UIDAI database.
My Issuer: This section displays the issuers' names and the number of documents issued to the user by the issuer.
My Requester: This section displays the requesters' names and the number of documents requested from the user by the requesters.
Directories: This section displays the complete list of registered issuers and requesters along with their URLs.
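The per-file upload constraints listed above (10 MB limit, six permitted file types) can be sketched as a simple client-side check. The names below are hypothetical illustrations; DigiLocker performs its own server-side validation:

```python
ALLOWED_TYPES = {"pdf", "jpg", "jpeg", "png", "bmp", "gif"}
MAX_FILE_BYTES = 10 * 1024 * 1024  # 10 MB per-file limit

def can_upload(filename: str, size_bytes: int) -> bool:
    """Check a file against the upload constraints described above:
    permitted extension and size within the 10 MB limit."""
    # Take the text after the last dot, case-insensitively.
    extension = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return extension in ALLOWED_TYPES and 0 < size_bytes <= MAX_FILE_BYTES
```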
Amendments to IT Act for Digital Locker
DigiLocker is not merely a technical platform. The Ministry of Electronics and IT has notified rules concerning the service. Amendments made to the Information Technology Act, 2000 in February 2017 state that documents provided and shared through DigiLocker are at par with the corresponding physical certificates.
According to this rule: (1) Issuers may start issuing, and Requesters may start accepting, digitally (or electronically) signed certificates or documents shared from subscribers' Digital Locker accounts at par with the physical documents, in accordance with the provisions of the Act and the rules made thereunder.
(2) When such certificate or document mentioned in sub-rule (1) has been issued or pushed in the Digital Locker System by an issuer and subsequently accessed or accepted by a requester through the URI, it shall be deemed to have been shared by the issuer directly in electronic form.
Important Notifications from Government Departments Regarding DigiLocker
Insurance Regulator (Insurance Regulatory and Development Authority of India): IRDAI has advised all insurance companies to issue digital insurance policies via DigiLocker.
Security measures of DigiLocker
The following security measures are used in the system:
256 Bit SSL Encryption
Mobile Authentication based Sign Up
ISO 27001 certified Data Centre
Data Redundancy
Timed Log Out
Security Audit
See also
India Stack
Aadhar
Direct Benefit Transfer
ESign (India)
UMANG
Unified Payments Interface
Common man empowerment:
Har ghar jal (water connection for each house)
One Nation, One Ration Card (food security card)
Pradhan Mantri Awas Yojana (affordable housing for all)
Saubhagya electrification scheme (electrification of all houses)
Swachh Bharat (toilet for all houses)
Ujjwala Yojana (clean cooking gas connections for all)
References
External links
Ministry of Communications and Information Technology (India)
Internet in India
E-government in India
Modi administration initiatives
Digital India initiatives
2015 establishments in India
|
George Prothero (18 March 1818 – 16 November 1894) was a Welsh first-class cricketer and clergyman.
The son of Thomas Prothero, he was born in March 1818 at Newport. He was educated at Harrow School, matriculating at Wadham College, Oxford in 1838, and graduating B.A. in 1843. While studying at Oxford, he made a single appearance in first-class cricket for Oxford University against the Marylebone Cricket Club at Lord's in 1839. Batting twice in the match, he was dismissed for 3 runs by Henry Walker in the Oxford first-innings, while in their second-innings he was dismissed without scoring by John Bayley.
After graduating from Oxford, Prothero took holy orders in the Church of England. His first ecclesiastical post was as vicar of Clifton upon Teme in Worcestershire from 1847 to 1853, before moving to Whippingham on the Isle of Wight, where he was rector from 1857. He was made a Canon of Westminster in 1869 and was a Chaplain-in-Ordinary to Queen Victoria. Prothero died at Whippingham in November 1894. He had married Emma Money-Kyrle in June 1846, with the couple having five children. These included the politician and cricketer Rowland Prothero, the historian George Prothero, and the Royal Navy admiral Arthur Prothero.
References
External links
1818 births
1894 deaths
Sportspeople from Newport, Wales
People educated at Harrow School
Alumni of Brasenose College, Oxford
Welsh cricketers
Oxford University cricketers
19th-century Welsh Anglican priests
Canons of Westminster
Honorary Chaplains to the King
|
Hilko Ristau (born 24 April 1974) is a German former professional footballer. He made his professional debut in the 2. Bundesliga for SG Wattenscheid 09 on 29 August 1997, coming on as a substitute in the 31st minute of the game against SpVgg Unterhaching.
References
Living people
1974 births
Footballers from Bremerhaven
Men's association football defenders
German men's footballers
OSC Bremerhaven players
Bonner SC players
SG Wattenscheid 09 players
VfL Bochum players
1. FC Saarbrücken players
Rot-Weiss Essen players
Bundesliga players
2. Bundesliga players
|
Spy in the Ocean is a 2023 BBC Television documentary series portraying sea creatures filmed in the wild, using camera-equipped marine life-shaped animatronic puppets.
See also
Spy in the Wild
References
External links
BBC television documentaries
2020s British documentary television series
2023 British television series debuts
Nature educational television series
|
```c
/* Perform non-arithmetic operations on values, for GDB.

   Copyright 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994,
   1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004
   Free Software Foundation, Inc.
This file is part of GDB.
   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 2 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place - Suite 330,
   Boston, MA 02111-1307, USA.  */
#include "defs.h"
#include "symtab.h"
#include "gdbtypes.h"
#include "value.h"
#include "frame.h"
#include "inferior.h"
#include "gdbcore.h"
#include "target.h"
#include "demangle.h"
#include "language.h"
#include "gdbcmd.h"
#include "regcache.h"
#include "cp-abi.h"
#include "block.h"
#include "infcall.h"
#include "dictionary.h"
#include "cp-support.h"
#include <errno.h>
#include "gdb_string.h"
#include "gdb_assert.h"
#include "observer.h"
extern int overload_debug;
/* Local functions. */
static int typecmp (int staticp, int varargs, int nargs,
struct field t1[], struct value *t2[]);
static struct value *search_struct_field (char *, struct value *, int,
struct type *, int);
static struct value *search_struct_method (char *, struct value **,
struct value **,
int, int *, struct type *);
static int find_oload_champ_namespace (struct type **arg_types, int nargs,
const char *func_name,
const char *qualified_name,
struct symbol ***oload_syms,
struct badness_vector **oload_champ_bv);
static
int find_oload_champ_namespace_loop (struct type **arg_types, int nargs,
const char *func_name,
const char *qualified_name,
int namespace_len,
struct symbol ***oload_syms,
struct badness_vector **oload_champ_bv,
int *oload_champ);
static int find_oload_champ (struct type **arg_types, int nargs, int method,
int num_fns,
struct fn_field *fns_ptr,
struct symbol **oload_syms,
struct badness_vector **oload_champ_bv);
static int oload_method_static (int method, struct fn_field *fns_ptr,
int index);
enum oload_classification { STANDARD, NON_STANDARD, INCOMPATIBLE };
static enum
oload_classification classify_oload_match (struct badness_vector
* oload_champ_bv,
int nargs,
int static_offset);
static int check_field_in (struct type *, const char *);
static struct value *value_struct_elt_for_reference (struct type *domain,
int offset,
struct type *curtype,
char *name,
struct type *intype,
enum noside noside);
static struct value *value_namespace_elt (const struct type *curtype,
char *name,
enum noside noside);
static struct value *value_maybe_namespace_elt (const struct type *curtype,
char *name,
enum noside noside);
static CORE_ADDR allocate_space_in_inferior (int);
static struct value *cast_into_complex (struct type *, struct value *);
static struct fn_field *find_method_list (struct value ** argp, char *method,
int offset,
struct type *type, int *num_fns,
struct type **basetype,
int *boffset);
void _initialize_valops (void);
/* Flag for whether we want to abandon failed expression evals by default. */
#if 0
static int auto_abandon = 0;
#endif
int overload_resolution = 0;
/* Find the address of function name NAME in the inferior. */
struct value *
find_function_in_inferior (const char *name)
{
struct symbol *sym;
struct minimal_symbol *msymbol;
sym = lookup_symbol (name, 0, VAR_DOMAIN, 0, NULL);
if (sym != NULL)
{
if (SYMBOL_CLASS (sym) != LOC_BLOCK)
error (_("\"%s\" exists in this program but is not a function."),
name);
if (TYPE_PROTOTYPED (SYMBOL_TYPE (sym)))
return value_of_variable (sym, NULL);
}
msymbol = lookup_minimal_symbol (name, NULL, NULL);
if (msymbol != NULL)
{
struct type *type;
CORE_ADDR maddr;
type = lookup_pointer_type (builtin_type_char);
type = lookup_function_type (type);
type = lookup_pointer_type (type);
maddr = SYMBOL_VALUE_ADDRESS (msymbol);
return value_from_pointer (type, maddr);
}
if (!target_has_execution)
error ("evaluation of this expression requires the target program to be active");
else
error ("evaluation of this expression requires the program to have a function \"%s\".", name);
}
/* Allocate NBYTES of space in the inferior using the inferior's malloc
and return a value that is a pointer to the allocated space. */
struct value *
value_allocate_space_in_inferior (int len)
{
struct value *blocklen;
struct value *val = find_function_in_inferior (NAME_OF_MALLOC);
blocklen = value_from_longest (builtin_type_int, (LONGEST) len);
val = call_function_by_hand (val, 1, &blocklen);
if (value_logical_not (val))
{
if (!target_has_execution)
error ("No memory available to program now: you need to start the target first");
else
error ("No memory available to program: call to malloc failed");
}
return val;
}
static CORE_ADDR
allocate_space_in_inferior (int len)
{
return value_as_long (value_allocate_space_in_inferior (len));
}
/* Cast value ARG2 to type TYPE and return as a value.
More general than a C cast: accepts any two types of the same length,
and if ARG2 is an lvalue it can be cast into anything at all. */
/* In C++, casts may change pointer or object representations. */
struct value *
value_cast (struct type *type, struct value *arg2)
{
enum type_code code1;
enum type_code code2;
int scalar;
struct type *type2;
int convert_to_boolean = 0;
if (VALUE_TYPE (arg2) == type)
return arg2;
CHECK_TYPEDEF (type);
code1 = TYPE_CODE (type);
COERCE_REF (arg2);
type2 = check_typedef (VALUE_TYPE (arg2));
/* A cast to an undetermined-length array_type, such as (TYPE [])OBJECT,
is treated like a cast to (TYPE [N])OBJECT,
where N is sizeof(OBJECT)/sizeof(TYPE). */
if (code1 == TYPE_CODE_ARRAY)
{
struct type *element_type = TYPE_TARGET_TYPE (type);
unsigned element_length = TYPE_LENGTH (check_typedef (element_type));
if (element_length > 0
&& TYPE_ARRAY_UPPER_BOUND_TYPE (type) == BOUND_CANNOT_BE_DETERMINED)
{
struct type *range_type = TYPE_INDEX_TYPE (type);
int val_length = TYPE_LENGTH (type2);
LONGEST low_bound, high_bound, new_length;
if (get_discrete_bounds (range_type, &low_bound, &high_bound) < 0)
low_bound = 0, high_bound = 0;
new_length = val_length / element_length;
if (val_length % element_length != 0)
warning ("array element type size does not divide object size in cast");
/* FIXME-type-allocation: need a way to free this type when we are
done with it. */
range_type = create_range_type ((struct type *) NULL,
TYPE_TARGET_TYPE (range_type),
low_bound,
new_length + low_bound - 1);
VALUE_TYPE (arg2) = create_array_type ((struct type *) NULL,
element_type, range_type);
return arg2;
}
}
if (current_language->c_style_arrays
&& TYPE_CODE (type2) == TYPE_CODE_ARRAY)
arg2 = value_coerce_array (arg2);
if (TYPE_CODE (type2) == TYPE_CODE_FUNC)
arg2 = value_coerce_function (arg2);
type2 = check_typedef (VALUE_TYPE (arg2));
COERCE_VARYING_ARRAY (arg2, type2);
code2 = TYPE_CODE (type2);
if (code1 == TYPE_CODE_COMPLEX)
return cast_into_complex (type, arg2);
if (code1 == TYPE_CODE_BOOL)
{
code1 = TYPE_CODE_INT;
convert_to_boolean = 1;
}
if (code1 == TYPE_CODE_CHAR)
code1 = TYPE_CODE_INT;
if (code2 == TYPE_CODE_BOOL || code2 == TYPE_CODE_CHAR)
code2 = TYPE_CODE_INT;
scalar = (code2 == TYPE_CODE_INT || code2 == TYPE_CODE_FLT
|| code2 == TYPE_CODE_ENUM || code2 == TYPE_CODE_RANGE);
if (code1 == TYPE_CODE_STRUCT
&& code2 == TYPE_CODE_STRUCT
&& TYPE_NAME (type) != 0)
{
/* Look in the type of the source to see if it contains the
type of the target as a superclass. If so, we'll need to
offset the object in addition to changing its type. */
struct value *v = search_struct_field (type_name_no_tag (type),
arg2, 0, type2, 1);
if (v)
{
VALUE_TYPE (v) = type;
return v;
}
}
if (code1 == TYPE_CODE_FLT && scalar)
return value_from_double (type, value_as_double (arg2));
else if ((code1 == TYPE_CODE_INT || code1 == TYPE_CODE_ENUM
|| code1 == TYPE_CODE_RANGE)
&& (scalar || code2 == TYPE_CODE_PTR))
{
LONGEST longest;
if (deprecated_hp_som_som_object_present /* if target compiled by HP aCC */
&& (code2 == TYPE_CODE_PTR))
{
unsigned int *ptr;
struct value *retvalp;
switch (TYPE_CODE (TYPE_TARGET_TYPE (type2)))
{
/* With HP aCC, pointers to data members have a bias */
case TYPE_CODE_MEMBER:
retvalp = value_from_longest (type, value_as_long (arg2));
/* force evaluation */
ptr = (unsigned int *) VALUE_CONTENTS (retvalp);
*ptr &= ~0x20000000; /* zap 29th bit to remove bias */
return retvalp;
/* While pointers to methods don't really point to a function */
case TYPE_CODE_METHOD:
error ("Pointers to methods not supported with HP aCC");
default:
break; /* fall out and go to normal handling */
}
}
/* When we cast pointers to integers, we mustn't use
POINTER_TO_ADDRESS to find the address the pointer
represents, as value_as_long would. GDB should evaluate
expressions just as the compiler would --- and the compiler
sees a cast as a simple reinterpretation of the pointer's
bits. */
if (code2 == TYPE_CODE_PTR)
longest = extract_unsigned_integer (VALUE_CONTENTS (arg2),
TYPE_LENGTH (type2));
else
longest = value_as_long (arg2);
return value_from_longest (type, convert_to_boolean ?
(LONGEST) (longest ? 1 : 0) : longest);
}
else if (code1 == TYPE_CODE_PTR && (code2 == TYPE_CODE_INT ||
code2 == TYPE_CODE_ENUM ||
code2 == TYPE_CODE_RANGE))
{
/* TYPE_LENGTH (type) is the length of a pointer, but we really
want the length of an address! -- we are really dealing with
addresses (i.e., gdb representations) not pointers (i.e.,
target representations) here.
This allows things like "print *(int *)0x01000234" to work
without printing a misleading message -- which would
otherwise occur when dealing with a target having two byte
pointers and four byte addresses. */
int addr_bit = TARGET_ADDR_BIT;
LONGEST longest = value_as_long (arg2);
if (addr_bit < sizeof (LONGEST) * HOST_CHAR_BIT)
{
if (longest >= ((LONGEST) 1 << addr_bit)
|| longest <= -((LONGEST) 1 << addr_bit))
warning ("value truncated");
}
return value_from_longest (type, longest);
}
else if (TYPE_LENGTH (type) == TYPE_LENGTH (type2))
{
if (code1 == TYPE_CODE_PTR && code2 == TYPE_CODE_PTR)
{
struct type *t1 = check_typedef (TYPE_TARGET_TYPE (type));
struct type *t2 = check_typedef (TYPE_TARGET_TYPE (type2));
if (TYPE_CODE (t1) == TYPE_CODE_STRUCT
&& TYPE_CODE (t2) == TYPE_CODE_STRUCT
&& !value_logical_not (arg2))
{
struct value *v;
/* Look in the type of the source to see if it contains the
type of the target as a superclass. If so, we'll need to
offset the pointer rather than just change its type. */
if (TYPE_NAME (t1) != NULL)
{
v = search_struct_field (type_name_no_tag (t1),
value_ind (arg2), 0, t2, 1);
if (v)
{
v = value_addr (v);
VALUE_TYPE (v) = type;
return v;
}
}
/* Look in the type of the target to see if it contains the
type of the source as a superclass. If so, we'll need to
offset the pointer rather than just change its type.
FIXME: This fails silently with virtual inheritance. */
if (TYPE_NAME (t2) != NULL)
{
v = search_struct_field (type_name_no_tag (t2),
value_zero (t1, not_lval), 0, t1, 1);
if (v)
{
CORE_ADDR addr2 = value_as_address (arg2);
addr2 -= (VALUE_ADDRESS (v)
+ VALUE_OFFSET (v)
+ VALUE_EMBEDDED_OFFSET (v));
return value_from_pointer (type, addr2);
}
}
}
/* No superclass found, just fall through to change ptr type. */
}
VALUE_TYPE (arg2) = type;
arg2 = value_change_enclosing_type (arg2, type);
VALUE_POINTED_TO_OFFSET (arg2) = 0; /* pai: chk_val */
return arg2;
}
else if (VALUE_LVAL (arg2) == lval_memory)
{
return value_at_lazy (type, VALUE_ADDRESS (arg2) + VALUE_OFFSET (arg2),
VALUE_BFD_SECTION (arg2));
}
else if (code1 == TYPE_CODE_VOID)
{
return value_zero (builtin_type_void, not_lval);
}
else
{
error ("Invalid cast.");
return 0;
}
}
/* Create a value of type TYPE that is zero, and return it. */
struct value *
value_zero (struct type *type, enum lval_type lv)
{
struct value *val = allocate_value (type);
memset (VALUE_CONTENTS (val), 0, TYPE_LENGTH (check_typedef (type)));
VALUE_LVAL (val) = lv;
return val;
}
/* Return a value with type TYPE located at ADDR.
Call value_at only if the data needs to be fetched immediately;
if we can be 'lazy' and defer the fetch, perhaps indefinitely, call
value_at_lazy instead. value_at_lazy simply records the address of
the data and sets the lazy-evaluation-required flag. The lazy flag
is tested in the VALUE_CONTENTS macro, which is used if and when
the contents are actually required.
Note: value_at does *NOT* handle embedded offsets; perform such
adjustments before or after calling it. */
struct value *
value_at (struct type *type, CORE_ADDR addr, asection *sect)
{
struct value *val;
if (TYPE_CODE (check_typedef (type)) == TYPE_CODE_VOID)
error ("Attempt to dereference a generic pointer.");
val = allocate_value (type);
read_memory (addr, VALUE_CONTENTS_ALL_RAW (val), TYPE_LENGTH (type));
VALUE_LVAL (val) = lval_memory;
VALUE_ADDRESS (val) = addr;
VALUE_BFD_SECTION (val) = sect;
return val;
}
/* Return a lazy value with type TYPE located at ADDR (cf. value_at). */
struct value *
value_at_lazy (struct type *type, CORE_ADDR addr, asection *sect)
{
struct value *val;
if (TYPE_CODE (check_typedef (type)) == TYPE_CODE_VOID)
error ("Attempt to dereference a generic pointer.");
val = allocate_value (type);
VALUE_LVAL (val) = lval_memory;
VALUE_ADDRESS (val) = addr;
VALUE_LAZY (val) = 1;
VALUE_BFD_SECTION (val) = sect;
return val;
}
/* Called only from the VALUE_CONTENTS and VALUE_CONTENTS_ALL macros,
if the current data for a variable needs to be loaded into
VALUE_CONTENTS(VAL). Fetches the data from the user's process, and
clears the lazy flag to indicate that the data in the buffer is valid.
If the value is zero-length, we avoid calling read_memory, which would
abort. We mark the value as fetched anyway -- all 0 bytes of it.
This function returns a value because it is used in the VALUE_CONTENTS
macro as part of an expression, where a void would not work. The
value is ignored. */
int
value_fetch_lazy (struct value *val)
{
CORE_ADDR addr = VALUE_ADDRESS (val) + VALUE_OFFSET (val);
int length = TYPE_LENGTH (VALUE_ENCLOSING_TYPE (val));
struct type *type = VALUE_TYPE (val);
if (length)
read_memory (addr, VALUE_CONTENTS_ALL_RAW (val), length);
VALUE_LAZY (val) = 0;
return 0;
}
/* Store the contents of FROMVAL into the location of TOVAL.
Return a new value with the location of TOVAL and contents of FROMVAL. */
struct value *
value_assign (struct value *toval, struct value *fromval)
{
struct type *type;
struct value *val;
struct frame_id old_frame;
if (!toval->modifiable)
error ("Left operand of assignment is not a modifiable lvalue.");
COERCE_REF (toval);
type = VALUE_TYPE (toval);
if (VALUE_LVAL (toval) != lval_internalvar)
fromval = value_cast (type, fromval);
else
COERCE_ARRAY (fromval);
CHECK_TYPEDEF (type);
/* Since modifying a register can trash the frame chain, and modifying memory
can trash the frame cache, we save the old frame and then restore it
afterwards. */
old_frame = get_frame_id (deprecated_selected_frame);
switch (VALUE_LVAL (toval))
{
case lval_internalvar:
set_internalvar (VALUE_INTERNALVAR (toval), fromval);
val = value_copy (VALUE_INTERNALVAR (toval)->value);
val = value_change_enclosing_type (val, VALUE_ENCLOSING_TYPE (fromval));
VALUE_EMBEDDED_OFFSET (val) = VALUE_EMBEDDED_OFFSET (fromval);
VALUE_POINTED_TO_OFFSET (val) = VALUE_POINTED_TO_OFFSET (fromval);
return val;
case lval_internalvar_component:
set_internalvar_component (VALUE_INTERNALVAR (toval),
VALUE_OFFSET (toval),
VALUE_BITPOS (toval),
VALUE_BITSIZE (toval),
fromval);
break;
case lval_memory:
{
char *dest_buffer;
CORE_ADDR changed_addr;
int changed_len;
char buffer[sizeof (LONGEST)];
if (VALUE_BITSIZE (toval))
{
/* We assume that the argument to read_memory is in units of
host chars. FIXME: Is that correct? */
changed_len = (VALUE_BITPOS (toval)
+ VALUE_BITSIZE (toval)
+ HOST_CHAR_BIT - 1)
/ HOST_CHAR_BIT;
if (changed_len > (int) sizeof (LONGEST))
error ("Can't handle bitfields which don't fit in a %d bit word.",
(int) sizeof (LONGEST) * HOST_CHAR_BIT);
read_memory (VALUE_ADDRESS (toval) + VALUE_OFFSET (toval),
buffer, changed_len);
modify_field (buffer, value_as_long (fromval),
VALUE_BITPOS (toval), VALUE_BITSIZE (toval));
changed_addr = VALUE_ADDRESS (toval) + VALUE_OFFSET (toval);
dest_buffer = buffer;
}
else
{
changed_addr = VALUE_ADDRESS (toval) + VALUE_OFFSET (toval);
changed_len = TYPE_LENGTH (type);
dest_buffer = VALUE_CONTENTS (fromval);
}
write_memory (changed_addr, dest_buffer, changed_len);
if (deprecated_memory_changed_hook)
deprecated_memory_changed_hook (changed_addr, changed_len);
}
break;
case lval_reg_frame_relative:
case lval_register:
{
struct frame_info *frame;
int value_reg;
/* Figure out which frame this is in currently. */
if (VALUE_LVAL (toval) == lval_register)
{
frame = get_current_frame ();
value_reg = VALUE_REGNO (toval);
}
else
{
frame = frame_find_by_id (VALUE_FRAME_ID (toval));
value_reg = VALUE_FRAME_REGNUM (toval);
}
if (!frame)
error ("Value being assigned to is no longer active.");
if (VALUE_LVAL (toval) == lval_reg_frame_relative
&& CONVERT_REGISTER_P (VALUE_FRAME_REGNUM (toval), type))
{
/* If TOVAL is a special machine register requiring
conversion of program values to a special raw format. */
VALUE_TO_REGISTER (frame, VALUE_FRAME_REGNUM (toval),
type, VALUE_CONTENTS (fromval));
}
else
{
/* TOVAL is stored in a series of registers in the frame
specified by the structure. Copy that value out,
modify it, and copy it back in. */
int amount_copied;
int amount_to_copy;
char *buffer;
int reg_offset;
int byte_offset;
int regno;
/* Locate the first register that falls in the value that
needs to be transferred. Compute the offset of the
value in that register. */
{
int offset;
for (reg_offset = value_reg, offset = 0;
offset + register_size (current_gdbarch, reg_offset) <= VALUE_OFFSET (toval);
reg_offset++)
offset += register_size (current_gdbarch, reg_offset);
byte_offset = VALUE_OFFSET (toval) - offset;
}
/* Compute the number of register aligned values that need
to be copied. */
if (VALUE_BITSIZE (toval))
amount_to_copy = byte_offset + 1;
else
amount_to_copy = byte_offset + TYPE_LENGTH (type);
/* And a bounce buffer. Be slightly over generous. */
buffer = (char *) alloca (amount_to_copy + MAX_REGISTER_SIZE);
/* Copy it in. */
for (regno = reg_offset, amount_copied = 0;
amount_copied < amount_to_copy;
amount_copied += register_size (current_gdbarch, regno), regno++)
frame_register_read (frame, regno, buffer + amount_copied);
/* Modify what needs to be modified. */
if (VALUE_BITSIZE (toval))
modify_field (buffer + byte_offset,
value_as_long (fromval),
VALUE_BITPOS (toval), VALUE_BITSIZE (toval));
else
memcpy (buffer + byte_offset, VALUE_CONTENTS (fromval),
TYPE_LENGTH (type));
/* Copy it out. */
for (regno = reg_offset, amount_copied = 0;
amount_copied < amount_to_copy;
amount_copied += register_size (current_gdbarch, regno), regno++)
put_frame_register (frame, regno, buffer + amount_copied);
}
if (deprecated_register_changed_hook)
deprecated_register_changed_hook (-1);
observer_notify_target_changed (&current_target);
break;
}
default:
error ("Left operand of assignment is not an lvalue.");
}
/* Assigning to the stack pointer, frame pointer, and other
(architecture and calling convention specific) registers may
cause the frame cache to be out of date. Assigning to memory
also can. We just do this on all assignments to registers or
memory, for simplicity's sake; I doubt the slowdown matters. */
switch (VALUE_LVAL (toval))
{
case lval_memory:
case lval_register:
case lval_reg_frame_relative:
reinit_frame_cache ();
/* Having destroyed the frame cache, restore the selected frame. */
/* FIXME: cagney/2002-11-02: There has to be a better way of doing
this than constantly saving and restoring the frame. Why not create
a get_selected_frame() function that, having saved the selected
frame's ID, can automatically re-find it? */
{
struct frame_info *fi = frame_find_by_id (old_frame);
if (fi != NULL)
select_frame (fi);
}
break;
default:
break;
}
/* If the field does not entirely fill a LONGEST, then zero the sign bits.
If the field is signed, and is negative, then sign extend. */
if ((VALUE_BITSIZE (toval) > 0)
&& (VALUE_BITSIZE (toval) < 8 * (int) sizeof (LONGEST)))
{
LONGEST fieldval = value_as_long (fromval);
LONGEST valmask = (((ULONGEST) 1) << VALUE_BITSIZE (toval)) - 1;
fieldval &= valmask;
if (!TYPE_UNSIGNED (type) && (fieldval & (valmask ^ (valmask >> 1))))
fieldval |= ~valmask;
fromval = value_from_longest (type, fieldval);
}
val = value_copy (toval);
memcpy (VALUE_CONTENTS_RAW (val), VALUE_CONTENTS (fromval),
TYPE_LENGTH (type));
VALUE_TYPE (val) = type;
val = value_change_enclosing_type (val, VALUE_ENCLOSING_TYPE (fromval));
VALUE_EMBEDDED_OFFSET (val) = VALUE_EMBEDDED_OFFSET (fromval);
VALUE_POINTED_TO_OFFSET (val) = VALUE_POINTED_TO_OFFSET (fromval);
return val;
}
/* Extend a value VAL to COUNT repetitions of its type. */
struct value *
value_repeat (struct value *arg1, int count)
{
struct value *val;
if (VALUE_LVAL (arg1) != lval_memory)
error ("Only values in memory can be extended with '@'.");
if (count < 1)
error ("Invalid number %d of repetitions.", count);
val = allocate_repeat_value (VALUE_ENCLOSING_TYPE (arg1), count);
read_memory (VALUE_ADDRESS (arg1) + VALUE_OFFSET (arg1),
VALUE_CONTENTS_ALL_RAW (val),
TYPE_LENGTH (VALUE_ENCLOSING_TYPE (val)));
VALUE_LVAL (val) = lval_memory;
VALUE_ADDRESS (val) = VALUE_ADDRESS (arg1) + VALUE_OFFSET (arg1);
return val;
}
struct value *
value_of_variable (struct symbol *var, struct block *b)
{
struct value *val;
struct frame_info *frame = NULL;
if (!b)
frame = NULL; /* Use selected frame. */
else if (symbol_read_needs_frame (var))
{
frame = block_innermost_frame (b);
if (!frame)
{
if (BLOCK_FUNCTION (b)
&& SYMBOL_PRINT_NAME (BLOCK_FUNCTION (b)))
error ("No frame is currently executing in block %s.",
SYMBOL_PRINT_NAME (BLOCK_FUNCTION (b)));
else
error ("No frame is currently executing in the specified block.");
}
}
val = read_var_value (var, frame);
if (!val)
error ("Address of symbol \"%s\" is unknown.", SYMBOL_PRINT_NAME (var));
return val;
}
/* Given a value which is an array, return a value which is a pointer to its
first element, regardless of whether or not the array has a nonzero lower
bound.
FIXME: A previous comment here indicated that this routine should be
subtracting the array's lower bound. It's not clear to me that this
is correct. Given an array subscripting operation, it would certainly
work to do the adjustment here, essentially computing:
(&array[0] - (lowerbound * sizeof array[0])) + (index * sizeof array[0])
However I believe a more appropriate and logical place to account for
the lower bound is to do so in value_subscript, essentially computing:
(&array[0] + ((index - lowerbound) * sizeof array[0]))
As further evidence consider what would happen with operations other
than array subscripting, where the caller would get back a value that
had an address somewhere before the actual first element of the array,
and the information about the lower bound would be lost because of
the coercion to pointer type.
*/
struct value *
value_coerce_array (struct value *arg1)
{
struct type *type = check_typedef (VALUE_TYPE (arg1));
if (VALUE_LVAL (arg1) != lval_memory)
error ("Attempt to take address of value not located in memory.");
return value_from_pointer (lookup_pointer_type (TYPE_TARGET_TYPE (type)),
(VALUE_ADDRESS (arg1) + VALUE_OFFSET (arg1)));
}
/* Given a value which is a function, return a value which is a pointer
to it. */
struct value *
value_coerce_function (struct value *arg1)
{
struct value *retval;
if (VALUE_LVAL (arg1) != lval_memory)
error ("Attempt to take address of value not located in memory.");
retval = value_from_pointer (lookup_pointer_type (VALUE_TYPE (arg1)),
(VALUE_ADDRESS (arg1) + VALUE_OFFSET (arg1)));
VALUE_BFD_SECTION (retval) = VALUE_BFD_SECTION (arg1);
return retval;
}
/* Return a pointer value for the object for which ARG1 is the contents. */
struct value *
value_addr (struct value *arg1)
{
struct value *arg2;
struct type *type = check_typedef (VALUE_TYPE (arg1));
if (TYPE_CODE (type) == TYPE_CODE_REF)
{
/* Copy the value, but change the type from (T&) to (T*).
We keep the same location information, which is efficient,
and allows &(&X) to get the location containing the reference. */
arg2 = value_copy (arg1);
VALUE_TYPE (arg2) = lookup_pointer_type (TYPE_TARGET_TYPE (type));
return arg2;
}
if (TYPE_CODE (type) == TYPE_CODE_FUNC)
return value_coerce_function (arg1);
if (VALUE_LVAL (arg1) != lval_memory)
error ("Attempt to take address of value not located in memory.");
/* Get target memory address */
arg2 = value_from_pointer (lookup_pointer_type (VALUE_TYPE (arg1)),
(VALUE_ADDRESS (arg1)
+ VALUE_OFFSET (arg1)
+ VALUE_EMBEDDED_OFFSET (arg1)));
/* This may be a pointer to a base subobject; so remember the
full derived object's type ... */
arg2 = value_change_enclosing_type (arg2, lookup_pointer_type (VALUE_ENCLOSING_TYPE (arg1)));
/* ... and also the relative position of the subobject in the full object */
VALUE_POINTED_TO_OFFSET (arg2) = VALUE_EMBEDDED_OFFSET (arg1);
VALUE_BFD_SECTION (arg2) = VALUE_BFD_SECTION (arg1);
return arg2;
}
/* Given a value of a pointer type, apply the C unary * operator to it. */
struct value *
value_ind (struct value *arg1)
{
struct type *base_type;
struct value *arg2;
COERCE_ARRAY (arg1);
base_type = check_typedef (VALUE_TYPE (arg1));
if (TYPE_CODE (base_type) == TYPE_CODE_MEMBER)
error ("not implemented: member types in value_ind");
/* Allow * on an integer so we can cast it to whatever we want.
This returns an int, which seems like the most C-like thing
to do. "long long" variables are rare enough that
BUILTIN_TYPE_LONGEST would seem to be a mistake. */
if (TYPE_CODE (base_type) == TYPE_CODE_INT)
return value_at_lazy (builtin_type_int,
(CORE_ADDR) value_as_long (arg1),
VALUE_BFD_SECTION (arg1));
else if (TYPE_CODE (base_type) == TYPE_CODE_PTR)
{
struct type *enc_type;
/* We may be pointing to something embedded in a larger object */
/* Get the real type of the enclosing object */
enc_type = check_typedef (VALUE_ENCLOSING_TYPE (arg1));
enc_type = TYPE_TARGET_TYPE (enc_type);
/* Retrieve the enclosing object pointed to */
arg2 = value_at_lazy (enc_type,
value_as_address (arg1) - VALUE_POINTED_TO_OFFSET (arg1),
VALUE_BFD_SECTION (arg1));
/* Re-adjust type */
VALUE_TYPE (arg2) = TYPE_TARGET_TYPE (base_type);
/* Add embedding info */
arg2 = value_change_enclosing_type (arg2, enc_type);
VALUE_EMBEDDED_OFFSET (arg2) = VALUE_POINTED_TO_OFFSET (arg1);
/* We may be pointing to an object of some derived type */
arg2 = value_full_object (arg2, NULL, 0, 0, 0);
return arg2;
}
error ("Attempt to take contents of a non-pointer value.");
return 0; /* For lint -- never reached */
}
/* Pushing small parts of stack frames. */
/* Push one word (the size of object that a register holds). */
CORE_ADDR
push_word (CORE_ADDR sp, ULONGEST word)
{
int len = DEPRECATED_REGISTER_SIZE;
char buffer[MAX_REGISTER_SIZE];
store_unsigned_integer (buffer, len, word);
if (INNER_THAN (1, 2))
{
/* stack grows downward */
sp -= len;
write_memory (sp, buffer, len);
}
else
{
/* stack grows upward */
write_memory (sp, buffer, len);
sp += len;
}
return sp;
}
/* Push LEN bytes with data at BUFFER. */
CORE_ADDR
push_bytes (CORE_ADDR sp, char *buffer, int len)
{
if (INNER_THAN (1, 2))
{
/* stack grows downward */
sp -= len;
write_memory (sp, buffer, len);
}
else
{
/* stack grows upward */
write_memory (sp, buffer, len);
sp += len;
}
return sp;
}
/* Create a value for an array by allocating space in the inferior, copying
the data into that space, and then setting up an array value.
The array bounds are set from LOWBOUND and HIGHBOUND, and the array is
populated from the values passed in ELEMVEC.
The element type of the array is inherited from the type of the
first element, and all elements must have the same size (though we
don't currently enforce any restriction on their types). */
struct value *
value_array (int lowbound, int highbound, struct value **elemvec)
{
int nelem;
int idx;
unsigned int typelength;
struct value *val;
struct type *rangetype;
struct type *arraytype;
CORE_ADDR addr;
/* Validate that the bounds are reasonable and that each of the elements
have the same size. */
nelem = highbound - lowbound + 1;
if (nelem <= 0)
{
error ("bad array bounds (%d, %d)", lowbound, highbound);
}
typelength = TYPE_LENGTH (VALUE_ENCLOSING_TYPE (elemvec[0]));
for (idx = 1; idx < nelem; idx++)
{
if (TYPE_LENGTH (VALUE_ENCLOSING_TYPE (elemvec[idx])) != typelength)
{
error ("array elements must all be the same size");
}
}
rangetype = create_range_type ((struct type *) NULL, builtin_type_int,
lowbound, highbound);
arraytype = create_array_type ((struct type *) NULL,
VALUE_ENCLOSING_TYPE (elemvec[0]), rangetype);
if (!current_language->c_style_arrays)
{
val = allocate_value (arraytype);
for (idx = 0; idx < nelem; idx++)
{
memcpy (VALUE_CONTENTS_ALL_RAW (val) + (idx * typelength),
VALUE_CONTENTS_ALL (elemvec[idx]),
typelength);
}
VALUE_BFD_SECTION (val) = VALUE_BFD_SECTION (elemvec[0]);
return val;
}
/* Allocate space to store the array in the inferior, and then initialize
it by copying in each element. FIXME: Is it worth it to create a
local buffer in which to collect each value and then write all the
bytes in one operation? */
addr = allocate_space_in_inferior (nelem * typelength);
for (idx = 0; idx < nelem; idx++)
{
write_memory (addr + (idx * typelength), VALUE_CONTENTS_ALL (elemvec[idx]),
typelength);
}
/* Create the array type and set up an array value to be evaluated lazily. */
val = value_at_lazy (arraytype, addr, VALUE_BFD_SECTION (elemvec[0]));
return (val);
}
/* Create a value for a string constant by allocating space in the inferior,
copying the data into that space, and returning the address with type
TYPE_CODE_STRING. PTR points to the string constant data; LEN is number
of characters.
Note that string types are like arrays of char with a lower bound of
zero and an upper bound of LEN - 1. Also note that the string may contain
embedded null bytes. */
struct value *
value_string (char *ptr, int len)
{
struct value *val;
int lowbound = current_language->string_lower_bound;
struct type *rangetype = create_range_type ((struct type *) NULL,
builtin_type_int,
lowbound, len + lowbound - 1);
struct type *stringtype
= create_string_type ((struct type *) NULL, rangetype);
CORE_ADDR addr;
if (current_language->c_style_arrays == 0)
{
val = allocate_value (stringtype);
memcpy (VALUE_CONTENTS_RAW (val), ptr, len);
return val;
}
/* Allocate space to store the string in the inferior, and then
copy LEN bytes from PTR in gdb to that address in the inferior. */
addr = allocate_space_in_inferior (len);
write_memory (addr, ptr, len);
val = value_at_lazy (stringtype, addr, NULL);
return (val);
}
struct value *
value_bitstring (char *ptr, int len)
{
struct value *val;
struct type *domain_type = create_range_type (NULL, builtin_type_int,
0, len - 1);
struct type *type = create_set_type ((struct type *) NULL, domain_type);
TYPE_CODE (type) = TYPE_CODE_BITSTRING;
val = allocate_value (type);
memcpy (VALUE_CONTENTS_RAW (val), ptr, TYPE_LENGTH (type));
return val;
}
/* See if we can pass arguments in T2 to a function which takes arguments
of types T1. T1 is a list of NARGS arguments, and T2 is a NULL-terminated
vector. If some arguments need coercion of some sort, then the coerced
values are written into T2. Return value is 0 if the arguments could be
matched, or the position at which they differ if not.
STATICP is nonzero if the T1 argument list came from a
static member function. T2 will still include the ``this'' pointer,
but it will be skipped.
For non-static member functions, we ignore the first argument,
which is the type of the instance variable. This is because we want
to handle calls with objects from derived classes. This is not
entirely correct: we should actually check to make sure that a
requested operation is type secure, shouldn't we? FIXME. */
static int
typecmp (int staticp, int varargs, int nargs,
struct field t1[], struct value *t2[])
{
int i;
if (t2 == 0)
internal_error (__FILE__, __LINE__, "typecmp: no argument list");
/* Skip ``this'' argument if applicable. T2 will always include THIS. */
if (staticp)
t2++;
for (i = 0;
(i < nargs) && TYPE_CODE (t1[i].type) != TYPE_CODE_VOID;
i++)
{
struct type *tt1, *tt2;
if (!t2[i])
return i + 1;
tt1 = check_typedef (t1[i].type);
tt2 = check_typedef (VALUE_TYPE (t2[i]));
if (TYPE_CODE (tt1) == TYPE_CODE_REF
/* We should be doing hairy argument matching, as below. */
&& (TYPE_CODE (check_typedef (TYPE_TARGET_TYPE (tt1))) == TYPE_CODE (tt2)))
{
if (TYPE_CODE (tt2) == TYPE_CODE_ARRAY)
t2[i] = value_coerce_array (t2[i]);
else
t2[i] = value_addr (t2[i]);
continue;
}
/* djb - 20000715 - Until the new type structure is in the
place, and we can attempt things like implicit conversions,
we need to do this so you can take something like a map<const
char *>, and properly access map["hello"], because the
argument to [] will be a reference to a pointer to a char,
and the argument will be a pointer to a char. */
while (TYPE_CODE (tt1) == TYPE_CODE_REF
|| TYPE_CODE (tt1) == TYPE_CODE_PTR)
{
tt1 = check_typedef (TYPE_TARGET_TYPE (tt1));
}
while (TYPE_CODE (tt2) == TYPE_CODE_ARRAY
|| TYPE_CODE (tt2) == TYPE_CODE_PTR
|| TYPE_CODE (tt2) == TYPE_CODE_REF)
{
tt2 = check_typedef (TYPE_TARGET_TYPE (tt2));
}
if (TYPE_CODE (tt1) == TYPE_CODE (tt2))
continue;
/* Array to pointer is a `trivial conversion' according to the ARM. */
/* We should be doing much hairier argument matching (see section 13.2
of the ARM), but as a quick kludge, just check for the same type
code. */
if (TYPE_CODE (t1[i].type) != TYPE_CODE (VALUE_TYPE (t2[i])))
return i + 1;
}
if (varargs || t2[i] == NULL)
return 0;
return i + 1;
}
/* Helper function used by value_struct_elt to recurse through baseclasses.
Look for a field NAME in ARG1. Adjust the address of ARG1 by OFFSET bytes,
and search in it assuming it has (class) type TYPE.
If found, return value, else return NULL.
If LOOKING_FOR_BASECLASS, then instead of looking for struct fields,
look for a baseclass named NAME. */
static struct value *
search_struct_field (char *name, struct value *arg1, int offset,
struct type *type, int looking_for_baseclass)
{
int i;
int nbases;
CHECK_TYPEDEF (type);
nbases = TYPE_N_BASECLASSES (type);
if (!looking_for_baseclass)
for (i = TYPE_NFIELDS (type) - 1; i >= nbases; i--)
{
char *t_field_name = TYPE_FIELD_NAME (type, i);
if (t_field_name && (strcmp_iw (t_field_name, name) == 0))
{
struct value *v;
if (TYPE_FIELD_STATIC (type, i))
{
v = value_static_field (type, i);
if (v == 0)
error ("field %s is nonexistent or has been optimised out",
name);
}
else
{
v = value_primitive_field (arg1, offset, i, type);
if (v == 0)
error ("there is no field named %s", name);
}
return v;
}
if (t_field_name
&& (t_field_name[0] == '\0'
|| (TYPE_CODE (type) == TYPE_CODE_UNION
&& (strcmp_iw (t_field_name, "else") == 0))))
{
struct type *field_type = TYPE_FIELD_TYPE (type, i);
if (TYPE_CODE (field_type) == TYPE_CODE_UNION
|| TYPE_CODE (field_type) == TYPE_CODE_STRUCT)
{
/* Look for a match through the fields of an anonymous union,
or anonymous struct. C++ provides anonymous unions.
In the GNU Chill (now deleted from GDB)
implementation of variant record types, each
<alternative field> has an (anonymous) union type,
each member of the union represents a <variant
alternative>. Each <variant alternative> is
represented as a struct, with a member for each
<variant field>. */
struct value *v;
int new_offset = offset;
/* This is pretty gross. In G++, the offset in an
anonymous union is relative to the beginning of the
enclosing struct. In the GNU Chill (now deleted
from GDB) implementation of variant records, the
bitpos is zero in an anonymous union field, so we
have to add the offset of the union here. */
if (TYPE_CODE (field_type) == TYPE_CODE_STRUCT
|| (TYPE_NFIELDS (field_type) > 0
&& TYPE_FIELD_BITPOS (field_type, 0) == 0))
new_offset += TYPE_FIELD_BITPOS (type, i) / 8;
v = search_struct_field (name, arg1, new_offset, field_type,
looking_for_baseclass);
if (v)
return v;
}
}
}
for (i = 0; i < nbases; i++)
{
struct value *v;
struct type *basetype = check_typedef (TYPE_BASECLASS (type, i));
/* If we are looking for baseclasses, this is what we get when we
hit them. But it could happen that the base part's member name
is not yet filled in. */
int found_baseclass = (looking_for_baseclass
&& TYPE_BASECLASS_NAME (type, i) != NULL
&& (strcmp_iw (name, TYPE_BASECLASS_NAME (type, i)) == 0));
if (BASETYPE_VIA_VIRTUAL (type, i))
{
int boffset;
struct value *v2 = allocate_value (basetype);
boffset = baseclass_offset (type, i,
VALUE_CONTENTS (arg1) + offset,
VALUE_ADDRESS (arg1)
+ VALUE_OFFSET (arg1) + offset);
if (boffset == -1)
error ("virtual baseclass botch");
/* The virtual base class pointer might have been clobbered by the
user program. Make sure that it still points to a valid memory
location. */
boffset += offset;
if (boffset < 0 || boffset >= TYPE_LENGTH (type))
{
CORE_ADDR base_addr;
base_addr = VALUE_ADDRESS (arg1) + VALUE_OFFSET (arg1) + boffset;
if (target_read_memory (base_addr, VALUE_CONTENTS_RAW (v2),
TYPE_LENGTH (basetype)) != 0)
error ("virtual baseclass botch");
VALUE_LVAL (v2) = lval_memory;
VALUE_ADDRESS (v2) = base_addr;
}
else
{
VALUE_LVAL (v2) = VALUE_LVAL (arg1);
VALUE_ADDRESS (v2) = VALUE_ADDRESS (arg1);
VALUE_OFFSET (v2) = VALUE_OFFSET (arg1) + boffset;
if (VALUE_LAZY (arg1))
VALUE_LAZY (v2) = 1;
else
memcpy (VALUE_CONTENTS_RAW (v2),
VALUE_CONTENTS_RAW (arg1) + boffset,
TYPE_LENGTH (basetype));
}
if (found_baseclass)
return v2;
v = search_struct_field (name, v2, 0, TYPE_BASECLASS (type, i),
looking_for_baseclass);
}
else if (found_baseclass)
v = value_primitive_field (arg1, offset, i, type);
else
v = search_struct_field (name, arg1,
offset + TYPE_BASECLASS_BITPOS (type, i) / 8,
basetype, looking_for_baseclass);
if (v)
return v;
}
return NULL;
}
/* Return the offset (in bytes) of the virtual base of type BASETYPE
* in an object pointed to by VALADDR (on the host), assumed to be of
* type TYPE. OFFSET is number of bytes beyond start of ARG to start
* looking (in case VALADDR is the contents of an enclosing object).
*
* This routine recurses on the primary base of the derived class because
* the virtual base entries of the primary base appear before the other
* virtual base entries.
*
* If the virtual base is not found, a negative integer is returned.
* The magnitude of the negative integer is the number of entries in
* the virtual table to skip over (entries corresponding to various
* ancestral classes in the chain of primary bases).
*
* Important: This assumes the HP / Taligent C++ runtime
* conventions. Use baseclass_offset() instead to deal with g++
* conventions. */
void
find_rt_vbase_offset (struct type *type, struct type *basetype, char *valaddr,
int offset, int *boffset_p, int *skip_p)
{
int boffset; /* offset of virtual base */
int index; /* displacement to use in virtual table */
int skip;
struct value *vp;
CORE_ADDR vtbl; /* the virtual table pointer */
struct type *pbc; /* the primary base class */
/* Look for the virtual base recursively in the primary base, first.
* This is because the derived class object and its primary base
* subobject share the primary virtual table. */
boffset = 0;
pbc = TYPE_PRIMARY_BASE (type);
if (pbc)
{
find_rt_vbase_offset (pbc, basetype, valaddr, offset, &boffset, &skip);
if (skip < 0)
{
*boffset_p = boffset;
*skip_p = -1;
return;
}
}
else
skip = 0;
/* Find the index of the virtual base according to HP/Taligent
runtime spec. (Depth-first, left-to-right.) */
index = virtual_base_index_skip_primaries (basetype, type);
if (index < 0)
{
*skip_p = skip + virtual_base_list_length_skip_primaries (type);
*boffset_p = 0;
return;
}
/* pai: FIXME -- 32x64 possible problem */
/* First word (4 bytes) in object layout is the vtable pointer */
vtbl = *(CORE_ADDR *) (valaddr + offset);
/* Before the constructor is invoked, things are usually zero'd out. */
if (vtbl == 0)
error ("Couldn't find virtual table -- object may not be constructed yet.");
/* Find virtual base's offset -- jump over entries for primary base
* ancestors, then use the index computed above. But also adjust by
* HP_ACC_VBASE_START for the vtable slots before the start of the
* virtual base entries. Offset is negative -- virtual base entries
* appear _before_ the address point of the virtual table. */
/* pai: FIXME -- 32x64 problem, if word = 8 bytes, change multiplier
& use long type */
/* epstein : FIXME -- added param for overlay section. May not be correct */
vp = value_at (builtin_type_int, vtbl + 4 * (-skip - index - HP_ACC_VBASE_START), NULL);
boffset = value_as_long (vp);
*skip_p = -1;
*boffset_p = boffset;
return;
}
/* Helper function used by value_struct_elt to recurse through baseclasses.
Look for a method NAME in ARG1. Adjust the address of ARG1 by OFFSET bytes,
and search in it assuming it has (class) type TYPE.
If found, return the value; if the name matched but the arguments did not,
return (struct value *) -1; otherwise return NULL. */
static struct value *
search_struct_method (char *name, struct value **arg1p,
struct value **args, int offset,
int *static_memfuncp, struct type *type)
{
int i;
struct value *v;
int name_matched = 0;
char dem_opname[64];
CHECK_TYPEDEF (type);
for (i = TYPE_NFN_FIELDS (type) - 1; i >= 0; i--)
{
char *t_field_name = TYPE_FN_FIELDLIST_NAME (type, i);
/* FIXME! May need to check for ARM demangling here */
if (strncmp (t_field_name, "__", 2) == 0 ||
strncmp (t_field_name, "op", 2) == 0 ||
strncmp (t_field_name, "type", 4) == 0)
{
if (cplus_demangle_opname (t_field_name, dem_opname, DMGL_ANSI))
t_field_name = dem_opname;
else if (cplus_demangle_opname (t_field_name, dem_opname, 0))
t_field_name = dem_opname;
}
if (t_field_name && (strcmp_iw (t_field_name, name) == 0))
{
int j = TYPE_FN_FIELDLIST_LENGTH (type, i) - 1;
struct fn_field *f = TYPE_FN_FIELDLIST1 (type, i);
name_matched = 1;
check_stub_method_group (type, i);
if (j > 0 && args == 0)
error ("cannot resolve overloaded method `%s': no arguments supplied", name);
else if (j == 0 && args == 0)
{
v = value_fn_field (arg1p, f, j, type, offset);
if (v != NULL)
return v;
}
else
while (j >= 0)
{
if (!typecmp (TYPE_FN_FIELD_STATIC_P (f, j),
TYPE_VARARGS (TYPE_FN_FIELD_TYPE (f, j)),
TYPE_NFIELDS (TYPE_FN_FIELD_TYPE (f, j)),
TYPE_FN_FIELD_ARGS (f, j), args))
{
if (TYPE_FN_FIELD_VIRTUAL_P (f, j))
return value_virtual_fn_field (arg1p, f, j, type, offset);
if (TYPE_FN_FIELD_STATIC_P (f, j) && static_memfuncp)
*static_memfuncp = 1;
v = value_fn_field (arg1p, f, j, type, offset);
if (v != NULL)
return v;
}
j--;
}
}
}
for (i = TYPE_N_BASECLASSES (type) - 1; i >= 0; i--)
{
int base_offset;
if (BASETYPE_VIA_VIRTUAL (type, i))
{
if (TYPE_HAS_VTABLE (type))
{
/* HP aCC compiled type, search for virtual base offset
according to HP/Taligent runtime spec. */
int skip;
find_rt_vbase_offset (type, TYPE_BASECLASS (type, i),
VALUE_CONTENTS_ALL (*arg1p),
offset + VALUE_EMBEDDED_OFFSET (*arg1p),
&base_offset, &skip);
if (skip >= 0)
error ("Virtual base class offset not found in vtable");
}
else
{
struct type *baseclass = check_typedef (TYPE_BASECLASS (type, i));
char *base_valaddr;
/* The virtual base class pointer might have been clobbered by the
user program. Make sure that it still points to a valid memory
location. */
if (offset < 0 || offset >= TYPE_LENGTH (type))
{
base_valaddr = (char *) alloca (TYPE_LENGTH (baseclass));
if (target_read_memory (VALUE_ADDRESS (*arg1p)
+ VALUE_OFFSET (*arg1p) + offset,
base_valaddr,
TYPE_LENGTH (baseclass)) != 0)
error ("virtual baseclass botch");
}
else
base_valaddr = VALUE_CONTENTS (*arg1p) + offset;
base_offset =
baseclass_offset (type, i, base_valaddr,
VALUE_ADDRESS (*arg1p)
+ VALUE_OFFSET (*arg1p) + offset);
if (base_offset == -1)
error ("virtual baseclass botch");
}
}
else
{
base_offset = TYPE_BASECLASS_BITPOS (type, i) / 8;
}
v = search_struct_method (name, arg1p, args, base_offset + offset,
static_memfuncp, TYPE_BASECLASS (type, i));
if (v == (struct value *) - 1)
{
name_matched = 1;
}
else if (v)
{
/* FIXME-bothner: Why is this commented out? Why is it here? */
/* *arg1p = arg1_tmp; */
return v;
}
}
if (name_matched)
return (struct value *) - 1;
else
return NULL;
}
/* Given *ARGP, a value of type (pointer to a)* structure/union,
   extract the component named NAME from the ultimate target
   structure/union and return it as a value with its appropriate type.
   C++: ARGS is a list of argument types to aid in the selection of
   an appropriate method. Also, handle derived types.
   STATIC_MEMFUNCP, if non-NULL, points to a caller-supplied location
   where the truthvalue of whether the function that was resolved was
   a static member function or not is stored.
   ERR is used in error messages when *ARGP's type is wrong or the
   component is not found. */
struct value *
value_struct_elt (struct value **argp, struct value **args,
char *name, int *static_memfuncp, char *err)
{
struct type *t;
struct value *v;
COERCE_ARRAY (*argp);
t = check_typedef (VALUE_TYPE (*argp));
/* Follow pointers until we get to a non-pointer. */
while (TYPE_CODE (t) == TYPE_CODE_PTR || TYPE_CODE (t) == TYPE_CODE_REF)
{
*argp = value_ind (*argp);
/* Don't coerce fn pointer to fn and then back again! */
if (TYPE_CODE (VALUE_TYPE (*argp)) != TYPE_CODE_FUNC)
COERCE_ARRAY (*argp);
t = check_typedef (VALUE_TYPE (*argp));
}
if (TYPE_CODE (t) == TYPE_CODE_MEMBER)
error ("not implemented: member type in value_struct_elt");
if (TYPE_CODE (t) != TYPE_CODE_STRUCT
&& TYPE_CODE (t) != TYPE_CODE_UNION)
error ("Attempt to extract a component of a value that is not a %s.", err);
/* Assume it's not, unless we see that it is. */
if (static_memfuncp)
*static_memfuncp = 0;
if (!args)
{
/* No arguments were supplied: NAME may name either a field or a method. */
/* Try as a field first, because if we succeed, there
is less work to be done. */
v = search_struct_field (name, *argp, 0, t, 0);
if (v)
return v;
/* C++: If it was not found as a data field, then try to
return it as a pointer to a method. */
if (destructor_name_p (name, t))
error ("Cannot get value of destructor");
v = search_struct_method (name, argp, args, 0, static_memfuncp, t);
if (v == (struct value *) - 1)
error ("Cannot take address of a method");
else if (v == 0)
{
if (TYPE_NFN_FIELDS (t))
error ("There is no member or method named %s.", name);
else
error ("There is no member named %s.", name);
}
return v;
}
if (destructor_name_p (name, t))
{
if (!args[1])
{
/* Destructors are a special case. */
int m_index, f_index;
v = NULL;
if (get_destructor_fn_field (t, &m_index, &f_index))
{
v = value_fn_field (NULL, TYPE_FN_FIELDLIST1 (t, m_index),
f_index, NULL, 0);
}
if (v == NULL)
error ("could not find destructor function named %s.", name);
else
return v;
}
else
{
error ("destructor should not have any arguments");
}
}
else
v = search_struct_method (name, argp, args, 0, static_memfuncp, t);
if (v == (struct value *) - 1)
{
error ("One of the arguments you tried to pass to %s could not be converted to what the function wants.", name);
}
else if (v == 0)
{
/* See if user tried to invoke data as function. If so,
hand it back. If it's not callable (i.e., a pointer to function),
gdb should give an error. */
v = search_struct_field (name, *argp, 0, t, 0);
}
if (!v)
error ("Structure has no component named %s.", name);
return v;
}
/* Search through the methods of an object (and its bases)
* to find a specified method. Return the pointer to the
* fn_field list of overloaded instances.
* Helper function for value_find_oload_list.
* ARGP is a pointer to a pointer to a value (the object)
* METHOD is a string containing the method name
* OFFSET is the offset within the value
* TYPE is the assumed type of the object
* NUM_FNS is the number of overloaded instances
* BASETYPE is set to the actual type of the subobject where the method is found
* BOFFSET is the offset of the base subobject where the method is found */
static struct fn_field *
find_method_list (struct value **argp, char *method, int offset,
struct type *type, int *num_fns,
struct type **basetype, int *boffset)
{
int i;
struct fn_field *f;
CHECK_TYPEDEF (type);
*num_fns = 0;
/* First check in object itself */
for (i = TYPE_NFN_FIELDS (type) - 1; i >= 0; i--)
{
/* pai: FIXME What about operators and type conversions? */
char *fn_field_name = TYPE_FN_FIELDLIST_NAME (type, i);
if (fn_field_name && (strcmp_iw (fn_field_name, method) == 0))
{
int len = TYPE_FN_FIELDLIST_LENGTH (type, i);
struct fn_field *f = TYPE_FN_FIELDLIST1 (type, i);
*num_fns = len;
*basetype = type;
*boffset = offset;
/* Resolve any stub methods. */
check_stub_method_group (type, i);
return f;
}
}
/* Not found in object, check in base subobjects */
for (i = TYPE_N_BASECLASSES (type) - 1; i >= 0; i--)
{
int base_offset;
if (BASETYPE_VIA_VIRTUAL (type, i))
{
if (TYPE_HAS_VTABLE (type))
{
/* HP aCC compiled type, search for virtual base offset
* according to HP/Taligent runtime spec. */
int skip;
find_rt_vbase_offset (type, TYPE_BASECLASS (type, i),
VALUE_CONTENTS_ALL (*argp),
offset + VALUE_EMBEDDED_OFFSET (*argp),
&base_offset, &skip);
if (skip >= 0)
error ("Virtual base class offset not found in vtable");
}
else
{
/* probably g++ runtime model */
base_offset = VALUE_OFFSET (*argp) + offset;
base_offset =
baseclass_offset (type, i,
VALUE_CONTENTS (*argp) + base_offset,
VALUE_ADDRESS (*argp) + base_offset);
if (base_offset == -1)
error ("virtual baseclass botch");
}
}
else
/* non-virtual base, simply use bit position from debug info */
{
base_offset = TYPE_BASECLASS_BITPOS (type, i) / 8;
}
f = find_method_list (argp, method, base_offset + offset,
TYPE_BASECLASS (type, i), num_fns, basetype,
boffset);
if (f)
return f;
}
return NULL;
}
/* Return the list of overloaded methods of a specified name.
* ARGP is a pointer to a pointer to a value (the object)
* METHOD is the method name
* OFFSET is the offset within the value contents
* NUM_FNS is the number of overloaded instances
* BASETYPE is set to the type of the base subobject that defines the method
* BOFFSET is the offset of the base subobject which defines the method */
struct fn_field *
value_find_oload_method_list (struct value **argp, char *method, int offset,
int *num_fns, struct type **basetype,
int *boffset)
{
struct type *t;
t = check_typedef (VALUE_TYPE (*argp));
/* code snarfed from value_struct_elt */
while (TYPE_CODE (t) == TYPE_CODE_PTR || TYPE_CODE (t) == TYPE_CODE_REF)
{
*argp = value_ind (*argp);
/* Don't coerce fn pointer to fn and then back again! */
if (TYPE_CODE (VALUE_TYPE (*argp)) != TYPE_CODE_FUNC)
COERCE_ARRAY (*argp);
t = check_typedef (VALUE_TYPE (*argp));
}
if (TYPE_CODE (t) == TYPE_CODE_MEMBER)
error ("Not implemented: member type in value_find_oload_method_list");
if (TYPE_CODE (t) != TYPE_CODE_STRUCT
&& TYPE_CODE (t) != TYPE_CODE_UNION)
error ("Attempt to extract a component of a value that is not a struct or union");
return find_method_list (argp, method, 0, t, num_fns, basetype, boffset);
}
/* Given an array of argument types (ARGTYPES) (which includes an
entry for "this" in the case of C++ methods), the number of
arguments NARGS, the NAME of a function whether it's a method or
not (METHOD), and the degree of laxness (LAX) in conforming to
overload resolution rules in ANSI C++, find the best function that
matches on the argument types according to the overload resolution
rules.
In the case of class methods, the parameter OBJ is an object value
in which to search for overloaded methods.
In the case of non-method functions, the parameter FSYM is a symbol
corresponding to one of the overloaded functions.
Return value is an integer: 0 -> good match, 10 -> debugger applied
non-standard coercions, 100 -> incompatible.
If a method is being searched for, VALP will hold the value.
If a non-method is being searched for, SYMP will hold the symbol for it.
If a method is being searched for, and it is a static method,
then STATICP will point to a non-zero value.
Note: This function does *not* check the value of
overload_resolution. Caller must check it to see whether overload
resolution is permitted.
*/
int
find_overload_match (struct type **arg_types, int nargs, char *name, int method,
int lax, struct value **objp, struct symbol *fsym,
struct value **valp, struct symbol **symp, int *staticp)
{
struct value *obj = (objp ? *objp : NULL);
int oload_champ; /* Index of best overloaded function */
struct badness_vector *oload_champ_bv = NULL; /* The measure for the current best match */
struct value *temp = obj;
struct fn_field *fns_ptr = NULL; /* For methods, the list of overloaded methods */
struct symbol **oload_syms = NULL; /* For non-methods, the list of overloaded function symbols */
int num_fns = 0; /* Number of overloaded instances being considered */
struct type *basetype = NULL;
int boffset;
int ix;
int static_offset;
struct cleanup *old_cleanups = NULL;
const char *obj_type_name = NULL;
char *func_name = NULL;
enum oload_classification match_quality;
/* Get the list of overloaded methods or functions */
if (method)
{
obj_type_name = TYPE_NAME (VALUE_TYPE (obj));
/* Hack: evaluate_subexp_standard often passes in a pointer
value rather than the object itself, so try again */
if ((!obj_type_name || !*obj_type_name) &&
(TYPE_CODE (VALUE_TYPE (obj)) == TYPE_CODE_PTR))
obj_type_name = TYPE_NAME (TYPE_TARGET_TYPE (VALUE_TYPE (obj)));
fns_ptr = value_find_oload_method_list (&temp, name, 0,
&num_fns,
&basetype, &boffset);
if (!fns_ptr || !num_fns)
error ("Couldn't find method %s%s%s",
obj_type_name,
(obj_type_name && *obj_type_name) ? "::" : "",
name);
/* If we are dealing with stub method types, they should have
been resolved by find_method_list via value_find_oload_method_list
above. */
gdb_assert (TYPE_DOMAIN_TYPE (fns_ptr[0].type) != NULL);
oload_champ = find_oload_champ (arg_types, nargs, method, num_fns,
fns_ptr, oload_syms, &oload_champ_bv);
}
else
{
const char *qualified_name = SYMBOL_CPLUS_DEMANGLED_NAME (fsym);
func_name = cp_func_name (qualified_name);
/* If the name is NULL this must be a C-style function.
Just return the same symbol. */
if (func_name == NULL)
{
*symp = fsym;
return 0;
}
old_cleanups = make_cleanup (xfree, func_name);
make_cleanup (xfree, oload_syms);
make_cleanup (xfree, oload_champ_bv);
oload_champ = find_oload_champ_namespace (arg_types, nargs,
func_name,
qualified_name,
&oload_syms,
&oload_champ_bv);
}
/* Check how bad the best match is. */
match_quality
= classify_oload_match (oload_champ_bv, nargs,
oload_method_static (method, fns_ptr,
oload_champ));
if (match_quality == INCOMPATIBLE)
{
if (method)
error ("Cannot resolve method %s%s%s to any overloaded instance",
obj_type_name,
(obj_type_name && *obj_type_name) ? "::" : "",
name);
else
error ("Cannot resolve function %s to any overloaded instance",
func_name);
}
else if (match_quality == NON_STANDARD)
{
if (method)
warning ("Using non-standard conversion to match method %s%s%s to supplied arguments",
obj_type_name,
(obj_type_name && *obj_type_name) ? "::" : "",
name);
else
warning ("Using non-standard conversion to match function %s to supplied arguments",
func_name);
}
if (method)
{
if (staticp != NULL)
*staticp = oload_method_static (method, fns_ptr, oload_champ);
if (TYPE_FN_FIELD_VIRTUAL_P (fns_ptr, oload_champ))
*valp = value_virtual_fn_field (&temp, fns_ptr, oload_champ, basetype, boffset);
else
*valp = value_fn_field (&temp, fns_ptr, oload_champ, basetype, boffset);
}
else
{
*symp = oload_syms[oload_champ];
}
if (objp)
{
if (TYPE_CODE (VALUE_TYPE (temp)) != TYPE_CODE_PTR
&& TYPE_CODE (VALUE_TYPE (*objp)) == TYPE_CODE_PTR)
{
temp = value_addr (temp);
}
*objp = temp;
}
if (old_cleanups != NULL)
do_cleanups (old_cleanups);
switch (match_quality)
{
case INCOMPATIBLE:
return 100;
case NON_STANDARD:
return 10;
default: /* STANDARD */
return 0;
}
}
/* Find the best overload match, searching for FUNC_NAME in namespaces
contained in QUALIFIED_NAME until it either finds a good match or
runs out of namespaces. It stores the overloaded functions in
*OLOAD_SYMS, and the badness vector in *OLOAD_CHAMP_BV. The
calling function is responsible for freeing *OLOAD_SYMS and
*OLOAD_CHAMP_BV. */
static int
find_oload_champ_namespace (struct type **arg_types, int nargs,
const char *func_name,
const char *qualified_name,
struct symbol ***oload_syms,
struct badness_vector **oload_champ_bv)
{
int oload_champ;
find_oload_champ_namespace_loop (arg_types, nargs,
func_name,
qualified_name, 0,
oload_syms, oload_champ_bv,
&oload_champ);
return oload_champ;
}
/* Helper function for find_oload_champ_namespace; NAMESPACE_LEN is
how deep we've looked for namespaces, and the champ is stored in
OLOAD_CHAMP. The return value is 1 if the champ is a good one, 0
if it isn't.
It is the caller's responsibility to free *OLOAD_SYMS and
*OLOAD_CHAMP_BV. */
static int
find_oload_champ_namespace_loop (struct type **arg_types, int nargs,
const char *func_name,
const char *qualified_name,
int namespace_len,
struct symbol ***oload_syms,
struct badness_vector **oload_champ_bv,
int *oload_champ)
{
int next_namespace_len = namespace_len;
int searched_deeper = 0;
int num_fns = 0;
struct cleanup *old_cleanups;
int new_oload_champ;
struct symbol **new_oload_syms;
struct badness_vector *new_oload_champ_bv;
char *new_namespace;
if (next_namespace_len != 0)
{
gdb_assert (qualified_name[next_namespace_len] == ':');
next_namespace_len += 2;
}
next_namespace_len
+= cp_find_first_component (qualified_name + next_namespace_len);
/* Initialize these to values that can safely be xfree'd. */
*oload_syms = NULL;
*oload_champ_bv = NULL;
/* First, see if we have a deeper namespace we can search in. If we
get a good match there, use it. */
if (qualified_name[next_namespace_len] == ':')
{
searched_deeper = 1;
if (find_oload_champ_namespace_loop (arg_types, nargs,
func_name, qualified_name,
next_namespace_len,
oload_syms, oload_champ_bv,
oload_champ))
{
return 1;
}
}
/* If we reach here, either we're in the deepest namespace or we
didn't find a good match in a deeper namespace. But, in the
latter case, we still have a bad match in a deeper namespace;
note that we might not find any match at all in the current
namespace. (There's always a match in the deepest namespace,
because this overload mechanism only gets called if there's a
function symbol to start off with.) */
old_cleanups = make_cleanup (xfree, *oload_syms);
old_cleanups = make_cleanup (xfree, *oload_champ_bv);
new_namespace = alloca (namespace_len + 1);
strncpy (new_namespace, qualified_name, namespace_len);
new_namespace[namespace_len] = '\0';
new_oload_syms = make_symbol_overload_list (func_name,
new_namespace);
while (new_oload_syms[num_fns])
++num_fns;
new_oload_champ = find_oload_champ (arg_types, nargs, 0, num_fns,
NULL, new_oload_syms,
&new_oload_champ_bv);
/* Case 1: We found a good match. Free earlier matches (if any),
and return it. Case 2: We didn't find a good match, but we're
not the deepest function. Then go with the bad match that the
deeper function found. Case 3: We found a bad match, and we're
the deepest function. Then return what we found, even though
it's a bad match. */
if (new_oload_champ != -1
&& classify_oload_match (new_oload_champ_bv, nargs, 0) == STANDARD)
{
*oload_syms = new_oload_syms;
*oload_champ = new_oload_champ;
*oload_champ_bv = new_oload_champ_bv;
do_cleanups (old_cleanups);
return 1;
}
else if (searched_deeper)
{
xfree (new_oload_syms);
xfree (new_oload_champ_bv);
discard_cleanups (old_cleanups);
return 0;
}
else
{
gdb_assert (new_oload_champ != -1);
*oload_syms = new_oload_syms;
*oload_champ = new_oload_champ;
*oload_champ_bv = new_oload_champ_bv;
discard_cleanups (old_cleanups);
return 0;
}
}
/* Look for a function to take NARGS args of types ARG_TYPES. Find
the best match from among the overloaded methods or functions
(depending on METHOD) given by FNS_PTR or OLOAD_SYMS, respectively.
The number of methods/functions in the list is given by NUM_FNS.
Return the index of the best match; store an indication of the
quality of the match in OLOAD_CHAMP_BV.
It is the caller's responsibility to free *OLOAD_CHAMP_BV. */
static int
find_oload_champ (struct type **arg_types, int nargs, int method,
int num_fns, struct fn_field *fns_ptr,
struct symbol **oload_syms,
struct badness_vector **oload_champ_bv)
{
int ix;
struct badness_vector *bv; /* A measure of how good an overloaded instance is */
int oload_champ = -1; /* Index of best overloaded function */
int oload_ambiguous = 0; /* Current ambiguity state for overload resolution */
/* 0 => no ambiguity, 1 => two good funcs, 2 => incomparable funcs */
*oload_champ_bv = NULL;
/* Consider each candidate in turn */
for (ix = 0; ix < num_fns; ix++)
{
int jj;
int static_offset = oload_method_static (method, fns_ptr, ix);
int nparms;
struct type **parm_types;
if (method)
{
nparms = TYPE_NFIELDS (TYPE_FN_FIELD_TYPE (fns_ptr, ix));
}
else
{
/* For non-methods, the parameter count comes from the
   function symbol's type. */
nparms = TYPE_NFIELDS (SYMBOL_TYPE (oload_syms[ix]));
}
/* Prepare array of parameter types */
parm_types = (struct type **) xmalloc (nparms * (sizeof (struct type *)));
for (jj = 0; jj < nparms; jj++)
parm_types[jj] = (method
? (TYPE_FN_FIELD_ARGS (fns_ptr, ix)[jj].type)
: TYPE_FIELD_TYPE (SYMBOL_TYPE (oload_syms[ix]), jj));
/* Compare parameter types to supplied argument types. Skip THIS for
static methods. */
bv = rank_function (parm_types, nparms, arg_types + static_offset,
nargs - static_offset);
if (!*oload_champ_bv)
{
*oload_champ_bv = bv;
oload_champ = 0;
}
else
/* See whether current candidate is better or worse than previous best */
switch (compare_badness (bv, *oload_champ_bv))
{
case 0:
oload_ambiguous = 1; /* top two contenders are equally good */
break;
case 1:
oload_ambiguous = 2; /* incomparable top contenders */
break;
case 2:
*oload_champ_bv = bv; /* new champion, record details */
oload_ambiguous = 0;
oload_champ = ix;
break;
case 3:
default:
break;
}
xfree (parm_types);
if (overload_debug)
{
if (method)
fprintf_filtered (gdb_stderr, "Overloaded method instance %s, # of parms %d\n", fns_ptr[ix].physname, nparms);
else
fprintf_filtered (gdb_stderr, "Overloaded function instance %s, # of parms %d\n", SYMBOL_DEMANGLED_NAME (oload_syms[ix]), nparms);
for (jj = 0; jj < nargs - static_offset; jj++)
fprintf_filtered (gdb_stderr, "...Badness @ %d : %d\n", jj, bv->rank[jj]);
fprintf_filtered (gdb_stderr, "Overload resolution champion is %d, ambiguous? %d\n", oload_champ, oload_ambiguous);
}
}
return oload_champ;
}
/* Return 1 if we're looking at a static method, 0 if we're looking at
a non-static method or a function that isn't a method. */
static int
oload_method_static (int method, struct fn_field *fns_ptr, int index)
{
if (method && TYPE_FN_FIELD_STATIC_P (fns_ptr, index))
return 1;
else
return 0;
}
/* Check how good an overload match OLOAD_CHAMP_BV represents. */
static enum oload_classification
classify_oload_match (struct badness_vector *oload_champ_bv,
int nargs,
int static_offset)
{
int ix;
for (ix = 1; ix <= nargs - static_offset; ix++)
{
if (oload_champ_bv->rank[ix] >= 100)
return INCOMPATIBLE; /* truly mismatched types */
else if (oload_champ_bv->rank[ix] >= 10)
return NON_STANDARD; /* non-standard type conversions needed */
}
return STANDARD; /* Only standard conversions needed. */
}
/* C++: return 1 if NAME is a legitimate name for the destructor
   of type TYPE.  If NAME begins with '~' but does not match TYPE's
   name, an error is signaled; otherwise return 0. */
int
destructor_name_p (const char *name, const struct type *type)
{
/* destructors are a special case. */
if (name[0] == '~')
{
char *dname = type_name_no_tag (type);
char *cp = strchr (dname, '<');
unsigned int len;
/* Do not compare the template part for template classes. */
if (cp == NULL)
len = strlen (dname);
else
len = cp - dname;
if (strlen (name + 1) != len || strncmp (dname, name + 1, len) != 0)
error ("name of destructor must equal name of class");
else
return 1;
}
return 0;
}
/* Helper function for check_field: Given TYPE, a structure/union,
return 1 if the component named NAME from the ultimate
target structure/union is defined, otherwise, return 0. */
static int
check_field_in (struct type *type, const char *name)
{
int i;
for (i = TYPE_NFIELDS (type) - 1; i >= TYPE_N_BASECLASSES (type); i--)
{
char *t_field_name = TYPE_FIELD_NAME (type, i);
if (t_field_name && (strcmp_iw (t_field_name, name) == 0))
return 1;
}
/* C++: If it was not found as a data field, then try to
return it as a pointer to a method. */
/* Destructors are a special case. */
if (destructor_name_p (name, type))
{
int m_index, f_index;
return get_destructor_fn_field (type, &m_index, &f_index);
}
for (i = TYPE_NFN_FIELDS (type) - 1; i >= 0; --i)
{
if (strcmp_iw (TYPE_FN_FIELDLIST_NAME (type, i), name) == 0)
return 1;
}
for (i = TYPE_N_BASECLASSES (type) - 1; i >= 0; i--)
if (check_field_in (TYPE_BASECLASS (type, i), name))
return 1;
return 0;
}
/* C++: Given ARG1, a value of type (pointer to a)* structure/union,
return 1 if the component named NAME from the ultimate
target structure/union is defined, otherwise, return 0. */
int
check_field (struct value *arg1, const char *name)
{
struct type *t;
COERCE_ARRAY (arg1);
t = VALUE_TYPE (arg1);
/* Follow pointers until we get to a non-pointer. */
for (;;)
{
CHECK_TYPEDEF (t);
if (TYPE_CODE (t) != TYPE_CODE_PTR && TYPE_CODE (t) != TYPE_CODE_REF)
break;
t = TYPE_TARGET_TYPE (t);
}
if (TYPE_CODE (t) == TYPE_CODE_MEMBER)
error ("not implemented: member type in check_field");
if (TYPE_CODE (t) != TYPE_CODE_STRUCT
&& TYPE_CODE (t) != TYPE_CODE_UNION)
error ("Internal error: `this' is not an aggregate");
return check_field_in (t, name);
}
/* C++: Given an aggregate type CURTYPE, and a member name NAME,
return the appropriate member. This function is used to resolve
user expressions of the form "DOMAIN::NAME". For more details on
what happens, see the comment before
value_struct_elt_for_reference. */
struct value *
value_aggregate_elt (struct type *curtype,
char *name,
enum noside noside)
{
switch (TYPE_CODE (curtype))
{
case TYPE_CODE_STRUCT:
case TYPE_CODE_UNION:
return value_struct_elt_for_reference (curtype, 0, curtype, name, NULL,
noside);
case TYPE_CODE_NAMESPACE:
return value_namespace_elt (curtype, name, noside);
default:
internal_error (__FILE__, __LINE__,
"non-aggregate type in value_aggregate_elt");
}
}
/* C++: Given an aggregate type CURTYPE, and a member name NAME,
return the address of this member as a "pointer to member"
type. If INTYPE is non-null, then it will be the type
of the member we are looking for. This will help us resolve
"pointers to member functions". This function is used
to resolve user expressions of the form "DOMAIN::NAME". */
static struct value *
value_struct_elt_for_reference (struct type *domain, int offset,
struct type *curtype, char *name,
struct type *intype,
enum noside noside)
{
struct type *t = curtype;
int i;
struct value *v;
if (TYPE_CODE (t) != TYPE_CODE_STRUCT
&& TYPE_CODE (t) != TYPE_CODE_UNION)
error ("Internal error: non-aggregate type to value_struct_elt_for_reference");
for (i = TYPE_NFIELDS (t) - 1; i >= TYPE_N_BASECLASSES (t); i--)
{
char *t_field_name = TYPE_FIELD_NAME (t, i);
if (t_field_name && strcmp (t_field_name, name) == 0)
{
if (TYPE_FIELD_STATIC (t, i))
{
v = value_static_field (t, i);
if (v == NULL)
error ("static field %s has been optimized out",
name);
return v;
}
if (TYPE_FIELD_PACKED (t, i))
error ("pointers to bitfield members not allowed");
return value_from_longest
(lookup_reference_type (lookup_member_type (TYPE_FIELD_TYPE (t, i),
domain)),
offset + (LONGEST) (TYPE_FIELD_BITPOS (t, i) >> 3));
}
}
/* C++: If it was not found as a data field, then try to
return it as a pointer to a method. */
/* Destructors are a special case. */
if (destructor_name_p (name, t))
{
error ("member pointers to destructors not implemented yet");
}
/* Perform all necessary dereferencing. */
while (intype && TYPE_CODE (intype) == TYPE_CODE_PTR)
intype = TYPE_TARGET_TYPE (intype);
for (i = TYPE_NFN_FIELDS (t) - 1; i >= 0; --i)
{
char *t_field_name = TYPE_FN_FIELDLIST_NAME (t, i);
char dem_opname[64];
if (strncmp (t_field_name, "__", 2) == 0 ||
strncmp (t_field_name, "op", 2) == 0 ||
strncmp (t_field_name, "type", 4) == 0)
{
if (cplus_demangle_opname (t_field_name, dem_opname, DMGL_ANSI))
t_field_name = dem_opname;
else if (cplus_demangle_opname (t_field_name, dem_opname, 0))
t_field_name = dem_opname;
}
if (t_field_name && strcmp (t_field_name, name) == 0)
{
int j = TYPE_FN_FIELDLIST_LENGTH (t, i);
struct fn_field *f = TYPE_FN_FIELDLIST1 (t, i);
check_stub_method_group (t, i);
if (intype == 0 && j > 1)
error ("non-unique member `%s' requires type instantiation", name);
if (intype)
{
while (j--)
if (TYPE_FN_FIELD_TYPE (f, j) == intype)
break;
if (j < 0)
error ("no member function matches that type instantiation");
}
else
j = 0;
if (TYPE_FN_FIELD_VIRTUAL_P (f, j))
{
return value_from_longest
(lookup_reference_type
(lookup_member_type (TYPE_FN_FIELD_TYPE (f, j),
domain)),
(LONGEST) METHOD_PTR_FROM_VOFFSET (TYPE_FN_FIELD_VOFFSET (f, j)));
}
else
{
struct symbol *s = lookup_symbol (TYPE_FN_FIELD_PHYSNAME (f, j),
0, VAR_DOMAIN, 0, NULL);
if (s == NULL)
{
v = 0;
}
else
{
v = read_var_value (s, 0);
#if 0
VALUE_TYPE (v) = lookup_reference_type
(lookup_member_type (TYPE_FN_FIELD_TYPE (f, j),
domain));
#endif
}
return v;
}
}
}
for (i = TYPE_N_BASECLASSES (t) - 1; i >= 0; i--)
{
struct value *v;
int base_offset;
if (BASETYPE_VIA_VIRTUAL (t, i))
base_offset = 0;
else
base_offset = TYPE_BASECLASS_BITPOS (t, i) / 8;
v = value_struct_elt_for_reference (domain,
offset + base_offset,
TYPE_BASECLASS (t, i),
name,
intype,
noside);
if (v)
return v;
}
/* As a last chance, pretend that CURTYPE is a namespace, and look
it up that way; this (frequently) works for types nested inside
classes. */
return value_maybe_namespace_elt (curtype, name, noside);
}
/* C++: Return the member NAME of the namespace given by the type
CURTYPE. */
static struct value *
value_namespace_elt (const struct type *curtype,
char *name,
enum noside noside)
{
struct value *retval = value_maybe_namespace_elt (curtype, name,
noside);
if (retval == NULL)
error ("No symbol \"%s\" in namespace \"%s\".", name,
TYPE_TAG_NAME (curtype));
return retval;
}
/* A helper function used by value_namespace_elt and
value_struct_elt_for_reference. It looks up NAME inside the
context CURTYPE; this works if CURTYPE is a namespace or if CURTYPE
is a class and NAME refers to a type in CURTYPE itself (as opposed
to, say, some base class of CURTYPE). */
static struct value *
value_maybe_namespace_elt (const struct type *curtype,
char *name,
enum noside noside)
{
const char *namespace_name = TYPE_TAG_NAME (curtype);
struct symbol *sym;
sym = cp_lookup_symbol_namespace (namespace_name, name, NULL,
get_selected_block (0), VAR_DOMAIN,
NULL);
if (sym == NULL)
return NULL;
else if ((noside == EVAL_AVOID_SIDE_EFFECTS)
&& (SYMBOL_CLASS (sym) == LOC_TYPEDEF))
return allocate_value (SYMBOL_TYPE (sym));
else
return value_of_variable (sym, get_selected_block (0));
}
/* Given a pointer value V, find the real (RTTI) type
of the object it points to.
Other parameters FULL, TOP, USING_ENC as with value_rtti_type()
and refer to the values computed for the object pointed to. */
struct type *
value_rtti_target_type (struct value *v, int *full, int *top, int *using_enc)
{
struct value *target;
target = value_ind (v);
return value_rtti_type (target, full, top, using_enc);
}
/* Given a value pointed to by ARGP, check its real run-time type, and
if that is different from the enclosing type, create a new value
using the real run-time type as the enclosing type (and of the same
type as ARGP) and return it, with the embedded offset adjusted to
be the correct offset to the enclosed object
RTYPE is the type, and XFULL, XTOP, and XUSING_ENC are the other
parameters, computed by value_rtti_type(). If these are available,
they can be supplied and a second call to value_rtti_type() is avoided.
(Pass RTYPE == NULL if they're not available.)  */
struct value *
value_full_object (struct value *argp, struct type *rtype, int xfull, int xtop,
int xusing_enc)
{
struct type *real_type;
int full = 0;
int top = -1;
int using_enc = 0;
struct value *new_val;
if (rtype)
{
real_type = rtype;
full = xfull;
top = xtop;
using_enc = xusing_enc;
}
else
real_type = value_rtti_type (argp, &full, &top, &using_enc);
/* If no RTTI data, or if object is already complete, do nothing */
if (!real_type || real_type == VALUE_ENCLOSING_TYPE (argp))
return argp;
/* If we have the full object, but for some reason the enclosing
type is wrong, set it.  */
/* pai: FIXME -- sounds iffy */
if (full)
{
argp = value_change_enclosing_type (argp, real_type);
return argp;
}
/* Check if object is in memory */
if (VALUE_LVAL (argp) != lval_memory)
{
warning ("Couldn't retrieve complete object of RTTI type %s; object may be in register(s).", TYPE_NAME (real_type));
return argp;
}
/* All other cases -- retrieve the complete object */
/* Go back by the computed top_offset from the beginning of the object,
adjusting for the embedded offset of argp if that's what value_rtti_type
used for its computation. */
new_val = value_at_lazy (real_type, VALUE_ADDRESS (argp) - top +
(using_enc ? 0 : VALUE_EMBEDDED_OFFSET (argp)),
VALUE_BFD_SECTION (argp));
VALUE_TYPE (new_val) = VALUE_TYPE (argp);
VALUE_EMBEDDED_OFFSET (new_val) = using_enc ? top + VALUE_EMBEDDED_OFFSET (argp) : top;
return new_val;
}
/* Return the value of the local variable, if one exists.
Flag COMPLAIN signals an error if the request is made in an
inappropriate context. */
struct value *
value_of_local (const char *name, int complain)
{
struct symbol *func, *sym;
struct block *b;
struct value * ret;
if (deprecated_selected_frame == 0)
{
if (complain)
error ("no frame selected");
else
return 0;
}
func = get_frame_function (deprecated_selected_frame);
if (!func)
{
if (complain)
error ("no `%s' in nameless context", name);
else
return 0;
}
b = SYMBOL_BLOCK_VALUE (func);
if (dict_empty (BLOCK_DICT (b)))
{
if (complain)
error ("no args, no `%s'", name);
else
return 0;
}
/* Calling lookup_block_symbol is necessary to get the LOC_REGISTER
symbol instead of the LOC_ARG one (if both exist). */
sym = lookup_block_symbol (b, name, NULL, VAR_DOMAIN);
if (sym == NULL)
{
if (complain)
error ("current stack frame does not contain a variable named `%s'", name);
else
return NULL;
}
ret = read_var_value (sym, deprecated_selected_frame);
if (ret == 0 && complain)
error ("`%s' argument unreadable", name);
return ret;
}
/* C++/Objective-C: return the value of the class instance variable,
if one exists. Flag COMPLAIN signals an error if the request is
made in an inappropriate context. */
struct value *
value_of_this (int complain)
{
if (current_language->la_language == language_objc)
return value_of_local ("self", complain);
else
return value_of_local ("this", complain);
}
/* Create a slice (sub-string, sub-array) of ARRAY, that is LENGTH elements
long, starting at LOWBOUND. The result has the same lower bound as
the original ARRAY. */
struct value *
value_slice (struct value *array, int lowbound, int length)
{
struct type *slice_range_type, *slice_type, *range_type;
LONGEST lowerbound, upperbound;
struct value *slice;
struct type *array_type;
array_type = check_typedef (VALUE_TYPE (array));
COERCE_VARYING_ARRAY (array, array_type);
if (TYPE_CODE (array_type) != TYPE_CODE_ARRAY
&& TYPE_CODE (array_type) != TYPE_CODE_STRING
&& TYPE_CODE (array_type) != TYPE_CODE_BITSTRING)
error ("cannot take slice of non-array");
range_type = TYPE_INDEX_TYPE (array_type);
if (get_discrete_bounds (range_type, &lowerbound, &upperbound) < 0)
error ("slice from bad array or bitstring");
if (lowbound < lowerbound || length < 0
|| lowbound + length - 1 > upperbound)
error ("slice out of range");
/* FIXME-type-allocation: need a way to free this type when we are
done with it. */
slice_range_type = create_range_type ((struct type *) NULL,
TYPE_TARGET_TYPE (range_type),
lowbound, lowbound + length - 1);
if (TYPE_CODE (array_type) == TYPE_CODE_BITSTRING)
{
int i;
slice_type = create_set_type ((struct type *) NULL, slice_range_type);
TYPE_CODE (slice_type) = TYPE_CODE_BITSTRING;
slice = value_zero (slice_type, not_lval);
for (i = 0; i < length; i++)
{
int element = value_bit_index (array_type,
VALUE_CONTENTS (array),
lowbound + i);
if (element < 0)
error ("internal error accessing bitstring");
else if (element > 0)
{
int j = i % TARGET_CHAR_BIT;
if (BITS_BIG_ENDIAN)
j = TARGET_CHAR_BIT - 1 - j;
VALUE_CONTENTS_RAW (slice)[i / TARGET_CHAR_BIT] |= (1 << j);
}
}
/* We should set the address, bitssize, and bitspos, so the slice
can be used on the LHS, but that may require extensions to
value_assign. For now, just leave as a non_lval. FIXME. */
}
else
{
struct type *element_type = TYPE_TARGET_TYPE (array_type);
LONGEST offset
= (lowbound - lowerbound) * TYPE_LENGTH (check_typedef (element_type));
slice_type = create_array_type ((struct type *) NULL, element_type,
slice_range_type);
TYPE_CODE (slice_type) = TYPE_CODE (array_type);
slice = allocate_value (slice_type);
if (VALUE_LAZY (array))
VALUE_LAZY (slice) = 1;
else
memcpy (VALUE_CONTENTS (slice), VALUE_CONTENTS (array) + offset,
TYPE_LENGTH (slice_type));
if (VALUE_LVAL (array) == lval_internalvar)
VALUE_LVAL (slice) = lval_internalvar_component;
else
VALUE_LVAL (slice) = VALUE_LVAL (array);
VALUE_ADDRESS (slice) = VALUE_ADDRESS (array);
VALUE_OFFSET (slice) = VALUE_OFFSET (array) + offset;
}
return slice;
}
/* Create a value for a FORTRAN complex number. Currently most of
the time values are coerced to COMPLEX*16 (i.e. a complex number
composed of 2 doubles). This really should be a smarter routine
that figures out precision intelligently as opposed to assuming
doubles. FIXME: fmb */
struct value *
value_literal_complex (struct value *arg1, struct value *arg2, struct type *type)
{
struct value *val;
struct type *real_type = TYPE_TARGET_TYPE (type);
val = allocate_value (type);
arg1 = value_cast (real_type, arg1);
arg2 = value_cast (real_type, arg2);
memcpy (VALUE_CONTENTS_RAW (val),
VALUE_CONTENTS (arg1), TYPE_LENGTH (real_type));
memcpy (VALUE_CONTENTS_RAW (val) + TYPE_LENGTH (real_type),
VALUE_CONTENTS (arg2), TYPE_LENGTH (real_type));
return val;
}
/* Cast a value into the appropriate complex data type. */
static struct value *
cast_into_complex (struct type *type, struct value *val)
{
struct type *real_type = TYPE_TARGET_TYPE (type);
if (TYPE_CODE (VALUE_TYPE (val)) == TYPE_CODE_COMPLEX)
{
struct type *val_real_type = TYPE_TARGET_TYPE (VALUE_TYPE (val));
struct value *re_val = allocate_value (val_real_type);
struct value *im_val = allocate_value (val_real_type);
memcpy (VALUE_CONTENTS_RAW (re_val),
VALUE_CONTENTS (val), TYPE_LENGTH (val_real_type));
memcpy (VALUE_CONTENTS_RAW (im_val),
VALUE_CONTENTS (val) + TYPE_LENGTH (val_real_type),
TYPE_LENGTH (val_real_type));
return value_literal_complex (re_val, im_val, type);
}
else if (TYPE_CODE (VALUE_TYPE (val)) == TYPE_CODE_FLT
|| TYPE_CODE (VALUE_TYPE (val)) == TYPE_CODE_INT)
return value_literal_complex (val, value_zero (real_type, not_lval), type);
else
error ("cannot cast non-number to complex");
}
void
_initialize_valops (void)
{
#if 0
deprecated_add_show_from_set
(add_set_cmd ("abandon", class_support, var_boolean, (char *) &auto_abandon,
"Set automatic abandonment of expressions upon failure.",
&setlist),
&showlist);
#endif
deprecated_add_show_from_set
(add_set_cmd ("overload-resolution", class_support, var_boolean, (char *) &overload_resolution,
"Set overload resolution in evaluating C++ functions.",
&setlist),
&showlist);
overload_resolution = 1;
}
```
|
```ruby
require 'socket'
server = TCPServer.new('127.0.0.1', 0)
loop do
socket = server.accept
begin
response = "Hello, World from TruffleRuby! #{socket.gets}"
socket.print "HTTP/1.1 200 OK\r\n" +
"Content-Type: text/plain\r\n" +
"Content-Length: #{response.bytesize}\r\n" +
"Connection: close\r\n\r\n"
socket.print response
ensure
socket.close
end
end
```
|
```cpp
//===- ReduceInstructions.h -------------------------------------*- C++ -*-===//
//
// See path_to_url for license information.
//
//===your_sha256_hash------===//
//
// This file implements a function which calls the Generic Delta pass in order
// to reduce uninteresting instructions from defined functions.
//
//===your_sha256_hash------===//
#ifndef LLVM_TOOLS_LLVM_REDUCE_DELTAS_REDUCEINSTRUCTIONS_H
#define LLVM_TOOLS_LLVM_REDUCE_DELTAS_REDUCEINSTRUCTIONS_H
#include "Delta.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"
#include "llvm/Transforms/Utils/Cloning.h"
namespace llvm {
void reduceInstructionsDeltaPass(TestRunner &Test);
} // namespace llvm
#endif
```
|
James Everett Martin (June 22, 1932 – June 3, 2017) was the President of the University of Arkansas from 1980 to 1984, and of Auburn University from 1984 to 1992.
Biography
James Everett Martin was born on June 22, 1932, in Vinemont, Alabama. He graduated from Auburn University with a B.S. in 1954, an M.A. from North Carolina State University in 1956 and a PhD from Iowa State University in 1962.
He was a professor at Oklahoma State University and the University of Maryland. He served as Dean of Agriculture and Life Sciences at Virginia Polytechnic Institute and State University. In 1975, he was appointed vice president for the Division of Agriculture at the University of Arkansas. He was then its president from 1980 to 1982, and became the first president of the University of Arkansas System from 1982 to 1984. From 1984 until his retirement in 1992, he served as President of Auburn University.
He was a member of the Auburn Rotary Club. The James E. Martin Aquatics Center is named after him.
During his college days at Auburn, James met an Auburn cheerleader named Ann Freeman. They later married and had three children, and five grandchildren.
He died on June 3, 2017, at the age of 84.
References
1932 births
2017 deaths
People from Cullman County, Alabama
Auburn University alumni
Auburn Tigers men's basketball players
North Carolina State University alumni
Iowa State University alumni
Oklahoma State University faculty
University of Maryland, College Park faculty
Virginia Tech faculty
Presidents of Auburn University
Leaders of the University of Arkansas
Presidents of the University of Arkansas System
|
```rust
mod shared;
pub use shared::*;
mod locale;
pub use locale::*;
```
|
3+2 or Three Plus Two were a Belarusian pop group that represented Belarus in the Eurovision Song Contest 2010 in Oslo, Norway.
Formation
The band was formed by the Belarusian television channel ONT and the project "New Voices of Belarus" in 2009. All of the group members were finalists of that TV show.
Eurovision Song Contest 2010
After their victory in the national selection for Eurovision 2010, public interest in the band grew rapidly. On 25 February 2010, 3+2 were chosen internally to represent Belarus in the Eurovision Song Contest 2010 with the song "Butterflies", performing in the first semi-final to be held on 25 May 2010.
Songwriters from Belarus and neighbouring countries offered the band their compositions. The producers of 3+2 considered different variants of the song, video clips and performance. The band's director decided to contact the well-known producer and writer Max Fadeev for some conceptual ideas. Within a week he managed to create a complete original set for the band. The song "Butterflies" was written by Max Fadeev, Swedish composer Robert Wells and Polish poet Malka Chaplin especially for the group. The band had previously chosen to perform "Far Away" at the contest, but later switched to "Butterflies" due to poor reactions on the internet. "Butterflies" placed 24th in the grand final with a total of 18 points.
References
Belarusian pop music groups
Eurovision Song Contest entrants for Belarus
Eurovision Song Contest entrants of 2010
|
```lua
--[[ This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
]]
local lang_fastmapping = require 'factories.language.fastmapping_factory'
local object_generator = require 'language.object_generator'
local factory = require 'levels.contributed.fast_mapping.factories.fast_mapping_factory'
local game = require 'dmlab.system.game'
local lcm = require 'language.make'
local maze_generation = require 'dmlab.system.maze_generation'
local placers = require 'language.placers'
local random = require 'common.random'
local selectors = require 'language.selectors'
local texter = require 'language.texter'
local ALL_ATTRIBUTES = {
shape = factory.EVAL_SHAPES,
pattern = {'solid'},
color = {"red"},
size = {"medium"},
}
local CONFOUNDER_SHAPES = {}
for i = 1, #factory.EVAL_SHAPES do
CONFOUNDER_SHAPES[i] = 'runcible' .. i
end
local OBJECT_COUNT = 2
local DISTRACTOR_OBJECT_COUNT = 1
local GOAL_REWARD = 10
local LEARNING_REWARD = 0
local EPISODE_LENGTH_SECONDS = 60
local fastMap = lang_fastmapping.buildMap(OBJECT_COUNT, DISTRACTOR_OBJECT_COUNT)
local fastTask = factory.createTask{
confoundNames = true,
confounderShapes = CONFOUNDER_SHAPES,
episodeLengthSeconds = EPISODE_LENGTH_SECONDS,
distractorObjectCount = DISTRACTOR_OBJECT_COUNT,
goalShapes = factory.EVAL_SHAPES,
objectCount = OBJECT_COUNT,
goalReward = GOAL_REWARD,
learningReward = LEARNING_REWARD,
}
return factory.createLevelApi{
episodeLengthSeconds = EPISODE_LENGTH_SECONDS,
instructor = factory.defaultInstructor,
levelMapSelector = selectors.createIdentity(fastMap),
objectContext = object_generator.createContext{attributes = ALL_ATTRIBUTES},
taskSelector = selectors.createDiscreteDistribution({{10, fastTask}}),
}
```
|
```cpp
#pragma once
#include <cassert>
#include <stdexcept> // for std::logic_error
#include <string>
#include <type_traits>
#include <vector>
#include <functional>
#include <iosfwd>
#include <base/defines.h>
#include <base/types.h>
#include <base/unaligned.h>
#include <base/simd.h>
#include <fmt/core.h>
#include <fmt/ostream.h>
#include <city.h>
#if defined(__SSE2__)
#include <emmintrin.h>
#endif
#if defined(__SSE4_2__)
#include <smmintrin.h>
#include <nmmintrin.h>
#define CRC_INT _mm_crc32_u64
#endif
#if defined(__aarch64__) && defined(__ARM_FEATURE_CRC32)
#include <arm_acle.h>
#define CRC_INT __crc32cd
#endif
#if defined(__aarch64__) && defined(__ARM_NEON)
#include <arm_neon.h>
#pragma clang diagnostic ignored "-Wreserved-identifier"
#endif
#if defined(__s390x__)
#include <base/crc32c_s390x.h>
#define CRC_INT s390x_crc32c
#endif
/**
* The std::string_view-like container to avoid creating strings to find substrings in the hash table.
*/
struct StringRef
{
const char * data = nullptr;
size_t size = 0;
/// Non-constexpr due to reinterpret_cast.
template <typename CharT>
requires (sizeof(CharT) == 1)
StringRef(const CharT * data_, size_t size_) : data(reinterpret_cast<const char *>(data_)), size(size_)
{
/// Sanity check for overflowed values.
assert(size < 0x8000000000000000ULL);
}
constexpr StringRef(const char * data_, size_t size_) : data(data_), size(size_) {}
StringRef(const std::string & s) : data(s.data()), size(s.size()) {} /// NOLINT
constexpr explicit StringRef(std::string_view s) : data(s.data()), size(s.size()) {}
constexpr StringRef(const char * data_) : StringRef(std::string_view{data_}) {} /// NOLINT
constexpr StringRef() = default;
bool empty() const { return size == 0; }
std::string toString() const { return std::string(data, size); }
explicit operator std::string() const { return toString(); }
std::string_view toView() const { return std::string_view(data, size); }
constexpr explicit operator std::string_view() const { return std::string_view(data, size); }
};
using StringRefs = std::vector<StringRef>;
#if defined(__SSE2__)
/** Compare strings for equality.
* The approach is controversial and does not win in all cases.
* For more information, see hash_map_string_2.cpp
*/
inline bool compare8(const char * p1, const char * p2)
{
return 0xFFFF == _mm_movemask_epi8(_mm_cmpeq_epi8(
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p1)),
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p2))));
}
inline bool compare64(const char * p1, const char * p2)
{
return 0xFFFF == _mm_movemask_epi8(
_mm_and_si128(
_mm_and_si128(
_mm_cmpeq_epi8(
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p1)),
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p2))),
_mm_cmpeq_epi8(
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p1) + 1),
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p2) + 1))),
_mm_and_si128(
_mm_cmpeq_epi8(
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p1) + 2),
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p2) + 2)),
_mm_cmpeq_epi8(
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p1) + 3),
_mm_loadu_si128(reinterpret_cast<const __m128i *>(p2) + 3)))));
}
#elif defined(__aarch64__) && defined(__ARM_NEON)
inline bool compare8(const char * p1, const char * p2)
{
uint64_t mask = getNibbleMask(vceqq_u8(
vld1q_u8(reinterpret_cast<const unsigned char *>(p1)), vld1q_u8(reinterpret_cast<const unsigned char *>(p2))));
return 0xFFFFFFFFFFFFFFFF == mask;
}
inline bool compare64(const char * p1, const char * p2)
{
uint64_t mask = getNibbleMask(vandq_u8(
vandq_u8(vceqq_u8(vld1q_u8(reinterpret_cast<const unsigned char *>(p1)), vld1q_u8(reinterpret_cast<const unsigned char *>(p2))),
vceqq_u8(vld1q_u8(reinterpret_cast<const unsigned char *>(p1 + 16)), vld1q_u8(reinterpret_cast<const unsigned char *>(p2 + 16)))),
vandq_u8(vceqq_u8(vld1q_u8(reinterpret_cast<const unsigned char *>(p1 + 32)), vld1q_u8(reinterpret_cast<const unsigned char *>(p2 + 32))),
vceqq_u8(vld1q_u8(reinterpret_cast<const unsigned char *>(p1 + 48)), vld1q_u8(reinterpret_cast<const unsigned char *>(p2 + 48))))));
return 0xFFFFFFFFFFFFFFFF == mask;
}
#endif
#if defined(__SSE2__) || (defined(__aarch64__) && defined(__ARM_NEON))
inline bool memequalWide(const char * p1, const char * p2, size_t size)
{
/** The order of branches and the trick with overlapping comparisons
* are the same as in memcpy implementation.
* See the comments in base/glibc-compatibility/memcpy/memcpy.h
*/
if (size <= 16)
{
if (size >= 8)
{
/// Chunks of 8..16 bytes.
return unalignedLoad<uint64_t>(p1) == unalignedLoad<uint64_t>(p2)
&& unalignedLoad<uint64_t>(p1 + size - 8) == unalignedLoad<uint64_t>(p2 + size - 8);
}
else if (size >= 4)
{
/// Chunks of 4..7 bytes.
return unalignedLoad<uint32_t>(p1) == unalignedLoad<uint32_t>(p2)
&& unalignedLoad<uint32_t>(p1 + size - 4) == unalignedLoad<uint32_t>(p2 + size - 4);
}
else if (size >= 2)
{
/// Chunks of 2..3 bytes.
return unalignedLoad<uint16_t>(p1) == unalignedLoad<uint16_t>(p2)
&& unalignedLoad<uint16_t>(p1 + size - 2) == unalignedLoad<uint16_t>(p2 + size - 2);
}
else if (size >= 1)
{
/// A single byte.
return *p1 == *p2;
}
return true;
}
while (size >= 64)
{
if (compare64(p1, p2))
{
p1 += 64;
p2 += 64;
size -= 64;
}
else
return false;
}
switch (size / 16) // NOLINT(bugprone-switch-missing-default-case)
{
case 3: if (!compare8(p1 + 32, p2 + 32)) return false; [[fallthrough]];
case 2: if (!compare8(p1 + 16, p2 + 16)) return false; [[fallthrough]];
case 1: if (!compare8(p1, p2)) return false; [[fallthrough]];
default: ;
}
return compare8(p1 + size - 16, p2 + size - 16);
}
#endif
inline bool operator== (StringRef lhs, StringRef rhs)
{
if (lhs.size != rhs.size)
return false;
if (lhs.size == 0)
return true;
#if defined(__SSE2__) || (defined(__aarch64__) && defined(__ARM_NEON))
return memequalWide(lhs.data, rhs.data, lhs.size);
#else
return 0 == memcmp(lhs.data, rhs.data, lhs.size);
#endif
}
inline bool operator!= (StringRef lhs, StringRef rhs)
{
return !(lhs == rhs);
}
inline bool operator< (StringRef lhs, StringRef rhs)
{
int cmp = memcmp(lhs.data, rhs.data, std::min(lhs.size, rhs.size));
return cmp < 0 || (cmp == 0 && lhs.size < rhs.size);
}
inline bool operator> (StringRef lhs, StringRef rhs)
{
int cmp = memcmp(lhs.data, rhs.data, std::min(lhs.size, rhs.size));
return cmp > 0 || (cmp == 0 && lhs.size > rhs.size);
}
/** Hash functions.
* You can use either CityHash64,
* or a function based on the crc32 instruction,
* which is obviously less qualitative, but on real data sets,
* when used in a hash table, works much faster.
* For more information, see hash_map_string_3.cpp
*/
struct StringRefHash64
{
size_t operator() (StringRef x) const
{
return CityHash_v1_0_2::CityHash64(x.data, x.size);
}
};
#if defined(CRC_INT)
/// Parts are taken from CityHash.
inline UInt64 hashLen16(UInt64 u, UInt64 v)
{
return CityHash_v1_0_2::Hash128to64(CityHash_v1_0_2::uint128(u, v));
}
inline UInt64 shiftMix(UInt64 val)
{
return val ^ (val >> 47);
}
inline UInt64 rotateByAtLeast1(UInt64 val, UInt8 shift)
{
return (val >> shift) | (val << (64 - shift));
}
inline size_t hashLessThan8(const char * data, size_t size)
{
static constexpr UInt64 k2 = 0x9ae16a3b2f90404fULL;
static constexpr UInt64 k3 = 0xc949d7c7509e6557ULL;
if (size >= 4)
{
UInt64 a = unalignedLoadLittleEndian<uint32_t>(data);
return hashLen16(size + (a << 3), unalignedLoadLittleEndian<uint32_t>(data + size - 4));
}
if (size > 0)
{
uint8_t a = data[0];
uint8_t b = data[size >> 1];
uint8_t c = data[size - 1];
uint32_t y = static_cast<uint32_t>(a) + (static_cast<uint32_t>(b) << 8);
uint32_t z = static_cast<uint32_t>(size) + (static_cast<uint32_t>(c) << 2);
return shiftMix(y * k2 ^ z * k3) * k2;
}
return k2;
}
inline size_t hashLessThan16(const char * data, size_t size)
{
if (size > 8)
{
UInt64 a = unalignedLoadLittleEndian<UInt64>(data);
UInt64 b = unalignedLoadLittleEndian<UInt64>(data + size - 8);
return hashLen16(a, rotateByAtLeast1(b + size, static_cast<UInt8>(size))) ^ b;
}
return hashLessThan8(data, size);
}
struct CRC32Hash
{
unsigned operator() (StringRef x) const
{
const char * pos = x.data;
size_t size = x.size;
if (size == 0)
return 0;
chassert(pos);
if (size < 8)
{
return static_cast<unsigned>(hashLessThan8(x.data, x.size));
}
const char * end = pos + size;
unsigned res = -1U;
do
{
UInt64 word = unalignedLoadLittleEndian<UInt64>(pos);
res = static_cast<unsigned>(CRC_INT(res, word));
pos += 8;
} while (pos + 8 < end);
UInt64 word = unalignedLoadLittleEndian<UInt64>(end - 8); /// I'm not sure if this is normal.
res = static_cast<unsigned>(CRC_INT(res, word));
return res;
}
};
struct StringRefHash : CRC32Hash {};
#else
struct CRC32Hash
{
unsigned operator() (StringRef /* x */) const
{
throw std::logic_error{"Not implemented CRC32Hash without SSE"};
}
};
struct StringRefHash : StringRefHash64 {};
#endif
namespace std
{
template <>
struct hash<StringRef> : public StringRefHash {};
}
namespace ZeroTraits
{
inline bool check(const StringRef & x) { return 0 == x.size; }
inline void set(StringRef & x) { x.size = 0; }
}
namespace PackedZeroTraits
{
template <typename Second, template <typename, typename> class PackedPairNoInit>
inline bool check(const PackedPairNoInit<StringRef, Second> p)
{ return 0 == p.key.size; }
template <typename Second, template <typename, typename> class PackedPairNoInit>
inline void set(PackedPairNoInit<StringRef, Second> & p)
{ p.key.size = 0; }
}
std::ostream & operator<<(std::ostream & os, const StringRef & str);
template<> struct fmt::formatter<StringRef> : fmt::ostream_formatter {};
```
|
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<target>System.out</target>
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>INFO</level>
</filter>
<encoder>
<pattern>%date{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger{26} - %msg%n</pattern>
</encoder>
</appender>
<logger name="org.asynchttpclient" level="INFO"/>
<logger name="io.netty" level="INFO"/>
<logger name="io.swagger" level="OFF"/>
<root level="INFO">
<appender-ref ref="STDOUT"/>
</root>
</configuration>
```
|
```csharp
namespace Xamarin.Forms.Platform.WPF.Interfaces
{
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Windows;
public interface IContentLoader
{
Task<object> LoadContentAsync(FrameworkElement parent, object oldContent, object newContent, CancellationToken cancellationToken);
void OnSizeContentChanged(FrameworkElement parent, object content);
}
public class DefaultContentLoader : IContentLoader
{
public Task<object> LoadContentAsync(FrameworkElement parent, object oldContent, object newContent, CancellationToken cancellationToken)
{
if (!Application.Current.Dispatcher.CheckAccess())
throw new InvalidOperationException("UIThreadRequired");
var scheduler = TaskScheduler.FromCurrentSynchronizationContext();
return Task.Factory.StartNew(() => LoadContent(newContent), cancellationToken, TaskCreationOptions.None, scheduler);
}
protected virtual object LoadContent(object content)
{
if (content is FrameworkElement)
return content;
if (content is Uri)
return Application.LoadComponent(content as Uri);
if (content is string)
{
if (Uri.TryCreate(content as string, UriKind.RelativeOrAbsolute, out Uri uri))
{
return Application.LoadComponent(uri);
}
}
return null;
}
public void OnSizeContentChanged(FrameworkElement parent, object page)
{
}
}
}
```
|
Vipava is a town in western Slovenia. It is the largest settlement and the seat of the Municipality of Vipava. Vipava is located near the numerous sources of the Vipava River, in the upper Vipava Valley. Historically, it is part of the traditional region of Inner Carniola, but it is now generally regarded as part of the Slovenian Littoral.
History
The region around the town was probably settled by the Illyrians and Celts in the pre-Roman era. Some trace the name Vipava to the Celtic root vip (river). In 394, the Battle of the Frigidus took place in the vicinity of the town. In the late 6th century, Slavic tribes, ancestors of modern Slovenes, settled the area. In the late 8th century, the Vipava Valley was included in the Frankish Empire and the Christianization of Slovenes started.
In the Middle Ages, the valley was first included in the Duchy of Friuli. Between 1340 and 1355, Vipava and its surroundings were constantly contested among the Counts of Gorizia, the Patriarchs of Aquileia and the Habsburg Duchy of Carniola. Modern Vipava was first mentioned in 1367. In the same period, it was finally included in the County of Gorizia. After a short Venetian interim, Vipava fell under Habsburg domain in 1501, and in 1535 it was included in Carniola. In the mid-16th century, it emerged as an important center of the Protestant Reformation. It remained part of Carniola until 1918, when it was occupied by Italian troops and annexed to the Kingdom of Italy.
In the period between 1922 and 1943, it was subjected to a violent policy of Fascist Italianization. Many locals joined the militant antifascist organization TIGR. During World War II, the entire area became an important center of Partisan resistance. In 1945, it was liberated by Partisan troops and in 1947 it became part of the Socialist Federal Republic of Yugoslavia, and of independent Slovenia in 1991.
Mass graves
Vipava is the site of five known mass graves from the end of or after the Second World War. The Cemetery Mass Grave is located next to the southwest wall of the Vipava Cemetery. It contained the remains of eight Slovene civilians murdered by the Yugoslav Army on July 14, 1945. The identities of six victims are known. The remains of six were exhumed in 1999 and reinterred in the cemetery. The Military Cemetery Mass Grave is located by the west edge of the First World War military cemetery. It contains the remains of 15 Chetnik soldiers killed in late April or early May 1945. Three additional graves contain the remains of German prisoners of war who died of typhus at the nearby prison camp in 1945. The Vipava Field Mass Grave extends south of the dairy to Močilnik Creek. It is partially covered by the freeway and contains a large number of remains. The Princova Baronovka Mass Grave lies in the southern part of the town. The Bevk Street Mass Grave is located at Bevk Street no. 16. Human remains were unearthed during excavations for the building there.
Economy
Vipava is an important agricultural center of western Slovenia. It is renowned for its wine production. Tourism is also important, as well as small and medium-sized businesses. Many locals work in the nearby town of Ajdovščina.
Language, culture, and religion
The vast majority of the people of Vipava, around 93%, are Slovenes. Others are mostly descendants of immigrants from other regions of the former Yugoslavia. Over 96% of the people use Slovene as their first language; among the remaining 4%, most speak Bosnian as their first language. The native inhabitants speak a variant of the Inner Carniolan dialect of Slovene.
Around 77% of the people are Catholic, a little less than 1% are adherents of Sunni Islam, and others are irreligious. The parish church in the town is dedicated to Saint Stephen and belongs to the Diocese of Koper.
Notable people
Notable people that were born or lived in Vipava include:
Drago Bajc (1904–1928), poet
Andreas Baumkirchner (1420–1471), nobleman, leader of an unsuccessful conspiracy against Frederick III, Holy Roman Emperor
Sigismund von Herberstein (1486–1566), diplomat and author
Štefan Kociančič (1813–1883), theologian and translator
Sebastian Krelj (1538–1567), Slovene Protestant writer and preacher
Anton Lavrin (1789–1869), Austrian diplomat and Egyptologist
Gallery
References
External links
Vipava on Geopedia
Vipava Valley Tourist Association site
Populated places in the Municipality of Vipava
|
Gostuń is a village in the administrative district of Gmina Ostrowite, within Słupca County, Greater Poland Voivodeship, in west-central Poland. It lies approximately south-west of Ostrowite, north-east of Słupca, and east of the regional capital Poznań.
References
Villages in Słupca County
|
Adolf Ivar Arwidsson (7 August 1791 – 21 June 1858) was a Finnish political journalist, writer and historian. His writing was critical of Finland's status at the time as a Grand Duchy under the Russian Tsars. Its sharpness cost him his job as a lecturer at The Royal Academy of Turku and he had to emigrate to Sweden, where he continued his political activity. The Finnish national movement considered Arwidsson the mastermind of an independent Finland.
Life
Adolf Ivar Arwidsson was born in 1791 in Padasjoki in southern Finland. His father, a chaplain, later moved the family to Laukaa in mid-Finland. Laukaa was severely affected by the Finnish war of 1808–1809, and Arwidsson was left facing life under the Russian Empire, to which Finland now belonged as an autonomous Grand Duchy. In 1809, while still at high school in Porvoo, Arwidsson was a representative at the Diet of Porvoo, at which the Finnish estates swore oaths of allegiance to the Tsar. Support among the Swedish-speaking upper strata of Finnish society for a separate Finnish identity was expressed by Arwidsson in a phrase that, somewhat modified, became an often quoted Fennoman credo: "Swedes we are no longer, Russians we do not want to become, let us therefore be Finns." (Swedish: "Svenskar äro vi icke längre, ryssar vilja vi icke bli, låt oss alltså vara finnar"; Finnish: "Ruotsalaisia emme enää ole, venäläisiksi emme tahdo tulla, olkaamme siis suomalaisia.")
In 1814 the Royal Academy of Turku awarded him his Magister degree in philosophy. In 1817 the same institution awarded him his doctorate, and he became a lecturer at the academy. Arwidsson's native language was Swedish; all his works are in Swedish, though he was a fluent speaker of Finnish.
After his dissertation Arwidsson spent a year in Sweden. During this time he made contact with exiled Finns in Uppsala and Stockholm. After his return, in 1820, Arwidsson, who had so far written lyric poetry, submitted for publication a political text whose sharp and radical tone soon attracted attention in the capital, Saint Petersburg. As a consequence, in 1822 he lost his position as a lecturer and was banished from the university. Cut off from his chosen career, Arwidsson emigrated to Stockholm in 1823, where in 1825 he gained civil rights and found work as a librarian in the royal library.
In 1827 Arwidsson undertook a research trip to Finland, but was immediately deported back to Sweden by the authorities. This experience led to a further radicalisation of his political work, and as a result he participated in several public debates in Sweden, in each of which he represented the situation in Finland in a dark light while trying to portray a Finnish national identity positively. Apart from his political work, Arwidsson also produced several historical research works. In 1843 he was appointed director of the royal library. In the same year he was allowed to travel to Finland, but he only took advantage of this possibility in 1858, when he undertook a round trip through the country. During this journey Arwidsson caught pneumonia, and he died on 21 June in Viipuri. He was buried in his childhood home town of Laukaa. Verses written by Elias Lönnrot were later carved onto his gravestone.
Academic works
Svenska fornsånger ("Old Swedish Songs", 1834–42)
Förteckning öfver Kongl. Bibliothekets i Stockholm Isländska Handskrifter ("Inventory of the Icelandic Manuscripts in the Royal Library of Stockholm", 1848)
Political works
Adolf Ivar Arwidsson's political works fall into two main phases. The first covers his time as a lecturer in Turku. The second period of intensive political activity followed his emigration to Sweden, where Arwidsson participated intensively in the debate over the situation of his homeland.
References
Liisa Castrén: Adolf Ivar Arwidsson – Nuori Arwidsson ja hänen ympäristönsä. Otava, Helsinki 1944.
Liisa Castrén: Adolf Ivar Arwidsson isänmaallisena herättäjänä. Suomen Historiallinen Seura, Helsinki 1951.
Olavi Junnila: Ruotsiin muuttanut Adolf Iwar Arwidsson ja Suomi. Suomen Historiallinen Seura, Helsinki 1972.
Kari Tarkiainen: Adolf Ivar Arwidsson, in Matti Klinge (ed.): Suomen kansallisbiografia 1. SKS, Helsinki 2003.
1791 births
1858 deaths
People from Padasjoki
Finnish emigrants to Sweden
19th-century Finnish politicians
19th-century Finnish journalists
Finnish folk-song collectors
Deaths from pneumonia in Finland
19th-century musicologists
|
```shell
$ cat > dune-project <<EOF
> (lang dune 2.2)
> (using menhir 2.1)
> EOF
$ cat >dune <<EOF
> (env (_ (menhir_flags :standard "--comment")))
> (menhir
> (modules parser)
> (mode promote))
> (library (name test))
> EOF
$ dune printenv --field menhir_flags 2>&1 | sed "s/(using menhir .*)/(using menhir <version>)/"
(menhir_flags (--comment))
```
|
The Haken-Kelso-Bunz (HKB) model is a theoretical model of motor coordination originally formulated by Hermann Haken, J. A. Scott Kelso and H. Bunz. The model attempts to provide a framework for understanding coordinated behavior in living things. It accounts for experimental observations on human bimanual coordination that revealed fundamental features of self-organization: multistability and phase transitions (switching). HKB is one of the most extensively tested quantitative models in the field of human movement behavior.
Phase Transitions ('Switches')
The HKB model differs from other motor coordination models in its treatment of phase transitions (‘switches’). Kelso initially observed this phenomenon in an experiment on subjects’ finger movements. Subjects oscillated their fingers rhythmically in the transverse plane (i.e., abduction-adduction) in one of two patterns, parallel or anti-parallel. In the parallel pattern, the finger muscles contract in an alternating fashion; in the anti-parallel pattern, the homologous finger muscles contract simultaneously. Kelso observed that when subjects began in the parallel mode and increased their movement speed, a spontaneous switch to symmetrical, anti-parallel movement occurred. This transition happens swiftly at a certain critical frequency. Surprisingly, after the switch had occurred and the movement rate was decreased, subjects remained in the symmetrical mode (they did not switch back). The study indicates that while humans can produce both patterns at low frequencies, only one (the symmetrical, anti-parallel mode) remains stable as frequency is scaled beyond a critical value.
Prediction
The HKB model states that dynamic instability causes switching to occur. HKB measures stability in the following ways:
1. Critical slowing down. If a perturbation applied to a system takes it away from its stationary state, the time the system needs to return to that state (the local relaxation time) is a measure of the system's stability: the less stable the pattern, the longer the return should take. HKB predicts critical slowing down. As frequency increases and the parallel pattern loses stability, the local relaxation time should grow as the system approaches the critical point.
2. Critical fluctuations. If switching between patterns is due to loss of stability, enhanced fluctuations of the order parameter should be directly measurable as the critical point is approached.
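The predicted slowing can be made concrete with a small numerical sketch. It assumes the standard dimensionless form of the HKB relative-phase dynamics, dϕ/dt = −sin ϕ − 2k sin 2ϕ, and takes the pattern that loses stability to sit at ϕ = π; the relaxation time then follows from linearizing about that fixed point:

```python
# Local relaxation time near the pattern at phi = pi, assuming the standard
# HKB relative-phase equation dphi/dt = -sin(phi) - 2*k*sin(2*phi).
# Linearizing about phi = pi gives an eigenvalue lambda = 1 - 4k, so the
# relaxation time tau = 1/|lambda| diverges as k approaches the critical
# value k_c = 0.25: the "critical slowing down" that HKB predicts.
def relaxation_time(k):
    lam = 1.0 - 4.0 * k  # eigenvalue of the linearized dynamics at phi = pi
    return 1.0 / abs(lam)

for k in (1.0, 0.5, 0.3, 0.26):
    print(k, round(relaxation_time(k), 2))
```

As k falls toward the critical value 0.25, the relaxation time grows without bound, matching the prediction that perturbations take ever longer to die out near the transition.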
Equation
In the HKB model, ϕ is the relative phase, i.e., the phase relation between the fingers. The parameter k corresponds to the cycle-to-cycle period of the finger movements, that is, the inverse of the movement rate or oscillation frequency in the experiment.
The equation (in dimensionless form):
dϕ/dt = −sin ϕ − 2k sin 2ϕ,
the gradient dynamics of the potential V(ϕ) = −cos ϕ − k cos 2ϕ.
The equation predicts that for k > 0.25 the relative-phase values 0 and ±π are both stable, a condition termed bistability. An increase in movement rate, starting in the parallel phase, leads to a switch to the anti-parallel phase at a critical frequency. Equivalently, starting with a large k and decreasing it destabilizes the fixed point at π, which becomes unstable at the critical value kc = 0.25.
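A minimal simulation sketch of this bistability, assuming the dimensionless form dϕ/dt = −sin ϕ − 2k sin 2ϕ of the HKB relative-phase equation:

```python
import math

def hkb_rate(phi, k):
    # Relative-phase dynamics of the HKB model: dphi/dt = -sin(phi) - 2k*sin(2*phi)
    return -math.sin(phi) - 2.0 * k * math.sin(2.0 * phi)

def settle(phi0, k, dt=0.01, steps=8000):
    # Forward-Euler integration until the phase settles onto an attractor.
    phi = phi0
    for _ in range(steps):
        phi += dt * hkb_rate(phi, k)
    return phi

# Bistable regime (k > 0.25): a phase started near pi stays at pi.
print(round(settle(math.pi - 0.3, k=0.5), 2))   # settles near pi (~3.14)
# Beyond the critical value (k < 0.25): the same start relaxes to 0.
print(round(settle(math.pi - 0.3, k=0.1), 2))   # settles near 0
```

The two runs differ only in k: above kc both fixed points attract, while below kc the fixed point at π has lost stability and the phase is drawn to 0, mirroring the experimentally observed switch.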
Uses
The HKB model has had a profound conceptual, methodological, and practical impact since its inception. It has been extended to model task context, biomechanical factors, perception, cognitive demands, learning and memory. The latest noninvasive neuroimaging methods, such as fMRI, MEG and high-density EEG arrays, are increasingly being used along with behavioral recordings and analysis to identify the neural circuitry and mechanisms of pattern stability and switching.
See also
Excitator model
References
Motor control
|
```c++
//
// Aspia Project
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <path_to_url
//
#include "base/win/desktop.h"
#include "base/logging.h"
namespace base {
//--------------------------------------------------------------------------------
Desktop::Desktop(Desktop&& other) noexcept
{
desktop_ = other.desktop_;
own_ = other.own_;
other.desktop_ = nullptr;
}
//--------------------------------------------------------------------------------
Desktop::Desktop(HDESK desktop, bool own)
: desktop_(desktop),
own_(own)
{
// Nothing
}
//--------------------------------------------------------------------------------
Desktop::~Desktop()
{
close();
}
//--------------------------------------------------------------------------------
// static
Desktop Desktop::desktop(const wchar_t* desktop_name)
{
const ACCESS_MASK desired_access =
DESKTOP_CREATEMENU | DESKTOP_CREATEWINDOW |
DESKTOP_ENUMERATE | DESKTOP_HOOKCONTROL |
DESKTOP_WRITEOBJECTS | DESKTOP_READOBJECTS |
DESKTOP_SWITCHDESKTOP | GENERIC_WRITE;
HDESK desktop = OpenDesktopW(desktop_name, 0, FALSE, desired_access);
if (!desktop)
{
PLOG(LS_ERROR) << "OpenDesktopW failed";
return Desktop();
}
return Desktop(desktop, true);
}
//--------------------------------------------------------------------------------
// static
Desktop Desktop::inputDesktop()
{
const ACCESS_MASK desired_access = GENERIC_READ | GENERIC_WRITE | GENERIC_EXECUTE;
HDESK desktop = OpenInputDesktop(0, FALSE, desired_access);
if (!desktop)
{
PLOG(LS_ERROR) << "OpenInputDesktop failed";
return Desktop();
}
return Desktop(desktop, true);
}
//--------------------------------------------------------------------------------
// static
Desktop Desktop::threadDesktop()
{
HDESK desktop = GetThreadDesktop(GetCurrentThreadId());
if (!desktop)
{
PLOG(LS_ERROR) << "GetThreadDesktop failed";
return Desktop();
}
return Desktop(desktop, false);
}
//--------------------------------------------------------------------------------
// static
std::vector<std::wstring> Desktop::desktopList(HWINSTA winsta)
{
std::vector<std::wstring> list;
if (!EnumDesktopsW(winsta, enumDesktopProc, reinterpret_cast<LPARAM>(&list)))
{
PLOG(LS_ERROR) << "EnumDesktopsW failed";
return {};
}
return list;
}
//--------------------------------------------------------------------------------
bool Desktop::name(wchar_t* name, DWORD length) const
{
if (!desktop_)
return false;
if (!GetUserObjectInformationW(desktop_, UOI_NAME, name, length, nullptr))
{
PLOG(LS_ERROR) << "Failed to query the desktop name";
return false;
}
return true;
}
//--------------------------------------------------------------------------------
bool Desktop::isSame(const Desktop& other) const
{
wchar_t this_name[128];
if (!name(this_name, sizeof(this_name)))
return false;
wchar_t other_name[128];
if (!other.name(other_name, sizeof(other_name)))
return false;
return wcscmp(this_name, other_name) == 0;
}
//--------------------------------------------------------------------------------
bool Desktop::setThreadDesktop() const
{
if (!SetThreadDesktop(desktop_))
{
PLOG(LS_ERROR) << "SetThreadDesktop failed";
return false;
}
return true;
}
//--------------------------------------------------------------------------------
bool Desktop::isValid() const
{
return (desktop_ != nullptr);
}
//--------------------------------------------------------------------------------
void Desktop::close()
{
if (own_ && desktop_)
{
if (!CloseDesktop(desktop_))
{
PLOG(LS_ERROR) << "CloseDesktop failed";
}
}
desktop_ = nullptr;
}
//--------------------------------------------------------------------------------
Desktop& Desktop::operator=(Desktop&& other) noexcept
{
close();
desktop_ = other.desktop_;
own_ = other.own_;
other.desktop_ = nullptr;
return *this;
}
//--------------------------------------------------------------------------------
// static
BOOL CALLBACK Desktop::enumDesktopProc(LPWSTR desktop, LPARAM lparam)
{
std::vector<std::wstring>* list = reinterpret_cast<std::vector<std::wstring>*>(lparam);
if (!list)
{
LOG(LS_ERROR) << "Invalid desktop list pointer";
return FALSE;
}
if (!desktop)
{
LOG(LS_ERROR) << "Invalid desktop name";
return FALSE;
}
list->emplace_back(desktop);
return TRUE;
}
} // namespace base
```
|
Ron Herbert (1933 – 2 April 2021) was a professional rugby league footballer who played for the Warrington Wolves.
He made 41 appearances and scored 20 tries. On his debut he was the youngest player ever to appear for the Wolves. He came to the club's attention because of his blistering speed and spent eight seasons there, though his appearances were limited by recurrent shoulder injuries.
See also
List of Warrington Wolves players
References
1933 births
2021 deaths
English rugby league players
Rugby league centres
Warrington Wolves players
|
Krasnokholmsky (masculine), Krasnokholmskaya (feminine), or Krasnokholmskoye (neuter) may refer to:
Krasnokholmsky District, a district of Tver Oblast, Russia
Krasnokholmsky (rural locality), a rural locality (a selo) in the Republic of Bashkortostan, Russia
Krasnokholmskoye, a rural locality (a settlement) in Kaliningrad Oblast, Russia
|
```javascript
const { defineConfig } = require('cypress')
module.exports = defineConfig({
fixturesFolder: false,
viewportHeight: 300,
viewportWidth: 500,
e2e: {
supportFile: false,
},
})
```
|
Mohammad Ali Siddiqui (2 February 1944 – 4 November 2014) was a Bangladeshi playback singer, active in the 1960s, 1970s and 1980s. He sang a total of 250 songs over a career spanning three decades. He received several honours, including the National Award, the Dinesh Padak, the Bandhan Lifetime Award and the Shilpakala Academy Award.
Discography
Film songs
References
1944 births
2014 deaths
People from Netrokona District
20th-century Bangladeshi male singers
20th-century Bangladeshi singers
Bangladeshi playback singers
|
```python
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   path_to_url
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
import numpy as np
import tvm
import tvm.testing
from tvm import te
from tvm.script import tir as T
from tvm import relay, tir
from tvm.relay.backend.te_compiler import lower_to_primfunc
from tvm.tir.tensor_intrin.hexagon import VRMPY_u8u8i32_INTRIN
def _check(original, transformed):
func = original
mod = tvm.IRModule.from_expr(func.with_attr("global_symbol", "main"))
mod = tvm.tir.transform.PlanAndUpdateBufferAllocationLocation()(mod)
tvm.ir.assert_structural_equal(mod["main"], transformed.with_attr("global_symbol", "main"))
@T.prim_func
def element_func(a: T.handle, c: T.handle) -> None:
A = T.match_buffer(a, (16, 16))
C = T.match_buffer(c, (16, 16))
B = T.alloc_buffer((16, 16))
for i0 in range(0, 16):
for j0 in range(0, 16):
with T.block():
i, j = T.axis.remap("SS", [i0, j0])
B[i, j] = A[i, j] + 1.0
for j0 in range(0, 16):
with T.block():
i, j = T.axis.remap("SS", [i0, j0])
C[i, j] = B[i, j] * 2.0
@T.prim_func
def transformed_element_func(a: T.handle, c: T.handle) -> None:
A = T.match_buffer(a, [16, 16])
C = T.match_buffer(c, [16, 16])
for i_0 in range(0, 16):
with T.block():
T.reads([A[i_0, 0:16]])
T.writes([C[i_0, 0:16]])
B = T.alloc_buffer([16, 16])
for j_0 in T.serial(0, 16):
with T.block():
i, j = T.axis.remap("SS", [i_0, j_0])
B[i, j] = A[i, j] + 1.0
for j_0 in T.serial(0, 16):
with T.block():
i, j = T.axis.remap("SS", [i_0, j_0])
C[i, j] = B[i, j] * 2.0
@T.prim_func
def original_func() -> None:
A = T.alloc_buffer((128, 128), "float32")
for i0, j0 in T.grid(128, 128):
with T.block():
i, j = T.axis.remap("SS", [i0, j0])
A[i, j] = T.float32(0)
for i0, j0, k0 in T.grid(32, 32, 32):
with T.block():
i, j, k = T.axis.remap("SSR", [i0, j0, k0])
B = T.alloc_buffer((128, 128), "float32")
C = T.alloc_buffer((128, 128), "float32")
D = T.alloc_buffer((128, 128), "float32")
if k == 0:
for ii, jj in T.grid(4, 4):
B[i * 4 + ii, j * 4 + jj] = A[i * 4 + ii, j * 4 + jj]
for ii, jj in T.grid(4, 4):
for kk in range(0, 4):
B[i * 4 + ii, j * 4 + jj] += C[i * 4 + ii, k * 4 + kk]
for kk in range(0, 4):
B[i * 4 + ii, j * 4 + jj] += (
D[j * 4 + jj, k * 4 + kk] * C[i * 4 + ii, k * 4 + kk]
)
@T.prim_func
def transformed_func() -> None:
A = T.alloc_buffer([128, 128])
for i0, j0 in T.grid(128, 128):
with T.block():
i, j = T.axis.remap("SS", [i0, j0])
A[i, j] = T.float32(0)
for i0, j0, k0 in T.grid(32, 32, 32):
with T.block():
i, j, k = T.axis.remap("SSR", [i0, j0, k0])
B = T.alloc_buffer([128, 128])
if k == 0:
for ii, jj in T.grid(4, 4):
B[i * 4 + ii, j * 4 + jj] = A[i * 4 + ii, j * 4 + jj]
for ii, jj in T.grid(4, 4):
with T.block(""):
T.reads([B[((i * 4) + ii), ((j * 4) + jj)]])
T.writes([B[((i * 4) + ii), ((j * 4) + jj)]])
C = T.alloc_buffer([128, 128])
for kk in T.serial(0, 4):
B[((i * 4) + ii), ((j * 4) + jj)] = (
B[((i * 4) + ii), ((j * 4) + jj)] + C[((i * 4) + ii), ((k * 4) + kk)]
)
for kk in T.serial(0, 4):
with T.block(""):
T.reads(
[
B[((i * 4) + ii), ((j * 4) + jj)],
C[((i * 4) + ii), ((k * 4) + kk)],
]
)
T.writes([B[((i * 4) + ii), ((j * 4) + jj)]])
D = T.alloc_buffer([128, 128])
B[((i * 4) + ii), ((j * 4) + jj)] = B[
((i * 4) + ii), ((j * 4) + jj)
] + (
D[((j * 4) + jj), ((k * 4) + kk)]
* C[((i * 4) + ii), ((k * 4) + kk)]
)
@T.prim_func
def match_buffer_func() -> None:
C = T.alloc_buffer((128, 128))
for i in range(128):
with T.block():
vi = T.axis.S(128, i)
C0 = T.match_buffer(C[vi, 0:128], (128))
for j in range(128):
with T.block():
jj = T.axis.S(128, j)
C1 = T.match_buffer(C0[jj], ())
C1[()] = 0
@T.prim_func
def transformed_match_buffer_func() -> None:
for i in range(0, 128):
with T.block():
vi = T.axis.S(128, i)
C = T.alloc_buffer((128, 128))
C0 = T.match_buffer(C[vi, 0:128], (128))
for j in range(128):
with T.block():
jj = T.axis.S(128, j)
C1 = T.match_buffer(C0[jj], ())
C1[()] = 0
@T.prim_func
def opaque_access(a: T.handle, b: T.handle) -> None:
A = T.match_buffer(a, [1024])
B = T.match_buffer(b, [1024])
A_cache = T.alloc_buffer([1024])
for i in T.serial(0, 8):
with T.block():
vi = T.axis.S(8, i)
with T.block():
v = T.axis.S(8, vi)
T.reads([A[(v * 128) : ((v * 128) + 128)]])
T.writes([A_cache[(v * 128) : ((v * 128) + 128)]])
T.evaluate(
T.call_extern(
"test",
A_cache.data,
(v * 128),
128,
A.data,
(v * 128),
128,
dtype="float32",
)
)
for j in T.serial(0, 128):
with T.block():
v = T.axis.S(1024, vi * 128 + j)
T.reads([A_cache[v]])
T.writes([B[v]])
B[v] = A_cache[v]
@T.prim_func
def transformed_opaque_access(a: T.handle, b: T.handle) -> None:
A = T.match_buffer(a, [1024])
B = T.match_buffer(b, [1024])
for i in T.serial(0, 8):
with T.block():
vi = T.axis.S(8, i)
T.reads(A[vi * 128 : vi * 128 + 128])
T.writes(B[vi * 128 : vi * 128 + 128])
A_cache = T.alloc_buffer([1024])
with T.block():
v = T.axis.S(8, vi)
T.reads([A[v * 128 : v * 128 + 128]])
T.writes([A_cache[v * 128 : v * 128 + 128]])
T.evaluate(
T.call_extern(
"test", A_cache.data, v * 128, 128, A.data, v * 128, 128, dtype="float32"
)
)
for j in T.serial(0, 128):
with T.block():
v = T.axis.S(1024, vi * 128 + j)
T.reads([A_cache[v]])
T.writes([B[v]])
B[v] = A_cache[v]
def test_elementwise():
_check(element_func, transformed_element_func)
def test_locate_buffer_allocation():
_check(original_func, transformed_func)
def test_match_buffer_allocation():
_check(match_buffer_func, transformed_match_buffer_func)
def test_opaque_access():
_check(opaque_access, transformed_opaque_access)
def test_lower_te():
x = te.placeholder((1,))
y = te.compute((1,), lambda i: x[i] + 2)
s = te.create_schedule(y.op)
orig_mod = tvm.driver.build_module.schedule_to_module(s, [x, y])
mod = tvm.tir.transform.PlanAndUpdateBufferAllocationLocation()(orig_mod)
tvm.ir.assert_structural_equal(
mod, orig_mod
) # PlanAndUpdateBufferAllocationLocation should do nothing on TE
def test_loop_carried_dependency():
"""The buffer allocation should be above opaque iter var's loop scopes
such that buffer accesses with loop carried dependencies are covered,
and the allocate buffer should keep the order."""
@T.prim_func
def before(A: T.Buffer((8, 8, 8), "int32"), B: T.Buffer((8, 8, 8), "int32")):
C = T.alloc_buffer([8, 8, 8], dtype="int32")
D = T.alloc_buffer([8, 8, 8], dtype="int32")
for i in T.serial(8):
for j in T.serial(8):
for k in T.serial(8):
with T.block("b0"):
vi, vj, vk = T.axis.remap("SSS", [i, j, k])
C[vi, vj, vk] = A[vi, vj, vk] + 1
for k in T.serial(8):
with T.block("b1"):
vi, vj, vk = T.axis.remap("SSS", [i, j, k])
D[vi, vj, vk] = A[vi, vj, vk] + 2
for k in T.serial(8):
with T.block("b2"):
vi, vk = T.axis.remap("SS", [i, k])
vj = T.axis.opaque(8, j)
B[vi, vj, vk] = (
C[vi, vj, vk]
+ T.if_then_else(0 < vj, C[vi, vj - 1, vk], 0, dtype="int32")
+ D[vi, vj, vk]
)
@T.prim_func
def after(A: T.Buffer((8, 8, 8), "int32"), B: T.Buffer((8, 8, 8), "int32")) -> None:
for i in T.serial(8):
with T.block():
T.reads(A[i, 0:8, 0:8])
T.writes(B[i, 0:8, 0:8])
C = T.alloc_buffer([8, 8, 8], dtype="int32")
D = T.alloc_buffer([8, 8, 8], dtype="int32")
for j in T.serial(8):
for k in T.serial(8):
with T.block("b0"):
vi, vj, vk = T.axis.remap("SSS", [i, j, k])
C[vi, vj, vk] = A[vi, vj, vk] + 1
for k in T.serial(8):
with T.block("b1"):
vi, vj, vk = T.axis.remap("SSS", [i, j, k])
D[vi, vj, vk] = A[vi, vj, vk] + 2
for k in T.serial(8):
with T.block("b2"):
vi, vk = T.axis.remap("SS", [i, k])
vj = T.axis.opaque(8, j)
B[vi, vj, vk] = (
C[vi, vj, vk]
+ T.if_then_else(0 < vj, C[vi, vj - 1, vk], 0, dtype="int32")
+ D[vi, vj, vk]
)
_check(before, after)
def test_1D_cascade_op_rolling_buffer():
"""The intermediate buffer must be allocated above rolling buffer's rolling loop,
which is marked as opaque in consumer block's iter mappings."""
@T.prim_func
def before(A: T.Buffer((4, 16), "int32"), C: T.Buffer((4, 8), "int32")):
B = T.alloc_buffer((4, 6), "int32")
for c in T.serial(4):
for i in T.serial(0, 2):
for j in T.serial(0, 6):
for k in T.serial(3):
with T.block("P1"):
T.where(i < 1 or j >= 2)
cc, vi, vj, vk = T.axis.remap("SSSR", [c, i, j, k])
if vk == 0:
B[cc, T.floormod(vi * 4 + vj, 6)] = 0
B[cc, T.floormod(vi * 4 + vj, 6)] = (
B[cc, T.floormod(vi * 4 + vj, 6)] + A[cc, vi * 4 + vj + vk]
)
for j in T.serial(0, 4):
for k in T.serial(3):
with T.block("P2"):
vi = T.axis.opaque(2, i)
cc, vj, vk = T.axis.remap("SSR", [c, j, k])
if vk == 0:
C[cc, vi * 4 + vj] = 0
C[cc, vi * 4 + vj] = (
C[cc, vi * 4 + vj] + B[cc, T.floormod(vi * 4 + vj + vk, 6)]
)
@T.prim_func
def after(A: T.Buffer((4, 16), "int32"), C: T.Buffer((4, 8), "int32")):
for c in T.serial(4):
with T.block():
T.reads(A[c, 0:12], C[c, 0:8])
T.writes(C[c, 0:8])
B = T.alloc_buffer([4, 6], dtype="int32")
for i in T.serial(2):
for j, k in T.grid(6, 3):
with T.block("P1"):
T.where(i < 1 or j >= 2)
cc, vi, vj, vk = T.axis.remap("SSSR", [c, i, j, k])
if vk == 0:
B[cc, (vi * 4 + vj) % 6] = 0
B[cc, (vi * 4 + vj) % 6] = (
B[cc, (vi * 4 + vj) % 6] + A[cc, vi * 4 + vj + vk]
)
for j, k in T.grid(4, 3):
with T.block("P2"):
vi = T.axis.opaque(2, i)
cc, vj, vk = T.axis.remap("SSR", [c, j, k])
if vk == 0:
C[cc, vi * 4 + vj] = 0
C[cc, vi * 4 + vj] = C[cc, vi * 4 + vj] + B[cc, (vi * 4 + vj + vk) % 6]
_check(before, after)
def test_allocate_const_after_tensorize():
i_size, o_size, h_size, w_size = 64, 64, 56, 56
k_height_size = k_width_size = 3
w_shape = (o_size, i_size, k_height_size, k_width_size)
data = relay.var("data", shape=(1, i_size, h_size, w_size), dtype="uint8")
weight = relay.var("weight", shape=w_shape, dtype="uint8")
conv2d = relay.nn.conv2d(
data=data,
weight=weight,
kernel_size=(k_height_size, k_width_size),
channels=o_size,
padding=(0, 0),
strides=(1, 1),
out_dtype="int32",
)
mod = tvm.IRModule.from_expr(conv2d)
executor = relay.backend.Executor("graph", {"link-params": True})
mod = mod.with_attr("executor", executor)
weight_np = np.random.uniform(1, 10, size=w_shape).astype("uint8")
target = tvm.target.Target("hexagon")
with tvm.transform.PassContext(opt_level=3):
opt_mod, _ = relay.optimize(mod, params={"weight": weight_np}, target=target)
conv2d_func = opt_mod["main"].body.args[0].op
prim_func = lower_to_primfunc(conv2d_func, target)
sch = tir.Schedule(prim_func)
block = sch.get_block("conv2d_NCHWc_int8")
loops = sch.get_loops(block)
sch.reorder(loops[8], loops[4], loops[-1])
sch.decompose_reduction(block, loops[1])
sch.tensorize(loops[4], VRMPY_u8u8i32_INTRIN)
seq = tvm.transform.Sequential(
[
tvm.tir.transform.LowerInitBlock(),
tvm.tir.transform.PlanAndUpdateBufferAllocationLocation(),
]
)
# The following error is emitted if AllocateConst nodes are not correctly handled:
# Check failed: (buffer_data_to_buffer_.count(source_var)) is false:
_ = seq(sch.mod)
def test_buffer_conditional_lowering():
"""Buffers passed as pointer arguments are unmodified
Confirm that the `tir.PlanAndUpdateBufferAllocationLocation` pass
leaves (Buffer nodes corresponding to pointer-typed PrimFunc arguments)
unchanged, rather than lowering them to `reads`, `writes`, and `alloc_buffer` nodes.
"""
@T.prim_func
def before(A: T.handle("float32")):
T.func_attr({"global_symbol": "main", "tir.noalias": True})
for i in range(1):
A_1 = T.Buffer((1,), data=A)
A_1[i] = 0
after = before
_check(before, after)
def test_dltensor_buffer_is_unlowered():
"""Buffers allocated with a LetStmt are unmodified
Confirm that the `tir.PlanAndUpdateBufferAllocationLocation` pass
leaves (Buffer nodes corresponding to PrimFunc DLTensor arguments)
unchanged, rather than lowering them to `reads`, `writes`, and
`alloc_buffer` nodes.
"""
@T.prim_func
def before(dlpack_handle: T.handle, axis: T.int64) -> T.int64:
ndim: T.int32 = T.tvm_struct_get(dlpack_handle, 0, 5, "int32")
stride_ptr: T.handle("int64") = T.tvm_struct_get(dlpack_handle, 0, 4, "handle")
if T.isnullptr(stride_ptr):
shape_ptr: T.handle("int64") = T.tvm_struct_get(dlpack_handle, 0, 3, "handle")
shape = T.decl_buffer(ndim, "int64", data=shape_ptr)
product = T.decl_buffer([], "int64")
product[()] = 1
for dim in range(axis + 1, ndim):
product[()] = product[()] * shape[dim]
return product[()]
else:
strides = T.decl_buffer(ndim, "int64", data=stride_ptr)
stride: T.int64 = strides[axis]
return stride
after = before
_check(before, after)
if __name__ == "__main__":
tvm.testing.main()
```
|
```scss
// This file is part of OpenMediaVault.
//
// @license path_to_url GPL Version 3
// @author Volker Theile <volker.theile@openmediavault.org>
//
// OpenMediaVault is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// any later version.
//
// OpenMediaVault is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
@use 'scss/theme/colors' as tc;
@mixin theme($mode, $theme-config, $typography-config) {
.mat-button-toggle.mat-button-toggle-checked {
@include tc.background-color-pair('accent');
}
// We need to define these classes here as well to have a
// higher order of precedence.
@each $name in map-keys(tc.$omv-color-pairs) {
.mat-flat-button.omv-background-color-pair-#{$name} {
@include tc.background-color-pair($name);
}
}
}
```
|
```java
/* Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * path_to_url
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.activiti.engine.test.bpmn.event.end;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;
import java.util.Arrays;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.activiti.engine.impl.test.PluggableFlowableTestCase;
import org.flowable.bpmn.model.BpmnModel;
import org.flowable.bpmn.model.EndEvent;
import org.flowable.bpmn.model.ExtensionAttribute;
import org.flowable.bpmn.model.ExtensionElement;
import org.flowable.engine.delegate.DelegateExecution;
import org.flowable.engine.delegate.JavaDelegate;
import org.flowable.engine.repository.ProcessDefinition;
import org.flowable.engine.runtime.Execution;
import org.flowable.engine.runtime.ProcessInstance;
import org.flowable.engine.test.Deployment;
/**
* @author Nico Rehwaldt
* @author Joram Barrez
*/
public class TerminateEndEventTest extends PluggableFlowableTestCase {
public static int serviceTaskInvokedCount;
@Override
protected void setUp() throws Exception {
super.setUp();
serviceTaskInvokedCount = 0;
serviceTaskInvokedCount2 = 0;
}
public static class CountDelegate implements JavaDelegate {
public void execute(DelegateExecution execution) {
serviceTaskInvokedCount++;
// leave only 3 out of n subprocesses
execution.setVariableLocal("terminate", serviceTaskInvokedCount > 3);
}
}
public static int serviceTaskInvokedCount2;
public static class CountDelegate2 implements JavaDelegate {
public void execute(DelegateExecution execution) {
serviceTaskInvokedCount2++;
}
}
@Deployment
public void testProcessTerminate() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
long executionEntities = runtimeService.createExecutionQuery().processInstanceId(pi.getId()).count();
assertEquals(3, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preTerminateTask").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment
public void testProcessTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preTerminateTask").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateWithSubProcess() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
// should terminate the process and
long executionEntities = runtimeService.createExecutionQuery().processInstanceId(pi.getId()).count();
assertEquals(4, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preTerminateEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateWithSubProcess2() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
// Completing the task -> terminal end event -> subprocess ends
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preTerminateEnd").singleResult();
assertNotNull(task);
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateWithSubProcessTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
// Completing the task -> terminal end event -> all ends (terminate all)
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment(resources = {
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateWithCallActivity.bpmn",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.subProcessNoTerminate.bpmn"
})
public void testTerminateWithCallActivity() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
long executionEntities = runtimeService.createExecutionQuery().processInstanceId(pi.getId()).count();
assertEquals(4, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preTerminateEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment(resources = {
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateWithCallActivityTerminateAll.bpmn20.xml",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.subProcessNoTerminate.bpmn" })
public void testTerminateWithCallActivityTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId())
.taskDefinitionKey("preTerminateEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment(resources = {
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateInExclusiveGatewayWithCallActivity.bpmn",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.subProcessNoTerminate.bpmn"
})
public void testTerminateInExclusiveGatewayWithCallActivity() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample-terminateAfterExclusiveGateway");
long executionEntities = runtimeService.createExecutionQuery().processInstanceId(pi.getId()).count();
assertEquals(4, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preTerminateEnd").singleResult();
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("input", 1);
taskService.complete(task.getId(), variables);
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInExclusiveGatewayWithMultiInstanceSubProcess() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample-terminateAfterExclusiveGateway");
long executionEntities = runtimeService.createExecutionQuery().processInstanceId(pi.getId()).count();
assertEquals(14, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preTerminateEnd").singleResult();
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("input", 1);
taskService.complete(task.getId(), variables);
assertProcessEnded(pi.getId());
}
@Deployment
    public void testTerminateInExclusiveGatewayWithMultiInstanceSubProcessTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample-terminateAfterExclusiveGateway");
// Completing the task once should only destroy ONE multi instance
List<org.flowable.task.api.Task> tasks = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("task").list();
assertEquals(5, tasks.size());
for (int i = 0; i < 5; i++) {
taskService.complete(tasks.get(i).getId());
assertTrue(runtimeService.createProcessInstanceQuery().processInstanceId(pi.getId()).count() > 0);
}
// Other task will now finish the process instance
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preTerminateEnd").singleResult();
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("input", 1);
taskService.complete(task.getId(), variables);
assertEquals(0, runtimeService.createProcessInstanceQuery().processInstanceId(pi.getId()).count());
}
@Deployment
public void testTerminateInSubProcess() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
// should terminate the subprocess and continue the parent
long executionEntities = runtimeService.createExecutionQuery().processInstanceId(pi.getId()).count();
assertEquals(1, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInSubProcessTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInSubProcessWithBoundary() throws Exception {
Date startTime = new Date();
// Test terminating process via boundary timer
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventWithBoundary");
assertEquals(3, taskService.createTaskQuery().processInstanceId(pi.getId()).count());
// Set clock time to '1 hour and 5 seconds' ahead to fire timer
processEngineConfiguration.getClock().setCurrentTime(new Date(startTime.getTime() + ((60 * 60 * 1000) + 5000)));
waitForJobExecutorToProcessAllJobs(7000L, 25L);
// timer has fired
assertEquals(0L, managementService.createJobQuery().count());
assertProcessEnded(pi.getId());
// Test terminating subprocess
pi = runtimeService.startProcessInstanceByKey("terminateEndEventWithBoundary");
assertEquals(3, taskService.createTaskQuery().processInstanceId(pi.getId()).count());
// a job for boundary event timer should exist
assertEquals(1L, managementService.createTimerJobQuery().count());
// Complete sub process task that leads to a terminate end event
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preTermInnerTask").singleResult();
taskService.complete(task.getId());
// 'preEndInnerTask' task in subprocess should have been terminated, only outerTask should exist
assertEquals(1, taskService.createTaskQuery().processInstanceId(pi.getId()).count());
// job for boundary event timer should have been removed
assertEquals(0L, managementService.createTimerJobQuery().count());
assertEquals(0L, managementService.createJobQuery().count());
// complete outerTask
task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("outerTask").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInSubProcessWithBoundaryTerminateAll() throws Exception {
// Test terminating subprocess
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventWithBoundary");
assertEquals(3, taskService.createTaskQuery().processInstanceId(pi.getId()).count());
// Complete sub process task that leads to a terminate end event
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preTermInnerTask").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInSubProcessConcurrent() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
long executionEntities = runtimeService.createExecutionQuery().count();
assertEquals(1, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInSubProcessConcurrentTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInSubProcessConcurrentTerminateAll2() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
List<org.flowable.task.api.Task> tasks = taskService.createTaskQuery().processInstanceId(pi.getId()).list();
assertEquals(2, tasks.size());
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskName("User Task").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInSubProcessConcurrentMultiInstance() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
long executionEntities = runtimeService.createExecutionQuery().count();
assertEquals(12, executionEntities);
List<org.flowable.task.api.Task> tasks = taskService.createTaskQuery().processInstanceId(pi.getId()).list();
assertEquals(4, tasks.size()); // 3 user tasks in MI +1 (preNormalEnd) = 4 (2 were killed because it went directly to the terminate end event)
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
long executionEntities2 = runtimeService.createExecutionQuery().count();
assertEquals(10, executionEntities2);
tasks = taskService.createTaskQuery().list();
for (org.flowable.task.api.Task t : tasks) {
taskService.complete(t.getId());
}
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInSubProcessConcurrentMultiInstance2() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
List<org.flowable.task.api.Task> tasks = taskService.createTaskQuery().processInstanceId(pi.getId()).taskName("User Task").list();
assertEquals(3, tasks.size());
for (org.flowable.task.api.Task t : tasks) {
taskService.complete(t.getId());
}
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInSubProcessConcurrentMultiInstanceTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
assertProcessEnded(pi.getId());
}
@Deployment(resources = { "org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateInCallActivityConcurrentCallActivity.bpmn",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateAfterUserTask.bpmn",
"org/activiti/engine/test/api/oneTaskProcess.bpmn20.xml" })
public void testTerminateInCallActivityConcurrentCallActivity() throws Exception {
// GIVEN - process instance starts and creates 2 subProcessInstances (with 2 user tasks - preTerminate and my task)
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventInCallActivityConcurrentCallActivity");
assertThat(runtimeService.createProcessInstanceQuery().superProcessInstanceId(pi.getId()).list().size(), is(2));
// WHEN - complete -> terminate end event
org.flowable.task.api.Task preTerminate = taskService.createTaskQuery().taskName("preTerminate").singleResult();
taskService.complete(preTerminate.getId());
// THEN - super process is not finished together
assertEquals(1, runtimeService.createProcessInstanceQuery().processInstanceId(pi.getId()).count());
}
@Deployment
public void testTerminateInSubProcessMultiInstance() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
long executionEntities = runtimeService.createExecutionQuery().count();
assertEquals(1, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateInSubProcessSequentialConcurrentMultiInstance() throws Exception {
// Starting multi instance with 5 instances; terminating 2, finishing 3
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
long remainingExecutions = runtimeService.createExecutionQuery().count();
// outer execution still available
assertEquals(1, remainingExecutions);
// three finished
assertEquals(3, serviceTaskInvokedCount2);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
// last task remaining
assertProcessEnded(pi.getId());
}
@Deployment
    public void testTerminateInSubProcessSequentialConcurrentMultiInstanceTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
assertProcessEnded(pi.getId());
}
@Deployment(resources = {
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateInCallActivity.bpmn",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.subProcessTerminate.bpmn"
})
public void testTerminateInCallActivity() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
// should terminate the called process and continue the parent
long executionEntities = runtimeService.createExecutionQuery().count();
assertEquals(1, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment(resources = {
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateInCallActivityMulitInstance.bpmn",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.subProcessTerminate.bpmn"
})
public void testTerminateInCallActivityMultiInstance() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
// should terminate the called process and continue the parent
long executionEntities = runtimeService.createExecutionQuery().count();
assertEquals(1, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment(resources = {
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateInCallActivityMulitInstance.bpmn",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.subProcessTerminateTerminateAll.bpmn20.xml" })
public void testTerminateInCallActivityMultiInstanceTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
assertProcessEnded(pi.getId());
}
@Deployment
public void testMiCallActivityParallel() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("testMiCallActivity");
List<org.flowable.task.api.Task> aTasks = taskService.createTaskQuery().taskName("A").list();
assertEquals(5, aTasks.size());
List<org.flowable.task.api.Task> bTasks = taskService.createTaskQuery().taskName("B").list();
assertEquals(5, bTasks.size());
// Completing B should terminate one instance (it goes to a terminate end event)
int bTasksCompleted = 0;
for (org.flowable.task.api.Task bTask : bTasks) {
taskService.complete(bTask.getId());
bTasksCompleted++;
aTasks = taskService.createTaskQuery().taskName("A").list();
assertEquals(5 - bTasksCompleted, aTasks.size());
}
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).singleResult();
assertEquals("After call activity", task.getName());
taskService.complete(task.getId());
assertProcessEnded(processInstance.getId());
}
@Deployment
public void testMiCallActivitySequential() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("testMiCallActivity");
List<org.flowable.task.api.Task> aTasks = taskService.createTaskQuery().taskName("A").list();
assertEquals(1, aTasks.size());
List<org.flowable.task.api.Task> bTasks = taskService.createTaskQuery().taskName("B").list();
assertEquals(1, bTasks.size());
// Completing B should terminate one instance (it goes to a terminate end event)
for (int i = 0; i < 9; i++) {
org.flowable.task.api.Task bTask = taskService.createTaskQuery().taskName("B").singleResult();
taskService.complete(bTask.getId());
if (i != 8) {
aTasks = taskService.createTaskQuery().taskName("A").list();
assertEquals(1, aTasks.size());
bTasks = taskService.createTaskQuery().taskName("B").list();
assertEquals(1, bTasks.size());
}
}
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).singleResult();
assertEquals("After call activity", task.getName());
taskService.complete(task.getId());
assertProcessEnded(processInstance.getId());
}
@Deployment(resources = {
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateInCallActivityConcurrent.bpmn",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.subProcessConcurrentTerminate.bpmn"
})
public void testTerminateInCallActivityConcurrent() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
// should terminate the called process and continue the parent
long executionEntities = runtimeService.createExecutionQuery().count();
assertEquals(1, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment(resources = {
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateInCallActivityConcurrent.bpmn",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.subProcessConcurrentTerminateTerminateAll.bpmn20.xml"
})
public void testTerminateInCallActivityConcurrentTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
assertProcessEnded(pi.getId());
}
@Deployment(resources = {
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateInCallActivityConcurrentMulitInstance.bpmn",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.subProcessConcurrentTerminate.bpmn"
})
    public void testTerminateInCallActivityConcurrentMultiInstance() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
// should terminate the called process and continue the parent
long executionEntities = runtimeService.createExecutionQuery().count();
assertEquals(1, executionEntities);
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(pi.getId()).taskDefinitionKey("preNormalEnd").singleResult();
taskService.complete(task.getId());
assertProcessEnded(pi.getId());
}
@Deployment(resources = {
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.testTerminateInCallActivityConcurrentMulitInstance.bpmn",
"org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.subProcessConcurrentTerminateTerminateAll.bpmn20.xml" })
    public void testTerminateInCallActivityConcurrentMultiInstanceTerminateAll() throws Exception {
ProcessInstance pi = runtimeService.startProcessInstanceByKey("terminateEndEventExample");
assertProcessEnded(pi.getId());
}
@Deployment
public void testTerminateNestedSubprocesses() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestTerminateNestedSubprocesses");
List<org.flowable.task.api.Task> tasks = taskService.createTaskQuery().processInstanceId(processInstance.getId()).orderByTaskName().asc().list();
assertEquals("A", tasks.get(0).getName());
assertEquals("B", tasks.get(1).getName());
assertEquals("D", tasks.get(2).getName());
assertEquals("E", tasks.get(3).getName());
assertEquals("F", tasks.get(4).getName());
// Completing E should finish the lower subprocess and make 'H' active
taskService.complete(tasks.get(3).getId());
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("H").singleResult();
assertNotNull(task);
// Completing A should make C active
taskService.complete(tasks.get(0).getId());
task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("C").singleResult();
assertNotNull(task);
// Completing C should make I active
taskService.complete(task.getId());
task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("I").singleResult();
assertNotNull(task);
// Completing I and B should make G active
taskService.complete(task.getId());
task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("G").singleResult();
assertNull(task);
taskService.complete(tasks.get(1).getId());
task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("G").singleResult();
assertNotNull(task);
}
@Deployment
public void testTerminateNestedSubprocessesTerminateAll1() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestTerminateNestedSubprocesses");
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("E").singleResult();
        // Completing E leads to a terminate end event with terminate all set to true
taskService.complete(task.getId());
assertProcessEnded(processInstance.getId());
}
@Deployment
public void testTerminateNestedSubprocessesTerminateAll2() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestTerminateNestedSubprocesses");
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("A").singleResult();
        // Completing A and C leads to a terminate end event with terminate all set to true
taskService.complete(task.getId());
task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("C").singleResult();
taskService.complete(task.getId());
assertProcessEnded(processInstance.getId());
}
@Deployment
public void testTerminateNestedMiSubprocesses() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestTerminateNestedMiSubprocesses");
taskService.complete(taskService.createTaskQuery().taskName("A").singleResult().getId());
        // Should have 7 'C' tasks active
List<org.flowable.task.api.Task> tasks = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("C").list();
assertEquals(7, tasks.size());
// Completing these should lead to task I being active
for (org.flowable.task.api.Task task : tasks) {
taskService.complete(task.getId());
}
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("I").singleResult();
assertNotNull(task);
// Should have 3 instances of E active
tasks = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("E").list();
assertEquals(3, tasks.size());
// Completing these should make H active
for (org.flowable.task.api.Task t : tasks) {
taskService.complete(t.getId());
}
task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("H").singleResult();
assertNotNull(task);
}
@Deployment
public void testTerminateNestedMiSubprocessesSequential() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestTerminateNestedMiSubprocesses");
taskService.complete(taskService.createTaskQuery().taskName("A").singleResult().getId());
        // Should have 7 'C' tasks active, one after another
for (int i = 0; i < 7; i++) {
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("C").singleResult();
taskService.complete(task.getId());
}
// I should be active now
assertNotNull(taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("I").singleResult());
// Should have 3 instances of E active after each other
for (int i = 0; i < 3; i++) {
assertEquals(1, taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("D").count());
assertEquals(1, taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("F").count());
// Completing F should not finish the subprocess
taskService.complete(taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("F").singleResult().getId());
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("E").singleResult();
taskService.complete(task.getId());
}
assertNotNull(taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("H").singleResult());
}
@Deployment
public void testTerminateNestedMiSubprocessesTerminateAll1() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestTerminateNestedMiSubprocesses");
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("E").list().get(0);
taskService.complete(task.getId());
assertProcessEnded(processInstance.getId());
}
@Deployment
public void testTerminateNestedMiSubprocessesTerminateAll2() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestTerminateNestedMiSubprocesses");
taskService.complete(taskService.createTaskQuery().taskName("A").singleResult().getId());
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("C").list().get(0);
taskService.complete(task.getId());
assertProcessEnded(processInstance.getId());
}
@Deployment
public void testTerminateNestedMiSubprocessesTerminateAll3() { // Same as 1, but sequential Multi-Instance
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestTerminateNestedMiSubprocesses");
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("E").list().get(0);
taskService.complete(task.getId());
assertProcessEnded(processInstance.getId());
}
@Deployment
public void testTerminateNestedMiSubprocessesTerminateAll4() { // Same as 2, but sequential Multi-Instance
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestTerminateNestedMiSubprocesses");
taskService.complete(taskService.createTaskQuery().taskName("A").singleResult().getId());
org.flowable.task.api.Task task = taskService.createTaskQuery().processInstanceId(processInstance.getId()).taskName("C").list().get(0);
taskService.complete(task.getId());
assertProcessEnded(processInstance.getId());
}
@Deployment
public void testNestedCallActivities() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestNestedCallActivities");
// Verify the tasks
List<org.flowable.task.api.Task> tasks = assertTaskNames(processInstance,
Arrays.asList("B", "B", "B", "B", "Before A", "Before A", "Before A", "Before A", "Before B", "Before C"));
// Completing 'before c'
taskService.complete(tasks.get(9).getId());
tasks = assertTaskNames(processInstance,
Arrays.asList("After C", "B", "B", "B", "B", "Before A", "Before A", "Before A", "Before A", "Before B"));
// Completing 'before A' of one instance
org.flowable.task.api.Task task = taskService.createTaskQuery().taskName("task_subprocess_1").singleResult();
assertNull(task);
taskService.complete(tasks.get(5).getId());
// Multi instance call activity is sequential, so expecting 5 more times the same task
for (int i = 0; i < 6; i++) {
task = taskService.createTaskQuery().taskName("subprocess1_task").singleResult();
assertNotNull("Task is null for index " + i, task);
taskService.complete(task.getId());
}
tasks = assertTaskNames(processInstance,
Arrays.asList("After A", "After C", "B", "B", "B", "B", "Before A", "Before A", "Before A", "Before B"));
}
@Deployment
public void testNestedCallActivitiesTerminateAll() {
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("TestNestedCallActivities");
// Verify the tasks
List<org.flowable.task.api.Task> tasks = assertTaskNames(processInstance,
Arrays.asList("B", "B", "B", "B", "Before A", "Before A", "Before A", "Before A", "Before B", "Before C"));
// Completing 'Before B' should lead to process instance termination
taskService.complete(tasks.get(8).getId());
assertProcessEnded(processInstance.getId());
// Completing 'Before C' too
processInstance = runtimeService.startProcessInstanceByKey("TestNestedCallActivities");
tasks = assertTaskNames(processInstance,
Arrays.asList("B", "B", "B", "B", "Before A", "Before A", "Before A", "Before A", "Before B", "Before C"));
taskService.complete(tasks.get(9).getId());
assertProcessEnded(processInstance.getId());
// Now the tricky one. 'Before A' leads to 'callActivity A', which calls subprocess02 which terminates
processInstance = runtimeService.startProcessInstanceByKey("TestNestedCallActivities");
tasks = assertTaskNames(processInstance,
Arrays.asList("B", "B", "B", "B", "Before A", "Before A", "Before A", "Before A", "Before B", "Before C"));
taskService.complete(tasks.get(5).getId());
org.flowable.task.api.Task task = taskService.createTaskQuery().taskName("subprocess1_task").singleResult();
assertNotNull(task);
taskService.complete(task.getId());
assertProcessEnded(processInstance.getId());
}
private List<org.flowable.task.api.Task> assertTaskNames(ProcessInstance processInstance, List<String> taskNames) {
List<org.flowable.task.api.Task> tasks = taskService.createTaskQuery().processInstanceId(processInstance.getId()).orderByTaskName().asc().list();
for (int i = 0; i < taskNames.size(); i++) {
assertEquals("Task name at index " + i + " does not match", taskNames.get(i), tasks.get(i).getName());
}
return tasks;
}
public void testParseTerminateEndEventDefinitionWithExtensions() {
org.flowable.engine.repository.Deployment deployment = repositoryService.createDeployment().addClasspathResource("org/activiti/engine/test/bpmn/event/end/TerminateEndEventTest.parseExtensionElements.bpmn20.xml").deploy();
ProcessDefinition processDefinitionQuery = repositoryService.createProcessDefinitionQuery().deploymentId(deployment.getId()).singleResult();
BpmnModel bpmnModel = this.processEngineConfiguration.getProcessDefinitionCache()
.get(processDefinitionQuery.getId()).getBpmnModel();
Map<String, List<ExtensionElement>> extensionElements = bpmnModel.getProcesses().get(0)
.findFlowElementsOfType(EndEvent.class).get(0).getExtensionElements();
assertThat(extensionElements.size(), is(1));
List<ExtensionElement> strangeProperties = extensionElements.get("strangeProperty");
assertThat(strangeProperties.size(), is(1));
ExtensionElement strangeProperty = strangeProperties.get(0);
assertThat(strangeProperty.getNamespace(), is("path_to_url"));
assertThat(strangeProperty.getElementText(), is("value"));
assertThat(strangeProperty.getAttributes().size(), is(1));
ExtensionAttribute id = strangeProperty.getAttributes().get("id").get(0);
assertThat(id.getName(), is("id"));
assertThat(id.getValue(), is("strangeId"));
repositoryService.deleteDeployment(deployment.getId());
}
// Unit test for ACT-4101 : NPE when there are multiple routes to terminateEndEvent, and both are reached
@Deployment
public void testThreeExecutionsArrivingInTerminateEndEvent() {
Map<String, Object> variableMap = new HashMap<String, Object>();
variableMap.put("passed_QC", false);
variableMap.put("has_bad_pixel_pattern", true);
ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("skybox_image_pull_request", variableMap);
String processInstanceId = processInstance.getId();
assertNotNull(processInstance);
while (processInstance != null) {
List<Execution> executionList = runtimeService.createExecutionQuery().processInstanceId(processInstance.getId()).list();
String activityId = "";
for (Execution execution : executionList) {
activityId = execution.getActivityId();
if (activityId == null
|| activityId.equalsIgnoreCase("quality_control_passed_gateway")
|| activityId.equalsIgnoreCase("parallelgateway1")
|| activityId.equalsIgnoreCase("catch_bad_pixel_signal")
|| activityId.equalsIgnoreCase("throw_bad_pixel_signal")
|| activityId.equalsIgnoreCase("has_bad_pixel_pattern")
|| activityId.equalsIgnoreCase("")) {
continue;
}
runtimeService.trigger(execution.getId());
}
processInstance = runtimeService.createProcessInstanceQuery().processInstanceId(processInstance.getId()).singleResult();
}
assertProcessEnded(processInstanceId);
}
}
```
|
```go
// Unless explicitly stated otherwise all files in this repository are licensed
// under the Apache License Version 2.0.
// This product includes software developed at Datadog (path_to_url).
//go:build windows
// +build windows
package controlsvc
import (
"testing"
"github.com/DataDog/datadog-agent/cmd/trace-agent/subcommands"
"github.com/DataDog/datadog-agent/cmd/trace-agent/windows/controlsvc"
"github.com/DataDog/datadog-agent/pkg/util/fxutil"
)
func TestStartServiceCommand(t *testing.T) {
fxutil.TestOneShotSubcommand(t,
Commands(func() *subcommands.GlobalParams {
return &subcommands.GlobalParams{}
}),
[]string{"start-service"},
controlsvc.StartService,
func() {})
}
func TestStopServiceCommand(t *testing.T) {
fxutil.TestOneShotSubcommand(t,
Commands(func() *subcommands.GlobalParams {
return &subcommands.GlobalParams{}
}),
[]string{"stop-service"},
controlsvc.StopService,
func() {})
}
func TestRestartServiceCommand(t *testing.T) {
fxutil.TestOneShotSubcommand(t,
Commands(func() *subcommands.GlobalParams {
return &subcommands.GlobalParams{}
}),
[]string{"restart-service"},
controlsvc.RestartService,
func() {})
}
```
|
```csharp
#region Copyright (c) 2015 KEngine
// KEngine - Toolset and framework for Unity3D
// ===================================
//
// Filename: AppEngine.cs
// Date: 2015/12/03
// Author: Kelly
// Email: 23110388@qq.com
// Github: path_to_url
//
// This library is free software; you can redistribute it and/or
// modify it under the terms of the GNU Lesser General Public
// License as published by the Free Software Foundation; either
// version 3.0 of the License, or (at your option) any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
// Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public
// License along with this library.
#endregion
using System;
using System.Collections.Generic;
using UnityEngine;
using System.IO;
using System.Text;
using TableML;
//using KEngine.Table;
namespace KEngine.Modules
{
/// <summary>
/// Unity SettingModule, with Resources.Load in product, with File.Read in editor
/// </summary>
public class SettingModule : SettingModuleBase
{
private static readonly bool IsEditor;
static SettingModule()
{
IsEditor = Application.isEditor;
}
/// <summary>
/// internal constructor
/// </summary>
internal SettingModule()
{
}
/// <summary>
/// Singleton
/// </summary>
private static SettingModule _instance;
/// <summary>
/// Quick method to get TableFile from instance
/// </summary>
/// <param name="path"></param>
/// <param name="useCache"></param>
/// <returns></returns>
public static TableFile Get(string path, bool useCache = true)
{
if (_instance == null)
_instance = new SettingModule();
return _instance.GetTableFile(path, useCache);
}
/// <summary>
/// Unity Resources.Load setting file in Resources folder
/// </summary>
/// <param name="path"></param>
/// <returns></returns>
protected override string LoadSetting(string path)
{
byte[] fileContent = KResourceModule.LoadAssetsSync(GetSettingFilePath(path));
return Encoding.UTF8.GetString(fileContent);
}
/// <summary>
/// Settings
/// </summary>
/// <param name="path"></param>
/// <returns></returns>
public static string GetSettingFilePath(string path)
{
return AppConfig.SettingResourcesPath + "/" + path;
}
#if UNITY_EDITOR
/// <summary>
/// Cache all the FileSystemWatcher, prevent the duplicated one
/// </summary>
private static Dictionary<string, FileSystemWatcher> _cacheWatchers;
/// <summary>
/// Watch the setting file, when changed, trigger the delegate
/// </summary>
/// <param name="path"></param>
/// <param name="action"></param>
public static void WatchSetting(string path, System.Action<string> action)
{
if (!IsFileSystemMode)
{
Log.Error("[WatchSetting] Available in Unity Editor mode only!");
return;
}
if (_cacheWatchers == null)
_cacheWatchers = new Dictionary<string, FileSystemWatcher>();
FileSystemWatcher watcher;
var dirPath = Path.GetDirectoryName(KResourceModule.EditorProductFullPath + "/" + AppConfig.SettingResourcesPath + "/" + path);
dirPath = dirPath.Replace("\\", "/");
//if(Application.isEditor) Log.Info($"watch:{path}\n{dirPath}");
if (!Directory.Exists(dirPath))
{
Log.Error("[WatchSetting] Not found Dir: {0}", dirPath);
return;
}
if (!_cacheWatchers.TryGetValue(dirPath, out watcher))
{
_cacheWatchers[dirPath] = watcher = new FileSystemWatcher(dirPath);
Log.Info("Watching Setting Dir: {0}", dirPath);
}
watcher.IncludeSubdirectories = false;
watcher.Path = dirPath;
watcher.NotifyFilter = NotifyFilters.LastWrite;
watcher.Filter = "*";
watcher.EnableRaisingEvents = true;
watcher.InternalBufferSize = 2048;
watcher.Changed += (sender, e) =>
{
Log.LogConsole_MultiThread("Setting changed: {0}", e.FullPath);
action.Invoke(path);
};
}
#endif
/// <summary>
/// whether or not using file system file, in unity editor mode only
/// </summary>
public static bool IsFileSystemMode
{
get
{
if (IsEditor)
return true;
return false;
}
}
}
}
```
|
```csharp
using System.Text.Json;
namespace Microsoft.AspNetCore.Components
{
internal static class JsonSerializerOptionsProvider
{
public static readonly JsonSerializerOptions Options = new JsonSerializerOptions(JsonSerializerDefaults.Web);
}
}
```
|
```go
package client
import (
"io"
"net/http"
"net/url"
"golang.org/x/net/context"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
)
// ImagePull requests the docker host to pull an image from a remote registry.
// It executes the privileged function if the operation is unauthorized
// and it tries one more time.
// It's up to the caller to handle the io.ReadCloser and close it properly.
//
// FIXME(vdemeester): this is currently used in a few ways in docker/docker:
// - if not in trusted content, ref is used to pass the whole reference, and tag is empty
// - if in trusted content, ref is used to pass the reference name, and tag for the digest
func (cli *Client) ImagePull(ctx context.Context, refStr string, options types.ImagePullOptions) (io.ReadCloser, error) {
ref, err := reference.ParseNormalizedNamed(refStr)
if err != nil {
return nil, err
}
query := url.Values{}
query.Set("fromImage", reference.FamiliarName(ref))
if !options.All {
query.Set("tag", getAPITagFromNamedRef(ref))
}
resp, err := cli.tryImageCreate(ctx, query, options.RegistryAuth)
if resp.statusCode == http.StatusUnauthorized && options.PrivilegeFunc != nil {
newAuthHeader, privilegeErr := options.PrivilegeFunc()
if privilegeErr != nil {
return nil, privilegeErr
}
resp, err = cli.tryImageCreate(ctx, query, newAuthHeader)
}
if err != nil {
return nil, err
}
return resp.body, nil
}
// getAPITagFromNamedRef returns a tag from the specified reference.
// This function is necessary as long as the docker "server" api expects
// digests to be sent as tags and makes a distinction between the name
// and tag/digest part of a reference.
func getAPITagFromNamedRef(ref reference.Named) string {
if digested, ok := ref.(reference.Digested); ok {
return digested.Digest().String()
}
ref = reference.TagNameOnly(ref)
if tagged, ok := ref.(reference.Tagged); ok {
return tagged.Tag()
}
return ""
}
```
|
```html
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width">
<title>DNS | Node.js v12.18.3 Documentation</title>
<link rel="stylesheet" href="path_to_url">
<link rel="stylesheet" href="assets/style.css">
<link rel="stylesheet" href="assets/hljs.css">
<link rel="canonical" href="path_to_url">
</head>
<body class="alt apidoc" id="api-section-dns">
<div id="content" class="clearfix">
<div id="column2" class="interior">
<div id="intro" class="interior">
<a href="/" title="Go back to the home page">
Node.js
</a>
</div>
<ul>
<li><a href="documentation.html" class="nav-documentation">About these docs</a></li>
<li><a href="synopsis.html" class="nav-synopsis">Usage and example</a></li>
</ul>
<div class="line"></div>
<ul>
<li><a href="assert.html" class="nav-assert">Assertion testing</a></li>
<li><a href="async_hooks.html" class="nav-async_hooks">Async hooks</a></li>
<li><a href="buffer.html" class="nav-buffer">Buffer</a></li>
<li><a href="addons.html" class="nav-addons">C++ addons</a></li>
<li><a href="n-api.html" class="nav-n-api">C/C++ addons with N-API</a></li>
<li><a href="child_process.html" class="nav-child_process">Child processes</a></li>
<li><a href="cluster.html" class="nav-cluster">Cluster</a></li>
<li><a href="cli.html" class="nav-cli">Command line options</a></li>
<li><a href="console.html" class="nav-console">Console</a></li>
<li><a href="crypto.html" class="nav-crypto">Crypto</a></li>
<li><a href="debugger.html" class="nav-debugger">Debugger</a></li>
<li><a href="deprecations.html" class="nav-deprecations">Deprecated APIs</a></li>
<li><a href="dns.html" class="nav-dns active">DNS</a></li>
<li><a href="domain.html" class="nav-domain">Domain</a></li>
<li><a href="esm.html" class="nav-esm">ECMAScript modules</a></li>
<li><a href="errors.html" class="nav-errors">Errors</a></li>
<li><a href="events.html" class="nav-events">Events</a></li>
<li><a href="fs.html" class="nav-fs">File system</a></li>
<li><a href="globals.html" class="nav-globals">Globals</a></li>
<li><a href="http.html" class="nav-http">HTTP</a></li>
<li><a href="http2.html" class="nav-http2">HTTP/2</a></li>
<li><a href="https.html" class="nav-https">HTTPS</a></li>
<li><a href="inspector.html" class="nav-inspector">Inspector</a></li>
<li><a href="intl.html" class="nav-intl">Internationalization</a></li>
<li><a href="modules.html" class="nav-modules">Modules</a></li>
<li><a href="net.html" class="nav-net">Net</a></li>
<li><a href="os.html" class="nav-os">OS</a></li>
<li><a href="path.html" class="nav-path">Path</a></li>
<li><a href="perf_hooks.html" class="nav-perf_hooks">Performance hooks</a></li>
<li><a href="policy.html" class="nav-policy">Policies</a></li>
<li><a href="process.html" class="nav-process">Process</a></li>
<li><a href="punycode.html" class="nav-punycode">Punycode</a></li>
<li><a href="querystring.html" class="nav-querystring">Query strings</a></li>
<li><a href="readline.html" class="nav-readline">Readline</a></li>
<li><a href="repl.html" class="nav-repl">REPL</a></li>
<li><a href="report.html" class="nav-report">Report</a></li>
<li><a href="stream.html" class="nav-stream">Stream</a></li>
<li><a href="string_decoder.html" class="nav-string_decoder">String decoder</a></li>
<li><a href="timers.html" class="nav-timers">Timers</a></li>
<li><a href="tls.html" class="nav-tls">TLS/SSL</a></li>
<li><a href="tracing.html" class="nav-tracing">Trace events</a></li>
<li><a href="tty.html" class="nav-tty">TTY</a></li>
<li><a href="dgram.html" class="nav-dgram">UDP/datagram</a></li>
<li><a href="url.html" class="nav-url">URL</a></li>
<li><a href="util.html" class="nav-util">Utilities</a></li>
<li><a href="v8.html" class="nav-v8">V8</a></li>
<li><a href="vm.html" class="nav-vm">VM</a></li>
<li><a href="wasi.html" class="nav-wasi">WASI</a></li>
<li><a href="worker_threads.html" class="nav-worker_threads">Worker threads</a></li>
<li><a href="zlib.html" class="nav-zlib">Zlib</a></li>
</ul>
<div class="line"></div>
<ul>
<li><a href="path_to_url" class="nav-https-github-com-nodejs-node">Code repository and issue tracker</a></li>
</ul>
</div>
<div id="column1" data-id="dns" class="interior">
<header>
<h1>Node.js v12.18.3 Documentation</h1>
<div id="gtoc">
<ul>
<li>
<a href="index.html" name="toc">Index</a>
</li>
<li>
<a href="all.html">View on single page</a>
</li>
<li>
<a href="dns.json">View as JSON</a>
</li>
<li class="version-picker">
<a href="#">View another version <span>▼</span></a>
<ol class="version-picker"><li><a href="path_to_url">14.x</a></li>
<li><a href="path_to_url">13.x</a></li>
<li><a href="path_to_url">12.x <b>LTS</b></a></li>
<li><a href="path_to_url">11.x</a></li>
<li><a href="path_to_url">10.x <b>LTS</b></a></li>
<li><a href="path_to_url">9.x</a></li>
<li><a href="path_to_url">8.x</a></li>
<li><a href="path_to_url">7.x</a></li>
<li><a href="path_to_url">6.x</a></li>
<li><a href="path_to_url">5.x</a></li>
<li><a href="path_to_url">4.x</a></li>
<li><a href="path_to_url">0.12.x</a></li>
<li><a href="path_to_url">0.10.x</a></li></ol>
</li>
<li class="edit_on_github"><a href="path_to_url"><span class="github_icon"><svg height="16" width="16" viewBox="0 0 16.1 16.1" fill="currentColor"><path d="M8 0a8 8 0 0 0-2.5 15.6c.4 0 .5-.2.5-.4v-1.5c-2 .4-2.5-.5-2.7-1 0-.1-.5-.9-.8-1-.3-.2-.7-.6 0-.6.6 0 1 .6 1.2.8.7 1.2 1.9 1 2.4.7 0-.5.2-.9.5-1-1.8-.3-3.7-1-3.7-4 0-.9.3-1.6.8-2.2 0-.2-.3-1 .1-2 0 0 .7-.3 2.2.7a7.4 7.4 0 0 1 4 0c1.5-1 2.2-.8 2.2-.8.5 1.1.2 2 .1 2.1.5.6.8 1.3.8 2.2 0 3-1.9 3.7-3.6 4 .3.2.5.7.5 1.4v2.2c0 .2.1.5.5.4A8 8 0 0 0 16 8a8 8 0 0 0-8-8z"/></svg></span>Edit on GitHub</a></li>
</ul>
</div>
<hr>
</header>
<div id="toc">
<h2>Table of Contents</h2>
<ul>
<li>
<p><span class="stability_2"><a href="#dns_dns">DNS</a></span></p>
<ul>
<li>
<p><a href="#dns_class_dns_resolver">Class: <code>dns.Resolver</code></a></p>
<ul>
<li><a href="#dns_resolver_options"><code>Resolver([options])</code></a></li>
<li><a href="#dns_resolver_cancel"><code>resolver.cancel()</code></a></li>
</ul>
</li>
<li><a href="#dns_dns_getservers"><code>dns.getServers()</code></a></li>
<li>
<p><a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup(hostname[, options], callback)</code></a></p>
<ul>
<li><a href="#dns_supported_getaddrinfo_flags">Supported getaddrinfo flags</a></li>
</ul>
</li>
<li><a href="#dns_dns_lookupservice_address_port_callback"><code>dns.lookupService(address, port, callback)</code></a></li>
<li><a href="#dns_dns_resolve_hostname_rrtype_callback"><code>dns.resolve(hostname[, rrtype], callback)</code></a></li>
<li><a href="#dns_dns_resolve4_hostname_options_callback"><code>dns.resolve4(hostname[, options], callback)</code></a></li>
<li><a href="#dns_dns_resolve6_hostname_options_callback"><code>dns.resolve6(hostname[, options], callback)</code></a></li>
<li><a href="#dns_dns_resolveany_hostname_callback"><code>dns.resolveAny(hostname, callback)</code></a></li>
<li><a href="#dns_dns_resolvecname_hostname_callback"><code>dns.resolveCname(hostname, callback)</code></a></li>
<li><a href="#dns_dns_resolvemx_hostname_callback"><code>dns.resolveMx(hostname, callback)</code></a></li>
<li><a href="#dns_dns_resolvenaptr_hostname_callback"><code>dns.resolveNaptr(hostname, callback)</code></a></li>
<li><a href="#dns_dns_resolvens_hostname_callback"><code>dns.resolveNs(hostname, callback)</code></a></li>
<li><a href="#dns_dns_resolveptr_hostname_callback"><code>dns.resolvePtr(hostname, callback)</code></a></li>
<li><a href="#dns_dns_resolvesoa_hostname_callback"><code>dns.resolveSoa(hostname, callback)</code></a></li>
<li><a href="#dns_dns_resolvesrv_hostname_callback"><code>dns.resolveSrv(hostname, callback)</code></a></li>
<li><a href="#dns_dns_resolvetxt_hostname_callback"><code>dns.resolveTxt(hostname, callback)</code></a></li>
<li><a href="#dns_dns_reverse_ip_callback"><code>dns.reverse(ip, callback)</code></a></li>
<li><a href="#dns_dns_setservers_servers"><code>dns.setServers(servers)</code></a></li>
<li>
<p><a href="#dns_dns_promises_api">DNS promises API</a></p>
<ul>
<li><a href="#dns_class_dnspromises_resolver">Class: <code>dnsPromises.Resolver</code></a></li>
<li><a href="#dns_dnspromises_getservers"><code>dnsPromises.getServers()</code></a></li>
<li><a href="#dns_dnspromises_lookup_hostname_options"><code>dnsPromises.lookup(hostname[, options])</code></a></li>
<li><a href="#dns_dnspromises_lookupservice_address_port"><code>dnsPromises.lookupService(address, port)</code></a></li>
<li><a href="#dns_dnspromises_resolve_hostname_rrtype"><code>dnsPromises.resolve(hostname[, rrtype])</code></a></li>
<li><a href="#dns_dnspromises_resolve4_hostname_options"><code>dnsPromises.resolve4(hostname[, options])</code></a></li>
<li><a href="#dns_dnspromises_resolve6_hostname_options"><code>dnsPromises.resolve6(hostname[, options])</code></a></li>
<li><a href="#dns_dnspromises_resolveany_hostname"><code>dnsPromises.resolveAny(hostname)</code></a></li>
<li><a href="#dns_dnspromises_resolvecname_hostname"><code>dnsPromises.resolveCname(hostname)</code></a></li>
<li><a href="#dns_dnspromises_resolvemx_hostname"><code>dnsPromises.resolveMx(hostname)</code></a></li>
<li><a href="#dns_dnspromises_resolvenaptr_hostname"><code>dnsPromises.resolveNaptr(hostname)</code></a></li>
<li><a href="#dns_dnspromises_resolvens_hostname"><code>dnsPromises.resolveNs(hostname)</code></a></li>
<li><a href="#dns_dnspromises_resolveptr_hostname"><code>dnsPromises.resolvePtr(hostname)</code></a></li>
<li><a href="#dns_dnspromises_resolvesoa_hostname"><code>dnsPromises.resolveSoa(hostname)</code></a></li>
<li><a href="#dns_dnspromises_resolvesrv_hostname"><code>dnsPromises.resolveSrv(hostname)</code></a></li>
<li><a href="#dns_dnspromises_resolvetxt_hostname"><code>dnsPromises.resolveTxt(hostname)</code></a></li>
<li><a href="#dns_dnspromises_reverse_ip"><code>dnsPromises.reverse(ip)</code></a></li>
<li><a href="#dns_dnspromises_setservers_servers"><code>dnsPromises.setServers(servers)</code></a></li>
</ul>
</li>
<li><a href="#dns_error_codes">Error codes</a></li>
<li>
<p><a href="#dns_implementation_considerations">Implementation considerations</a></p>
<ul>
<li><a href="#dns_dns_lookup"><code>dns.lookup()</code></a></li>
<li><a href="#dns_dns_resolve_dns_resolve_and_dns_reverse"><code>dns.resolve()</code>, <code>dns.resolve*()</code> and <code>dns.reverse()</code></a></li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
<div id="apicontent">
<h1>DNS<span><a class="mark" href="#dns_dns" id="dns_dns">#</a></span></h1>
<p></p><div class="api_stability api_stability_2"><a href="documentation.html#documentation_stability_index">Stability: 2</a> - Stable</div><p></p>
<p>The <code>dns</code> module enables name resolution. For example, use it to look up IP
addresses of host names.</p>
<p>Although named for the <a href="path_to_url">Domain Name System (DNS)</a>, it does not always use the
DNS protocol for lookups. <a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a> uses the operating system
facilities to perform name resolution. It may not need to perform any network
communication. To perform name resolution the way other applications on the same
system do, use <a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a>.</p>
<pre><code class="language-js">const dns = require('dns');
dns.lookup('example.org', (err, address, family) => {
console.log('address: %j family: IPv%s', address, family);
});
// address: "93.184.216.34" family: IPv4
</code></pre>
<p>All other functions in the <code>dns</code> module connect to an actual DNS server to
perform name resolution. They will always use the network to perform DNS
queries. These functions do not use the same set of configuration files used by
<a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a> (e.g. <code>/etc/hosts</code>). Use these functions to always perform
DNS queries, bypassing other name-resolution facilities.</p>
<pre><code class="language-js">const dns = require('dns');
dns.resolve4('archive.org', (err, addresses) => {
if (err) throw err;
console.log(`addresses: ${JSON.stringify(addresses)}`);
addresses.forEach((a) => {
dns.reverse(a, (err, hostnames) => {
if (err) {
throw err;
}
console.log(`reverse for ${a}: ${JSON.stringify(hostnames)}`);
});
});
});
</code></pre>
<p>See the <a href="#dns_implementation_considerations">Implementation considerations section</a> for more information.</p>
<h2>Class: <code>dns.Resolver</code><span><a class="mark" href="#dns_class_dns_resolver" id="dns_class_dns_resolver">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v8.3.0</span>
</div>
<p>An independent resolver for DNS requests.</p>
<p>Creating a new resolver uses the default server settings. Setting
the servers used for a resolver using
<a href="#dns_dns_setservers_servers"><code>resolver.setServers()</code></a> does not affect
other resolvers:</p>
<pre><code class="language-js">const { Resolver } = require('dns');
const resolver = new Resolver();
resolver.setServers(['4.4.4.4']);
// This request will use the server at 4.4.4.4, independent of global settings.
resolver.resolve4('example.org', (err, addresses) => {
// ...
});
</code></pre>
<p>The following methods from the <code>dns</code> module are available:</p>
<ul>
<li><a href="#dns_dns_getservers"><code>resolver.getServers()</code></a></li>
<li><a href="#dns_dns_resolve_hostname_rrtype_callback"><code>resolver.resolve()</code></a></li>
<li><a href="#dns_dns_resolve4_hostname_options_callback"><code>resolver.resolve4()</code></a></li>
<li><a href="#dns_dns_resolve6_hostname_options_callback"><code>resolver.resolve6()</code></a></li>
<li><a href="#dns_dns_resolveany_hostname_callback"><code>resolver.resolveAny()</code></a></li>
<li><a href="#dns_dns_resolvecname_hostname_callback"><code>resolver.resolveCname()</code></a></li>
<li><a href="#dns_dns_resolvemx_hostname_callback"><code>resolver.resolveMx()</code></a></li>
<li><a href="#dns_dns_resolvenaptr_hostname_callback"><code>resolver.resolveNaptr()</code></a></li>
<li><a href="#dns_dns_resolvens_hostname_callback"><code>resolver.resolveNs()</code></a></li>
<li><a href="#dns_dns_resolveptr_hostname_callback"><code>resolver.resolvePtr()</code></a></li>
<li><a href="#dns_dns_resolvesoa_hostname_callback"><code>resolver.resolveSoa()</code></a></li>
<li><a href="#dns_dns_resolvesrv_hostname_callback"><code>resolver.resolveSrv()</code></a></li>
<li><a href="#dns_dns_resolvetxt_hostname_callback"><code>resolver.resolveTxt()</code></a></li>
<li><a href="#dns_dns_reverse_ip_callback"><code>resolver.reverse()</code></a></li>
<li><a href="#dns_dns_setservers_servers"><code>resolver.setServers()</code></a></li>
</ul>
<h3><code>Resolver([options])</code><span><a class="mark" href="#dns_resolver_options" id="dns_resolver_options">#</a></span></h3>
<div class="api_metadata">
<details class="changelog"><summary>History</summary>
<table>
<tbody><tr><th>Version</th><th>Changes</th></tr>
<tr><td>v12.18.3</td>
<td><p>The constructor now accepts an <code>options</code> object. The single supported option is <code>timeout</code>.</p></td></tr>
<tr><td>v8.3.0</td>
<td><p><span>Added in: v8.3.0</span></p></td></tr>
</tbody></table>
</details>
</div>
<p>Create a new resolver.</p>
<ul>
<li>
<p><code>options</code> <a href="path_to_url" class="type"><Object></a></p>
<ul>
<li><code>timeout</code> <a href="path_to_url#Number_type" class="type"><integer></a> Query timeout in milliseconds, or <code>-1</code> to use the
default timeout.</li>
</ul>
</li>
</ul>
<h3><code>resolver.cancel()</code><span><a class="mark" href="#dns_resolver_cancel" id="dns_resolver_cancel">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v8.3.0</span>
</div>
<p>Cancel all outstanding DNS queries made by this resolver. The corresponding
callbacks will be called with an error with code <code>ECANCELLED</code>.</p>
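<p>For example, a resolver's outstanding queries can be abandoned after a
deadline (a sketch; the one-second timeout is illustrative):</p>
<pre><code class="language-js">const { Resolver } = require('dns');
const resolver = new Resolver();

resolver.resolve4('example.org', (err, addresses) => {
  // If cancel() fires before a response arrives, err.code is 'ECANCELLED'.
});

// Give up on all queries made by this resolver after one second.
setTimeout(() => resolver.cancel(), 1000);
</code></pre>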
<h2><code>dns.getServers()</code><span><a class="mark" href="#dns_dns_getservers" id="dns_dns_getservers">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.11.3</span>
</div>
<ul>
<li>Returns: <a href="path_to_url#String_type" class="type"><string[]></a></li>
</ul>
<p>Returns an array of IP address strings, formatted according to <a href="path_to_url#section-6">RFC 5952</a>,
that are currently configured for DNS resolution. A string will include a port
section if a custom port is used.</p>
<!-- eslint-disable semi-->
<pre><code class="language-js">[
'4.4.4.4',
'2001:4860:4860::8888',
'4.4.4.4:1053',
'[2001:4860:4860::8888]:1053'
]
</code></pre>
<h2><code>dns.lookup(hostname[, options], callback)</code><span><a class="mark" href="#dns_dns_lookup_hostname_options_callback" id="dns_dns_lookup_hostname_options_callback">#</a></span></h2>
<div class="api_metadata">
<details class="changelog"><summary>History</summary>
<table>
<tbody><tr><th>Version</th><th>Changes</th></tr>
<tr><td>v8.5.0</td>
<td><p>The <code>verbatim</code> option is supported now.</p></td></tr>
<tr><td>v1.2.0</td>
<td><p>The <code>all</code> option is supported now.</p></td></tr>
<tr><td>v0.1.90</td>
<td><p><span>Added in: v0.1.90</span></p></td></tr>
</tbody></table>
</details>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>options</code> <a href="path_to_url#Number_type" class="type"><integer></a> | <a href="path_to_url" class="type"><Object></a></p>
<ul>
<li><code>family</code> <a href="path_to_url#Number_type" class="type"><integer></a> The record family. Must be <code>4</code>, <code>6</code>, or <code>0</code>. The value
<code>0</code> indicates that IPv4 and IPv6 addresses are both returned. <strong>Default:</strong>
<code>0</code>.</li>
<li><code>hints</code> <a href="path_to_url#Number_type" class="type"><number></a> One or more <a href="#dns_supported_getaddrinfo_flags">supported <code>getaddrinfo</code> flags</a>. Multiple
flags may be passed by bitwise <code>OR</code>ing their values.</li>
<li><code>all</code> <a href="path_to_url#Boolean_type" class="type"><boolean></a> When <code>true</code>, the callback returns all resolved addresses in
an array. Otherwise, returns a single address. <strong>Default:</strong> <code>false</code>.</li>
<li><code>verbatim</code> <a href="path_to_url#Boolean_type" class="type"><boolean></a> When <code>true</code>, the callback receives IPv4 and IPv6
addresses in the order the DNS resolver returned them. When <code>false</code>,
IPv4 addresses are placed before IPv6 addresses.
<strong>Default:</strong> currently <code>false</code> (addresses are reordered) but this is
expected to change in the not too distant future.
New code should use <code>{ verbatim: true }</code>.</li>
</ul>
</li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>address</code> <a href="path_to_url#String_type" class="type"><string></a> A string representation of an IPv4 or IPv6 address.</li>
<li><code>family</code> <a href="path_to_url#Number_type" class="type"><integer></a> <code>4</code> or <code>6</code>, denoting the family of <code>address</code>, or <code>0</code> if
the address is not an IPv4 or IPv6 address. <code>0</code> is a likely indicator of a
bug in the name resolution service used by the operating system.</li>
</ul>
</li>
</ul>
<p>Resolves a host name (e.g. <code>'nodejs.org'</code>) into the first found A (IPv4) or
AAAA (IPv6) record. All <code>option</code> properties are optional. If <code>options</code> is an
integer, then it must be <code>4</code> or <code>6</code>. If <code>options</code> is not provided, then IPv4
and IPv6 addresses are both returned if found.</p>
<p>With the <code>all</code> option set to <code>true</code>, the arguments for <code>callback</code> change to
<code>(err, addresses)</code>, with <code>addresses</code> being an array of objects with the
properties <code>address</code> and <code>family</code>.</p>
<p>On error, <code>err</code> is an <a href="errors.html#errors_class_error"><code>Error</code></a> object, where <code>err.code</code> is the error code.
Keep in mind that <code>err.code</code> will be set to <code>'ENOTFOUND'</code> not only when
the host name does not exist but also when the lookup fails in other ways
such as no available file descriptors.</p>
<p><code>dns.lookup()</code> does not necessarily have anything to do with the DNS protocol.
The implementation uses an operating system facility that can associate names
with addresses, and vice versa. This implementation can have subtle but
important consequences on the behavior of any Node.js program. Please take some
time to consult the <a href="#dns_implementation_considerations">Implementation considerations section</a> before using
<code>dns.lookup()</code>.</p>
<p>Example usage:</p>
<pre><code class="language-js">const dns = require('dns');
const options = {
family: 6,
hints: dns.ADDRCONFIG | dns.V4MAPPED,
};
dns.lookup('example.com', options, (err, address, family) =>
console.log('address: %j family: IPv%s', address, family));
// address: "2606:2800:220:1:248:1893:25c8:1946" family: IPv6
// When options.all is true, the result will be an Array.
options.all = true;
dns.lookup('example.com', options, (err, addresses) =>
console.log('addresses: %j', addresses));
// addresses: [{"address":"2606:2800:220:1:248:1893:25c8:1946","family":6}]
</code></pre>
<p>If this method is invoked as its <a href="util.html#util_util_promisify_original"><code>util.promisify()</code></a>ed version, and <code>all</code>
is not set to <code>true</code>, it returns a <code>Promise</code> for an <code>Object</code> with <code>address</code> and
<code>family</code> properties.</p>
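<p>Used with <code>util.promisify()</code>, the same lookup reads as follows (a sketch):</p>
<pre><code class="language-js">const util = require('util');
const dns = require('dns');
const lookup = util.promisify(dns.lookup);

lookup('example.com').then(({ address, family }) => {
  console.log(address, family);
});
</code></pre>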
<h3>Supported getaddrinfo flags<span><a class="mark" href="#dns_supported_getaddrinfo_flags" id="dns_supported_getaddrinfo_flags">#</a></span></h3>
<div class="api_metadata">
<details class="changelog"><summary>History</summary>
<table>
<tbody><tr><th>Version</th><th>Changes</th></tr>
<tr><td>v12.17.0</td>
<td><p>Added support for the <code>dns.ALL</code> flag.</p></td></tr>
</tbody></table>
</details>
</div>
<p>The following flags can be passed as hints to <a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a>.</p>
<ul>
<li><code>dns.ADDRCONFIG</code>: Returned address types are determined by the types
of addresses supported by the current system. For example, IPv4 addresses
are only returned if the current system has at least one IPv4 address
configured. Loopback addresses are not considered.</li>
<li><code>dns.V4MAPPED</code>: If the IPv6 family was specified, but no IPv6 addresses were
found, then return IPv4 mapped IPv6 addresses. It is not supported
on some operating systems (e.g. FreeBSD 10.1).</li>
<li><code>dns.ALL</code>: If <code>dns.V4MAPPED</code> is specified, return resolved IPv6 addresses as
well as IPv4 mapped IPv6 addresses.</li>
</ul>
<h2><code>dns.lookupService(address, port, callback)</code><span><a class="mark" href="#dns_dns_lookupservice_address_port_callback" id="dns_dns_lookupservice_address_port_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.11.14</span>
</div>
<ul>
<li><code>address</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li><code>port</code> <a href="path_to_url#Number_type" class="type"><number></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a> e.g. <code>example.com</code></li>
<li><code>service</code> <a href="path_to_url#String_type" class="type"><string></a> e.g. <code>http</code></li>
</ul>
</li>
</ul>
<p>Resolves the given <code>address</code> and <code>port</code> into a host name and service using
the operating system's underlying <code>getnameinfo</code> implementation.</p>
<p>If <code>address</code> is not a valid IP address, a <code>TypeError</code> will be thrown.
The <code>port</code> will be coerced to a number. If it is not a legal port, a <code>TypeError</code>
will be thrown.</p>
<p>On an error, <code>err</code> is an <a href="errors.html#errors_class_error"><code>Error</code></a> object, where <code>err.code</code> is the error code.</p>
<pre><code class="language-js">const dns = require('dns');
dns.lookupService('127.0.0.1', 22, (err, hostname, service) => {
console.log(hostname, service);
// Prints: localhost ssh
});
</code></pre>
<p>If this method is invoked as its <a href="util.html#util_util_promisify_original"><code>util.promisify()</code></a>ed version, it returns a
<code>Promise</code> for an <code>Object</code> with <code>hostname</code> and <code>service</code> properties.</p>
<h2><code>dns.resolve(hostname[, rrtype], callback)</code><span><a class="mark" href="#dns_dns_resolve_hostname_rrtype_callback" id="dns_dns_resolve_hostname_rrtype_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.1.27</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a> Host name to resolve.</li>
<li><code>rrtype</code> <a href="path_to_url#String_type" class="type"><string></a> Resource record type. <strong>Default:</strong> <code>'A'</code>.</li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>records</code> <a href="path_to_url#String_type" class="type"><string[]></a> | <a href="path_to_url" class="type"><Object[]></a> | <a href="path_to_url" class="type"><Object></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve a host name (e.g. <code>'nodejs.org'</code>) into an array
of the resource records. The <code>callback</code> function has arguments
<code>(err, records)</code>. When successful, <code>records</code> will be an array of resource
records. The type and structure of individual results varies based on <code>rrtype</code>:</p>
<table><thead><tr><th><code>rrtype</code></th><th><code>records</code> contains</th><th>Result type</th><th>Shorthand method</th></tr></thead><tbody><tr><td><code>'A'</code></td><td>IPv4 addresses (default)</td><td><a href="path_to_url#String_type" class="type"><string></a></td><td><a href="#dns_dns_resolve4_hostname_options_callback"><code>dns.resolve4()</code></a></td></tr><tr><td><code>'AAAA'</code></td><td>IPv6 addresses</td><td><a href="path_to_url#String_type" class="type"><string></a></td><td><a href="#dns_dns_resolve6_hostname_options_callback"><code>dns.resolve6()</code></a></td></tr><tr><td><code>'ANY'</code></td><td>any records</td><td><a href="path_to_url" class="type"><Object></a></td><td><a href="#dns_dns_resolveany_hostname_callback"><code>dns.resolveAny()</code></a></td></tr><tr><td><code>'CNAME'</code></td><td>canonical name records</td><td><a href="path_to_url#String_type" class="type"><string></a></td><td><a href="#dns_dns_resolvecname_hostname_callback"><code>dns.resolveCname()</code></a></td></tr><tr><td><code>'MX'</code></td><td>mail exchange records</td><td><a href="path_to_url" class="type"><Object></a></td><td><a href="#dns_dns_resolvemx_hostname_callback"><code>dns.resolveMx()</code></a></td></tr><tr><td><code>'NAPTR'</code></td><td>name authority pointer records</td><td><a href="path_to_url" class="type"><Object></a></td><td><a href="#dns_dns_resolvenaptr_hostname_callback"><code>dns.resolveNaptr()</code></a></td></tr><tr><td><code>'NS'</code></td><td>name server records</td><td><a href="path_to_url#String_type" class="type"><string></a></td><td><a href="#dns_dns_resolvens_hostname_callback"><code>dns.resolveNs()</code></a></td></tr><tr><td><code>'PTR'</code></td><td>pointer records</td><td><a href="path_to_url#String_type" class="type"><string></a></td><td><a href="#dns_dns_resolveptr_hostname_callback"><code>dns.resolvePtr()</code></a></td></tr><tr><td><code>'SOA'</code></td><td>start of authority records</td><td><a href="path_to_url" 
class="type"><Object></a></td><td><a href="#dns_dns_resolvesoa_hostname_callback"><code>dns.resolveSoa()</code></a></td></tr><tr><td><code>'SRV'</code></td><td>service records</td><td><a href="path_to_url" class="type"><Object></a></td><td><a href="#dns_dns_resolvesrv_hostname_callback"><code>dns.resolveSrv()</code></a></td></tr><tr><td><code>'TXT'</code></td><td>text records</td><td><a href="path_to_url#String_type" class="type"><string[]></a></td><td><a href="#dns_dns_resolvetxt_hostname_callback"><code>dns.resolveTxt()</code></a></td></tr></tbody></table>
<p>On error, <code>err</code> is an <a href="errors.html#errors_class_error"><code>Error</code></a> object, where <code>err.code</code> is one of the
<a href="#dns_error_codes">DNS error codes</a>.</p>
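<p>For instance, passing a non-default <code>rrtype</code> changes the shape of
<code>records</code> according to the table above (the <code>'MX'</code> type is used
here purely for illustration):</p>
<pre><code class="language-js">const dns = require('dns');

dns.resolve('example.org', 'MX', (err, records) => {
  if (err) throw err;
  // For 'MX', each record is an object, e.g. { exchange: 'mx.example.org', priority: 10 }.
  console.log(records);
});
</code></pre>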
<h2><code>dns.resolve4(hostname[, options], callback)</code><span><a class="mark" href="#dns_dns_resolve4_hostname_options_callback" id="dns_dns_resolve4_hostname_options_callback">#</a></span></h2>
<div class="api_metadata">
<details class="changelog"><summary>History</summary>
<table>
<tbody><tr><th>Version</th><th>Changes</th></tr>
<tr><td>v7.2.0</td>
<td><p>This method now supports passing <code>options</code>, specifically <code>options.ttl</code>.</p></td></tr>
<tr><td>v0.1.16</td>
<td><p><span>Added in: v0.1.16</span></p></td></tr>
</tbody></table>
</details>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a> Host name to resolve.</li>
<li>
<p><code>options</code> <a href="path_to_url" class="type"><Object></a></p>
<ul>
<li><code>ttl</code> <a href="path_to_url#Boolean_type" class="type"><boolean></a> Retrieve the Time-To-Live value (TTL) of each record.
When <code>true</code>, the callback receives an array of
<code>{ address: '1.2.3.4', ttl: 60 }</code> objects rather than an array of strings,
with the TTL expressed in seconds.</li>
</ul>
</li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>addresses</code> <a href="path_to_url#String_type" class="type"><string[]></a> | <a href="path_to_url" class="type"><Object[]></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve IPv4 addresses (<code>A</code> records) for the
<code>hostname</code>. The <code>addresses</code> argument passed to the <code>callback</code> function
will contain an array of IPv4 addresses (e.g.
<code>['74.125.79.104', '74.125.79.105', '74.125.79.106']</code>).</p>
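<p>As a sketch of the two callback shapes (using <code>example.com</code> as a stand-in
hostname), the <code>ttl</code> option switches <code>addresses</code> from strings to objects.
The <code>soonestExpiring()</code> helper is illustrative, not part of the API:</p>
<pre><code class="language-js">const dns = require('dns');

// Illustrative helper: with { ttl: true } each record is { address, ttl },
// so we can pick the record that will expire first.
function soonestExpiring(records) {
  return records.reduce((min, r) => (r.ttl < min.ttl ? r : min));
}

// 'example.com' is a placeholder hostname.
dns.resolve4('example.com', { ttl: true }, (err, addresses) => {
  if (err) return console.error(err);
  console.log(soonestExpiring(addresses));
});
</code></pre>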
<h2><code>dns.resolve6(hostname[, options], callback)</code><span><a class="mark" href="#dns_dns_resolve6_hostname_options_callback" id="dns_dns_resolve6_hostname_options_callback">#</a></span></h2>
<div class="api_metadata">
<details class="changelog"><summary>History</summary>
<table>
<tbody><tr><th>Version</th><th>Changes</th></tr>
<tr><td>v7.2.0</td>
<td><p>This method now supports passing <code>options</code>, specifically <code>options.ttl</code>.</p></td></tr>
<tr><td>v0.1.16</td>
<td><p><span>Added in: v0.1.16</span></p></td></tr>
</tbody></table>
</details>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a> Host name to resolve.</li>
<li>
<p><code>options</code> <a href="path_to_url" class="type"><Object></a></p>
<ul>
<li><code>ttl</code> <a href="path_to_url#Boolean_type" class="type"><boolean></a> Retrieve the Time-To-Live value (TTL) of each record.
When <code>true</code>, the callback receives an array of
<code>{ address: '0:1:2:3:4:5:6:7', ttl: 60 }</code> objects rather than an array of
strings, with the TTL expressed in seconds.</li>
</ul>
</li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>addresses</code> <a href="path_to_url#String_type" class="type"><string[]></a> | <a href="path_to_url" class="type"><Object[]></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve IPv6 addresses (<code>AAAA</code> records) for the
<code>hostname</code>. The <code>addresses</code> argument passed to the <code>callback</code> function
will contain an array of IPv6 addresses.</p>
<h2><code>dns.resolveAny(hostname, callback)</code><span><a class="mark" href="#dns_dns_resolveany_hostname_callback" id="dns_dns_resolveany_hostname_callback">#</a></span></h2>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>ret</code> <a href="path_to_url" class="type"><Object[]></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve all records (also known as <code>ANY</code> or <code>*</code> query).
The <code>ret</code> argument passed to the <code>callback</code> function will be an array containing
various types of records. Each object has a <code>type</code> property that indicates the
type of the current record. Depending on the <code>type</code>, additional properties
will be present on the object:</p>
<table><thead><tr><th>Type</th><th>Properties</th></tr></thead><tbody><tr><td><code>'A'</code></td><td><code>address</code>/<code>ttl</code></td></tr><tr><td><code>'AAAA'</code></td><td><code>address</code>/<code>ttl</code></td></tr><tr><td><code>'CNAME'</code></td><td><code>value</code></td></tr><tr><td><code>'MX'</code></td><td>Refer to <a href="#dns_dns_resolvemx_hostname_callback"><code>dns.resolveMx()</code></a></td></tr><tr><td><code>'NAPTR'</code></td><td>Refer to <a href="#dns_dns_resolvenaptr_hostname_callback"><code>dns.resolveNaptr()</code></a></td></tr><tr><td><code>'NS'</code></td><td><code>value</code></td></tr><tr><td><code>'PTR'</code></td><td><code>value</code></td></tr><tr><td><code>'SOA'</code></td><td>Refer to <a href="#dns_dns_resolvesoa_hostname_callback"><code>dns.resolveSoa()</code></a></td></tr><tr><td><code>'SRV'</code></td><td>Refer to <a href="#dns_dns_resolvesrv_hostname_callback"><code>dns.resolveSrv()</code></a></td></tr><tr><td><code>'TXT'</code></td><td>This type of record contains an array property called <code>entries</code> which refers to <a href="#dns_dns_resolvetxt_hostname_callback"><code>dns.resolveTxt()</code></a>, e.g. <code>{ entries: ['...'], type: 'TXT' }</code></td></tr></tbody></table>
<p>Here is an example of the <code>ret</code> object passed to the callback:</p>
<!-- eslint-disable semi -->
<pre><code class="language-js">[ { type: 'A', address: '127.0.0.1', ttl: 299 },
{ type: 'CNAME', value: 'example.com' },
{ type: 'MX', exchange: 'alt4.aspmx.l.example.com', priority: 50 },
{ type: 'NS', value: 'ns1.example.com' },
{ type: 'TXT', entries: [ 'v=spf1 include:_spf.example.com ~all' ] },
{ type: 'SOA',
nsname: 'ns1.example.com',
hostmaster: 'admin.example.com',
serial: 156696742,
refresh: 900,
retry: 900,
expire: 1800,
minttl: 60 } ]
</code></pre>
<p>DNS server operators may choose not to respond to <code>ANY</code>
queries. It may be better to call individual methods like <a href="#dns_dns_resolve4_hostname_options_callback"><code>dns.resolve4()</code></a>,
<a href="#dns_dns_resolvemx_hostname_callback"><code>dns.resolveMx()</code></a>, and so on. For more details, see <a href="path_to_url">RFC 8482</a>.</p>
<h2><code>dns.resolveCname(hostname, callback)</code><span><a class="mark" href="#dns_dns_resolvecname_hostname_callback" id="dns_dns_resolvecname_hostname_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.3.2</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>addresses</code> <a href="path_to_url#String_type" class="type"><string[]></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve <code>CNAME</code> records for the <code>hostname</code>. The
<code>addresses</code> argument passed to the <code>callback</code> function
will contain an array of canonical name records available for the <code>hostname</code>
(e.g. <code>['bar.example.com']</code>).</p>
<h2><code>dns.resolveMx(hostname, callback)</code><span><a class="mark" href="#dns_dns_resolvemx_hostname_callback" id="dns_dns_resolvemx_hostname_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.1.27</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>addresses</code> <a href="path_to_url" class="type"><Object[]></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve mail exchange records (<code>MX</code> records) for the
<code>hostname</code>. The <code>addresses</code> argument passed to the <code>callback</code> function will
contain an array of objects containing both a <code>priority</code> and <code>exchange</code>
property (e.g. <code>[{priority: 10, exchange: 'mx.example.com'}, ...]</code>).</p>
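<p>Since MX records with a lower <code>priority</code> are preferred, callers typically
sort the results before trying hosts. A minimal sketch (the <code>byPriority()</code>
helper and the <code>example.com</code> hostname are illustrative):</p>
<pre><code class="language-js">const dns = require('dns');

// Illustrative helper: lower priority values are preferred for delivery.
function byPriority(records) {
  return [...records].sort((a, b) => a.priority - b.priority);
}

dns.resolveMx('example.com', (err, addresses) => {
  if (err) return console.error(err);
  console.log(byPriority(addresses));
});
</code></pre>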
<h2><code>dns.resolveNaptr(hostname, callback)</code><span><a class="mark" href="#dns_dns_resolvenaptr_hostname_callback" id="dns_dns_resolvenaptr_hostname_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.9.12</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>addresses</code> <a href="path_to_url" class="type"><Object[]></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve regular expression based records (<code>NAPTR</code>
records) for the <code>hostname</code>. The <code>addresses</code> argument passed to the <code>callback</code>
function will contain an array of objects with the following properties:</p>
<ul>
<li><code>flags</code></li>
<li><code>service</code></li>
<li><code>regexp</code></li>
<li><code>replacement</code></li>
<li><code>order</code></li>
<li><code>preference</code></li>
</ul>
<!-- eslint-skip -->
<pre><code class="language-js">{
flags: 's',
service: 'SIP+D2U',
regexp: '',
replacement: '_sip._udp.example.com',
order: 30,
preference: 100
}
</code></pre>
<h2><code>dns.resolveNs(hostname, callback)</code><span><a class="mark" href="#dns_dns_resolvens_hostname_callback" id="dns_dns_resolvens_hostname_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.1.90</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>addresses</code> <a href="path_to_url#String_type" class="type"><string[]></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve name server records (<code>NS</code> records) for the
<code>hostname</code>. The <code>addresses</code> argument passed to the <code>callback</code> function will
contain an array of name server records available for <code>hostname</code>
(e.g. <code>['ns1.example.com', 'ns2.example.com']</code>).</p>
<h2><code>dns.resolvePtr(hostname, callback)</code><span><a class="mark" href="#dns_dns_resolveptr_hostname_callback" id="dns_dns_resolveptr_hostname_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v6.0.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>addresses</code> <a href="path_to_url#String_type" class="type"><string[]></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve pointer records (<code>PTR</code> records) for the
<code>hostname</code>. The <code>addresses</code> argument passed to the <code>callback</code> function will
be an array of strings containing the reply records.</p>
<h2><code>dns.resolveSoa(hostname, callback)</code><span><a class="mark" href="#dns_dns_resolvesoa_hostname_callback" id="dns_dns_resolvesoa_hostname_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.11.10</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>address</code> <a href="path_to_url" class="type"><Object></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve a start of authority record (<code>SOA</code> record) for
the <code>hostname</code>. The <code>address</code> argument passed to the <code>callback</code> function will
be an object with the following properties:</p>
<ul>
<li><code>nsname</code></li>
<li><code>hostmaster</code></li>
<li><code>serial</code></li>
<li><code>refresh</code></li>
<li><code>retry</code></li>
<li><code>expire</code></li>
<li><code>minttl</code></li>
</ul>
<!-- eslint-skip -->
<pre><code class="language-js">{
nsname: 'ns.example.com',
hostmaster: 'root.example.com',
serial: 2013101809,
refresh: 10000,
retry: 2400,
expire: 604800,
minttl: 3600
}
</code></pre>
<h2><code>dns.resolveSrv(hostname, callback)</code><span><a class="mark" href="#dns_dns_resolvesrv_hostname_callback" id="dns_dns_resolvesrv_hostname_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.1.27</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>addresses</code> <a href="path_to_url" class="type"><Object[]></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve service records (<code>SRV</code> records) for the
<code>hostname</code>. The <code>addresses</code> argument passed to the <code>callback</code> function will
be an array of objects with the following properties:</p>
<ul>
<li><code>priority</code></li>
<li><code>weight</code></li>
<li><code>port</code></li>
<li><code>name</code></li>
</ul>
<!-- eslint-skip -->
<pre><code class="language-js">{
priority: 10,
weight: 5,
port: 21223,
name: 'service.example.com'
}
</code></pre>
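<p>When choosing among several SRV records, clients order them by ascending
<code>priority</code> and, within a priority, usually prefer a higher <code>weight</code>. The
sketch below uses a simple deterministic sort; RFC 2782 actually calls for
weighted <em>random</em> selection within each priority, which is omitted here for
brevity:</p>
<pre><code class="language-js">const dns = require('dns');

// Simplified ordering: ascending priority, then descending weight.
// (RFC 2782 specifies weighted-random selection within each priority.)
function orderSrv(records) {
  return [...records].sort((a, b) => a.priority - b.priority || b.weight - a.weight);
}

// '_sip._udp.example.com' is a placeholder service name.
dns.resolveSrv('_sip._udp.example.com', (err, addresses) => {
  if (err) return console.error(err);
  console.log(orderSrv(addresses));
});
</code></pre>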
<h2><code>dns.resolveTxt(hostname, callback)</code><span><a class="mark" href="#dns_dns_resolvetxt_hostname_callback" id="dns_dns_resolvetxt_hostname_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.1.27</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>records</code> <a href="path_to_url#String_type" class="type"><string[][]></a></li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve text queries (<code>TXT</code> records) for the
<code>hostname</code>. The <code>records</code> argument passed to the <code>callback</code> function is a
two-dimensional array of the text records available for <code>hostname</code> (e.g.
<code>[ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]</code>). Each sub-array contains TXT chunks of
one record. Depending on the use case, these could be either joined together or
treated separately.</p>
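<p>For example, joining the chunks of each record back into a single string can
be done with a one-line helper (<code>joinTxtRecord()</code> and the <code>example.com</code>
hostname are illustrative, not part of the API):</p>
<pre><code class="language-js">const dns = require('dns');

// Illustrative helper: each record arrives as an array of chunks, because
// individual TXT strings are limited to 255 bytes on the wire.
function joinTxtRecord(chunks) {
  return chunks.join('');
}

dns.resolveTxt('example.com', (err, records) => {
  if (err) return console.error(err);
  console.log(records.map(joinTxtRecord));
});
</code></pre>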
<h2><code>dns.reverse(ip, callback)</code><span><a class="mark" href="#dns_dns_reverse_ip_callback" id="dns_dns_reverse_ip_callback">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.1.16</span>
</div>
<ul>
<li><code>ip</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>callback</code> <a href="path_to_url" class="type"><Function></a></p>
<ul>
<li><code>err</code> <a href="path_to_url" class="type"><Error></a></li>
<li><code>hostnames</code> <a href="path_to_url#String_type" class="type"><string[]></a></li>
</ul>
</li>
</ul>
<p>Performs a reverse DNS query that resolves an IPv4 or IPv6 address to an
array of host names.</p>
<p>On error, <code>err</code> is an <a href="errors.html#errors_class_error"><code>Error</code></a> object, where <code>err.code</code> is
one of the <a href="#dns_error_codes">DNS error codes</a>.</p>
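<p>A minimal sketch (<code>8.8.8.8</code> and the <code>formatReverse()</code> helper are
illustrative; the returned host names depend on the PTR records the address
owner publishes):</p>
<pre><code class="language-js">const dns = require('dns');

// Illustrative helper for printing the result.
function formatReverse(ip, hostnames) {
  return `${ip} -> ${hostnames.join(', ')}`;
}

dns.reverse('8.8.8.8', (err, hostnames) => {
  if (err) return console.error(err);
  console.log(formatReverse('8.8.8.8', hostnames));
});
</code></pre>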
<h2><code>dns.setServers(servers)</code><span><a class="mark" href="#dns_dns_setservers_servers" id="dns_dns_setservers_servers">#</a></span></h2>
<div class="api_metadata">
<span>Added in: v0.11.3</span>
</div>
<ul>
<li><code>servers</code> <a href="path_to_url#String_type" class="type"><string[]></a> array of <a href="path_to_url#section-6">RFC 5952</a> formatted addresses</li>
</ul>
<p>Sets the IP address and port of servers to be used when performing DNS
resolution. The <code>servers</code> argument is an array of <a href="path_to_url#section-6">RFC 5952</a> formatted
addresses. If the port is the IANA default DNS port (53) it can be omitted.</p>
<pre><code class="language-js">dns.setServers([
'4.4.4.4',
'[2001:4860:4860::8888]',
'4.4.4.4:1053',
'[2001:4860:4860::8888]:1053'
]);
</code></pre>
<p>An error will be thrown if an invalid address is provided.</p>
<p>The <code>dns.setServers()</code> method must not be called while a DNS query is in
progress.</p>
<p>The <a href="#dns_dns_setservers_servers"><code>dns.setServers()</code></a> method affects only <a href="#dns_dns_resolve_hostname_rrtype_callback"><code>dns.resolve()</code></a>,
<code>dns.resolve*()</code> and <a href="#dns_dns_reverse_ip_callback"><code>dns.reverse()</code></a> (and specifically <em>not</em>
<a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a>).</p>
<p>This method works much like
<a href="path_to_url">resolv.conf</a>.
That is, if attempting to resolve with the first server provided results in a
<code>NOTFOUND</code> error, the <code>resolve()</code> method will <em>not</em> attempt to resolve with
subsequent servers provided. Fallback DNS servers will only be used if the
earlier ones time out or result in some other error.</p>
<h2>DNS promises API<span><a class="mark" href="#dns_dns_promises_api" id="dns_dns_promises_api">#</a></span></h2>
<p>The <code>dns.promises</code> API provides an alternative set of asynchronous DNS methods
that return <code>Promise</code> objects rather than using callbacks. The API is accessible
via <code>require('dns').promises</code>.</p>
<h3>Class: <code>dnsPromises.Resolver</code><span><a class="mark" href="#dns_class_dnspromises_resolver" id="dns_class_dnspromises_resolver">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<p>An independent resolver for DNS requests.</p>
<p>Creating a new resolver uses the default server settings. Setting
the servers used for a resolver using
<a href="#dns_dnspromises_setservers_servers"><code>resolver.setServers()</code></a> does not affect
other resolvers:</p>
<pre><code class="language-js">const { Resolver } = require('dns').promises;
const resolver = new Resolver();
resolver.setServers(['4.4.4.4']);
// This request will use the server at 4.4.4.4, independent of global settings.
resolver.resolve4('example.org').then((addresses) => {
// ...
});
// Alternatively, the same code can be written using async-await style.
(async function() {
const addresses = await resolver.resolve4('example.org');
})();
</code></pre>
<p>The following methods from the <code>dnsPromises</code> API are available:</p>
<ul>
<li><a href="#dns_dnspromises_getservers"><code>resolver.getServers()</code></a></li>
<li><a href="#dns_dnspromises_resolve_hostname_rrtype"><code>resolver.resolve()</code></a></li>
<li><a href="#dns_dnspromises_resolve4_hostname_options"><code>resolver.resolve4()</code></a></li>
<li><a href="#dns_dnspromises_resolve6_hostname_options"><code>resolver.resolve6()</code></a></li>
<li><a href="#dns_dnspromises_resolveany_hostname"><code>resolver.resolveAny()</code></a></li>
<li><a href="#dns_dnspromises_resolvecname_hostname"><code>resolver.resolveCname()</code></a></li>
<li><a href="#dns_dnspromises_resolvemx_hostname"><code>resolver.resolveMx()</code></a></li>
<li><a href="#dns_dnspromises_resolvenaptr_hostname"><code>resolver.resolveNaptr()</code></a></li>
<li><a href="#dns_dnspromises_resolvens_hostname"><code>resolver.resolveNs()</code></a></li>
<li><a href="#dns_dnspromises_resolveptr_hostname"><code>resolver.resolvePtr()</code></a></li>
<li><a href="#dns_dnspromises_resolvesoa_hostname"><code>resolver.resolveSoa()</code></a></li>
<li><a href="#dns_dnspromises_resolvesrv_hostname"><code>resolver.resolveSrv()</code></a></li>
<li><a href="#dns_dnspromises_resolvetxt_hostname"><code>resolver.resolveTxt()</code></a></li>
<li><a href="#dns_dnspromises_reverse_ip"><code>resolver.reverse()</code></a></li>
<li><a href="#dns_dnspromises_setservers_servers"><code>resolver.setServers()</code></a></li>
</ul>
<h3><code>dnsPromises.getServers()</code><span><a class="mark" href="#dns_dnspromises_getservers" id="dns_dnspromises_getservers">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li>Returns: <a href="path_to_url#String_type" class="type"><string[]></a></li>
</ul>
<p>Returns an array of IP address strings, formatted according to <a href="path_to_url#section-6">RFC 5952</a>,
that are currently configured for DNS resolution. A string will include a port
section if a custom port is used.</p>
<!-- eslint-disable semi-->
<pre><code class="language-js">[
'4.4.4.4',
'2001:4860:4860::8888',
'4.4.4.4:1053',
'[2001:4860:4860::8888]:1053'
]
</code></pre>
<h3><code>dnsPromises.lookup(hostname[, options])</code><span><a class="mark" href="#dns_dnspromises_lookup_hostname_options" id="dns_dnspromises_lookup_hostname_options">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li>
<p><code>options</code> <a href="path_to_url#Number_type" class="type"><integer></a> | <a href="path_to_url" class="type"><Object></a></p>
<ul>
<li><code>family</code> <a href="path_to_url#Number_type" class="type"><integer></a> The record family. Must be <code>4</code>, <code>6</code>, or <code>0</code>. The value
<code>0</code> indicates that IPv4 and IPv6 addresses are both returned. <strong>Default:</strong>
<code>0</code>.</li>
<li><code>hints</code> <a href="path_to_url#Number_type" class="type"><number></a> One or more <a href="#dns_supported_getaddrinfo_flags">supported <code>getaddrinfo</code> flags</a>. Multiple
flags may be passed by bitwise <code>OR</code>ing their values.</li>
<li><code>all</code> <a href="path_to_url#Boolean_type" class="type"><boolean></a> When <code>true</code>, the <code>Promise</code> is resolved with all addresses in
an array. Otherwise, returns a single address. <strong>Default:</strong> <code>false</code>.</li>
<li><code>verbatim</code> <a href="path_to_url#Boolean_type" class="type"><boolean></a> When <code>true</code>, the <code>Promise</code> is resolved with IPv4 and
IPv6 addresses in the order the DNS resolver returned them. When <code>false</code>,
IPv4 addresses are placed before IPv6 addresses.
<strong>Default:</strong> currently <code>false</code> (addresses are reordered) but this is
expected to change in the not too distant future.
New code should use <code>{ verbatim: true }</code>.</li>
</ul>
</li>
</ul>
<p>Resolves a host name (e.g. <code>'nodejs.org'</code>) into the first found A (IPv4) or
AAAA (IPv6) record. All <code>option</code> properties are optional. If <code>options</code> is an
integer, then it must be <code>4</code> or <code>6</code>. If <code>options</code> is not provided, then
IPv4 and IPv6 addresses are both returned if found.</p>
<p>With the <code>all</code> option set to <code>true</code>, the <code>Promise</code> is resolved with <code>addresses</code>
being an array of objects with the properties <code>address</code> and <code>family</code>.</p>
<p>On error, the <code>Promise</code> is rejected with an <a href="errors.html#errors_class_error"><code>Error</code></a> object, where <code>err.code</code>
is the error code.
Keep in mind that <code>err.code</code> will be set to <code>'ENOTFOUND'</code> not only when
the host name does not exist but also when the lookup fails in other ways
such as no available file descriptors.</p>
<p><a href="#dns_dnspromises_lookup_hostname_options"><code>dnsPromises.lookup()</code></a> does not necessarily have anything to do with the DNS
protocol. The implementation uses an operating system facility that can
associate names with addresses, and vice versa. This implementation can have
subtle but important consequences on the behavior of any Node.js program. Please
take some time to consult the <a href="#dns_implementation_considerations">Implementation considerations section</a> before
using <code>dnsPromises.lookup()</code>.</p>
<p>Example usage:</p>
<pre><code class="language-js">const dns = require('dns');
const dnsPromises = dns.promises;
const options = {
family: 6,
hints: dns.ADDRCONFIG | dns.V4MAPPED,
};
dnsPromises.lookup('example.com', options).then((result) => {
console.log('address: %j family: IPv%s', result.address, result.family);
// address: "2606:2800:220:1:248:1893:25c8:1946" family: IPv6
});
// When options.all is true, the result will be an Array.
options.all = true;
dnsPromises.lookup('example.com', options).then((result) => {
console.log('addresses: %j', result);
// addresses: [{"address":"2606:2800:220:1:248:1893:25c8:1946","family":6}]
});
</code></pre>
<h3><code>dnsPromises.lookupService(address, port)</code><span><a class="mark" href="#dns_dnspromises_lookupservice_address_port" id="dns_dnspromises_lookupservice_address_port">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>address</code> <a href="path_to_url#String_type" class="type"><string></a></li>
<li><code>port</code> <a href="path_to_url#Number_type" class="type"><number></a></li>
</ul>
<p>Resolves the given <code>address</code> and <code>port</code> into a host name and service using
the operating system's underlying <code>getnameinfo</code> implementation.</p>
<p>If <code>address</code> is not a valid IP address, a <code>TypeError</code> will be thrown.
The <code>port</code> will be coerced to a number. If it is not a legal port, a <code>TypeError</code>
will be thrown.</p>
<p>On error, the <code>Promise</code> is rejected with an <a href="errors.html#errors_class_error"><code>Error</code></a> object, where <code>err.code</code>
is the error code.</p>
<pre><code class="language-js">const dnsPromises = require('dns').promises;
dnsPromises.lookupService('127.0.0.1', 22).then((result) => {
console.log(result.hostname, result.service);
// Prints: localhost ssh
});
</code></pre>
<h3><code>dnsPromises.resolve(hostname[, rrtype])</code><span><a class="mark" href="#dns_dnspromises_resolve_hostname_rrtype" id="dns_dnspromises_resolve_hostname_rrtype">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a> Host name to resolve.</li>
<li><code>rrtype</code> <a href="path_to_url#String_type" class="type"><string></a> Resource record type. <strong>Default:</strong> <code>'A'</code>.</li>
</ul>
<p>Uses the DNS protocol to resolve a host name (e.g. <code>'nodejs.org'</code>) into an array
of the resource records. When successful, the <code>Promise</code> is resolved with an
array of resource records. The type and structure of individual results vary
based on <code>rrtype</code>:</p>
<table><thead><tr><th><code>rrtype</code></th><th><code>records</code> contains</th><th>Result type</th><th>Shorthand method</th></tr></thead><tbody><tr><td><code>'A'</code></td><td>IPv4 addresses (default)</td><td><a href="path_to_url#String_type" class="type"><string></a></td><td><a href="#dns_dnspromises_resolve4_hostname_options"><code>dnsPromises.resolve4()</code></a></td></tr><tr><td><code>'AAAA'</code></td><td>IPv6 addresses</td><td><a href="path_to_url#String_type" class="type"><string></a></td><td><a href="#dns_dnspromises_resolve6_hostname_options"><code>dnsPromises.resolve6()</code></a></td></tr><tr><td><code>'ANY'</code></td><td>any records</td><td><a href="path_to_url" class="type"><Object></a></td><td><a href="#dns_dnspromises_resolveany_hostname"><code>dnsPromises.resolveAny()</code></a></td></tr><tr><td><code>'CNAME'</code></td><td>canonical name records</td><td><a href="path_to_url#String_type" class="type"><string></a></td><td><a href="#dns_dnspromises_resolvecname_hostname"><code>dnsPromises.resolveCname()</code></a></td></tr><tr><td><code>'MX'</code></td><td>mail exchange records</td><td><a href="path_to_url" class="type"><Object></a></td><td><a href="#dns_dnspromises_resolvemx_hostname"><code>dnsPromises.resolveMx()</code></a></td></tr><tr><td><code>'NAPTR'</code></td><td>name authority pointer records</td><td><a href="path_to_url" class="type"><Object></a></td><td><a href="#dns_dnspromises_resolvenaptr_hostname"><code>dnsPromises.resolveNaptr()</code></a></td></tr><tr><td><code>'NS'</code></td><td>name server records</td><td><a href="path_to_url#String_type" class="type"><string></a></td><td><a href="#dns_dnspromises_resolvens_hostname"><code>dnsPromises.resolveNs()</code></a></td></tr><tr><td><code>'PTR'</code></td><td>pointer records</td><td><a href="path_to_url#String_type" class="type"><string></a></td><td><a 
href="#dns_dnspromises_resolveptr_hostname"><code>dnsPromises.resolvePtr()</code></a></td></tr><tr><td><code>'SOA'</code></td><td>start of authority records</td><td><a href="path_to_url" class="type"><Object></a></td><td><a href="#dns_dnspromises_resolvesoa_hostname"><code>dnsPromises.resolveSoa()</code></a></td></tr><tr><td><code>'SRV'</code></td><td>service records</td><td><a href="path_to_url" class="type"><Object></a></td><td><a href="#dns_dnspromises_resolvesrv_hostname"><code>dnsPromises.resolveSrv()</code></a></td></tr><tr><td><code>'TXT'</code></td><td>text records</td><td><a href="path_to_url#String_type" class="type"><string[]></a></td><td><a href="#dns_dnspromises_resolvetxt_hostname"><code>dnsPromises.resolveTxt()</code></a></td></tr></tbody></table>
<p>On error, the <code>Promise</code> is rejected with an <a href="errors.html#errors_class_error"><code>Error</code></a> object, where <code>err.code</code>
is one of the <a href="#dns_error_codes">DNS error codes</a>.</p>
<h3><code>dnsPromises.resolve4(hostname[, options])</code><span><a class="mark" href="#dns_dnspromises_resolve4_hostname_options" id="dns_dnspromises_resolve4_hostname_options">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a> Host name to resolve.</li>
<li>
<p><code>options</code> <a href="path_to_url" class="type"><Object></a></p>
<ul>
<li><code>ttl</code> <a href="path_to_url#Boolean_type" class="type"><boolean></a> Retrieve the Time-To-Live value (TTL) of each record.
When <code>true</code>, the <code>Promise</code> is resolved with an array of
<code>{ address: '1.2.3.4', ttl: 60 }</code> objects rather than an array of strings,
with the TTL expressed in seconds.</li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve IPv4 addresses (<code>A</code> records) for the
<code>hostname</code>. On success, the <code>Promise</code> is resolved with an array of IPv4
addresses (e.g. <code>['74.125.79.104', '74.125.79.105', '74.125.79.106']</code>).</p>
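<p>With promises this composes naturally with <code>async</code>/<code>await</code>. A sketch
(<code>example.com</code> and the <code>addressesOnly()</code> helper are illustrative):</p>
<pre><code class="language-js">const dnsPromises = require('dns').promises;

// Illustrative helper: strip the TTL metadata back down to address strings.
function addressesOnly(records) {
  return records.map((r) => r.address);
}

(async () => {
  try {
    const records = await dnsPromises.resolve4('example.com', { ttl: true });
    console.log(addressesOnly(records));
  } catch (err) {
    console.error(err);
  }
})();
</code></pre>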
<h3><code>dnsPromises.resolve6(hostname[, options])</code><span><a class="mark" href="#dns_dnspromises_resolve6_hostname_options" id="dns_dnspromises_resolve6_hostname_options">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a> Host name to resolve.</li>
<li>
<p><code>options</code> <a href="path_to_url" class="type"><Object></a></p>
<ul>
<li><code>ttl</code> <a href="path_to_url#Boolean_type" class="type"><boolean></a> Retrieve the Time-To-Live value (TTL) of each record.
When <code>true</code>, the <code>Promise</code> is resolved with an array of
<code>{ address: '0:1:2:3:4:5:6:7', ttl: 60 }</code> objects rather than an array of
strings, with the TTL expressed in seconds.</li>
</ul>
</li>
</ul>
<p>Uses the DNS protocol to resolve IPv6 addresses (<code>AAAA</code> records) for the
<code>hostname</code>. On success, the <code>Promise</code> is resolved with an array of IPv6
addresses.</p>
<h3><code>dnsPromises.resolveAny(hostname)</code><span><a class="mark" href="#dns_dnspromises_resolveany_hostname" id="dns_dnspromises_resolveany_hostname">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
</ul>
<p>Uses the DNS protocol to resolve all records (also known as <code>ANY</code> or <code>*</code> query).
On success, the <code>Promise</code> is resolved with an array containing various types of
records. Each object has a <code>type</code> property that indicates the type of the
current record. Depending on the <code>type</code>, additional properties will be
present on the object:</p>
<table><thead><tr><th>Type</th><th>Properties</th></tr></thead><tbody><tr><td><code>'A'</code></td><td><code>address</code>/<code>ttl</code></td></tr><tr><td><code>'AAAA'</code></td><td><code>address</code>/<code>ttl</code></td></tr><tr><td><code>'CNAME'</code></td><td><code>value</code></td></tr><tr><td><code>'MX'</code></td><td>Refer to <a href="#dns_dnspromises_resolvemx_hostname"><code>dnsPromises.resolveMx()</code></a></td></tr><tr><td><code>'NAPTR'</code></td><td>Refer to <a href="#dns_dnspromises_resolvenaptr_hostname"><code>dnsPromises.resolveNaptr()</code></a></td></tr><tr><td><code>'NS'</code></td><td><code>value</code></td></tr><tr><td><code>'PTR'</code></td><td><code>value</code></td></tr><tr><td><code>'SOA'</code></td><td>Refer to <a href="#dns_dnspromises_resolvesoa_hostname"><code>dnsPromises.resolveSoa()</code></a></td></tr><tr><td><code>'SRV'</code></td><td>Refer to <a href="#dns_dnspromises_resolvesrv_hostname"><code>dnsPromises.resolveSrv()</code></a></td></tr><tr><td><code>'TXT'</code></td><td>This type of record contains an array property called <code>entries</code> which refers to <a href="#dns_dnspromises_resolvetxt_hostname"><code>dnsPromises.resolveTxt()</code></a>, e.g. <code>{ entries: ['...'], type: 'TXT' }</code></td></tr></tbody></table>
<p>Here is an example of the result object:</p>
<!-- eslint-disable semi -->
<pre><code class="language-js">[ { type: 'A', address: '127.0.0.1', ttl: 299 },
{ type: 'CNAME', value: 'example.com' },
{ type: 'MX', exchange: 'alt4.aspmx.l.example.com', priority: 50 },
{ type: 'NS', value: 'ns1.example.com' },
{ type: 'TXT', entries: [ 'v=spf1 include:_spf.example.com ~all' ] },
{ type: 'SOA',
nsname: 'ns1.example.com',
hostmaster: 'admin.example.com',
serial: 156696742,
refresh: 900,
retry: 900,
expire: 1800,
minttl: 60 } ]
</code></pre>
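<p>As a usage sketch (the host name is illustrative; the records returned depend entirely on the zone being queried):</p>
<pre><code class="language-js">const { promises: dnsPromises } = require('dns');

// Query all available record types for a host name. Each entry in the
// result follows the per-type shape described in the table above.
dnsPromises.resolveAny('example.org').then((records) => {
  for (const record of records) {
    console.log(record.type, record);
  }
});
</code></pre>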
<h3><code>dnsPromises.resolveCname(hostname)</code><span><a class="mark" href="#dns_dnspromises_resolvecname_hostname" id="dns_dnspromises_resolvecname_hostname">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
</ul>
<p>Uses the DNS protocol to resolve <code>CNAME</code> records for the <code>hostname</code>. On success,
the <code>Promise</code> is resolved with an array of canonical name records available for
the <code>hostname</code> (e.g. <code>['bar.example.com']</code>).</p>
<h3><code>dnsPromises.resolveMx(hostname)</code><span><a class="mark" href="#dns_dnspromises_resolvemx_hostname" id="dns_dnspromises_resolvemx_hostname">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
</ul>
<p>Uses the DNS protocol to resolve mail exchange records (<code>MX</code> records) for the
<code>hostname</code>. On success, the <code>Promise</code> is resolved with an array of objects
containing both a <code>priority</code> and <code>exchange</code> property (e.g.
<code>[{priority: 10, exchange: 'mx.example.com'}, ...]</code>).</p>
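<p>For instance, a caller that wants the preferred exchange might sort the result by <code>priority</code> (lower values are preferred); the host name here is a placeholder:</p>
<pre><code class="language-js">const { promises: dnsPromises } = require('dns');

dnsPromises.resolveMx('example.org').then((addresses) => {
  // Lower priority values indicate more preferred mail exchanges.
  addresses.sort((a, b) => a.priority - b.priority);
  console.log('Preferred exchange:', addresses[0] && addresses[0].exchange);
});
</code></pre>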
<h3><code>dnsPromises.resolveNaptr(hostname)</code><span><a class="mark" href="#dns_dnspromises_resolvenaptr_hostname" id="dns_dnspromises_resolvenaptr_hostname">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
</ul>
<p>Uses the DNS protocol to resolve regular expression based records (<code>NAPTR</code>
records) for the <code>hostname</code>. On success, the <code>Promise</code> is resolved with an array
of objects with the following properties:</p>
<ul>
<li><code>flags</code></li>
<li><code>service</code></li>
<li><code>regexp</code></li>
<li><code>replacement</code></li>
<li><code>order</code></li>
<li><code>preference</code></li>
</ul>
<!-- eslint-skip -->
<pre><code class="language-js">{
flags: 's',
service: 'SIP+D2U',
regexp: '',
replacement: '_sip._udp.example.com',
order: 30,
preference: 100
}
</code></pre>
<h3><code>dnsPromises.resolveNs(hostname)</code><span><a class="mark" href="#dns_dnspromises_resolvens_hostname" id="dns_dnspromises_resolvens_hostname">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
</ul>
<p>Uses the DNS protocol to resolve name server records (<code>NS</code> records) for the
<code>hostname</code>. On success, the <code>Promise</code> is resolved with an array of name server
records available for <code>hostname</code> (e.g.
<code>['ns1.example.com', 'ns2.example.com']</code>).</p>
<h3><code>dnsPromises.resolvePtr(hostname)</code><span><a class="mark" href="#dns_dnspromises_resolveptr_hostname" id="dns_dnspromises_resolveptr_hostname">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
</ul>
<p>Uses the DNS protocol to resolve pointer records (<code>PTR</code> records) for the
<code>hostname</code>. On success, the <code>Promise</code> is resolved with an array of strings
containing the reply records.</p>
<h3><code>dnsPromises.resolveSoa(hostname)</code><span><a class="mark" href="#dns_dnspromises_resolvesoa_hostname" id="dns_dnspromises_resolvesoa_hostname">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
</ul>
<p>Uses the DNS protocol to resolve a start of authority record (<code>SOA</code> record) for
the <code>hostname</code>. On success, the <code>Promise</code> is resolved with an object with the
following properties:</p>
<ul>
<li><code>nsname</code></li>
<li><code>hostmaster</code></li>
<li><code>serial</code></li>
<li><code>refresh</code></li>
<li><code>retry</code></li>
<li><code>expire</code></li>
<li><code>minttl</code></li>
</ul>
<!-- eslint-skip -->
<pre><code class="language-js">{
nsname: 'ns.example.com',
hostmaster: 'root.example.com',
serial: 2013101809,
refresh: 10000,
retry: 2400,
expire: 604800,
minttl: 3600
}
</code></pre>
<h3><code>dnsPromises.resolveSrv(hostname)</code><span><a class="mark" href="#dns_dnspromises_resolvesrv_hostname" id="dns_dnspromises_resolvesrv_hostname">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
</ul>
<p>Uses the DNS protocol to resolve service records (<code>SRV</code> records) for the
<code>hostname</code>. On success, the <code>Promise</code> is resolved with an array of objects with
the following properties:</p>
<ul>
<li><code>priority</code></li>
<li><code>weight</code></li>
<li><code>port</code></li>
<li><code>name</code></li>
</ul>
<!-- eslint-skip -->
<pre><code class="language-js">{
priority: 10,
weight: 5,
port: 21223,
name: 'service.example.com'
}
</code></pre>
<h3><code>dnsPromises.resolveTxt(hostname)</code><span><a class="mark" href="#dns_dnspromises_resolvetxt_hostname" id="dns_dnspromises_resolvetxt_hostname">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>hostname</code> <a href="path_to_url#String_type" class="type"><string></a></li>
</ul>
<p>Uses the DNS protocol to resolve text queries (<code>TXT</code> records) for the
<code>hostname</code>. On success, the <code>Promise</code> is resolved with a two-dimensional array
of the text records available for <code>hostname</code> (e.g.
<code>[ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]</code>). Each sub-array contains TXT chunks of
one record. Depending on the use case, these could be either joined together or
treated separately.</p>
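<p>As a sketch of the joining case (host name illustrative), each record can be reassembled from its chunks:</p>
<pre><code class="language-js">const { promises: dnsPromises } = require('dns');

dnsPromises.resolveTxt('example.org').then((records) => {
  // Each sub-array holds the chunks of one TXT record; joining the
  // chunks recovers the full record string.
  const joined = records.map((chunks) => chunks.join(''));
  console.log(joined);
});
</code></pre>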
<h3><code>dnsPromises.reverse(ip)</code><span><a class="mark" href="#dns_dnspromises_reverse_ip" id="dns_dnspromises_reverse_ip">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>ip</code> <a href="path_to_url#String_type" class="type"><string></a></li>
</ul>
<p>Performs a reverse DNS query that resolves an IPv4 or IPv6 address to an
array of host names.</p>
<p>On error, the <code>Promise</code> is rejected with an <a href="errors.html#errors_class_error"><code>Error</code></a> object, where <code>err.code</code>
is one of the <a href="#dns_error_codes">DNS error codes</a>.</p>
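<p>A minimal sketch (the address is illustrative; the host names returned depend on the <code>PTR</code> records configured for it):</p>
<pre><code class="language-js">const { promises: dnsPromises } = require('dns');

dnsPromises.reverse('8.8.8.8')
  .then((hostnames) => console.log(hostnames))
  .catch((err) => console.error(err.code));
</code></pre>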
<h3><code>dnsPromises.setServers(servers)</code><span><a class="mark" href="#dns_dnspromises_setservers_servers" id="dns_dnspromises_setservers_servers">#</a></span></h3>
<div class="api_metadata">
<span>Added in: v10.6.0</span>
</div>
<ul>
<li><code>servers</code> <a href="path_to_url#String_type" class="type"><string[]></a> array of <a href="path_to_url#section-6">RFC 5952</a> formatted addresses</li>
</ul>
<p>Sets the IP address and port of servers to be used when performing DNS
resolution. The <code>servers</code> argument is an array of <a href="path_to_url#section-6">RFC 5952</a> formatted
addresses. If the port is the IANA default DNS port (53) it can be omitted.</p>
<pre><code class="language-js">dnsPromises.setServers([
'4.4.4.4',
'[2001:4860:4860::8888]',
'4.4.4.4:1053',
'[2001:4860:4860::8888]:1053'
]);
</code></pre>
<p>An error will be thrown if an invalid address is provided.</p>
<p>The <code>dnsPromises.setServers()</code> method must not be called while a DNS query is in
progress.</p>
<p>This method works much like
<a href="path_to_url">resolv.conf</a>.
That is, if attempting to resolve with the first server provided results in a
<code>NOTFOUND</code> error, the <code>resolve()</code> method will <em>not</em> attempt to resolve with
subsequent servers provided. Fallback DNS servers will only be used if the
earlier ones time out or result in some other error.</p>
<h2>Error codes<span><a class="mark" href="#dns_error_codes" id="dns_error_codes">#</a></span></h2>
<p>Each DNS query can return one of the following error codes:</p>
<ul>
<li><code>dns.NODATA</code>: DNS server returned answer with no data.</li>
<li><code>dns.FORMERR</code>: DNS server claims query was misformatted.</li>
<li><code>dns.SERVFAIL</code>: DNS server returned general failure.</li>
<li><code>dns.NOTFOUND</code>: Domain name not found.</li>
<li><code>dns.NOTIMP</code>: DNS server does not implement requested operation.</li>
<li><code>dns.REFUSED</code>: DNS server refused query.</li>
<li><code>dns.BADQUERY</code>: Misformatted DNS query.</li>
<li><code>dns.BADNAME</code>: Misformatted host name.</li>
<li><code>dns.BADFAMILY</code>: Unsupported address family.</li>
<li><code>dns.BADRESP</code>: Misformatted DNS reply.</li>
<li><code>dns.CONNREFUSED</code>: Could not contact DNS servers.</li>
<li><code>dns.TIMEOUT</code>: Timeout while contacting DNS servers.</li>
<li><code>dns.EOF</code>: End of file.</li>
<li><code>dns.FILE</code>: Error reading file.</li>
<li><code>dns.NOMEM</code>: Out of memory.</li>
<li><code>dns.DESTRUCTION</code>: Channel is being destroyed.</li>
<li><code>dns.BADSTR</code>: Misformatted string.</li>
<li><code>dns.BADFLAGS</code>: Illegal flags specified.</li>
<li><code>dns.NONAME</code>: Given host name is not numeric.</li>
<li><code>dns.BADHINTS</code>: Illegal hints flags specified.</li>
<li><code>dns.NOTINITIALIZED</code>: c-ares library initialization not yet performed.</li>
<li><code>dns.LOADIPHLPAPI</code>: Error loading <code>iphlpapi.dll</code>.</li>
<li><code>dns.ADDRGETNETWORKPARAMS</code>: Could not find <code>GetNetworkParams</code> function.</li>
<li><code>dns.CANCELLED</code>: DNS query cancelled.</li>
</ul>
<h2>Implementation considerations<span><a class="mark" href="#dns_implementation_considerations" id="dns_implementation_considerations">#</a></span></h2>
<p>Although <a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a> and the various <code>dns.resolve*()/dns.reverse()</code>
functions have the same goal of associating a network name with a network
address (or vice versa), their behavior is quite different. These differences
can have subtle but significant consequences on the behavior of Node.js
programs.</p>
<h3><code>dns.lookup()</code><span><a class="mark" href="#dns_dns_lookup" id="dns_dns_lookup">#</a></span></h3>
<p>Under the hood, <a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a> uses the same operating system facilities
as most other programs. For instance, <a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a> will almost always
resolve a given name the same way as the <code>ping</code> command. On most POSIX-like
operating systems, the behavior of the <a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a> function can be
modified by changing settings in <a href="path_to_url"><code>nsswitch.conf(5)</code></a> and/or <a href="path_to_url"><code>resolv.conf(5)</code></a>,
but changing these files will change the behavior of all other
programs running on the same operating system.</p>
<p>Though the call to <code>dns.lookup()</code> will be asynchronous from JavaScript's
perspective, it is implemented as a synchronous call to <a href="path_to_url"><code>getaddrinfo(3)</code></a> that runs
on libuv's threadpool. This can have surprising negative performance
implications for some applications; see the <a href="cli.html#cli_uv_threadpool_size_size"><code>UV_THREADPOOL_SIZE</code></a>
documentation for more information.</p>
<p>Various networking APIs will call <code>dns.lookup()</code> internally to resolve
host names. If that is an issue, consider resolving the host name to an address
using <code>dns.resolve()</code> and using the address instead of a host name. Also, some
networking APIs (such as <a href="net.html#net_socket_connect_options_connectlistener"><code>socket.connect()</code></a> and <a href="dgram.html#dgram_dgram_createsocket_options_callback"><code>dgram.createSocket()</code></a>)
allow the default resolver, <code>dns.lookup()</code>, to be replaced.</p>
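<p>As a sketch of that suggestion (host name and port are placeholders), a name can be resolved once with the promise-based resolver and the literal address used for the connection, so that no implicit <code>dns.lookup()</code> call runs on libuv's threadpool:</p>
<pre><code class="language-js">const { promises: dnsPromises } = require('dns');
const net = require('net');

async function connectByAddress(hostname, port) {
  // Resolve via the network-based resolver instead of getaddrinfo(3).
  const [address] = await dnsPromises.resolve4(hostname);
  // Connecting with a literal IP address skips dns.lookup() entirely.
  return net.connect({ host: address, port });
}
</code></pre>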
<h3><code>dns.resolve()</code>, <code>dns.resolve*()</code> and <code>dns.reverse()</code><span><a class="mark" href="#dns_dns_resolve_dns_resolve_and_dns_reverse" id="dns_dns_resolve_dns_resolve_and_dns_reverse">#</a></span></h3>
<p>These functions are implemented quite differently than <a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a>. They
do not use <a href="path_to_url"><code>getaddrinfo(3)</code></a> and they <em>always</em> perform a DNS query on the
network. This network communication is always done asynchronously, and does not
use libuv's threadpool.</p>
<p>As a result, these functions cannot have the same negative impact on other
processing that happens on libuv's threadpool that <a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a> can have.</p>
<p>They do not use the same set of configuration files as <a href="#dns_dns_lookup_hostname_options_callback"><code>dns.lookup()</code></a>
uses. For instance, <em>they do not use the configuration from <code>/etc/hosts</code></em>.</p>
</div>
</div>
</div>
<script src="assets/highlight.pack.js"></script>
<script>document.addEventListener('DOMContentLoaded', () => { hljs.initHighlightingOnLoad(); });</script>
</body>
</html>
```
|
```javascript
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions
// are met:
// 1. Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// 2. Redistributions in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the distribution.
//
// THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS BE LIABLE FOR ANY
// DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
// LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
// ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
description(
"Tests that DFG silent spill and fill of WeakJSConstants does not result in nonsense."
);
function foo(a, b, c, d)
{
a.f = 42; // WeakJSConstants corresponding to the o.f transition get created here.
var x = !d; // Silent spilling and filling happens here.
b.f = x; // The WeakJSConstants get reused here.
var y = !d; // Silent spilling and filling happens here.
c.f = y; // The WeakJSConstants get reused here.
}
var Empty = "";
for (var i = 0; i < 1000; ++i) {
var o1 = new Object();
var o2 = new Object();
var o3 = new Object();
eval(Empty + "foo(o1, o2, o3, \"stuff\")");
shouldBe("o1.f", "42");
shouldBe("o2.f", "false");
shouldBe("o3.f", "false");
}
```
|
```csharp
using System;
using System.IO;
namespace Volo.Abp.Content;
public interface IRemoteStreamContent : IDisposable
{
string? FileName { get; }
string ContentType { get; }
long? ContentLength { get; }
Stream GetStream();
}
```
|
```go
// contributor license agreements. See the NOTICE file distributed with
// this work for additional information regarding copyright ownership.
//
// path_to_url
//
// Unless required by applicable law or agreed to in writing, software
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
package main
import (
"beam.apache.org/learning/katas/core_transforms/cogroupbykey/cogroupbykey/pkg/task"
"context"
"github.com/apache/beam/sdks/v2/go/pkg/beam"
"github.com/apache/beam/sdks/v2/go/pkg/beam/log"
"github.com/apache/beam/sdks/v2/go/pkg/beam/x/beamx"
"github.com/apache/beam/sdks/v2/go/pkg/beam/x/debug"
)
func main() {
ctx := context.Background()
p, s := beam.NewPipelineWithRoot()
fruits := beam.Create(s.Scope("Fruits"), "apple", "banana", "cherry")
countries := beam.Create(s.Scope("Countries"), "australia", "brazil", "canada")
output := task.ApplyTransform(s, fruits, countries)
debug.Print(s, output)
err := beamx.Run(ctx, p)
if err != nil {
log.Exitf(ctx, "Failed to execute job: %v", err)
}
}
```
|
Pinus armandii, the Armand pine or Chinese white pine, is a species of pine native to China, occurring from southern Shanxi west to southern Gansu and south to Yunnan, with outlying populations in Anhui. It grows at altitudes of 2200–3000 m in Taiwan, and it also extends a short distance into northern Burma. In Chinese it is known as "Mount Hua pine".
It grows at 1,000–3,300 m altitude, with the lower altitudes mainly in the northern part of the range. It is a tree reaching height, with a trunk up to in diameter.
Description
It is a member of the white pine group, Pinus subgenus Strobus, and like all members of that group, the leaves ('needles') are in fascicles (bundles) of five, with a deciduous sheath. They are long. The cones are long and broad, with stout, thick scales. The seeds are large, long and have only a vestigial wing; they are dispersed by spotted nutcrackers. The cones mature in their second year.
Varieties
The species has two or three varieties:
Pinus armandii var. armandii. All the range except for the populations below.
Pinus armandii var. mastersiana. Mountains of central Taiwan.
Pinus armandii var. dabeshanensis. The Dabie Mountains on the Anhui-Hubei border. Alternatively, this variety may be treated as a separate species, Pinus dabeshanensis (Dabie Mountains pine). To add further confusion, Flora of China lists this as P. fenzeliana var. dabeshanensis.
IUCN has listed var. dabeshanensis (assessed as Pinus dabeshanensis) as vulnerable and var. mastersiana as endangered.
Pinus armandii has also been reported in the past from Hainan off the south coast of China, and two islands off southern Japan, but these pines differ in a number of features and are now treated as distinct species, Hainan white pine (Pinus fenzeliana) and Yakushima white pine (Pinus amamiana) respectively.
Uses
Pinus armandii seeds are harvested and sold as pine nuts. Research indicates that these nuts can cause pine mouth syndrome. The wood is used for general building purposes; the species is important in forestry plantations in some parts of China. It is also grown as an ornamental tree in parks and large gardens in Europe and North America. The scientific name commemorates the French missionary and naturalist Armand David, who first introduced it to Europe.
Problems
19 different Pestalotiopsis species (a genus of ascomycete fungi) have been found as endophytes from bark and needles of Pinus armandii in China.
Chinese culture
The tree, because of its evergreen foliage, is considered by the Chinese as an emblem of longevity and immortality. Its resin is considered an animated soul-substance, the counterpart of blood in men and animals. In ancient China, Taoist seekers of immortality consumed much of the tree's resin, hoping thereby to prolong life. Legend says that Qiu Sheng, who lived at the time of King Chengtang of Shang (reigned 1675–1646 BCE), founder of the Shang Dynasty, was indebted for his longevity to pine-resin. The Shouxing, Chinese god of longevity, is usually represented standing at the foot of a pine, while a fairy-crane perches on a branch of the tree. In traditional pictures of "happiness, honor and longevity", the pine-tree represents longevity, in the same manner as the bat symbolizes good fortune due to its homonymic association with the Chinese character for good luck. A fungus that the Chinese call Fu Ling grows on the root of the pine-tree, and is believed by the Chinese to suppress all sensations of hunger, cure various diseases, and lengthen life.
References
External links
Pinus armandii cone photos (scroll ¾ way down page)
armandii
Trees of Myanmar
armandii
Flora of Shanxi
Flora of Gansu
Flora of Yunnan
Flora of Anhui
Trees of Taiwan
Edible nuts and seeds
Least concern plants
|
The minisuperspace in physics, especially in theories of quantum gravity, is an approximation of the otherwise infinite-dimensional phase space of a field theory.
The phase space is reduced by considering the largest wavelength modes to be of the order of the size of the universe when studying cosmological models and removing all larger modes.
The validity of this approximation holds as long as the adiabatic approximation holds.
An example would be to consider only the scale factor and Hubble constant for a Friedmann–Robertson–Walker model. In such a minisuperspace model, a small true-vacuum bubble that is nearly spherical is described by the single parameter of the scale factor a. This construction plays a significant role in explaining the origin of the universe as a bubble in quantum cosmological theory.
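As an illustrative sketch (conventions vary between authors), the homogeneous and isotropic ansatz that defines the Friedmann–Robertson–Walker minisuperspace freezes all field modes except the lapse N(t) and the scale factor a(t):

```latex
% FRW minisuperspace ansatz: the infinite-dimensional metric degrees of
% freedom are reduced to the single dynamical variable a(t).
ds^2 = -N(t)^2\,dt^2 + a(t)^2\,d\Omega_3^2
```

With this ansatz the Wheeler–DeWitt equation reduces from a functional equation to an ordinary differential equation in a, which is what makes minisuperspace models tractable.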
References
Quantum gravity
|
```java
/*
*
* This file is part of LibreTorrent.
*
* LibreTorrent is free software: you can redistribute it and/or modify
* (at your option) any later version.
*
* LibreTorrent is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
*
* along with LibreTorrent. If not, see <path_to_url
*/
package org.proninyaroslav.libretorrent.core;
import android.content.Context;
import androidx.annotation.NonNull;
import org.proninyaroslav.libretorrent.core.settings.SettingsRepository;
import org.proninyaroslav.libretorrent.core.settings.SettingsRepositoryImpl;
import org.proninyaroslav.libretorrent.core.storage.AppDatabase;
import org.proninyaroslav.libretorrent.core.storage.FeedRepository;
import org.proninyaroslav.libretorrent.core.storage.FeedRepositoryImpl;
import org.proninyaroslav.libretorrent.core.storage.TagRepository;
import org.proninyaroslav.libretorrent.core.storage.TagRepositoryImpl;
import org.proninyaroslav.libretorrent.core.storage.TorrentRepository;
import org.proninyaroslav.libretorrent.core.storage.TorrentRepositoryImpl;
public class RepositoryHelper
{
private static FeedRepositoryImpl feedRepo;
private static TorrentRepositoryImpl torrentRepo;
private static SettingsRepositoryImpl settingsRepo;
private static TagRepository tagRepo;
public synchronized static TorrentRepository getTorrentRepository(@NonNull Context appContext)
{
if (torrentRepo == null)
torrentRepo = new TorrentRepositoryImpl(appContext,
AppDatabase.getInstance(appContext));
return torrentRepo;
}
public synchronized static FeedRepository getFeedRepository(@NonNull Context appContext)
{
if (feedRepo == null)
feedRepo = new FeedRepositoryImpl(appContext,
AppDatabase.getInstance(appContext));
return feedRepo;
}
public synchronized static SettingsRepository getSettingsRepository(@NonNull Context appContext)
{
if (settingsRepo == null)
settingsRepo = new SettingsRepositoryImpl(appContext);
return settingsRepo;
}
public synchronized static TagRepository getTagRepository(@NonNull Context appContext)
{
if (tagRepo == null)
tagRepo = new TagRepositoryImpl(AppDatabase.getInstance(appContext));
return tagRepo;
}
}
```
|
Amityville: No Escape is a 2016 American horror film written and directed by Henrique Couto, and co-written by Ira Gansler. It is the seventeenth film to be inspired by Jay Anson's 1977 novel The Amityville Horror. A found footage film, it follows two storylines, one set in 1997 and the other in 2016, that both involve 112 Ocean Avenue, a haunted house in Amityville, New York.
Plot
In April 1997, a woman named Lina moves into 112 Ocean Avenue in Amityville, New York. The house is dilapidated and full of objects that were left in it by previous occupants, and as Lina works on repairing and cleaning it, she records a video diary for her absent husband, who is a soldier in the United States Army. Worsening paranormal phenomena occur in the house, which Lina eventually learns was the site of an allegedly possession-induced familicide that was committed by Ronald DeFeo Jr. back in 1974. After a few weeks, an unseen presence attacks and kills Lina.
In August 2016, a college student named George Harris is doing a video thesis on fear, and convinces his girlfriend, Sarah, his sister, Elizabeth, and their friends Lisa and Simon to accompany him on a camping trip to the woods near 112 Ocean Avenue. On their first night in the forest, George and Elizabeth encounter a woodsman who claims to be searching for his missing daughter, while a little girl dressed all in white is glimpsed by Simon. The next night, the quintet find the woodsman disemboweled shortly after the little girl is spotted by Elizabeth. The group try to flee the woods, but become lost despite spending several hours hiking while following their compass, with all of their attempts at calling for aid proving ineffective due to a lack of cellphone service and their radio emitting nothing but distorted screeching, which traumatizes Lisa.
Simon dies while searching for the little girl, and the increasingly distraught Lisa disappears after being lured off of the path by the child, who is invisible to George. Lisa and Elizabeth are killed, and George and Sarah become separated, with the latter stumbling onto and breaking into the vacant 112 Ocean Avenue at George's insistence. George soon enters the house, and calmly shoots himself in the head in front of Sarah (whose greatest fear was being left alone).
Sarah's fate is left unknown, as the film returns to the 1997 footage, which shows Lisa, looking exactly like she did in 2016 and speaking in a distorted child's voice, touch Lina's body and say, "Tag, you're it."
Cast
Release
The film premiered at the By-Jo Theatre in Germantown, Ohio on August 5, 2016. It was released on DVD by Camp Motion Pictures on June 13, 2017.
Reception
Famous Monsters of Filmland gave the film a glowingly positive review, writing, "If you are a fan of the franchise (in particular the first flick, natch) or just want to see something fresh in the found-footage fright-flick arena, I urge you to give Amityville: No Escape the attention of your putrid peepers. It is a brisk, traditional terror tale told in a fun and innovative way!" In contrast, Tex Hula ranked Amityville: No Escape as the third worst of the twenty-one Amityville films that he reviewed for Ain't It Cool News, and bluntly opined that its ending was "lame" and that the time that he spent watching it was "an hour and a half of life wasted." Fellow Ain't It Cool News reviewer M. L. Miller had a more lenient response to the film, writing, "There are some lulls in the present day stuff; some iffy motivations and decisions of the kids in the woods, some woods scenes that feel like the crew is just walking though a backyard that hasn’t been mowed for a week, and an open ending that really doesn't make a while lot of sense, but does register as creepy. The past stuff in Amityville: No Escape is actually pretty haunting in its simplicity and strong performance by Julia Gomez."
References
External links
2016 films
2016 horror films
2016 independent films
2010s American films
2010s English-language films
2010s exploitation films
2010s ghost films
2010s psychological horror films
2010s supernatural horror films
American exploitation films
American ghost films
American haunted house films
American independent films
American psychological horror films
American supernatural horror films
Amityville Horror films
Camcorder films
Films about couples
Films about fear
Films about mass murder
Films about missing people
Films about siblings
Films about spirit possession
Films about time travel
Films set in 1997
Films set in 2016
Films set in abandoned houses
Films set in forests
Films set in Long Island
Films set in New York (state)
Films shot in Ohio
Found footage films
Unofficial sequel films
|
```javascript
import { test } from '../../test';
export default test({
test({ assert, component, target, window }) {
const allow = target.querySelector('.allow-propagation');
const stop = target.querySelector('.stop-propagation');
allow?.dispatchEvent(new window.MouseEvent('click', { bubbles: true }));
stop?.dispatchEvent(new window.MouseEvent('click', { bubbles: true }));
assert.equal(component.foo, true);
assert.equal(component.bar, false);
}
});
```
|
Ashuganj or Ashugonj () is a town in the Brahmanbaria District of Chittagong Division of Bangladesh in the Meghna River delta. Its altitude is 10 meters (36 feet).
The city is known for the Port of Ashuganj and for its power plant, which generates much of the electricity for the country, especially for the capital city. Almost 25% of Bangladesh's electrical generation is produced at Ashugonj Power Station. Zia Fertilizer Ltd is on the other side of Ashuganj; it produces chemical fertilizer for the country. There are more than 500 rice mills, which account for more than 40% of the national rice output.
Ashuganj is also known as a commercial hub, with a big river port. There is a transit line in Ashugonj which connects with India. It is also very well known as a layover destination for coach bus routes from Dhaka to Sylhet and vice versa, with hotels such as Hotel Ujan Vati and Hotel Razmoni.
The area has experienced severe power shortages but a revamping project is being planned and implemented under Japanese Debt Relief Grant Aid.
See also
Meghna Heli Bridge
References
External links
Country dates with acute power crisis
5th unit of Ashuganj power plant resumes production
Populated places in Chittagong Division
Ashuganj
|
"Back to School Mr. Bean" is the eleventh episode of the British television series Mr. Bean, produced by Tiger Aspect Productions and Thames Television for Central Independent Television. It was first broadcast on ITV on Wednesday, 26 October 1994 and was watched by 14,450,000 viewers during its original broadcast.
Plot
Act 1: At School
Mr. Bean attends an open day at a local school. While looking for a place to park his Mini, he spots a near-identical Mini in a reserved parking space and replaces it with his own. Two Army Cadets help Bean to push the other Mini away, thinking it has broken down. He then confuses a troop of cadets by giving them commands which cause them to stand in unusual stances; the commander scolds the troop upon his return. Inside the school, Bean looks at various things on a wall, tampers with a philatelist's stamp collection and disturbs a calligrapher. He then sees a woman using a Van de Graaff generator to make her hair stand on end and tries it for himself, but it doesn't work for him. It leaves his body electrostatically charged, such that when he then picks up a pamphlet, it gets stuck to his hand. When he gives the pamphlet to another woman, her skirt rises up, revealing her underwear, prompting Bean to exit.
Act 2: Laboratory Trouble and the Art Class
In the chemistry lab, Bean experiments with several chemicals and creates an unstable chemical reaction. As Bean exits the room, a young boy walks in and a violent explosion occurs that creates an abundance of blue smoke. Bean then joins a still-life art class and starts drawing a bowl of fruit, but the bowl is soon replaced with a nude model. When Bean realizes this, he is reluctant to draw any further, despite the teacher's attempts to sway him, so he goes over to the potter's wheel and starts making clay pots. A teacher enters with the boy from the chemistry lab (now coated in blue powder), looking for the instigator of the explosion; she rushes the boy out after they notice the nude model. Bean places the finished pots on the model's breasts, allowing him to draw her without embarrassment.
Act 3: The Judo Class and the Toilet
While partaking in a judo class, Bean is reluctant to allow himself to be thrown. He ultimately gets the better of the instructor by pushing him to the ground from behind and rolling him up in a mat. Upon changing back into his regular clothes, Bean finds that he is wearing someone else's trousers and searches for his own. He soon finds a man in the men's toilets wearing them. Bean grabs the man by the legs and pulls the trousers off. He throws the man's underpants back to him, though they end up landing in the toilet.
Act 4: The Disaster
Just as Bean exits the school, there is an announcement over the PA system stating that there will be a demonstration shortly. Bean sees his Mini in the middle of the car park, but stops to buy a cupcake from a nearby cake stall. As he eats the cupcake, a giant Chieftain tank approaches and crushes his Mini. After the tank leaves, Bean does a double take, drops his cupcake, turns around, and slowly approaches his Mini with a tearful look on his face. As the closing credits roll, Bean examines the wreckage of the Mini and finds that the padlock he used to lock it is unharmed. Satisfied, he pulls the lock off and walks away.
Continuity
During The Best Bits of Mr. Bean, Bean finds the wreckage of his destroyed Mini in his loft.
Although the Mini was crushed in this episode, it reappears two episodes later in "Goodnight, Mr Bean". That Mini was also Austin Citron Green with a matte black bonnet and bore the same registration number, SLW 287R. It is possible that this Mini was the identical car (registration ACW 497V) that was supposed to have been crushed in the episode, and that Bean took it as his own after his was destroyed by the tank.
Cast
Rowan Atkinson as Mr. Bean
Suzanne Bertish as the art teacher
Cindy Milo as the nude model in the art class
Sam Driscoll as the boy in the chemistry lab
Christopher Ryan as judo student
Lucy Fleming as the angry teacher with the boy
David Schneider as the judo instructor
Harriet Eastcott as the electrocuted woman
Christopher Driscoll as the man in school corridor
John Clegg as the calligrapher
John Barrard as the stamp collector
Al Ashton as ACF Drill Instructor
Robin Driscoll as Man in School (uncredited)
Production
There were three cars crushed during filming. Two cars were specifically built for filming this episode and were painted with the same colour scheme as the main car, but with the engines removed. One of these two Minis was also used for the scene in which Mr. Bean substitutes his car for the identical car (registration ACW 497V). One of the three main cars, with its engine removed, was the one crushed by the tank.
References
External links
Mr. Bean episodes
1994 British television episodes
Television shows written by Rowan Atkinson
Television shows written by Robin Driscoll
|
861 Aïda is a carbonaceous asteroid from the outer region of the asteroid belt, approximately 65 kilometers in diameter. It was discovered on 22 January 1917, by German astronomer Max Wolf at Heidelberg Observatory in southwest Germany, and given the provisional designation . It was named after the Italian opera Aida.
Orbit and classification
Aïda is a dark C-type asteroid that orbits the Sun in the outer main-belt at a distance of 2.8–3.5 AU once every 5 years and 7 months (2,030 days). Its orbit has an eccentricity of 0.10 and an inclination of 8° with respect to the ecliptic. Aïda was first identified as at Heidelberg in 1906, extending the body's observation arc by 11 years prior to its official discovery observation.
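The quoted period is consistent with Kepler's third law. Taking the semi-major axis to be the midpoint of the 2.8–3.5 AU range quoted above (an approximation for illustration only; the catalogued semi-major axis is what the period is actually computed from), a quick check in Python:

```python
# Kepler's third law for a heliocentric orbit: P [years] = a [AU] ** 1.5
a_au = (2.8 + 3.5) / 2           # midpoint of the quoted orbital distances
period_years = a_au ** 1.5
period_days = period_years * 365.25
print(round(period_years, 2), round(period_days))  # roughly 5.6 years, ~2,040 days
```

The result, about 5.6 years (roughly 2,040 days), agrees with the quoted 5 years and 7 months (2,030 days) to within the rounding of the orbital distances.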
Physical characteristics
In May 2002, a rotational lightcurve of Aïda was obtained from photometric observations by French amateur astronomer Laurent Bernasconi. Lightcurve analysis gave a well-defined rotation period of 10.95 hours with a brightness variation of 0.32 magnitude ().
According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite, and NASA's Wide-field Infrared Survey Explorer with its subsequent NEOWISE mission, Aïda measures between 62.24 and 66.85 kilometers in diameter, and its surface has an albedo between 0.0571 and 0.7. The Collaborative Asteroid Lightcurve Link derives an albedo of 0.0522 and a diameter of 66.78 kilometers with an absolute magnitude of 9.7.
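The Collaborative Asteroid Lightcurve Link figures fit the standard conversion between an asteroid's absolute magnitude H, geometric albedo p, and diameter D, namely D = (1329 km / √p) · 10^(−H/5). A sketch in Python using the CALL values quoted above:

```python
import math

def asteroid_diameter_km(h_mag, albedo):
    """Standard asteroid size relation: D = 1329 km / sqrt(p) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-h_mag / 5.0)

# CALL values quoted above for 861 Aida: H = 9.7, geometric albedo = 0.0522
diameter = asteroid_diameter_km(9.7, 0.0522)
print(round(diameter, 1))  # close to the quoted 66.78 km
```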
Naming
This minor planet was named for Aida, the famous Italian opera in four acts by composer Giuseppe Verdi (1813–1901), after whom the asteroid 3975 Verdi was named. Naming citation was first mentioned in The Names of the Minor Planets by Paul Herget in 1955 ().
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
000861
Discoveries by Max Wolf
Named minor planets
861 Aida
19170122
|
Iration is the sixth studio album by the American reggae band Iration, released on May 18, 2018.
Track listing
CD release
LP release
The LP release of the album is notable for including the track "L.I.O.N. (Like It Or Not)", which was released exclusively on the LP and made available for streaming through Spotify. In the respective releases, the track is listed between "Last To Know" and "Energy".
Charts
References
External links
Iration
2018 albums
Reggae albums by American artists
|
Baek Ye-rin (; born June 26, 1997), also known as Yerin Baek, is a South Korean singer-songwriter. A former member of South Korean duo 15&, she debuted as a solo artist with her extended play, Frank, in 2015. Baek is credited with writing and composition for the majority of her songs, often touching on personal topics and real-life experiences. In addition to her solo career, she is also the lead vocalist and guitarist for the South Korean rock band The Volunteers and has been performing with them since 2018.
Life and career
1997–2015: Early life, career beginnings and 15&
Baek Ye-rin was born on June 26, 1997, in Daejeon.
In 2007, Baek appeared on the sixth episode of the SBS variety show Amazing Contest Star King alongside Seunghee, a member of the South Korean girl group Oh My Girl. Labeled a "10-year-old ballad genius", she performed "I Have Nothing" by Whitney Houston and won first place. She also appeared on the KBS2 television show Yeo Yoo Man Man.
In the same year, Baek auditioned to become a trainee at JYP Entertainment's first open audition and won second place behind 2PM's Jang Woo-young. According to Baek, she would attend school on weekdays, travel from Daejeon to Seoul on Friday nights to practice until Monday morning, and then return to Daejeon for her classes.
In 2010, while a trainee under JYP Entertainment, Baek moved to New York for two years, where she spent her time practicing English, singing, and dancing. She had Johan Kim as a vocal coach, and moved back and forth between Korea and the United States during her trainee days.
On September 27, 2012, it was revealed that Baek would debut as part of a duo, 15&, with K-pop Star season 1 winner Park Ji-min. On October 7, 15& debuted with their first single, "I Dream", on SBS Inkigayo.
2015–2019: Solo success and departure from JYP
Following the start of 15&'s four-year hiatus in February 2015, Baek was featured on a number of tracks. She was featured on South Korean rapper Olltii's single "Excited" from his debut album, which topped the realtime charts upon release, and appeared in the final episode of the rap competition show Unpretty Rapstar as a featured artist on contestant Yuk Ji-dam's song "On & On", which also topped the charts upon release. She was also featured on San E's single "Me You", which went on to top the realtime charts of seven major music sites.
In November 2015, it was announced that Baek would postpone her college entrance examinations to focus on her musical career. She officially debuted as a solo artist on November 30 with her first extended play, Frank, led by the single "Across the Universe". The EP marked the start of Baek's long-time collaboration with producer Cloud.
In February 2016, Baek graduated from Hanlim Multi Art School alongside 15& member Park Ji-min and Yugyeom of Got7.
Baek released her first digital single, Bye Bye My Blue, on June 20, 2016. Her follow-up digital single, Love You on Christmas, was released on December 7.
Baek continued to be featured on songs by popular artists and began performing covers and her own unreleased songs at festivals. In 2017, Baek performed her then-unreleased song "Square" at a festival on the Han River while wearing a green dress; a fan-taken video uploaded to YouTube instantly became a hot topic and continues to garner views. The song was eventually released two years later.
In 2018, she formed The Volunteers with members of the independent rock band Bye Bye Badman, and released their first EP, the post-grunge rock "Vanity & People". The Seoul-based rock band consists of vocalist and guitarist Baek, bassist Hyungseok Koh (Cloud), guitarist Jonny, and drummer Chiheon Kim. Like the majority of Baek's post-JYP solo work, The Volunteers' lyrics are written entirely in English.
Baek released her second EP, Our Love Is Great, on March 18, 2019. The following day, its title track "Maybe It's Not Our Fault" went to the top of all eight domestic music sites' realtime charts. At the 2020 Korean Music Awards, Our Love Is Great won Album of the Year and Best Pop Album, and the lead track "Maybe It's Not Our Fault" won Best Pop Song.
Baek also contributed to the soundtrack of the popular 2019 K-drama Crash Landing on You, with an original ballad, "Here I am Again." The song peaked at #6 on Billboard's K-Pop Hot 100 chart, her highest position on the chart yet.
In 2019, Baek's duo 15& officially disbanded following Park Ji-min's departure from JYP Entertainment. On September 13, Baek announced that her contract with JYP Entertainment ended, and that she was leaving to create her own independent record label.
2019–2021: Founding of label Blue Vinyl, Every Letter I Sent You and Tellusboutyourself
On November 6, 2019, Baek officially launched her own label, Blue Vinyl, and the following month released her first studio album through the label, the double album Every Letter I Sent You, with 17 of its 18 songs written in English. Her single "Square (2017)" also made Baek the first South Korean artist to top the charts with a song sung entirely in English. The track achieved 'all kill' status, topping the daily and realtime charts of Melon, Genie, Bugs, and Soribada, and the realtime charts of Flo.
At the 2020 Melon Music Awards, Baek was nominated for four major awards, including Artist of the Year, Song of the Year for "Square (2017)", and Album of the Year for Every Letter I Sent You; she was ultimately awarded Top 10 Artist and Best R&B/Soul for "Square (2017)". At the 2021 Korean Music Awards, Baek won Best Pop Album for Every Letter I Sent You.
In February 2020, Baek held her first solo concert, "Turn on that Blue Vinyl". Tickets for about 4,400 seats sold out in 30 seconds.
On December 10, 2020, exactly one year after her previous album, Baek released her second studio album, Tellusboutyourself. The album featured a wider range of musical genres and lyrical themes than her previous work. Her first remix album and third EP, Tellusboutyourself Remixes, featuring six remixed tracks from the album, was released on February 16, 2021.
2021–present: The Volunteers, single releases, and world tour
On May 11, 2021, Baek's band, The Volunteers, joined her independent label, Blue Vinyl, and launched their self-titled debut album on May 27, after previously making their music solely available through SoundCloud and YouTube.
On September 10, 2021, Baek released her first cover album and fourth EP, Love, Yerin, featuring six remakes of tracks by artists such as Nell and The Black Skirts. Following the release of the EP, all tracks entered the top ten of the Melon, Bugs, and Genie charts.
On May 24, 2022, Baek released the digital single "Pisces".
On September 19, 2022, Baek announced her North American tour on Instagram; it began in Atlanta on November 28 and concluded in Vancouver on December 22.
On January 1, 2023, Baek released the digital single "New Year". The following month, Baek announced the dates of her Asia-Pacific tour on Instagram; the tour started with a three-day solo concert in Seoul, titled "Square", from May 19 to 21, 2023. Tickets sold out as soon as sales opened, with a waitlist of 40,000. The tour concluded in Bangkok on June 17, 2023.
Artistry
Influences
Baek has cited Beyoncé, Amy Winehouse, Norah Jones, Rachel Yamagata, Oasis, Avril Lavigne, Rage Against the Machine, as well as Korean artists BoA, Light & Salt, and Yoo Jae Ha as her influences. She titled her first EP, Frank, as a tribute to Winehouse, whose debut album shares the same name. Baek has also revealed through her live performances that she wrote the songs "Amy" and "True Lover" from her debut release as dedications to Winehouse.
Songwriting
Baek is known for her candid and straightforward lyrics. Upon debuting as a solo artist, Baek participated in writing and composing all of the songs on Frank, revealing her singer-songwriter side. When writing songs for the EP, Baek drew on her past memories and her present self. She went on to write the lyrics for and compose the majority of her songs.
Explaining her decision to write her songs in English, Baek has cited her intention to make her music accessible to international fans, her plans for an international tour, and the influence of the English-language artists who shaped her style and music. In a 2021 interview, Baek said it is her dream to perform in the home countries of the artists who inspired her.
On her lyrical inspiration for her second album, Tellusboutyourself, Baek cites the poetic metaphors of Blossom Dearie and the straightforward style of Amy Winehouse as influences.
Discography
Every Letter I Sent You (2019)
Tellusboutyourself (2020)
Songwriting credits
Music videos
Concert
Headlining concert
"Turn on that Blue Vinyl"
"2022 Yerin Baek North America Tour"
"Square"
"2023 Yerin Baek Asia-Pacific Tour"
Ambassadorship
Ambassador for the 5th Seoul Animal Film Festival (2022)
Awards and nominations
References
1997 births
Living people
21st-century South Korean singers
Hanlim Multi Art School alumni
JYP Entertainment artists
People from Daejeon
Melon Music Award winners
Korean Music Award winners
South Korean female idols
South Korean women pop singers
South Korean rhythm and blues singers
South Korean singer-songwriters
21st-century South Korean women singers
South Korean women singer-songwriters
South Korean contemporary R&B singers
|
{{DISPLAYTITLE:Fσ set}}
In mathematics, an Fσ set (said F-sigma set) is a countable union of closed sets. The notation originated in French with F for fermé (French: closed) and σ for somme (French: sum, union).
The complement of an Fσ set is a Gδ set.
Fσ is the same as the class Σ⁰₂ in the Borel hierarchy.
Examples
Each closed set is an Fσ set.
The set of rationals ℚ is an Fσ set in the real numbers ℝ. More generally, any countable set in a T1 space is an Fσ set, because every singleton is closed.
The set of irrationals is not an Fσ set.
In metrizable spaces, every open set is an Fσ set.
The union of countably many Fσ sets is an Fσ set, and the intersection of finitely many Fσ sets is an Fσ set.
The set of all points (x, y) in the Cartesian plane such that x/y is rational is an Fσ set because it can be expressed as the union of all the lines passing through the origin with rational slope:
⋃_{r ∈ ℚ} {(ry, y) : y ∈ ℝ},
where ℚ, the set of rational numbers, is a countable set.
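The rationals example above can be written out explicitly. A short derivation (in standard notation, not taken from the original text) showing why ℚ is Fσ and why its complement is Gδ:

```latex
% The rationals as a countable union of closed singletons (an F_sigma set):
\mathbb{Q} \;=\; \bigcup_{q \in \mathbb{Q}} \{q\}
% Taking complements (De Morgan), the irrationals are a countable
% intersection of open sets (a G_delta set):
\mathbb{R} \setminus \mathbb{Q} \;=\; \bigcap_{q \in \mathbb{Q}} \bigl(\mathbb{R} \setminus \{q\}\bigr)
```

Each singleton {q} is closed in ℝ and ℚ is countable, so the first union witnesses the Fσ property; the second identity is the dual statement noted at the top of the article.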
See also
Gδ set — the dual notion.
Borel hierarchy
P-space, any space having the property that every Fσ set is closed
References
Topology
Descriptive set theory
|
Eyo Ephraim Adam (c. 1849 – 1911) was the head of the Etim Efiom royal house of Old Calabar from 1908 until his death on September 28, 1911. His father, Ephraim Adam, was the founder of the Tete household in Etim Efiom House; his mother, Enang Otuk Oyom, was equally from Etim Efiom House. He is credited with aiding the spread of Christianity in Akpabuyo, Nigeria.
Independence of Etim Efiom Royal House
On the death of Obong Adam Ephraim Adam I in 1906, Eyo Ephraim assumed the position of family head of the Etim Efiom sub-house of the great Duke House. At this time, Etim Efiom was a sub-house of Duke House; thus, the late Obong Adam Ephraim Adam I had assumed leadership of Duke House despite having paternal descent from Etim Efiom. Eyo contested for the role of Etubom of Duke House. The only other candidate to emerge in the contest was Adam Ephraim Duke, the family head of the Efiong Essien/Okon Idem sub-house of the larger Duke House. With the full backing and support of members of the Duke Ephraim lineage, Adam Ephraim Duke succeeded as Etubom of the great Duke House in 1906. This was one of the motivating factors for the move to liberate Etim Efiom House from Duke House. Etim Efiom House, originally known as Tom Ephraim House, was established in 1790 and was later placed under regency in 1834 by Duke Ephraim. Eyo Ephraim, together with the Oyo-Ita and Eneyo houses, therefore sought liberation from Duke House. Unlike the warring route taken by some houses to assert their independence from Duke House, the fight for the independence of Etim Efiom House was taken to court. Eyo Ephraim did not live to see the house liberated; he died in 1911, leaving the fight for independence to his younger sibling, Ekpo Ephraim Adam. Etim Efiom House became autonomous on 11 April 1913.
Legacy
The abolition of the slave trade by the British did not mean that slaves residing in the West African region were automatically liberated. Slave dealing was still ongoing in several parts of the new protectorate until the early 20th century. On 1 January 1902, the new protectorate government made the decision to abolish slave dealing in all parts of the protectorate. This was one of the first steps in the watering down of the slave institution in Old Calabar and its dependencies. On the abolition of slave dealing, the new administration had to deal with the imminent problem of a shortage of labour, as slaves made up the workforce in Old Calabar. A new labour policy had to be created with the aim of retaining slaves to provide labour while reconciling the administration's previous stance on slave dealing with unmitigated forms of slavery. According to Nair, "The result was a compromise: gradual abolition of domestic slavery, and the retention of the traditional house system in such a way that the position of slaves was ameliorated. High Commissioner Moor felt that the only way in which slaves could be prevented from running away from the houses was to improve the conditions of slaves within them". In spite of the new policies, which aimed to make the status of the slave less burdensome, slaves in the plantations were still prevented from gaining access to the benefits of missionary presence and formal education. According to E. U. Aye, "The missionary made no effort to introduce Christianity into the plantations because he was not allowed to do so by Efik rulers who suspected the Christian dogma as a disruptive influence among the lower orders against the existing Ekpe plutocracy." Through the intervention of Eyo Ephraim Adam, Christianity was introduced into Akpabuyo. The foundation was first laid at Ikot Uba, where Eyo introduced the Presbyterian Church, and then at Ikot Nakanda.
Through his efforts in spreading the Christian faith, the missionaries received a large number of Christian adherents. The Christian religion permeated through villages such as Esuk Mba, Ifondo, Nkakat Ikot Akiriba, Ikot Eneyo, Akwa Obio Inwang Nsidung, Ikot Mbakara, Ekpene Tete and several other villages.
Etubom Eyo was a politically active member of Efik society. He was president of the Efik National Society in 1905 and was also a member of the Old Calabar judicial council from 1902, together with his brothers Ekei Ephraim Adam and Umo Ephraim Adam. Other members of the Old Calabar judicial council included Prince Bassey Duke Ephraim, Ani Eniang Offiong, George Duke Henshaw, Esien Ekpe Hogan-Bassey and several others. Within the Ekpe society of Old Calabar, Etubom Eyo held the title of Obong Mboko. His children included Edidem Bassey Eyo Ephraim Adam III, Eyo Eyo Ephraim Adam, Utong Eyo Ephraim Adam and several others.
Notes
References
1911 deaths
|
Clash of Civilizations Over an Elevator in Piazza Vittorio () is an Italian language novel by Amara Lakhous, published by . It was first published in 2006. Its English translation, done by Ann Goldstein, was published in 2008 by Europa Editions.
The novel takes place in an apartment complex in the Piazza Vittorio Emanuele II area.
According to The New Yorker, the multicultural Rome of the 2000s is the "author’s real subject".
The novel was adapted into a film.
Plot
The novel uses the first person point of view.
In the story, police question residents of various origins in a single apartment complex. According to John Powers of National Public Radio, even though the plot is driven by police looking for the person who committed a murder, "the mystery isn't really the point." Publishers Weekly praised the "intriguing psychological and social insight".
Characters
Amedeo (originally named Ahmed) - From Algeria, he intentionally conceals his background, leading to ambiguity among those who know him; based on his words, some think he is southern Italian rather than of North African origin. Sandro gave him the name "Amedeo". In his past, he was to be married to a woman named Bàgia, but she died.
Abdallah Ben Kadour - He came from the same community Amedeo grew up in, and now lives in Rome.
Mauro Bettarini
Sandro Dandini - A native of Rome, he owns a bar and lives in the same building as Ahmedeo.
Benedetta Esposito - The landlady, she is from Naples. She uses Neapolitan as her primary language, which makes it difficult to find work. She is oppressed by Italians from other parts of Italy, and she oppresses people of recent immigrant origins.
Elisabetta Fabiani - She is from Rome and lives with Valentino, her dog, until the dog is killed.
Maria Cristina Gonzalez - A caretaker of a senior citizen, she originates from Peru, and suffers from alcoholism. She meets other Peruvians at Roma Termini railway station.
Iqbal Amir Allah - He originates from Bangladesh.
Antonio Marini - He originates from Milan, and is a university professor. Graziella Parati of Dartmouth College describes him as "the outsider from the north who considers himself superior and therefore deserving of special treatment".
Johan Van Marten - Originating from the Netherlands, he is an expatriate and is not an immigrant to Italy. Parati stated that he is "privileged" in Italian society.
Stefania Massaro
Parviz Mansoor Samadi - He is from Iran and had left that country due to political reasons. He does not consider himself to be an immigrant. He uses French as his Italian is not fluent.
According to Parati, an elevator used by many characters is rendered into being "a central character".
Reception
It won the Flaiano Prize.
References
Reference notes
External links
Clash of Civilizations Over an Elevator in Piazza Vittorio
Scontro di civiltà per un ascensore a Piazza Vittorio - Edizioni e/o
2006 Italian novels
Novels set in Rome
Italian crime novels
Italian novels adapted into films
|
```xml
<vector xmlns:android="path_to_url" android:height="34.0dp" android:tint="?attr/colorControlNormal" android:viewportHeight="15" android:viewportWidth="15" android:width="34.0dp">
<path android:fillColor="@android:color/white" android:pathData="M12 0C13 0 13 0 13 1L13 14C13 15 13 15 12 15L3 15C2 15 2 15 2 14L2 1C2 0 2 0 3 0L12 0zM10 12L4 12L4 13L10 13L10 12zM12 9L11 9L11 9.75L12 9.75L12 9zM10 2L4 2L4 10L10 10L10 2zM12 7.5L11 7.5L11 8.25L12 8.25L12 7.5zM12 6L11 6L11 6.75L12 6.75L12 6zM12 4.5L11 4.5L11 5.25L12 5.25L12 4.5z"/>
<path android:fillColor="@android:color/white" android:pathData="M8 4C7.25 4 7 4.75 7 4.75C7 4.75 6.75 4 6 4C5.5 4 5 4.5 5 5.25C5 6.5 6.75 8 7 8C7.25 8 9 6.5 9 5.25C9 4.5 8.5 4 8 4z"/>
</vector>
```
|
```cpp
//your_sha256_hash---------------------------------------
//your_sha256_hash---------------------------------------
#pragma once
namespace Memory
{
template <typename TBlockType>
class SmallHeapBlockAllocator
{
public:
typedef TBlockType BlockType;
SmallHeapBlockAllocator();
void Initialize();
template <ObjectInfoBits attributes>
inline char * InlinedAlloc(Recycler * recycler, DECLSPEC_GUARD_OVERFLOW size_t sizeCat);
// Pass through template parameter to InlinedAllocImpl
template <bool canFaultInject>
inline char * SlowAlloc(Recycler * recycler, DECLSPEC_GUARD_OVERFLOW size_t sizeCat, ObjectInfoBits attributes);
// There are paths where we simply can't OOM here, so we shouldn't fault inject as it creates a bit of a mess
template <bool canFaultInject>
inline char* InlinedAllocImpl(Recycler * recycler, DECLSPEC_GUARD_OVERFLOW size_t sizeCat, ObjectInfoBits attributes);
TBlockType * GetHeapBlock() const { return heapBlock; }
SmallHeapBlockAllocator * GetNext() const { return next; }
void Set(TBlockType * heapBlock);
void SetNew(TBlockType * heapBlock);
void Clear();
void UpdateHeapBlock();
void SetExplicitFreeList(FreeObject* list);
static uint32 GetEndAddressOffset() { return offsetof(SmallHeapBlockAllocator, endAddress); }
char *GetEndAddress() { return endAddress; }
static uint32 GetFreeObjectListOffset() { return offsetof(SmallHeapBlockAllocator, freeObjectList); }
FreeObject *GetFreeObjectList() { return freeObjectList; }
void SetFreeObjectList(FreeObject *freeObject) { freeObjectList = freeObject; }
#if defined(PROFILE_RECYCLER_ALLOC) || defined(RECYCLER_MEMORY_VERIFY) || defined(MEMSPECT_TRACKING) || defined(ETW_MEMORY_TRACKING)
void SetTrackNativeAllocatedObjectCallBack(void (*pfnCallBack)(Recycler *, void *, size_t))
{
pfnTrackNativeAllocatedObjectCallBack = pfnCallBack;
}
#endif
#if DBG
FreeObject * GetExplicitFreeList() const
{
Assert(IsExplicitFreeObjectListAllocMode());
return this->freeObjectList;
}
#endif
bool IsBumpAllocMode() const
{
return endAddress != nullptr;
}
bool IsExplicitFreeObjectListAllocMode() const
{
return this->heapBlock == nullptr;
}
bool IsFreeListAllocMode() const
{
return !IsBumpAllocMode() && !IsExplicitFreeObjectListAllocMode();
}
#if ENABLE_ALLOCATIONS_DURING_CONCURRENT_SWEEP
bool IsAllocatingDuringConcurrentSweepMode(Recycler * recycler) const
{
return IsFreeListAllocMode() && recycler->IsConcurrentSweepState();
}
#endif
private:
static bool NeedSetAttributes(ObjectInfoBits attributes)
{
return attributes != LeafBit && (attributes & InternalObjectInfoBitMask) != 0;
}
char * endAddress;
FreeObject * freeObjectList;
TBlockType * heapBlock;
#if ENABLE_ALLOCATIONS_DURING_CONCURRENT_SWEEP
#if DBG
bool isAllocatingFromNewBlock;
#endif
#endif
SmallHeapBlockAllocator * prev;
SmallHeapBlockAllocator * next;
friend class HeapBucketT<BlockType>;
#ifdef RECYCLER_SLOW_CHECK_ENABLED
template <class TBlockAttributes>
friend class SmallHeapBlockT;
#endif
#if defined(PROFILE_RECYCLER_ALLOC) || defined(RECYCLER_MEMORY_VERIFY)
HeapBucket * bucket;
#endif
#ifdef RECYCLER_TRACK_NATIVE_ALLOCATED_OBJECTS
char * lastNonNativeBumpAllocatedBlock;
void TrackNativeAllocatedObjects();
#endif
#if defined(PROFILE_RECYCLER_ALLOC) || defined(RECYCLER_MEMORY_VERIFY) || defined(MEMSPECT_TRACKING) || defined(ETW_MEMORY_TRACKING)
void (*pfnTrackNativeAllocatedObjectCallBack)(Recycler * recycler, void *, size_t sizeCat);
#endif
};
template <typename TBlockType>
template <bool canFaultInject>
inline char*
SmallHeapBlockAllocator<TBlockType>::InlinedAllocImpl(Recycler * recycler, DECLSPEC_GUARD_OVERFLOW size_t sizeCat, ObjectInfoBits attributes)
{
Assert((attributes & InternalObjectInfoBitMask) == attributes);
#ifdef RECYCLER_WRITE_BARRIER
Assert(!CONFIG_FLAG(ForceSoftwareWriteBarrier) || (attributes & WithBarrierBit) || (attributes & LeafBit));
#endif
AUTO_NO_EXCEPTION_REGION;
if (canFaultInject)
{
FAULTINJECT_MEMORY_NOTHROW(_u("InlinedAllocImpl"), sizeCat);
}
char * memBlock = (char *)freeObjectList;
char * nextCurrentAddress = memBlock + sizeCat;
char * endAddress = this->endAddress;
if (nextCurrentAddress <= endAddress)
{
// Bump Allocation
Assert(this->IsBumpAllocMode());
#ifdef RECYCLER_TRACK_NATIVE_ALLOCATED_OBJECTS
TrackNativeAllocatedObjects();
lastNonNativeBumpAllocatedBlock = memBlock;
#endif
freeObjectList = (FreeObject *)nextCurrentAddress;
if (NeedSetAttributes(attributes))
{
if ((attributes & (FinalizeBit | TrackBit)) != 0)
{
// Make sure a valid vtable is installed as once the attributes have been set this allocation may be traced by background marking
memBlock = (char *)new (memBlock) DummyVTableObject();
#if defined(_M_ARM32_OR_ARM64)
// On ARM, make sure the v-table write is performed before setting the attributes
MemoryBarrier();
#endif
}
heapBlock->SetAttributes(memBlock, (attributes & StoredObjectInfoBitMask));
}
return memBlock;
}
if (memBlock != nullptr && endAddress == nullptr)
{
// Free list allocation
freeObjectList = ((FreeObject *)memBlock)->GetNext();
#ifdef RECYCLER_MEMORY_VERIFY
((FreeObject *)memBlock)->DebugFillNext();
#endif
Assert(!this->IsBumpAllocMode());
if (NeedSetAttributes(attributes))
{
TBlockType * allocationHeapBlock = this->heapBlock;
if (allocationHeapBlock == nullptr)
{
Assert(this->IsExplicitFreeObjectListAllocMode());
allocationHeapBlock = (TBlockType *)recycler->FindHeapBlock(memBlock);
Assert(allocationHeapBlock != nullptr);
Assert(!allocationHeapBlock->IsLargeHeapBlock());
}
if ((attributes & (FinalizeBit | TrackBit)) != 0)
{
// Make sure a valid vtable is installed as once the attributes have been set this allocation may be traced by background marking
memBlock = (char *)new (memBlock) DummyVTableObject();
#if defined(_M_ARM32_OR_ARM64)
// On ARM, make sure the v-table write is performed before setting the attributes
MemoryBarrier();
#endif
}
allocationHeapBlock->SetAttributes(memBlock, (attributes & StoredObjectInfoBitMask));
}
#ifdef RECYCLER_MEMORY_VERIFY
if (this->IsExplicitFreeObjectListAllocMode())
{
HeapBlock* heapBlock = recycler->FindHeapBlock(memBlock);
Assert(heapBlock != nullptr);
Assert(!heapBlock->IsLargeHeapBlock());
TBlockType* smallBlock = (TBlockType*)heapBlock;
smallBlock->ClearExplicitFreeBitForObject(memBlock);
}
#endif
#if DBG || defined(RECYCLER_STATS)
if (!IsExplicitFreeObjectListAllocMode())
{
BOOL isSet = heapBlock->GetDebugFreeBitVector()->TestAndClear(heapBlock->GetAddressBitIndex(memBlock));
Assert(isSet);
}
#endif
#if ENABLE_ALLOCATIONS_DURING_CONCURRENT_SWEEP
// If we are allocating during concurrent sweep we must mark the object to prevent it from being swept
// in the ongoing sweep.
if (heapBlock != nullptr && heapBlock->isPendingConcurrentSweepPrep)
{
AssertMsg(!this->isAllocatingFromNewBlock, "We shouldn't be tracking allocation to a new block; i.e. bump allocation; during concurrent sweep.");
AssertMsg(!heapBlock->IsAnyFinalizableBlock(), "Allocations are not allowed to finalizable blocks during concurrent sweep.");
AssertMsg(heapBlock->heapBucket->AllocationsStartedDuringConcurrentSweep(), "We shouldn't be allocating from this block while allocations are disabled.");
// Explicitly mark this object and also clear the free bit.
heapBlock->SetObjectMarkedBit(memBlock);
#if DBG || defined(RECYCLER_SLOW_CHECK_ENABLED)
uint bitIndex = heapBlock->GetAddressBitIndex(memBlock);
heapBlock->GetDebugFreeBitVector()->Clear(bitIndex);
heapBlock->objectsMarkedDuringSweep++;
#endif
#ifdef RECYCLER_TRACE
if (recycler->GetRecyclerFlagsTable().Trace.IsEnabled(Js::ConcurrentSweepPhase) && recycler->GetRecyclerFlagsTable().Trace.IsEnabled(Js::MemoryAllocationPhase) && CONFIG_FLAG_RELEASE(Verbose))
{
Output::Print(_u("[**33**]FreeListAlloc: Object 0x%p from HeapBlock 0x%p used for allocation during ConcurrentSweep [CollectionState: %d] \n"), memBlock, heapBlock, static_cast<CollectionState>(recycler->collectionState));
}
#endif
}
#endif
return memBlock;
}
return nullptr;
}
template <typename TBlockType>
template <ObjectInfoBits attributes>
inline char *
SmallHeapBlockAllocator<TBlockType>::InlinedAlloc(Recycler * recycler, DECLSPEC_GUARD_OVERFLOW size_t sizeCat)
{
return InlinedAllocImpl<true /* allow fault injection */>(recycler, sizeCat, attributes);
}
template <typename TBlockType>
template <bool canFaultInject>
inline
char *
SmallHeapBlockAllocator<TBlockType>::SlowAlloc(Recycler * recycler, DECLSPEC_GUARD_OVERFLOW size_t sizeCat, ObjectInfoBits attributes)
{
Assert((attributes & InternalObjectInfoBitMask) == attributes);
return InlinedAllocImpl<canFaultInject>(recycler, sizeCat, attributes);
}
}
```
|
The Pokataroo railway line is a railway line in New South Wales, Australia. It branches from the Walgett line at Burren Junction and opened in 1906. There are signs that the line was constructed across the Barwon River all the way to Collarenebri, New South Wales.
The line is closed beyond Merrywinebone. Passenger services were withdrawn in 1974. The line is primarily used for grain haulage with large grain loading facilities located at Merrywinebone and Rowena.
Pokataroo is from Sydney. Pokataroo station precinct features a turning triangle used to reverse the direction of a locomotive prior to commencing a return journey.
See also
Rail transport in New South Wales
Rail rollingstock in New South Wales
References
Further reading
Milne, Rod. "Flat Lands and Myall – The Pokataroo Branch", Australian Railway Historical Society Bulletin, April 1960, pp. 79–87
Regional railway lines in New South Wales
Standard gauge railways in Australia
Railway lines opened in 1906
1906 establishments in Australia
|
```tcl
# -*- coding: utf-8; mode: tcl; c-basic-offset: 4; indent-tabs-mode: nil; tab-width: 4; truncate-lines: t -*- vim:fenc=utf-8:et:sw=4:ts=4:sts=4
#
# Usage:
# PortGroup cmake 1.0
options cmake.build_dir cmake.install_prefix cmake.out_of_source
default cmake.build_dir {${workpath}/build}
default cmake.install_prefix {${prefix}}
default cmake.out_of_source no
# standard place to install extra CMake modules
set cmake_share_module_dir ${prefix}/share/cmake/Modules
# can use cmake or cmake-devel; default to cmake if not installed
depends_build-append path:bin/cmake:cmake
proc _cmake_get_build_dir {} {
if {[option cmake.out_of_source]} {
return [option cmake.build_dir]
}
return [option worksrcpath]
}
default configure.dir {[_cmake_get_build_dir]}
pre-configure {
file mkdir ${configure.dir}
}
# cache the configure.ccache variable (it will be overridden in the pre-configure step)
set cmake_ccache ${configure.ccache}
# tell CMake to use ccache via the CMAKE_<LANG>_COMPILER_LAUNCHER variable
# and unset the global configure.ccache option which is not compatible
# with CMake.
# See path_to_url
proc cmake_ccaching_flags {} {
global prefix
upvar cmake_ccache ccache
if {${ccache} && [file exists ${prefix}/bin/ccache]} {
return [list \
-DCMAKE_C_COMPILER_LAUNCHER=${prefix}/bin/ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=${prefix}/bin/ccache \
-DCMAKE_Fortran_COMPILER_LAUNCHER=${prefix}/bin/ccache \
-DCMAKE_OBJC_COMPILER_LAUNCHER=${prefix}/bin/ccache \
-DCMAKE_OBJCXX_COMPILER_LAUNCHER=${prefix}/bin/ccache \
-DCMAKE_ISPC_COMPILER_LAUNCHER=${prefix}/bin/ccache ]
}
}
configure.cmd ${prefix}/bin/cmake
default configure.pre_args {-DCMAKE_INSTALL_PREFIX='${cmake.install_prefix}'}
# Policy CMP0025=NEW: identify Apple Clang compiler as "AppleClang";
# MacPorts Clang is then handled separately from AppleClang. This
# setting ensures consistency in compiler feature determination and
# use, which is especially useful for older Mac OS X installs --
# e.g., ones that use MacPorts Clang 4.0 via the cxx11 1.1 PortGroup.
default configure.args {[list \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_BUILD_WITH_INSTALL_RPATH=ON \
{*}[cmake_ccaching_flags] \
{-DCMAKE_C_COMPILER="$CC"} \
-DCMAKE_COLOR_MAKEFILE=ON \
{-DCMAKE_CXX_COMPILER="$CXX"} \
-DCMAKE_FIND_FRAMEWORK=LAST \
-DCMAKE_INSTALL_NAME_DIR=${cmake.install_prefix}/lib \
-DCMAKE_INSTALL_RPATH=${cmake.install_prefix}/lib \
-DCMAKE_MAKE_PROGRAM=${build.cmd} \
-DCMAKE_MODULE_PATH=${cmake_share_module_dir} \
-DCMAKE_SYSTEM_PREFIX_PATH="${cmake.install_prefix}\;${prefix}\;/usr" \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DCMAKE_POLICY_DEFAULT_CMP0025=NEW \
-Wno-dev
]}
default configure.post_args {${worksrcpath}}
# CMake honors set environment variables CFLAGS, CXXFLAGS, and LDFLAGS when it
# is first run in a build directory to initialize CMAKE_C_FLAGS,
# CMAKE_CXX_FLAGS, CMAKE_[EXE|SHARED|MODULE]_LINKER_FLAGS. However, be aware
# that a CMake script can always override these flags when it runs, as they
# are frequently set internally in functions of other CMake build variables!
#
# Attention: If you want to be sure that no compiler flags are passed via
# configure.args, you have to manually clear configure.optflags, as it is set
# to "-Os" by default and added to all language-specific flags. If you want to
# turn off optimization, explicitly set configure.optflags to "-O0".
# TODO: Handle configure.objcflags (cf. to CMake upstream ticket #4756
# "CMake needs an Objective-C equivalent of CMAKE_CXX_FLAGS"
# <path_to_url
# TODO: Handle the Fortran-specific configure.* variables:
# configure.fflags, configure.fcflags, configure.f90flags
# TODO: Handle the Java-specific configure.classpath variable.
pre-configure {
# The environment variable CPPFLAGS is not considered by CMake.
# (CMake upstream ticket #12928 "Add support for CPPFLAGS environment variable"
# <path_to_url
#
# But adding -I${prefix}/include to CFLAGS/CXXFLAGS is a bad idea.
# If any other flags are needed, we need to add them.
# In addition, CMake provides build-type-specific flags for Release (-O3
# -DNDEBUG), Debug (-g), MinSizeRel (-Os -DNDEBUG), and RelWithDebInfo
# (-O2 -g -DNDEBUG). If the configure.optflags have been set (-Os by
# default), we have to remove the optimization flags from the concerned
# Release build type so that configure.optflags gets honored (Debug used
# by the +debug variant does not set optimization flags by default).
if {${configure.optflags} ne ""} {
configure.args-append -DCMAKE_C_FLAGS_RELEASE="-DNDEBUG" \
-DCMAKE_CXX_FLAGS_RELEASE="-DNDEBUG"
}
# CMake doesn't like --enable-debug, so remove it unconditionally.
configure.args-delete --enable-debug
}
platform darwin {
set cmake._archflag_vars {cc_archflags cxx_archflags ld_archflags objc_archflags objcxx_archflags universal_cflags universal_cxxflags universal_ldflags universal_objcflags universal_objcxxflags}
pre-configure {
# cmake will add the correct -arch flag(s) based on the value of CMAKE_OSX_ARCHITECTURES.
if {[variant_exists universal] && [variant_isset universal]} {
if {[info exists universal_archs_supported]} {
merger_arch_compiler no
merger_arch_flag no
global merger_configure_args
foreach arch ${universal_archs_to_use} {
lappend merger_configure_args(${arch}) -DCMAKE_OSX_ARCHITECTURES=${arch}
}
} else {
configure.universal_args-append \
-DCMAKE_OSX_ARCHITECTURES="[join ${configure.universal_archs} \;]"
}
} else {
configure.args-append \
-DCMAKE_OSX_ARCHITECTURES="${configure.build_arch}"
}
# Setting our own -arch flags is unnecessary (in the case of a non-universal build) or even
# harmful (in the case of a universal build, because it causes the compiler identification to
# fail; see path_to_url
# Save all archflag-containing variables before changing any of them, because some of them
# declare their default value based on the value of another.
foreach archflag_var ${cmake._archflag_vars} {
global cmake._saved_${archflag_var}
set cmake._saved_${archflag_var} [option configure.${archflag_var}]
}
foreach archflag_var ${cmake._archflag_vars} {
configure.${archflag_var}
}
configure.args-append -DCMAKE_OSX_DEPLOYMENT_TARGET="${macosx_deployment_target}"
if {${configure.sdkroot} ne ""} {
configure.args-append -DCMAKE_OSX_SYSROOT="${configure.sdkroot}"
} else {
configure.args-append -DCMAKE_OSX_SYSROOT="/"
}
# The configure.ccache variable has been cached so we can restore it in the post-configure
# (pre-configure and post-configure are always run in a single `port` invocation.)
configure.ccache no
# surprising but intended behaviour that's impossible to work around more gracefully:
# overriding configure.ccache fails if the user set it directly from the commandline
if {[tbool configure.ccache]} {
ui_error "Please don't use configure.ccache=yes on the commandline for port:${subport}, use configureccache=yes"
return -code error "invalid invocation (port:${subport})"
}
if {${cmake_ccache}} {
ui_info " (using ccache)"
}
}
post-configure {
# restore configure.ccache:
if {[info exists cmake_ccache]} {
configure.ccache ${cmake_ccache}
ui_debug "configure.ccache restored to ${cmake_ccache}"
}
# Although cmake wants us not to set -arch flags ourselves when we run cmake,
# ports might have need to access these variables at other times.
foreach archflag_var ${cmake._archflag_vars} {
global cmake._saved_${archflag_var}
configure.${archflag_var} {*}[set cmake._saved_${archflag_var}]
}
}
}
configure.universal_args-delete --disable-dependency-tracking
variant debug description "Enable debug binaries" {
configure.args-replace -DCMAKE_BUILD_TYPE=Release -DCMAKE_BUILD_TYPE=Debug
}
default build.dir {${configure.dir}}
default build.post_args VERBOSE=ON
# Generated Unix Makefiles contain a "fast" install target that begins
# installing immediately instead of checking build dependencies again.
default destroot.target install/fast
```
|
```python
#
#
# path_to_url
#
# Unless required by applicable law or agreed to in writing, software
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# ==============================================================================
"""Installation test for YAMNet."""
import numpy as np
import tensorflow as tf
import params
import yamnet
class YAMNetTest(tf.test.TestCase):
_params = None
_yamnet = None
_yamnet_classes = None
@classmethod
def setUpClass(cls):
super().setUpClass()
cls._params = params.Params()
cls._yamnet = yamnet.yamnet_frames_model(cls._params)
cls._yamnet.load_weights('yamnet.h5')
cls._yamnet_classes = yamnet.class_names('yamnet_class_map.csv')
def clip_test(self, waveform, expected_class_name, top_n=10):
"""Run the model on the waveform, check that expected class is in top-n."""
predictions, embeddings, log_mel_spectrogram = YAMNetTest._yamnet(waveform)
clip_predictions = np.mean(predictions, axis=0)
top_n_indices = np.argsort(clip_predictions)[-top_n:]
top_n_scores = clip_predictions[top_n_indices]
top_n_class_names = YAMNetTest._yamnet_classes[top_n_indices]
top_n_predictions = list(zip(top_n_class_names, top_n_scores))
self.assertIn(expected_class_name, top_n_class_names,
'Did not find expected class {} in top {} predictions: {}'.format(
expected_class_name, top_n, top_n_predictions))
def testZeros(self):
self.clip_test(
waveform=np.zeros((int(3 * YAMNetTest._params.sample_rate),)),
expected_class_name='Silence')
def testRandom(self):
np.random.seed(51773) # Ensure repeatability.
self.clip_test(
waveform=np.random.uniform(-1.0, +1.0,
(int(3 * YAMNetTest._params.sample_rate),)),
expected_class_name='White noise')
def testSine(self):
self.clip_test(
waveform=np.sin(2 * np.pi * 440 *
np.arange(0, 3, 1 / YAMNetTest._params.sample_rate)),
expected_class_name='Sine wave')
if __name__ == '__main__':
tf.test.main()
```
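The clip-level scoring inside `clip_test` (average the per-frame prediction rows, then take the top-n classes via `argsort`) can be exercised in isolation. The class names below are stand-ins for illustration, not entries from the real `yamnet_class_map.csv`:

```python
import numpy as np

def top_n_classes(frame_predictions, class_names, top_n=3):
    """Average per-frame scores into clip-level scores, then return the
    names of the top_n highest-scoring classes (in ascending score order,
    mirroring the np.argsort slice used in clip_test)."""
    clip_predictions = np.mean(frame_predictions, axis=0)
    top_n_indices = np.argsort(clip_predictions)[-top_n:]
    return [class_names[i] for i in top_n_indices]

# Two frames of scores over three hypothetical classes.
frames = np.array([[0.1, 0.7, 0.2],
                   [0.3, 0.5, 0.4]])
names = ['Silence', 'Speech', 'Music']
print(top_n_classes(frames, names, top_n=2))  # → ['Music', 'Speech']
```

Since the test only calls `assertIn` on the resulting name list, the ascending order within the top-n slice does not affect whether a test passes.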
|
Jack W. Aeby (August 16, 1923 – June 19, 2015) was an American environmental physicist most famous for having taken the only well-exposed color photograph of the first detonation of a nuclear weapon on July 16, 1945, at the Trinity nuclear test site in New Mexico.
Early life
Jack Aeby was born on August 16, 1923, in Mound City, Missouri, United States.
Career
In 1942, Aeby joined the Manhattan Project by filling out an employment application in Albuquerque. He performed a variety of jobs, including ferrying scientists and equipment between Albuquerque and Los Alamos. Though a civilian, he worked his way up through technician roles in the SED (Special Engineering Detachment) and eventually witnessed nearly 100 nuclear explosions. After earning his degree at UC Berkeley after the war, he returned to Los Alamos, working in the Health Physics Department.
On July 16, 1945, Aeby took the only well-exposed color photograph of the first detonation of a nuclear weapon at the Trinity nuclear test site in New Mexico.
While color motion pictures of the Trinity test were made, most were badly overexposed or damaged due to the fireball's tendency to blister and solarize the film. Aeby was a civilian assigned to Physics Group 5 with Emilio Segrè and Owen Chamberlain at the time his snapshot was taken.
The photo was taken with a Perfex 33 fitted with a 35mm lens, using a shutter speed of 1/100 at f/4 on Anscochrome color movie stock.
Aeby was not an official observer at the test site, but was invited along to take informal photos of the work, as he had commonly done since he arrived at Los Alamos. He says he took the photos of the blast on a whim: "it was there so I shot it". He took the film, a non-standard piece of Anscochrome movie stock, out of the camera that night at a local photo lab and worked it through the 21-step procedure for color film developing. Later, Los Alamos management asked him if they could keep the original negative "for safe keeping". It has since been lost.
Aeby says in most uses of the photo it is reversed. This was done intentionally so that the asymmetrical fireball and cloud would look the same as other official pictures taken from the north; Aeby was on the south at the Base Camp when he took the picture.
Aeby is a source for a story about a notable estimate made by Enrico Fermi at that test:
Personal life
Aeby lived in the Española Valley in northern New Mexico with his wife Jeanne. They had 5 children. Aeby died at his home in Española in 2015.
See also
Berlyn Brixner – official Trinity test photographer
References
Further reading
External links
2003 Video Interview with Jack Aeby by Atomic Heritage Foundation Voices of the Manhattan Project
Jack Aeby, Atom-Bomb Photographer (MP3) on NPR's All Things Considered (July 15, 2005)
Jack Aeby exhibit at the Los Alamos Historical Museum (photos), The Los Alamos Monitor
1923 births
2015 deaths
American nuclear engineers
American photographers
Los Alamos National Laboratory personnel
Manhattan Project people
People from Lawrence County, Missouri
People from Española, New Mexico
Physicists from Missouri
Scientists from Missouri
|
KPXP (97.9 FM), branded as Power 99 FM, is a radio station broadcasting an Adult Album Alternative music format. Licensed to Garapan-Saipan, Northern Mariana Islands, the station is currently owned by Sorensen Pacific Broadcasting Inc.
The station was assigned the KRSI call letters by the Federal Communications Commission on May 31, 1991. The call sign was changed to the current KPXP on May 12, 2014, when the former KPXP on 99.5 (now KZGU) moved from the Northern Mariana Islands to Guam. The new KPXP inherited the station's name and format, explaining its use of the Power 99 name while being on 97.9 MHz.
References
External links
PXP
Adult album alternative radio stations in the United States
Radio stations established in 1991
1991 establishments in the Northern Mariana Islands
Garapan
|
San Clemente del Tuyú is an Argentine town in the Partido de la Costa district of the Province of Buenos Aires.
History
The area was noticed by Ferdinand Magellan in 1520, who gave nearby Cape San Antonio its name; Spanish authorities first surveyed it in 1580. The Guaraní staff of reformist Governor Hernando Arias de Saavedra christened the spot Rincón del Tuyú ("muddy corner"). First mapped by British Jesuit Thomas Falkner in 1744, the neighboring stream was named San Clemente by Spanish Jesuit José Cardiel.
The waterfront area was soon purchased by the Ortiz de Rozas family, one of Argentina's most well-established landowning families. Sold to another prominent family, the Leloirs, in 1816, the area became a sheep ranch. A descendant of the Ortiz de Rozas family, General Juan Manuel de Rosas, had the area incorporated into a district of the Province of Buenos Aires in 1825, the area's first assigned jurisdiction since national independence in 1816; as governor, Rosas brutally suppressed a local insurrection against his rule there in 1839. Following Rosas' overthrow in 1852, the area was given a county seat (Mar del Tuyú) in 1864 and, with the arrival of abattoirs, the government had fishermen's docks, a canal between San Clemente and Buenos Aires, a railhead and two lighthouses built between 1878 and 1902.
Prospering during the 1920s, the Argentine middle class first became widely aware of the idyllic coast through the efforts of Mayor Jorge Gibson, who had the local coastline graded into public beaches. The project's success led to the first gravel road into San Clemente in 1932 and its formal designation as a municipality; soon followed service stations, campgrounds, real estate developments, a power plant and even a monastery. President Juan Perón made plans for a nearby submarine base that, though never built, resulted in a four-lane highway into San Clemente. This and continuing national prosperity led to the town's rapid development after 1950, which led to the establishment of a hospital in 1970 and of Mundo Marino in 1979, still the largest oceanarium in South America.
A nature theme park (Parque Bahía Aventura) opened in 1997; drawing few crowds, the area was slated for closure when, in 2003, mineral hot springs were discovered at the spot. County Mayor Juan de Jesús set aside part of Bahía Aventura and opened Termas Marinas, today one of Argentina's most popular hot springs.
Tapera de López: in 1922, the young married couple Manuel López and Magdalena Luero settled there. Manuel was engaged in fishing and built a fish-salting works on the site; they had 12 children there.
The city today
San Clemente del Tuyú, the northernmost of the seven seaside communities in the Partido de la Costa district, today counts 27 hotels (of which 14 are three- or four-star establishments). The aquarium, adventure park and hot springs are complemented by two natural sciences museums, fishing boat tours and the 129 m (400 ft) pier, among other parks and attractions. Punta Rasa, at the northern end of the city and the cape, was made a nature preserve in 1997. The activity around fishing boat tours centers on the black corvine feast held annually since 1966, towards December. The area's vast dunes also set the stage for the annual Enduro competition held every February since 1998. A small but loyal contingent of visitors also arrives seasonally from San Clemente, California, a sister city of San Clemente del Tuyú since 1969.
The seven sister communities receive nearly a million visitors monthly during the peak summer season (January and February), of which San Clemente del Tuyú hosts roughly one tenth, in line with its share of the district's hotel rooms. A considerable number of summertime visitors also come to hear Benedictine monk Mamerto Menapace's sermons and lectures, which take place at the order's San Clemente estancia; the estancia offers ascetic "pilgrim" accommodations. San Clemente del Tuyú hosted the Sixth Iberoamerican Congress on Environmental Education in September 2009.
Climate
Sister cities
San Clemente, California
References
External links
Todo San Clemente
Populated coastal places in Argentina
Populated places in Buenos Aires Province
Populated places established in 1935
Seaside resorts in Argentina
|
```php
<?php
/*
*
*
* path_to_url
*
* Unless required by applicable law or agreed to in writing, software
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
*/
namespace Google\Service\Genomics;
class ContainerStartedEvent extends \Google\Model
{
/**
* @var int
*/
public $actionId;
/**
* @var string
*/
public $ipAddress;
/**
* @var int[]
*/
public $portMappings;
/**
* @param int
*/
public function setActionId($actionId)
{
$this->actionId = $actionId;
}
/**
* @return int
*/
public function getActionId()
{
return $this->actionId;
}
/**
* @param string
*/
public function setIpAddress($ipAddress)
{
$this->ipAddress = $ipAddress;
}
/**
* @return string
*/
public function getIpAddress()
{
return $this->ipAddress;
}
/**
* @param int[]
*/
public function setPortMappings($portMappings)
{
$this->portMappings = $portMappings;
}
/**
* @return int[]
*/
public function getPortMappings()
{
return $this->portMappings;
}
}
// Adding a class alias for backwards compatibility with the previous class name.
class_alias(ContainerStartedEvent::class, 'Google_Service_Genomics_ContainerStartedEvent');
```
|
```c++
#include <boost/test/unit_test.hpp>
#include <fc/bloom_filter.hpp>
#include <fc/exception/exception.hpp>
#include <fc/reflect/variant.hpp>
#include <iostream>
#include <fc/variant.hpp>
#include <fc/io/raw.hpp>
#include <fstream>
#include <fc/io/json.hpp>
#include <fc/crypto/base64.hpp>
using namespace fc;
static bloom_parameters setup_parameters()
{
bloom_parameters parameters;
// How many elements roughly do we expect to insert?
parameters.projected_element_count = 100000;
// Maximum tolerable false positive probability? (0,1)
parameters.false_positive_probability = 0.0001; // 1 in 10000
// Simple randomizer (optional)
parameters.random_seed = 0xA5A5A5A5;
if (!parameters)
{
BOOST_FAIL( "Error - Invalid set of bloom filter parameters!" );
}
parameters.compute_optimal_parameters();
return parameters;
}
BOOST_AUTO_TEST_SUITE(fc_crypto)
BOOST_AUTO_TEST_CASE(bloom_test_1)
{
try {
//Instantiate Bloom Filter
bloom_filter filter(setup_parameters());
uint32_t count = 0;
std::string line;
std::ifstream in("README.md");
std::ofstream words("words.txt");
while( count < 100000 && std::getline(in, line) )
{
// std::cout << "'"<<line<<"'\n";
if( !filter.contains(line) )
{
filter.insert( line );
words << line << "\n";
++count;
}
}
// wdump((filter));
auto packed_filter = fc::raw::pack_to_vector(filter);
// wdump((packed_filter.size()));
// wdump((packed_filter));
std::stringstream out;
// std::string str = fc::json::to_string(packed_filter);
auto b64 = fc::base64_encode( packed_filter.data(), packed_filter.size() );
for( uint32_t i = 0; i < b64.size(); i += 1024 )
out << '"' << b64.substr( i, 1024 ) << "\",\n";
}
catch ( const fc::exception& e )
{
edump((e.to_detail_string()) );
}
}
BOOST_AUTO_TEST_CASE(bloom_test_2)
{
try {
//Instantiate Bloom Filter
bloom_filter filter(setup_parameters());
std::string str_list[] = { "AbC", "iJk", "XYZ" };
// Insert into Bloom Filter
{
// Insert some strings
for (std::size_t i = 0; i < (sizeof(str_list) / sizeof(std::string)); ++i)
{
filter.insert(str_list[i]);
}
// Insert some numbers
for (std::size_t i = 0; i < 100; ++i)
{
filter.insert(i);
}
}
// Query Bloom Filter
{
// Query the existence of strings
for (std::size_t i = 0; i < (sizeof(str_list) / sizeof(std::string)); ++i)
{
BOOST_CHECK( filter.contains(str_list[i]) );
}
// Query the existence of numbers
for (std::size_t i = 0; i < 100; ++i)
{
BOOST_CHECK( filter.contains(i) );
}
std::string invalid_str_list[] = { "AbCX", "iJkX", "XYZX" };
// Query the existence of invalid strings
for (std::size_t i = 0; i < (sizeof(invalid_str_list) / sizeof(std::string)); ++i)
{
BOOST_CHECK( !filter.contains(invalid_str_list[i]) );
}
// Query the existence of invalid numbers
for (int i = -1; i > -100; --i)
{
BOOST_CHECK( !filter.contains(i) );
}
}
}
catch ( const fc::exception& e )
{
edump((e.to_detail_string()) );
}
}
BOOST_AUTO_TEST_SUITE_END()
```
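`setup_parameters()` above relies on `compute_optimal_parameters()` to size the filter from the projected element count and the tolerated false-positive probability. The textbook formulas behind that sizing (bit-array size m = -n ln p / (ln 2)^2, hash count k = (m/n) ln 2) can be checked directly; this is the standard derivation, not necessarily fc's exact rounding:

```python
import math

def optimal_bloom_parameters(n, p):
    """Standard bloom filter sizing for n expected elements at
    false-positive probability p: returns (bits, hash_count)."""
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))  # bit-array size
    k = round((m / n) * math.log(2))                      # number of hash functions
    return m, k

# The parameters used in the tests: 100000 elements at p = 0.0001.
m, k = optimal_bloom_parameters(100000, 0.0001)
print(m, k)  # about 1.92 million bits (~234 KiB) and 13 hash functions
```

For these parameters that works out to roughly 19.2 bits per element, which suggests the packed filter serialized in `bloom_test_1` should be on the order of a few hundred kilobytes.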
|
```xml
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="15.0" xmlns="path_to_url">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug|Win32">
<Configuration>Debug</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Debug|x64">
<Configuration>Debug</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|Win32">
<Configuration>Release</Configuration>
<Platform>Win32</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|x64">
<Configuration>Release</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
</ItemGroup>
<PropertyGroup Label="Globals">
<ProjectName>modbus</ProjectName>
<ProjectGuid>{498E0845-C7F4-438B-8EDE-EF7FC9A74430}</ProjectGuid>
<RootNamespace>modbus</RootNamespace>
<Keyword>Win32Proj</Keyword>
<WindowsTargetPlatformVersion>10.0</WindowsTargetPlatformVersion>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
<WholeProgramOptimization>true</WholeProgramOptimization>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
<WholeProgramOptimization>true</WholeProgramOptimization>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
<ConfigurationType>StaticLibrary</ConfigurationType>
<PlatformToolset>v143</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="PropertySheets">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<PropertyGroup>
<_ProjectFileVersion>16.0.29511.113</_ProjectFileVersion>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<EnableManagedIncrementalBuild>False</EnableManagedIncrementalBuild>
<LinkIncremental>false</LinkIncremental>
<GenerateManifest>true</GenerateManifest>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<EnableManagedIncrementalBuild>False</EnableManagedIncrementalBuild>
<LinkIncremental>false</LinkIncremental>
<GenerateManifest>true</GenerateManifest>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<LinkIncremental>false</LinkIncremental>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<OutDir>$(SolutionDir)$(Platform)\$(Configuration)\</OutDir>
<IntDir>$(Platform)\$(Configuration)\</IntDir>
<LinkIncremental>false</LinkIncremental>
</PropertyGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<PreBuildEvent>
<Command />
</PreBuildEvent>
<CustomBuildStep>
<Message />
<Command />
</CustomBuildStep>
<ClCompile>
<Optimization>Disabled</Optimization>
<IntrinsicFunctions>true</IntrinsicFunctions>
<WholeProgramOptimization>false</WholeProgramOptimization>
<AdditionalIncludeDirectories>.;..</AdditionalIncludeDirectories>
<PreprocessorDefinitions>W32DEBUG;HAVE_CONFIG_H;DLLBUILD;_CRT_SECURE_NO_DEPRECATE=1;_CRT_NONSTDC_NO_DEPRECATE=1;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<MinimalRebuild>false</MinimalRebuild>
<ExceptionHandling />
<BasicRuntimeChecks>UninitializedLocalUsageCheck</BasicRuntimeChecks>
<RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
<FloatingPointModel>Fast</FloatingPointModel>
<PrecompiledHeader />
<WarningLevel>Level3</WarningLevel>
<DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
<CompileAs>CompileAsC</CompileAs>
</ClCompile>
<ResourceCompile>
<PreprocessorDefinitions>_MSC_VER;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ResourceOutputFileName>$(SolutionDir)/modbus.res</ResourceOutputFileName>
</ResourceCompile>
<Link>
<AdditionalDependencies>ws2_32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<Version>
</Version>
<GenerateDebugInformation>true</GenerateDebugInformation>
<GenerateMapFile>true</GenerateMapFile>
<SubSystem>Console</SubSystem>
<RandomizedBaseAddress />
<DataExecutionPrevention />
<TargetMachine>MachineX86</TargetMachine>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<PreBuildEvent>
<Command />
</PreBuildEvent>
<CustomBuildStep>
<Message />
<Command />
</CustomBuildStep>
<ClCompile>
<Optimization>MaxSpeed</Optimization>
<IntrinsicFunctions>true</IntrinsicFunctions>
<WholeProgramOptimization>false</WholeProgramOptimization>
<AdditionalIncludeDirectories>.;..</AdditionalIncludeDirectories>
<PreprocessorDefinitions>HAVE_CONFIG_H;DLLBUILD;_CRT_SECURE_NO_DEPRECATE=1;_CRT_NONSTDC_NO_DEPRECATE=1;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<ExceptionHandling />
<RuntimeLibrary>MultiThreaded</RuntimeLibrary>
<FunctionLevelLinking>false</FunctionLevelLinking>
<FloatingPointModel>Fast</FloatingPointModel>
<PrecompiledHeader />
<WarningLevel>Level3</WarningLevel>
<DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
<CompileAs>CompileAsC</CompileAs>
</ClCompile>
<Link>
<AdditionalDependencies>ws2_32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<GenerateDebugInformation>true</GenerateDebugInformation>
<SubSystem>Console</SubSystem>
<StackReserveSize>1048576</StackReserveSize>
<StackCommitSize>524288</StackCommitSize>
<OptimizeReferences>true</OptimizeReferences>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<LinkTimeCodeGeneration />
<EntryPointSymbol />
<RandomizedBaseAddress>false</RandomizedBaseAddress>
<DataExecutionPrevention />
<TargetMachine>MachineX86</TargetMachine>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<PreBuildEvent>
<Command />
</PreBuildEvent>
<CustomBuildStep>
<Message />
<Command />
</CustomBuildStep>
<Midl>
<TargetEnvironment>X64</TargetEnvironment>
</Midl>
<ClCompile>
<Optimization>Disabled</Optimization>
<WholeProgramOptimization>false</WholeProgramOptimization>
<AdditionalIncludeDirectories>.;..</AdditionalIncludeDirectories>
<MinimalRebuild>false</MinimalRebuild>
<BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary>
<PrecompiledHeader />
<WarningLevel>Level3</WarningLevel>
<DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
<CompileAs>CompileAsC</CompileAs>
<DisableSpecificWarnings>4244;4267;%(DisableSpecificWarnings)</DisableSpecificWarnings>
<PreprocessorDefinitions>W32DEBUG;_WINDLL;HAVE_CONFIG_H;DLLBUILD;_CRT_SECURE_NO_DEPRECATE=1;_CRT_NONSTDC_NO_DEPRECATE=1;%(PreprocessorDefinitions)</PreprocessorDefinitions>
</ClCompile>
<Link>
<GenerateDebugInformation>true</GenerateDebugInformation>
<SubSystem>Console</SubSystem>
<StackReserveSize>1048576</StackReserveSize>
<StackCommitSize>524288</StackCommitSize>
<TargetMachine>MachineX64</TargetMachine>
<AdditionalDependencies>ws2_32.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<PreBuildEvent>
<Command />
</PreBuildEvent>
<CustomBuildStep>
<Message />
<Command />
</CustomBuildStep>
<Midl>
<TargetEnvironment>X64</TargetEnvironment>
</Midl>
<ClCompile>
<Optimization>MaxSpeed</Optimization>
<IntrinsicFunctions>true</IntrinsicFunctions>
<WholeProgramOptimization>false</WholeProgramOptimization>
<AdditionalIncludeDirectories>.;..</AdditionalIncludeDirectories>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary>
<FunctionLevelLinking>true</FunctionLevelLinking>
<PrecompiledHeader />
<WarningLevel>Level3</WarningLevel>
<DebugInformationFormat>ProgramDatabase</DebugInformationFormat>
<CompileAs>CompileAsC</CompileAs>
<DisableSpecificWarnings>4244;4267;%(DisableSpecificWarnings)</DisableSpecificWarnings>
<PreprocessorDefinitions>_WINDLL;HAVE_CONFIG_H;DLLBUILD;_CRT_SECURE_NO_DEPRECATE=1;_CRT_NONSTDC_NO_DEPRECATE=1;%(PreprocessorDefinitions)</PreprocessorDefinitions>
</ClCompile>
<Link>
<GenerateDebugInformation>true</GenerateDebugInformation>
<SubSystem>Console</SubSystem>
<StackReserveSize>1048576</StackReserveSize>
<StackCommitSize>524288</StackCommitSize>
<OptimizeReferences>true</OptimizeReferences>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<LinkTimeCodeGeneration />
<TargetMachine>MachineX64</TargetMachine>
<AdditionalDependencies>ws2_32.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClCompile Include="..\modbus-data.c" />
<ClCompile Include="..\modbus-rtu.c" />
<ClCompile Include="..\modbus-tcp.c" />
<ClCompile Include="..\modbus.c" />
</ItemGroup>
<ItemGroup>
<ClInclude Include="..\modbus-private.h" />
<ClInclude Include="..\modbus-rtu-private.h" />
<ClInclude Include="..\modbus-rtu.h" />
<ClInclude Include="..\modbus-tcp-private.h" />
<ClInclude Include="..\modbus-tcp.h" />
<ClInclude Include="..\modbus.h" />
<ClInclude Include="config.h" />
<ClInclude Include="modbus-version.h" />
</ItemGroup>
<ItemGroup>
<ResourceCompile Include="modbus.rc" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
</Project>
```
|
Séptimo Día - No Descansaré (stylized as Sép7imo Día) was a touring arena show by Cirque du Soleil, inspired by the music of Argentinian band Soda Stereo.
Acts
Skipping ropes
Aerial revolver
Hand balancing
Arms and legs ballet
Hair hang
TV Overdose
Diabolo
Russian cradle
Sand painting
Water tank
Aerial chains and grill
Campfire
Suspended pole
Power track and banquine
Music
The music for the show was produced and mixed by the two surviving members of the band Soda Stereo, Zeta Bosio and Charly Alberti, and was co-produced by Adrián Taverna, who created new versions and mash-ups of the songs especially for the show.
The following tracks are from the official album (and live show soundtrack) and feature remixes and mashups of the band's songs:
En El Séptimo Día (Prologue)
Cae el Sol / Planta (Opening Celebration)
Picnic en el 4to B / Te Hacen Falta Vitaminas / Mi Novia Tiene Bíceps (Skipping Ropes)
Ella Usó, Un Misil (Character Transition)
Prófugos (Aerial Revolver)
En Remolinos (Handbalancing on Canes)
Crema de Estrellas (Arms and Legs Ballet)
Cuando Pase el Temblor (Transition)
Luna Roja (Hair Hang)
Fue (Transition)
Sobredosis de TV (TV Overdose)
Planeador (with samples from Disco Eterno) / Persiana Americana (Diabolo)
Signos (Russian Cradle Wheel)
Un Millón de Años Luz (Sand Painting)
Hombre al Agua (Water Tank)
En La Ciudad de la Furia (Aerial Grill and Chains)
Crema de estrellas / Te Para Tres (Campfire)
Primavera Cero (Suspended Pole)
Sueles dejarme sólo (instrumental) / Corazón Delator (Fast Track Setup)
De Música Ligera (with samples from X-Playo) (Fast Track and Banquine)
Terapia de amor intensiva (Finale)
Tour
Unusually, Séptimo Día began its arena tour in Argentina rather than in Montreal, where Cirque du Soleil's touring shows usually premiere, since Argentina was the home country of Soda Stereo, the band on which the show's concept was based.
References
External links
Official site
"Exclusive: Cirque Du Soleil Teases Soda Stereo Spectacular"
Cirque du Soleil touring shows
|
```go
package jmespath
import (
"encoding/json"
"errors"
"fmt"
"math"
"reflect"
"sort"
"strconv"
"strings"
"unicode/utf8"
)
type jpFunction func(arguments []interface{}) (interface{}, error)
type jpType string
const (
jpUnknown jpType = "unknown"
jpNumber jpType = "number"
jpString jpType = "string"
jpArray jpType = "array"
jpObject jpType = "object"
jpArrayNumber jpType = "array[number]"
jpArrayString jpType = "array[string]"
jpExpref jpType = "expref"
jpAny jpType = "any"
)
type functionEntry struct {
name string
arguments []argSpec
handler jpFunction
hasExpRef bool
}
type argSpec struct {
types []jpType
variadic bool
}
type byExprString struct {
intr *treeInterpreter
node ASTNode
items []interface{}
hasError bool
}
func (a *byExprString) Len() int {
return len(a.items)
}
func (a *byExprString) Swap(i, j int) {
a.items[i], a.items[j] = a.items[j], a.items[i]
}
func (a *byExprString) Less(i, j int) bool {
first, err := a.intr.Execute(a.node, a.items[i])
if err != nil {
a.hasError = true
// Return a dummy value.
return true
}
ith, ok := first.(string)
if !ok {
a.hasError = true
return true
}
second, err := a.intr.Execute(a.node, a.items[j])
if err != nil {
a.hasError = true
// Return a dummy value.
return true
}
jth, ok := second.(string)
if !ok {
a.hasError = true
return true
}
return ith < jth
}
type byExprFloat struct {
intr *treeInterpreter
node ASTNode
items []interface{}
hasError bool
}
func (a *byExprFloat) Len() int {
return len(a.items)
}
func (a *byExprFloat) Swap(i, j int) {
a.items[i], a.items[j] = a.items[j], a.items[i]
}
func (a *byExprFloat) Less(i, j int) bool {
first, err := a.intr.Execute(a.node, a.items[i])
if err != nil {
a.hasError = true
// Return a dummy value.
return true
}
ith, ok := first.(float64)
if !ok {
a.hasError = true
return true
}
second, err := a.intr.Execute(a.node, a.items[j])
if err != nil {
a.hasError = true
// Return a dummy value.
return true
}
jth, ok := second.(float64)
if !ok {
a.hasError = true
return true
}
return ith < jth
}
type functionCaller struct {
functionTable map[string]functionEntry
}
func newFunctionCaller() *functionCaller {
caller := &functionCaller{}
caller.functionTable = map[string]functionEntry{
"length": {
name: "length",
arguments: []argSpec{
{types: []jpType{jpString, jpArray, jpObject}},
},
handler: jpfLength,
},
"starts_with": {
name: "starts_with",
arguments: []argSpec{
{types: []jpType{jpString}},
{types: []jpType{jpString}},
},
handler: jpfStartsWith,
},
"abs": {
name: "abs",
arguments: []argSpec{
{types: []jpType{jpNumber}},
},
handler: jpfAbs,
},
"avg": {
name: "avg",
arguments: []argSpec{
{types: []jpType{jpArrayNumber}},
},
handler: jpfAvg,
},
"ceil": {
name: "ceil",
arguments: []argSpec{
{types: []jpType{jpNumber}},
},
handler: jpfCeil,
},
"contains": {
name: "contains",
arguments: []argSpec{
{types: []jpType{jpArray, jpString}},
{types: []jpType{jpAny}},
},
handler: jpfContains,
},
"ends_with": {
name: "ends_with",
arguments: []argSpec{
{types: []jpType{jpString}},
{types: []jpType{jpString}},
},
handler: jpfEndsWith,
},
"floor": {
name: "floor",
arguments: []argSpec{
{types: []jpType{jpNumber}},
},
handler: jpfFloor,
},
"map": {
name: "map",
arguments: []argSpec{
{types: []jpType{jpExpref}},
{types: []jpType{jpArray}},
},
handler: jpfMap,
hasExpRef: true,
},
"max": {
name: "max",
arguments: []argSpec{
{types: []jpType{jpArrayNumber, jpArrayString}},
},
handler: jpfMax,
},
"merge": {
name: "merge",
arguments: []argSpec{
{types: []jpType{jpObject}, variadic: true},
},
handler: jpfMerge,
},
"max_by": {
name: "max_by",
arguments: []argSpec{
{types: []jpType{jpArray}},
{types: []jpType{jpExpref}},
},
handler: jpfMaxBy,
hasExpRef: true,
},
"sum": {
name: "sum",
arguments: []argSpec{
{types: []jpType{jpArrayNumber}},
},
handler: jpfSum,
},
"min": {
name: "min",
arguments: []argSpec{
{types: []jpType{jpArrayNumber, jpArrayString}},
},
handler: jpfMin,
},
"min_by": {
name: "min_by",
arguments: []argSpec{
{types: []jpType{jpArray}},
{types: []jpType{jpExpref}},
},
handler: jpfMinBy,
hasExpRef: true,
},
"type": {
name: "type",
arguments: []argSpec{
{types: []jpType{jpAny}},
},
handler: jpfType,
},
"keys": {
name: "keys",
arguments: []argSpec{
{types: []jpType{jpObject}},
},
handler: jpfKeys,
},
"values": {
name: "values",
arguments: []argSpec{
{types: []jpType{jpObject}},
},
handler: jpfValues,
},
"sort": {
name: "sort",
arguments: []argSpec{
{types: []jpType{jpArrayString, jpArrayNumber}},
},
handler: jpfSort,
},
"sort_by": {
name: "sort_by",
arguments: []argSpec{
{types: []jpType{jpArray}},
{types: []jpType{jpExpref}},
},
handler: jpfSortBy,
hasExpRef: true,
},
"join": {
name: "join",
arguments: []argSpec{
{types: []jpType{jpString}},
{types: []jpType{jpArrayString}},
},
handler: jpfJoin,
},
"reverse": {
name: "reverse",
arguments: []argSpec{
{types: []jpType{jpArray, jpString}},
},
handler: jpfReverse,
},
"to_array": {
name: "to_array",
arguments: []argSpec{
{types: []jpType{jpAny}},
},
handler: jpfToArray,
},
"to_string": {
name: "to_string",
arguments: []argSpec{
{types: []jpType{jpAny}},
},
handler: jpfToString,
},
"to_number": {
name: "to_number",
arguments: []argSpec{
{types: []jpType{jpAny}},
},
handler: jpfToNumber,
},
"not_null": {
name: "not_null",
arguments: []argSpec{
{types: []jpType{jpAny}, variadic: true},
},
handler: jpfNotNull,
},
}
return caller
}
func (e *functionEntry) resolveArgs(arguments []interface{}) ([]interface{}, error) {
if len(e.arguments) == 0 {
return arguments, nil
}
if !e.arguments[len(e.arguments)-1].variadic {
if len(e.arguments) != len(arguments) {
return nil, errors.New("incorrect number of args")
}
for i, spec := range e.arguments {
userArg := arguments[i]
err := spec.typeCheck(userArg)
if err != nil {
return nil, err
}
}
return arguments, nil
}
if len(arguments) < len(e.arguments) {
return nil, errors.New("invalid arity")
}
return arguments, nil
}
func (a *argSpec) typeCheck(arg interface{}) error {
for _, t := range a.types {
switch t {
case jpNumber:
if _, ok := arg.(float64); ok {
return nil
}
case jpString:
if _, ok := arg.(string); ok {
return nil
}
case jpArray:
if isSliceType(arg) {
return nil
}
case jpObject:
if _, ok := arg.(map[string]interface{}); ok {
return nil
}
case jpArrayNumber:
if _, ok := toArrayNum(arg); ok {
return nil
}
case jpArrayString:
if _, ok := toArrayStr(arg); ok {
return nil
}
case jpAny:
return nil
case jpExpref:
if _, ok := arg.(expRef); ok {
return nil
}
}
}
return fmt.Errorf("Invalid type for: %v, expected: %#v", arg, a.types)
}
func (f *functionCaller) CallFunction(name string, arguments []interface{}, intr *treeInterpreter) (interface{}, error) {
entry, ok := f.functionTable[name]
if !ok {
return nil, errors.New("unknown function: " + name)
}
resolvedArgs, err := entry.resolveArgs(arguments)
if err != nil {
return nil, err
}
if entry.hasExpRef {
var extra []interface{}
extra = append(extra, intr)
resolvedArgs = append(extra, resolvedArgs...)
}
return entry.handler(resolvedArgs)
}
func jpfAbs(arguments []interface{}) (interface{}, error) {
num := arguments[0].(float64)
return math.Abs(num), nil
}
func jpfLength(arguments []interface{}) (interface{}, error) {
arg := arguments[0]
if c, ok := arg.(string); ok {
return float64(utf8.RuneCountInString(c)), nil
} else if isSliceType(arg) {
v := reflect.ValueOf(arg)
return float64(v.Len()), nil
} else if c, ok := arg.(map[string]interface{}); ok {
return float64(len(c)), nil
}
return nil, errors.New("could not compute length()")
}
func jpfStartsWith(arguments []interface{}) (interface{}, error) {
search := arguments[0].(string)
prefix := arguments[1].(string)
return strings.HasPrefix(search, prefix), nil
}
func jpfAvg(arguments []interface{}) (interface{}, error) {
// We've already type checked the value so we can safely use
// type assertions.
args := arguments[0].([]interface{})
length := float64(len(args))
numerator := 0.0
for _, n := range args {
numerator += n.(float64)
}
return numerator / length, nil
}
func jpfCeil(arguments []interface{}) (interface{}, error) {
val := arguments[0].(float64)
return math.Ceil(val), nil
}
func jpfContains(arguments []interface{}) (interface{}, error) {
search := arguments[0]
el := arguments[1]
if searchStr, ok := search.(string); ok {
if elStr, ok := el.(string); ok {
return strings.Contains(searchStr, elStr), nil
}
return false, nil
}
// Otherwise this is a generic contains for []interface{}
general := search.([]interface{})
for _, item := range general {
if item == el {
return true, nil
}
}
return false, nil
}
func jpfEndsWith(arguments []interface{}) (interface{}, error) {
search := arguments[0].(string)
suffix := arguments[1].(string)
return strings.HasSuffix(search, suffix), nil
}
func jpfFloor(arguments []interface{}) (interface{}, error) {
val := arguments[0].(float64)
return math.Floor(val), nil
}
func jpfMap(arguments []interface{}) (interface{}, error) {
intr := arguments[0].(*treeInterpreter)
exp := arguments[1].(expRef)
node := exp.ref
arr := arguments[2].([]interface{})
mapped := make([]interface{}, 0, len(arr))
for _, value := range arr {
current, err := intr.Execute(node, value)
if err != nil {
return nil, err
}
mapped = append(mapped, current)
}
return mapped, nil
}
func jpfMax(arguments []interface{}) (interface{}, error) {
if items, ok := toArrayNum(arguments[0]); ok {
if len(items) == 0 {
return nil, nil
}
if len(items) == 1 {
return items[0], nil
}
best := items[0]
for _, item := range items[1:] {
if item > best {
best = item
}
}
return best, nil
}
// Otherwise we're dealing with a max() of strings.
items, _ := toArrayStr(arguments[0])
if len(items) == 0 {
return nil, nil
}
if len(items) == 1 {
return items[0], nil
}
best := items[0]
for _, item := range items[1:] {
if item > best {
best = item
}
}
return best, nil
}
func jpfMerge(arguments []interface{}) (interface{}, error) {
final := make(map[string]interface{})
for _, m := range arguments {
mapped := m.(map[string]interface{})
for key, value := range mapped {
final[key] = value
}
}
return final, nil
}
func jpfMaxBy(arguments []interface{}) (interface{}, error) {
intr := arguments[0].(*treeInterpreter)
arr := arguments[1].([]interface{})
exp := arguments[2].(expRef)
node := exp.ref
if len(arr) == 0 {
return nil, nil
} else if len(arr) == 1 {
return arr[0], nil
}
start, err := intr.Execute(node, arr[0])
if err != nil {
return nil, err
}
switch t := start.(type) {
case float64:
bestVal := t
bestItem := arr[0]
for _, item := range arr[1:] {
result, err := intr.Execute(node, item)
if err != nil {
return nil, err
}
current, ok := result.(float64)
if !ok {
return nil, errors.New("invalid type, must be number")
}
if current > bestVal {
bestVal = current
bestItem = item
}
}
return bestItem, nil
case string:
bestVal := t
bestItem := arr[0]
for _, item := range arr[1:] {
result, err := intr.Execute(node, item)
if err != nil {
return nil, err
}
current, ok := result.(string)
if !ok {
return nil, errors.New("invalid type, must be string")
}
if current > bestVal {
bestVal = current
bestItem = item
}
}
return bestItem, nil
default:
return nil, errors.New("invalid type, must be number or string")
}
}
func jpfSum(arguments []interface{}) (interface{}, error) {
items, _ := toArrayNum(arguments[0])
sum := 0.0
for _, item := range items {
sum += item
}
return sum, nil
}
func jpfMin(arguments []interface{}) (interface{}, error) {
if items, ok := toArrayNum(arguments[0]); ok {
if len(items) == 0 {
return nil, nil
}
if len(items) == 1 {
return items[0], nil
}
best := items[0]
for _, item := range items[1:] {
if item < best {
best = item
}
}
return best, nil
}
items, _ := toArrayStr(arguments[0])
if len(items) == 0 {
return nil, nil
}
if len(items) == 1 {
return items[0], nil
}
best := items[0]
for _, item := range items[1:] {
if item < best {
best = item
}
}
return best, nil
}
func jpfMinBy(arguments []interface{}) (interface{}, error) {
intr := arguments[0].(*treeInterpreter)
arr := arguments[1].([]interface{})
exp := arguments[2].(expRef)
node := exp.ref
if len(arr) == 0 {
return nil, nil
} else if len(arr) == 1 {
return arr[0], nil
}
start, err := intr.Execute(node, arr[0])
if err != nil {
return nil, err
}
if t, ok := start.(float64); ok {
bestVal := t
bestItem := arr[0]
for _, item := range arr[1:] {
result, err := intr.Execute(node, item)
if err != nil {
return nil, err
}
current, ok := result.(float64)
if !ok {
return nil, errors.New("invalid type, must be number")
}
if current < bestVal {
bestVal = current
bestItem = item
}
}
return bestItem, nil
} else if t, ok := start.(string); ok {
bestVal := t
bestItem := arr[0]
for _, item := range arr[1:] {
result, err := intr.Execute(node, item)
if err != nil {
return nil, err
}
current, ok := result.(string)
if !ok {
return nil, errors.New("invalid type, must be string")
}
if current < bestVal {
bestVal = current
bestItem = item
}
}
return bestItem, nil
} else {
return nil, errors.New("invalid type, must be number or string")
}
}
func jpfType(arguments []interface{}) (interface{}, error) {
arg := arguments[0]
if _, ok := arg.(float64); ok {
return "number", nil
}
if _, ok := arg.(string); ok {
return "string", nil
}
if _, ok := arg.([]interface{}); ok {
return "array", nil
}
if _, ok := arg.(map[string]interface{}); ok {
return "object", nil
}
if arg == nil {
return "null", nil
}
if arg == true || arg == false {
return "boolean", nil
}
return nil, errors.New("unknown type")
}
func jpfKeys(arguments []interface{}) (interface{}, error) {
arg := arguments[0].(map[string]interface{})
collected := make([]interface{}, 0, len(arg))
for key := range arg {
collected = append(collected, key)
}
return collected, nil
}
func jpfValues(arguments []interface{}) (interface{}, error) {
arg := arguments[0].(map[string]interface{})
collected := make([]interface{}, 0, len(arg))
for _, value := range arg {
collected = append(collected, value)
}
return collected, nil
}
func jpfSort(arguments []interface{}) (interface{}, error) {
if items, ok := toArrayNum(arguments[0]); ok {
d := sort.Float64Slice(items)
sort.Stable(d)
final := make([]interface{}, len(d))
for i, val := range d {
final[i] = val
}
return final, nil
}
// Otherwise we're dealing with sort()'ing strings.
items, _ := toArrayStr(arguments[0])
d := sort.StringSlice(items)
sort.Stable(d)
final := make([]interface{}, len(d))
for i, val := range d {
final[i] = val
}
return final, nil
}
func jpfSortBy(arguments []interface{}) (interface{}, error) {
intr := arguments[0].(*treeInterpreter)
arr := arguments[1].([]interface{})
exp := arguments[2].(expRef)
node := exp.ref
if len(arr) == 0 {
return arr, nil
} else if len(arr) == 1 {
return arr, nil
}
start, err := intr.Execute(node, arr[0])
if err != nil {
return nil, err
}
if _, ok := start.(float64); ok {
sortable := &byExprFloat{intr, node, arr, false}
sort.Stable(sortable)
if sortable.hasError {
return nil, errors.New("error in sort_by comparison")
}
return arr, nil
} else if _, ok := start.(string); ok {
sortable := &byExprString{intr, node, arr, false}
sort.Stable(sortable)
if sortable.hasError {
return nil, errors.New("error in sort_by comparison")
}
return arr, nil
} else {
return nil, errors.New("invalid type, must be number or string")
}
}
func jpfJoin(arguments []interface{}) (interface{}, error) {
sep := arguments[0].(string)
// We can't just do arguments[1].([]string), we have to
// manually convert each item to a string.
arrayStr := []string{}
for _, item := range arguments[1].([]interface{}) {
arrayStr = append(arrayStr, item.(string))
}
return strings.Join(arrayStr, sep), nil
}
func jpfReverse(arguments []interface{}) (interface{}, error) {
if s, ok := arguments[0].(string); ok {
r := []rune(s)
for i, j := 0, len(r)-1; i < len(r)/2; i, j = i+1, j-1 {
r[i], r[j] = r[j], r[i]
}
return string(r), nil
}
items := arguments[0].([]interface{})
length := len(items)
reversed := make([]interface{}, length)
for i, item := range items {
reversed[length-(i+1)] = item
}
return reversed, nil
}
func jpfToArray(arguments []interface{}) (interface{}, error) {
if _, ok := arguments[0].([]interface{}); ok {
return arguments[0], nil
}
return arguments[:1:1], nil
}
func jpfToString(arguments []interface{}) (interface{}, error) {
if v, ok := arguments[0].(string); ok {
return v, nil
}
result, err := json.Marshal(arguments[0])
if err != nil {
return nil, err
}
return string(result), nil
}
func jpfToNumber(arguments []interface{}) (interface{}, error) {
arg := arguments[0]
if v, ok := arg.(float64); ok {
return v, nil
}
if v, ok := arg.(string); ok {
conv, err := strconv.ParseFloat(v, 64)
if err != nil {
return nil, nil
}
return conv, nil
}
if _, ok := arg.([]interface{}); ok {
return nil, nil
}
if _, ok := arg.(map[string]interface{}); ok {
return nil, nil
}
if arg == nil {
return nil, nil
}
if arg == true || arg == false {
return nil, nil
}
return nil, errors.New("unknown type")
}
func jpfNotNull(arguments []interface{}) (interface{}, error) {
for _, arg := range arguments {
if arg != nil {
return arg, nil
}
}
return nil, nil
}
```
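The file above wires every built-in into a `functionTable` keyed by name, with `CallFunction` performing the lookup, arity/type checking via `resolveArgs`, and dispatch to the handler. That table-driven dispatch pattern can be exercised in isolation with a minimal, self-contained sketch; the `entry`/`call` names and the toy `length` handler here are illustrative stand-ins, not part of the original package:

```go
package main

import (
	"errors"
	"fmt"
)

// handler mirrors jpFunction: takes resolved args, returns a value or error.
type handler func(args []interface{}) (interface{}, error)

// entry mirrors functionEntry: a name, an expected arity, and a handler.
type entry struct {
	name  string
	arity int
	fn    handler
}

// call mirrors functionCaller.CallFunction: look up the entry by name,
// check the argument count, then dispatch to the handler.
func call(table map[string]entry, name string, args []interface{}) (interface{}, error) {
	e, ok := table[name]
	if !ok {
		return nil, errors.New("unknown function: " + name)
	}
	if len(args) != e.arity {
		return nil, errors.New("incorrect number of args")
	}
	return e.fn(args)
}

func main() {
	table := map[string]entry{
		"length": {name: "length", arity: 1, fn: func(args []interface{}) (interface{}, error) {
			s, ok := args[0].(string)
			if !ok {
				return nil, errors.New("invalid type")
			}
			return float64(len(s)), nil
		}},
	}
	v, err := call(table, "length", []interface{}{"hello"})
	fmt.Println(v, err) // 5 <nil>
}
```

Keeping the dispatch generic like this is what lets the real implementation prepend the `*treeInterpreter` for `hasExpRef` functions without special-casing each handler.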
|
```javascript
import { LazyClientComponent } from './dynamic-imports/react-lazy-client'
import { NextDynamicClientComponent } from './dynamic-imports/dynamic-client'
import {
NextDynamicServerComponent,
NextDynamicServerImportClientComponent,
NextDynamicNoSSRServerComponent,
} from './dynamic-imports/dynamic-server'
export default function page() {
return (
<div id="content">
<LazyClientComponent />
<NextDynamicServerComponent />
<NextDynamicClientComponent />
<NextDynamicServerImportClientComponent />
<NextDynamicNoSSRServerComponent />
</div>
)
}
```
|
```java
/*
*
* See the CONTRIBUTORS.txt file in the distribution for a
* full listing of individual contributors.
*
* This program is free software: you can redistribute it and/or modify
* published by the Free Software Foundation, either version 3 of the
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
*
* along with this program. If not, see <path_to_url
*/
package org.openremote.manager.rules.facade;
import org.openremote.manager.datapoint.AssetPredictedDatapointService;
import org.openremote.manager.rules.RulesEngineId;
import org.openremote.model.attribute.AttributeRef;
import org.openremote.model.datapoint.ValueDatapoint;
import org.openremote.model.datapoint.query.AssetDatapointQuery;
import org.openremote.model.rules.PredictedDatapoints;
import org.openremote.model.rules.Ruleset;
import java.time.LocalDateTime;
public class PredictedFacade<T extends Ruleset> extends PredictedDatapoints {
protected final RulesEngineId<T> rulesEngineId;
protected final AssetPredictedDatapointService assetPredictedDatapointService;
public PredictedFacade(RulesEngineId<T> rulesEngineId, AssetPredictedDatapointService assetPredictedDatapointService) {
this.rulesEngineId = rulesEngineId;
this.assetPredictedDatapointService = assetPredictedDatapointService;
}
@Override
public ValueDatapoint<?>[] getValueDatapoints(AttributeRef attributeRef, AssetDatapointQuery query) {
return assetPredictedDatapointService.queryDatapoints(attributeRef.getId(), attributeRef.getName(), query).toArray(ValueDatapoint[]::new);
}
@Override
public void updateValue(String assetId, String attributeName, Object value, LocalDateTime timestamp) {
assetPredictedDatapointService.updateValue(assetId, attributeName, value, timestamp);
}
@Override
public void updateValue(AttributeRef attributeRef, Object value, LocalDateTime timestamp) {
updateValue(attributeRef.getId(), attributeRef.getName(), value, timestamp);
}
}
```
|
```scss
/*!
* Propeller v1.3.3 (path_to_url modal.css
*/
.modal-content{
border-radius:$modal-border-radius;
}
.modal-header {
border-bottom: $modal-header-border-width solid rgba(0, 0, 0, 0);
border-top-left-radius: $modal-border-radius;
border-top-right-radius: $modal-border-radius;
margin-bottom: $modal-spacer-y;
padding: $modal-header-padding-y $modal-header-padding-x 0;
&.pmd-modal-bordered{
border-bottom:$modal-header-border-width solid $modal-header-border-color;
padding-bottom:$modal-header-padding-y;
}
h2 {
&.pmd-card-title-text{
font-weight:$modal-header-title-font-weight;
}
}
}
.pmd-modal-list {
margin-bottom:$modal-spacer-y;
margin-top:$modal-spacer-y;
}
.modal-body {
color: rgba(0, 0, 0, 0.84);
margin-bottom:$modal-spacer-y;
margin-top:$modal-spacer-y;
padding:0 $modal-spacer-x;
> p:last-child{
margin-bottom:0;
}
}
.modal-footer {
padding:$modal-spacer;
}
.pmd-modal-action {
padding:$modal-actions-spacer-y $modal-actions-spacer-x;
.btn.pmd-btn-fab {
padding: 0;
}
&.pmd-modal-bordered {
border-top:$modal-action-border-width solid $modal-action-border-color;
}
.btn {
min-width:inherit;
padding: $modal-actions-btn-padding-y $modal-actions-btn-padding-x;
margin:$modal-actions-btn-spacer-y $modal-actions-btn-spacer-x;
&:first-child {
margin-left: $modal-actions-btn-margin-left;
}
&.pmd-btn-flat:first-child{
margin-left: $modal-actions-btn-flat-spacer-x;
}
}
.pmd-btn-flat{
margin:0 $modal-actions-btn-flat-spacer-x 0 0;
}
}
.modal{
.radio,
.checkbox{
margin:$modal-radio-checkbox-spacer-y 0;
}
.radio-options {
> label{
padding-left:32px;
}
}
.list-group.pmd-list-avatar{
margin-bottom:$modal-list-avatar-margin-bottom;
padding:0;
}
&.list-group{
&:last-child{
margin-bottom:0;
}
}
}
/* Form css */
.form-horizontal {
.form-group{
margin-left:0;
margin-right:0;
}
}
/* Modal center */
.modal{
text-align:center;
&:before {
content:'';
display:inline-block;
height: 100%;
vertical-align: middle;
margin-right: -4px;
}
.modal-dialog {
display:inline-block;
text-align:left;
vertical-align:middle;
}
}
```
|
```javascript
(function() {
'use strict';
angular.module('theHiveServices').service('GlobalSearchSrv', function(localStorageService) {
this.save = function(config) {
localStorageService.set('search-section', config);
};
this.saveSection = function(entity, config) {
var cfg = this.restore();
cfg.entity = entity;
cfg[entity] = _.extend(cfg[entity], config);
this.save(cfg);
};
this.getSection = function(entity) {
var cfg = this.restore();
return cfg[entity] || {};
};
this.restore = function() {
return localStorageService.get('search-section') || {
entity: 'case',
case: {
search: null,
filters: []
},
case_task: {
search: null,
filters: []
},
case_task_log: {
search: null,
filters: []
},
case_artifact: {
search: null,
filters: []
},
alert: {
search: null,
filters: []
},
case_artifact_job: {
search: null,
filters: []
},
audit: {
search: null,
filters: []
}
};
};
this.buildDefaultFilterValue = function(fieldDef, value) {
var valueId = value.id;
var valueName = value.name;
if(valueId.startsWith('"') && valueId.endsWith('"')) {
valueId = valueId.slice(1, valueId.length - 1);
}
if(valueName.startsWith('"') && valueName.endsWith('"')) {
valueName = valueName.slice(1, valueName.length - 1);
}
if(fieldDef.type === 'string' || fieldDef.name === 'tags' || fieldDef.type === 'user' || fieldDef.values.length > 0) {
return {
operator: 'any',
list: [{
text: (fieldDef.type === 'number' || fieldDef.type === 'integer') ? Number.parseInt(valueId, 10) : valueId, label: valueName
}]
};
} else {
switch(fieldDef.type) {
case 'number':
case 'integer':
return {
value: Number.parseInt(valueId, 10)
};
case 'boolean':
return valueId === 'true';
default:
return valueId;
}
}
};
});
})();
```
|
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!--
~ contributor license agreements. See the NOTICE file distributed with
~ this work for additional information regarding copyright ownership.
~
~ path_to_url
~
~ Unless required by applicable law or agreed to in writing, software
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-->
<project xmlns="path_to_url" xmlns:xsi="path_to_url" xsi:schemaLocation="path_to_url path_to_url">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>shardingsphere-logging</artifactId>
<version>5.5.1-SNAPSHOT</version>
</parent>
<artifactId>shardingsphere-logging-core</artifactId>
<name>${project.artifactId}</name>
<dependencies>
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>shardingsphere-logging-api</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>shardingsphere-mode-api</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>shardingsphere-test-it-yaml</artifactId>
<version>${project.version}</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<scope>compile</scope>
</dependency>
</dependencies>
</project>
```
|
```yaml
{{- /*
*/}}
{{- if .Values.kustomizeController.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "flux.kustomize-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace | quote }}
{{- $versionLabel := dict "app.kubernetes.io/version" ( include "common.images.version" ( dict "imageRoot" .Values.kustomizeController.image "chart" .Chart ) ) }}
{{- $labels := include "common.tplvalues.merge" ( dict "values" ( list .Values.commonLabels $versionLabel ) "context" . ) }}
labels: {{- include "common.labels.standard" ( dict "customLabels" $labels "context" $ ) | nindent 4 }}
app.kubernetes.io/part-of: flux
app.kubernetes.io/component: kustomize-controller
{{- if or .Values.kustomizeController.serviceAccount.annotations .Values.commonAnnotations }}
{{- $annotations := include "common.tplvalues.merge" ( dict "values" ( list .Values.kustomizeController.serviceAccount.annotations .Values.commonAnnotations ) "context" . ) }}
annotations: {{- include "common.tplvalues.render" ( dict "value" $annotations "context" $) | nindent 4 }}
{{- end }}
automountServiceAccountToken: {{ .Values.kustomizeController.serviceAccount.automountServiceAccountToken }}
{{- end }}
```
|
Simpson test is a clinical test used in neurology to determine ocular myasthenia gravis. It was first described by the Scottish neurologist John Alexander Simpson.
Procedure
In myasthenia gravis, there is variable weakness of skeletal muscles, which is exacerbated by repeated contraction. To cause sustained contraction of the levator palpebrae superioris muscle, the patient is asked to gaze upward for an extended period of time, without lifting the head. After a few minutes, a patient with myasthenia gravis starts to show drooping of the upper eyelids, while normal individuals do not. The test can thus be used to clinically differentiate patients with ocular myasthenia gravis from normal individuals. Since myasthenia gravis affects all skeletal muscles, the eyelid drooping is often bilateral. The test is sometimes performed in conjunction with the Tensilon test, in which edrophonium is injected to look for reversibility of the eyelid drooping. In myasthenia gravis, the drooping is no longer detectable after the Tensilon test.
This test is less sensitive than anti-AChR antibody titers and electromyography, and hence is used only as a screening test in clinical settings.
References
Diagnostic neurology
|
Fabio Gattari was born in Tolentino in 1958. He is the Director of Etere Pte. Ltd., formerly known as SIS (Societa Italiana Software), which he founded together with Fabio Mazzocchetti in 1987. In its early years, Etere was the first Italian company to produce television and radio software for scheduling and automation. Gattari is also a software analyst and systems planner.
Career
In 1976, Fabio founded the company GB elettronica together with Carlo Barabucci. Through this company, he built and supported radio transmitters in the Marche region of Italy.
From 1982 to 1985, Gattari worked with Effe Computers, an HP concessionaire in Macerata, Italy, where he developed several software products, such as notary office management software (SETA), legal office management software, and building site accounting management software. This was among the first software developed for touch screens at the time, and the HP150 was one of the first PCs with a built-in touch screen.
Gattari founded SIS (Societa Italiana Software) in 1987; the company was later renamed Etere. Etere provides media enterprise software for broadcasters and media companies around the world, with Fabio Gattari as its director. SIS's first product was the SETA software, built in 1992. It started as a traffic and billing system for radio stations, integrated with a digital playout using a standard PC and an audio card produced by another Italian company, Audiologic (https://www.audiologic.it/). This software was the first in the world to produce a full audio broadcast using a standard PC and audio stored on a networked file server shared by multiple users running Novell NetWare software. SIS has always maintained an active connection with AER to produce its traffic, billing and scheduling software, including all the documents required by Italian broadcasting law.
Gattari has led many innovations while at the helm of Etere including Media Asset Management (MAM) and Media Enterprise Resources Planning (MERP).
During the span of his career, Gattari has created many innovations, including the first Digital Audio Broadcasting system based on PCs and local network connectivity, in 1992. In 1993, Fabio obtained CNE (Certified Novell Engineer) certification from Novell, the local network software company, with a specialization in SFT III systems. In an entrepreneurial spirit, in 1994 he became a founding partner of SKYBUS S.R.L, which deals in satellite digital systems management for Italian radio syndication.
In 2012, Gattari left Italy to escape the country's economic and business problems and joined Singycon Pte Ltd as a director. He developed new business for the media industry in the APAC region and moved to Singapore. In January 2015, Singycon acquired Etere's remaining business after the forced bankruptcy of Etere srl and moved its headquarters to Singapore under the leadership of its director, Gattari. Some former Etere srl employees formed a company, MERP, to save their jobs as unemployment in Italy was rising. Etere Singapore chose to use the Italian company as the component supplier for its products.
Fabio Gattari has spoken at international conferences on topics within his expertise for many years. In May 2016, he presented at BroadcastAsia2016, Asia's international conference for the pro-audio, film and broadcasting industry.
References
Living people
Italian chief executives
1958 births
|
```c
/*your_sha256_hash---------
*
* recovery_gen.c
* Generator for recovery configuration
*
*
*your_sha256_hash---------
*/
#include "postgres_fe.h"
#include "common/logging.h"
#include "fe_utils/recovery_gen.h"
#include "fe_utils/string_utils.h"
static char *escape_quotes(const char *src);
/*
* Write recovery configuration contents into a fresh PQExpBuffer, and
* return it.
*/
PQExpBuffer
GenerateRecoveryConfig(PGconn *pgconn, char *replication_slot)
{
PQconninfoOption *connOptions;
PQExpBufferData conninfo_buf;
char *escaped;
PQExpBuffer contents;
Assert(pgconn != NULL);
contents = createPQExpBuffer();
if (!contents)
{
pg_log_error("out of memory");
exit(1);
}
/*
* In PostgreSQL 12 and newer versions, standby_mode is gone, replaced by
* standby.signal to trigger a standby state at recovery.
*/
if (PQserverVersion(pgconn) < MINIMUM_VERSION_FOR_RECOVERY_GUC)
appendPQExpBufferStr(contents, "standby_mode = 'on'\n");
connOptions = PQconninfo(pgconn);
if (connOptions == NULL)
{
pg_log_error("out of memory");
exit(1);
}
initPQExpBuffer(&conninfo_buf);
for (PQconninfoOption *opt = connOptions; opt && opt->keyword; opt++)
{
/* Omit empty settings and those that libpqwalreceiver overrides. */
if (strcmp(opt->keyword, "replication") == 0 ||
strcmp(opt->keyword, "dbname") == 0 ||
strcmp(opt->keyword, "fallback_application_name") == 0 ||
(opt->val == NULL) ||
(opt->val != NULL && opt->val[0] == '\0'))
continue;
/* Separate key-value pairs with spaces */
if (conninfo_buf.len != 0)
appendPQExpBufferChar(&conninfo_buf, ' ');
/*
* Write "keyword=value" pieces, the value string is escaped and/or
* quoted if necessary.
*/
appendPQExpBuffer(&conninfo_buf, "%s=", opt->keyword);
appendConnStrVal(&conninfo_buf, opt->val);
}
if (PQExpBufferDataBroken(conninfo_buf))
{
pg_log_error("out of memory");
exit(1);
}
/*
* Escape the connection string, so that it can be put in the config file.
* Note that this is different from the escaping of individual connection
* options above!
*/
escaped = escape_quotes(conninfo_buf.data);
termPQExpBuffer(&conninfo_buf);
appendPQExpBuffer(contents, "primary_conninfo = '%s'\n", escaped);
free(escaped);
if (replication_slot)
{
/* unescaped: ReplicationSlotValidateName allows [a-z0-9_] only */
appendPQExpBuffer(contents, "primary_slot_name = '%s'\n",
replication_slot);
}
if (PQExpBufferBroken(contents))
{
pg_log_error("out of memory");
exit(1);
}
PQconninfoFree(connOptions);
return contents;
}
/*
* Write the configuration file in the directory specified in target_dir,
* with the contents already collected in memory appended. Then write
* the signal file into the target_dir. If the server does not support
* recovery parameters as GUCs, the signal file is not necessary, and
* configuration is written to recovery.conf.
*/
void
WriteRecoveryConfig(PGconn *pgconn, char *target_dir, PQExpBuffer contents)
{
char filename[MAXPGPATH];
FILE *cf;
bool use_recovery_conf;
Assert(pgconn != NULL);
use_recovery_conf =
PQserverVersion(pgconn) < MINIMUM_VERSION_FOR_RECOVERY_GUC;
snprintf(filename, MAXPGPATH, "%s/%s", target_dir,
use_recovery_conf ? "recovery.conf" : "postgresql.auto.conf");
cf = fopen(filename, use_recovery_conf ? "w" : "a");
if (cf == NULL)
{
pg_log_error("could not open file \"%s\": %m", filename);
exit(1);
}
if (fwrite(contents->data, contents->len, 1, cf) != 1)
{
pg_log_error("could not write to file \"%s\": %m", filename);
exit(1);
}
fclose(cf);
if (!use_recovery_conf)
{
snprintf(filename, MAXPGPATH, "%s/%s", target_dir, "standby.signal");
cf = fopen(filename, "w");
if (cf == NULL)
{
pg_log_error("could not create file \"%s\": %m", filename);
exit(1);
}
fclose(cf);
}
}
/*
* Escape a string so that it can be used as a value in a key-value pair in
* a configuration file.
*/
static char *
escape_quotes(const char *src)
{
char *result = escape_single_quotes_ascii(src);
if (!result)
{
pg_log_error("out of memory");
exit(1);
}
return result;
}
```
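For illustration, on a PostgreSQL 12 or newer server the contents this generator appends to `postgresql.auto.conf` might look like the following (host, user, and slot name are hypothetical):

```
primary_conninfo = 'user=replicator host=primary.example.com port=5432'
primary_slot_name = 'standby_1'
```

On older servers the same contents would instead be written to `recovery.conf`, preceded by `standby_mode = 'on'`, as the version check at the top of `GenerateRecoveryConfig` shows.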
|
```php
<?php
/*
*
* File ini bagian dari:
*
* OpenSID
*
* Sistem informasi desa sumber terbuka untuk memajukan desa
*
* Aplikasi dan source code ini dirilis berdasarkan lisensi GPL V3
*
* Hak Cipta 2009 - 2015 Combine Resource Institution (path_to_url
* Hak Cipta 2016 - 2024 Perkumpulan Desa Digital Terbuka (path_to_url
*
* Dengan ini diberikan izin, secara gratis, kepada siapa pun yang mendapatkan salinan
* dari perangkat lunak ini dan file dokumentasi terkait ("Aplikasi Ini"), untuk diperlakukan
* tanpa batasan, termasuk hak untuk menggunakan, menyalin, mengubah dan/atau mendistribusikan,
* asal tunduk pada syarat berikut:
*
* Pemberitahuan hak cipta di atas dan pemberitahuan izin ini harus disertakan dalam
* setiap salinan atau bagian penting Aplikasi Ini. Barang siapa yang menghapus atau menghilangkan
* pemberitahuan ini melanggar ketentuan lisensi Aplikasi Ini.
*
* PERANGKAT LUNAK INI DISEDIAKAN "SEBAGAIMANA ADANYA", TANPA JAMINAN APA PUN, BAIK TERSURAT MAUPUN
* TERSIRAT. PENULIS ATAU PEMEGANG HAK CIPTA SAMA SEKALI TIDAK BERTANGGUNG JAWAB ATAS KLAIM, KERUSAKAN ATAU
* KEWAJIBAN APAPUN ATAS PENGGUNAAN ATAU LAINNYA TERKAIT APLIKASI INI.
*
* @package OpenSID
* @author Tim Pengembang OpenDesa
* @copyright Hak Cipta 2009 - 2015 Combine Resource Institution (path_to_url
* @copyright Hak Cipta 2016 - 2024 Perkumpulan Desa Digital Terbuka (path_to_url
* @license path_to_url GPL V3
* @link path_to_url
*
*/
namespace App\Libraries\TinyMCE;
use App\Libraries\DateConv;
class KodeIsianSurat
{
private $dataSurat;
public function __construct($dataSurat)
{
$this->dataSurat = $dataSurat;
}
public static function get($dataSurat): array
{
return (new self($dataSurat))->kodeIsian();
}
public function kodeIsian(): array
{
$DateConv = new DateConv();
return [
[
'judul' => 'Format Nomor Surat',
'isian' => 'format_nomor_surat',
'data' => strtoupper(substitusiNomorSurat($this->dataSurat['no_surat'], ($this->dataSurat['surat']['format_nomor'] == '') ? setting('format_nomor_surat') : $this->dataSurat['surat']['format_nomor'])),
],
[
'judul' => 'Kode',
'isian' => 'kode_surat',
'data' => $this->dataSurat['surat']['kode_surat'],
],
[
'case_sentence' => true,
'judul' => 'Nomer',
'isian' => 'nomer_surat',
'data' => $this->dataSurat['no_surat'],
],
[
'judul' => 'Judul',
'isian' => 'judul_surat',
'data' => $this->dataSurat['surat']['judul_surat'],
],
[
'judul' => 'Tanggal',
'isian' => 'tgl_surat',
'data' => formatTanggal(date('Y-m-d')),
],
[
'judul' => 'Tanggal Hijri',
'isian' => 'tgl_surat_hijri',
'data' => $DateConv->HijriDateId('j F Y'),
],
[
'case_sentence' => true,
'judul' => 'Tahun',
'isian' => 'tahun',
'data' => $this->dataSurat['log_surat']['bulan'] ?? date('Y'),
],
[
'judul' => 'Bulan Romawi',
'isian' => 'bulan_romawi',
'data' => bulan_romawi((int) ($this->dataSurat['log_surat']['bulan'] ?? date('m'))),
],
[
'case_sentence' => true,
'judul' => 'Logo Surat',
'isian' => 'logo',
'data' => '[logo]',
],
[
'case_sentence' => true,
'judul' => 'QRCode',
'isian' => 'qr_code',
'data' => '[qr_code]',
],
[
'case_sentence' => true,
'judul' => 'QRCode BSrE',
'isian' => 'qr_bsre',
'data' => '[qr_bsre]',
],
[
'case_sentence' => true,
'judul' => 'Logo BSrE',
'isian' => 'logo_bsre',
'data' => '[logo_bsre]',
],
];
}
}
```
|
```rust
//! A fully asynchronous, [futures]-enabled [Apache Kafka] client
//! library for Rust based on [librdkafka].
//!
//! ## The library
//!
//! `rust-rdkafka` provides a safe Rust interface to librdkafka. This version
//! is compatible with librdkafka v1.9.2+.
//!
//! ### Documentation
//!
//! - [Current master branch](path_to_url
//! - [Latest release](path_to_url
//! - [Changelog](path_to_url
//!
//! ### Features
//!
//! The main features provided at the moment are:
//!
//! - Support for all Kafka versions since 0.8.x. For more information about
//! broker compatibility options, check the [librdkafka
//! documentation][broker-compat].
//! - Consume from single or multiple topics.
//! - Automatic consumer rebalancing.
//! - Customizable rebalance, with pre and post rebalance callbacks.
//! - Synchronous or asynchronous message production.
//! - Customizable offset commit.
//! - Create and delete topics and add and edit partitions.
//! - Alter broker and topic configurations.
//! - Access to cluster metadata (list of topic-partitions, replicas, active
//! brokers etc).
//! - Access to group metadata (list groups, list members of groups, hostnames,
//! etc.).
//! - Access to producer and consumer metrics, errors and callbacks.
//! - Exactly-once semantics (EOS) via idempotent and transactional producers
//! and read-committed consumers.
//!
//! ### One million messages per second
//!
//! `rust-rdkafka` is designed to be easy and safe to use thanks to the
//! abstraction layer written in Rust, while at the same time being extremely
//! fast thanks to the librdkafka C library.
//!
//! Here are some benchmark results using the [`BaseProducer`],
//! sending data to a single Kafka 0.11 process running in localhost (default
//! configuration, 3 partitions). Hardware: Dell laptop, with Intel Core
//! i7-4712HQ @ 2.30GHz.
//!
//! - Scenario: produce 5 million messages, 10 bytes each, wait for all of them to be acked
//! - 1045413 messages/s, 9.970 MB/s (average over 5 runs)
//!
//! - Scenario: produce 100000 messages, 10 KB each, wait for all of them to be acked
//! - 24623 messages/s, 234.826 MB/s (average over 5 runs)
//!
//! For more numbers, check out the [kafka-benchmark] project.
//!
//! ### Client types
//!
//! `rust-rdkafka` provides low level and high level consumers and producers.
//!
//! Low level:
//!
//! * [`BaseConsumer`]: a simple wrapper around the librdkafka consumer. It
//! must be periodically `poll()`ed in order to execute callbacks, rebalances
//! and to receive messages.
//! * [`BaseProducer`]: a simple wrapper around the librdkafka producer. As in
//! the consumer case, the user must call `poll()` periodically to execute
//! delivery callbacks.
//! * [`ThreadedProducer`]: a `BaseProducer` with a separate thread dedicated to
//! polling the producer.
//!
//! High level:
//!
//! * [`StreamConsumer`]: a [`Stream`] of messages that takes care of
//! polling the consumer automatically.
//! * [`FutureProducer`]: a [`Future`] that will be completed once
//! the message is delivered to Kafka (or failed).
//!
//! For more information about consumers and producers, refer to their
//! module-level documentation.
//!
//! *Warning*: the library is under active development and the APIs are likely
//! to change.
//!
//! ### Asynchronous data processing with Tokio
//!
//! [Tokio] is a platform for fast processing of asynchronous events in Rust.
//! The interfaces exposed by the [`StreamConsumer`] and the [`FutureProducer`]
//! allow rust-rdkafka users to easily integrate Kafka consumers and producers
//! within the Tokio platform, and write asynchronous message processing code.
//! Note that rust-rdkafka can be used without Tokio.
//!
//! To see rust-rdkafka in action with Tokio, check out the
//! [asynchronous processing example] in the examples folder.
//!
//! ### At-least-once delivery
//!
//! At-least-once delivery semantics are common in many streaming applications:
//! every message is guaranteed to be processed at least once; in case of
//! temporary failure, the message can be re-processed and/or re-delivered,
//! but no message will be lost.
//!
//! In order to implement at-least-once delivery the stream processing
//! application has to carefully commit the offset only once the message has
//! been processed. Committing the offset too early, instead, might cause
//! message loss, since upon recovery the consumer will start from the next
//! message, skipping the one where the failure occurred.
//!
//! To see how to implement at-least-once delivery with `rdkafka`, check out the
//! [at-least-once delivery example] in the examples folder. To know more about
//! delivery semantics, check the [message delivery semantics] chapter in the
//! Kafka documentation.
//!
//! ### Exactly-once semantics
//!
//! Exactly-once semantics (EOS) can be achieved using transactional producers,
//! which allow produced records and consumer offsets to be committed or aborted
//! atomically. Consumers that set their `isolation.level` to `read_committed`
//! will only observe committed messages.
//!
//! EOS is useful in read-process-write scenarios that require messages to be
//! processed exactly once.
//!
//! To learn more about using transactions in rust-rdkafka, see the
//! [Transactions][producer-transactions] section of the producer documentation.
//!
//! ### Users
//!
//! Here are some of the projects using rust-rdkafka:
//!
//! - [timely-dataflow]: a distributed data-parallel compute engine. See also
//! the [blog post][timely-blog] announcing its Kafka integration.
//! - [kafka-view]: a web interface for Kafka clusters.
//! - [kafka-benchmark]: a high performance benchmarking tool for Kafka.
//! - [callysto]: Stream processing framework in Rust.
//! - [bytewax]: Python stream processing framework using Timely Dataflow.
//!
//! *If you are using rust-rdkafka, please let us know!*
//!
//! ## Installation
//!
//! Add this to your `Cargo.toml`:
//!
//! ```toml
//! [dependencies]
//! rdkafka = { version = "0.25", features = ["cmake-build"] }
//! ```
//!
//! This crate will compile librdkafka from sources and link it statically to
//! your executable. To compile librdkafka you'll need:
//!
//! * the GNU toolchain
//! * GNU `make`
//! * `pthreads`
//! * `zlib`: optional, but included by default (feature: `libz`)
//! * `cmake`: optional, *not* included by default (feature: `cmake-build`)
//! * `libssl-dev`: optional, *not* included by default (feature: `ssl`)
//! * `libsasl2-dev`: optional, *not* included by default (feature: `gssapi`)
//! * `libzstd-dev`: optional, *not* included by default (feature: `zstd-pkg-config`)
//!
//! Note that using the CMake build system, via the `cmake-build` feature, is
//! encouraged if you can take the dependency on CMake.
//!
//! By default a submodule with the librdkafka sources pinned to a specific
//! commit will be used to compile and statically link the library. The
//! `dynamic-linking` feature can be used to instead dynamically link rdkafka to
//! the system's version of librdkafka. Example:
//!
//! ```toml
//! [dependencies]
//! rdkafka = { version = "0.25", features = ["dynamic-linking"] }
//! ```
//!
//! For a full listing of features, consult the [rdkafka-sys crate's
//! documentation][rdkafka-sys-features]. All of rdkafka-sys features are
//! re-exported as rdkafka features.
//!
//! ### Minimum supported Rust version (MSRV)
//!
//! The current minimum supported Rust version (MSRV) is 1.70.0. Note that
//! bumping the MSRV is not considered a breaking change. Any release of
//! rust-rdkafka may bump the MSRV.
//!
//! ### Asynchronous runtimes
//!
//! Some features of the [`StreamConsumer`] and [`FutureProducer`] depend on
//! Tokio, which can be a heavyweight dependency for users who only intend to
//! use the low-level consumers and producers. The Tokio integration is
//! enabled by default, but can be disabled by turning off default features:
//!
//! ```toml
//! [dependencies]
//! rdkafka = { version = "0.25", default-features = false }
//! ```
//!
//! If you would like to use an asynchronous runtime besides Tokio, you can
//! integrate it with rust-rdkafka by providing a shim that implements the
//! [`AsyncRuntime`] trait. See the following examples for details:
//!
//! * [smol][runtime-smol]
//! * [async-std][runtime-async-std]
//!
//! ## Examples
//!
//! You can find examples in the [`examples`] folder. To run them:
//!
//! ```bash
//! cargo run --example <example_name> -- <example_args>
//! ```
//!
//! ## Debugging
//!
//! rust-rdkafka uses the [`log`] crate to handle logging.
//! Optionally, enable the `tracing` feature to emit [`tracing`]
//! events as opposed to [`log`] records.
//!
//! In test and examples, rust-rdkafka uses the [`env_logger`] crate
//! to format logs. In those contexts, logging can be enabled
//! using the `RUST_LOG` environment variable, for example:
//!
//! ```bash
//! RUST_LOG="librdkafka=trace,rdkafka::client=debug" cargo test
//! ```
//!
//! This will configure the logging level of librdkafka to trace, and the level
//! of the client module of the Rust client to debug. To actually receive logs
//! from librdkafka, you also have to set the `debug` option in the producer or
//! consumer configuration (see librdkafka
//! [configuration][librdkafka-config]).
//!
//! To enable debugging in your project, make sure you initialize the logger
//! with `env_logger::init()`, or the equivalent for any `log`-compatible
//! logging framework.
//!
//! [`AsyncRuntime`]: path_to_url
//! [`BaseConsumer`]: path_to_url
//! [`BaseProducer`]: path_to_url
//! [`Future`]: path_to_url
//! [`FutureProducer`]: path_to_url
//! [`Stream`]: path_to_url
//! [`StreamConsumer`]: path_to_url
//! [`ThreadedProducer`]: path_to_url
//! [`log`]: path_to_url
//! [`tracing`]: path_to_url
//! [`env_logger`]: path_to_url
//! [Apache Kafka]: path_to_url
//! [asynchronous processing example]: path_to_url
//! [at-least-once delivery example]: path_to_url
//! [runtime-smol]: path_to_url
//! [runtime-async-std]: path_to_url
//! [broker-compat]: path_to_url#broker-version-compatibility
//! [bytewax]: path_to_url
//! [callysto]: path_to_url
//! [`examples`]: path_to_url
//! [futures]: path_to_url
//! [kafka-benchmark]: path_to_url
//! [kafka-view]: path_to_url
//! [librdkafka]: path_to_url
//! [librdkafka-config]: path_to_url
//! [message delivery semantics]: path_to_url#semantics
//! [producer-transactions]: path_to_url#transactions
//! [rdkafka-sys-features]: path_to_url#features
//! [rdkafka-sys-known-issues]: path_to_url#known-issues
//! [smol]: path_to_url
//! [timely-blog]: path_to_url
//! [timely-dataflow]: path_to_url
//! [Tokio]: path_to_url
#![forbid(missing_docs)]
#![deny(rust_2018_idioms)]
#![allow(clippy::type_complexity)]
#![cfg_attr(docsrs, feature(doc_cfg))]
mod log;
pub use rdkafka_sys::{bindings, helpers, types};
pub mod admin;
pub mod client;
pub mod config;
pub mod consumer;
pub mod error;
pub mod groups;
pub mod message;
pub mod metadata;
pub mod mocking;
pub mod producer;
pub mod statistics;
pub mod topic_partition_list;
pub mod util;
// Re-exports.
pub use crate::client::ClientContext;
pub use crate::config::ClientConfig;
pub use crate::message::{Message, Timestamp};
pub use crate::statistics::Statistics;
pub use crate::topic_partition_list::{Offset, TopicPartitionList};
pub use crate::util::IntoOpaque;
```
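As a minimal sketch of the at-least-once pattern described in the crate documentation above (process first, commit after), a consumer loop might look like the following. This is an illustration, not code from the crate: the group id, topic, and broker address are hypothetical, and running it requires the `rdkafka` crate with a reachable Kafka broker.

```rust
use rdkafka::config::ClientConfig;
use rdkafka::consumer::{CommitMode, Consumer, StreamConsumer};
use rdkafka::message::Message;

async fn consume_at_least_once() -> Result<(), rdkafka::error::KafkaError> {
    let consumer: StreamConsumer = ClientConfig::new()
        .set("group.id", "example-group")           // hypothetical group id
        .set("bootstrap.servers", "localhost:9092") // hypothetical broker
        .set("enable.auto.commit", "false")         // commit manually, after processing
        .create()?;
    consumer.subscribe(&["example-topic"])?;

    loop {
        let msg = consumer.recv().await?;
        // Process the message before committing: a crash at this point means
        // the message is re-delivered on restart rather than lost.
        if let Some(payload) = msg.payload() {
            println!("processing {} bytes", payload.len());
        }
        // Commit the offset only after successful processing.
        consumer.commit_message(&msg, CommitMode::Async)?;
    }
}
```

Committing too early (or leaving `enable.auto.commit` on with a short interval) risks the message loss described above, since recovery would resume past the unprocessed message.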
|
Sergiu Andon (born 12 September 1939, Bucharest) is a Romanian politician and a member of the Conservative Party.
Career
In 1961, he graduated from the Faculty of Law in Bucharest. Between 1961 and 1968 he worked as a prosecutor in Fetești, Slobozia, and Urziceni. Between 1968 and 1972, he was the chief editor of the magazine Flacăra (Flame), and between 1972 and 1989 he worked there as a commentator. From 1989 to 1994 he was deputy editor-in-chief of the newspaper Adevărul (Truth).
He wrote some of the most notable articles about King Mihai ("Damn it, Majesty!"). In 1995, he became a politician and served as a general secretary until 1996. His attitude towards the monarchy later changed, and he ended up congratulating Mihai on his speech in Parliament in 2011. Between 1994 and 1996 he was director of the Romedia agency. From 1996 on he worked as a lawyer in Bucharest.
Between 1997 and 2000, he served as vice president of PUR, and in 2000 he became the leader of the legislative department of the PC.
In May 2012, the National Integrity Agency (ANI) found that Sergiu Andon's position was incompatible with his mandate as a deputy in the Parliament and requested his removal from office, in accordance with civil sentence No. 2930/13.04.2011. The High Court of Cassation and Justice confirmed the agency's claims and issued a final verdict of incompatibility. For the first time ever, the Parliament ignored the decision of the ICCJ (HCCJ) and refused to put Andon's revocation to a vote. He was removed from office on 11 September 2012.
Andon is a member of UNICEF and of the Union of the Democratic Jurists in Romania.
References
21st-century Romanian politicians
1939 births
Romanian prosecutors
Members of the Chamber of Deputies (Romania)
Conservative Party (Romania) politicians
Living people
|
Maternity Hospital, also known as Ripley Memorial Hospital and currently known as Ripley Gardens, is a former hospital building in the Harrison neighborhood of Minneapolis, Minnesota. The hospital was established by Dr. Martha Ripley in 1886 in response to the exceptionally high mortality rates for women in childbirth. Dr. Ripley was one of only a few female physicians in the late 19th century, and she employed only women as physicians and board members. The hospital provided services for primarily poor, unmarried, and widowed women. The hospital was originally located in a house at 316 15th Street South, but it quickly outgrew that house and moved to 2529 4th Avenue South. Demand continued to grow, so in 1896 the hospital purchased a house and land at the corner of Glenwood and Penn Avenues. The hospital built the Marshall Stacy Nursery in 1909, followed by the Babies' Bungalow in 1910 and the Emily Paddock Cottage in 1911. Also in 1911, Ripley appealed to the government for funds to build an even larger building. Ripley died on April 18, 1912, of a respiratory infection.
In 1916, the new building was completed. The hospital was renamed from Maternity Hospital to Ripley Memorial Hospital at that time. The hospital served the community until 1957, when it was closed due to low occupancy and funding problems. The hospital building was sold to Children's Hospital of Minneapolis, and the remaining funds were used to establish the Ripley Memorial Foundation. The foundation has sponsored teenage pregnancy prevention programs since 1993. The former hospital buildings were listed on the National Register of Historic Places in 1980.
The building was redeveloped in 2007 by Aeon, a Minneapolis organization that provides affordable housing. The development, now known as Ripley Gardens, provides housing for low- to moderate-income residents, and provides both rental housing and home ownership opportunities. The redevelopment was one of twelve properties around the nation funded by the Restore America program, sponsored by the National Trust for Historic Preservation and HGTV.
References
External links
Ripley Memorial Foundation
Ripley Gardens
Hospital buildings completed in 1916
National Register of Historic Places in Minneapolis
Hospital buildings on the National Register of Historic Places in Minnesota
Defunct hospitals in Minnesota
Women in Minnesota
|
```typescript
import Server from "./reactotron-core-server"
export { createServer } from "./reactotron-core-server"
export default Server
```
|
Nichiji (日持; February 10, 1250 – after 1304), also known as Kaikō, was a Buddhist disciple of Nichiren who traveled to Hokkaido, Siberia, and China.
Nichiji was born in Suruga Province, the second child of a large and powerful family. At first he studied to become a Tendai priest but soon he joined Nichiren as one of his initial followers.
Nichiji was one of the "six chosen disciples" of Nichiren, but was also a disciple of Nikkō. After Nichiren died in 1282, Nichiji established Eishō-ji, now Ren'ei-ji (蓮永寺) in Shizuoka. But soon, relations with Nikkō became strained. He set out on a missionary journey on January 1, 1295. His plan was to walk to Hakodate, Hokkaidō and from there proceed to Xanadu in order to convert the Mongols.
For many centuries it was unknown what happened to Nichiji after he left Japan. According to legend, he founded a temple in northern Japan and caught a new fish in Hokkaido that he named hokke; even in legends it was unclear if he ever reached China alive. In 1936, though, a Japanese tourist discovered his gohonzon and relics in a remote region of China, and in 1989 these relics were carbon dated and determined by Tokyo University researchers to be most probably authentic. Thanks to his inscriptions on the relics, it is now known that he landed in China in 1298, met some Western Xia Buddhists on the road and decided on their advice to settle in Xuanhua District instead of Xanadu. In Xuanhua, he founded Lìhuà Temple (立化寺塔; Japanese: Rikka-ji), and a few Chinese residents converted to Nichiren Buddhism under his tutelage, including an old man named Nishote whom he mentions as his chief disciple. He died sometime after 1304.
In Nichiren Shū Nichiji is regarded as a patron saint of foreign missionaries.
References
Further reading
Li Narangoa. Japanische Religionspolitik in der Mongolei 1932-1945. Reformbestrebungen und Dialog zwischen japanischem und mongolischem Buddhismus. Wiesbaden: Harrassowitz, 1998.
Montgomery, Daniel (1991). Fire in the Lotus, The Dynamic Religion of Nichiren, London: Mandala,
前嶋 信次 . "日持上人の大陸渡航について―宣化出土遺物を中心として "
External links
Treasures of Senka - documentary in English
Hakodate News
日持上人開教の事跡-津軽十三湊をめぐって-". Nichiren Buddhism Modern Religious Institute.
劇画宗門史「日持上人」
"Modern Japanese Buddhism and Pan-Asianism"
1250 births
1300s deaths
Japanese Buddhist clergy
Nichiren-shū Buddhist monks
People from Shizuoka Prefecture
Kamakura period Buddhist clergy
|
"La grenade" is a song by French singer Clara Luciani, released in 2018. Commercially, the song charted in Belgium, where it peaked at number five, and reached the top thirty in France.
Charts
Weekly charts
Year-end charts
Certifications
References
2018 singles
2018 songs
French-language songs
French pop songs
|
Xylopia calophylla is a species of plant in the Annonaceae family. It is native to Bolivia, Brazil, Colombia, Ecuador, Peru and Venezuela. Robert Elias Fries, the botanist who first formally described the species, named it after its beautiful leaves (from the Latinized forms of Greek calli- and phullon).
Description
It is a medium-sized tree. The young branches are densely covered in soft, silky, rust-colored hairs, but as they mature they become hairless. The branches have reddish bark and often have lenticels. Its elliptical, papery leaves are 8-10 by 2.5-3.5 centimeters. The leaves have short, pointed bases and tapering, somewhat blunted tips, with the tapering portion 5-10 millimeters long. The leaves are differently colored on their upper and lower sides. The upper sides are shiny and hairless. The lower sides are densely covered in silvery hairs that lie flat against the surface. The midribs of the leaves are impressed on their upper surface and very prominent on their lower surface. The leaves have secondary veins that are also impressed on the upper surface and form a network pattern. Its petioles are 5-6 millimeters long, covered in soft hairs, with a groove on their upper side. Its inflorescences occur in axillary positions. The flowers are on pedicels that are up to 3 millimeters long, packed together in dense groupings, and covered in gold-colored silky hairs. Its flowers have 3 egg-shaped sepals that are 2 by 2 millimeters, with pointed tips. The bases of the sepals are fused at their margins. The sepals have silky hairs on their lower surfaces and are hairless on their upper surfaces. Its 6 petals are arranged in two rows of 3. The linear outer petals are 15-16 by 1.5-2 millimeters, with rounded tips. The outer surfaces of the outer petals have gold-colored, silky hairs. The inner petals are shorter and narrower and covered in downy, white hairs except at the base of the inner surface. The flowers have short stamens that are 0.6-0.7 millimeters long, with lobed anthers that have 2, or sometimes 3, chambers. The flowers have up to 7 carpels with silky ovaries that are 1 millimeter long. The flowers have thread-like stigmas that are 4 millimeters long, with styles that are bent at their base.
Reproductive biology
The pollen of Xylopia calophylla is shed as permanent tetrads.
Distribution and habitat
It has been observed growing in forests at elevations of 200-300 meters.
References
Plants described in 1939
Flora of Bolivia
Flora of Brazil
Flora of Colombia
Flora of Ecuador
Flora of Peru
Flora of Venezuela
Taxa named by Robert Elias Fries
calophylla
|
```javascript
// Node module: microgateway
// LICENSE: Apache 2.0, path_to_url
'use strict';
var supertest = require('supertest');
var microgw = require('../lib/microgw');
var backend = require('./support/invoke-server');
var apimServer = require('./support/mock-apim-server/apim-server');
var dsCleanup = require('./support/utils').dsCleanup;
var resetLimiterCache = require('../lib/rate-limit/util').resetLimiterCache;
var zlib = require('zlib');
describe('invokePolicy', function() {
var request;
before(function(done) {
// Use production instead of CONFIG_DIR: reading from apim instead of laptop
process.env.NODE_ENV = 'production';
// The apim server and datastore
process.env.APIMANAGER = '127.0.0.1';
process.env.APIMANAGER_PORT = 8081;
process.env.DATASTORE_PORT = 5000;
resetLimiterCache();
apimServer.start(
process.env.APIMANAGER,
process.env.APIMANAGER_PORT,
__dirname + '/definitions/invoke')
.then(function() { return microgw.start(3000); })
.then(function() { return backend.start(8889); })
.then(function() {
request = supertest('path_to_url');
})
.then(done)
.catch(function(err) {
console.error(err);
done(err);
});
});
after(function(done) {
delete process.env.NODE_ENV;
delete process.env.APIMANAGER;
delete process.env.APIMANAGER_PORT;
delete process.env.DATASTORE_PORT;
resetLimiterCache();
dsCleanup(5000)
.then(function() { return apimServer.stop(); })
.then(function() { return microgw.stop(); })
.then(function() { return backend.stop(); })
.then(done, done)
.catch(function(err) {
console.log(err);
done(err);
});
});
var data = { msg: 'Hello world' };
// invoke policy to post a request
it('post', function(done) {
this.timeout(10000);
// by default, chunk-uploaded is false
request
.post('/invoke/basic')
.send(data)
.expect(/z-method: POST/)
.expect(/z-content-length: 21/)
.expect(/z-transfer-encoding: undefined/)
.expect(/z-user-agent: APIConnect\/5.0 \(MicroGateway\)/)
.expect(200, /z-url: \/\/invoke\/basic/)
.end(function(err, res) {
done(err);
});
});
// invoke policy to get a request
it('get', function(done) {
this.timeout(10000);
// Two things are tested:
// 1. a GET request with data should not be rejected by microgateway
// 2. The invoke policy should not forward the Content-Length of a GET request
request
.get('/invoke/basic')
.set('Content-Length', '5')
.send('dummy')
.expect(/z-method: GET/)
.expect(/z-content-length: undefined/)
.expect(/z-transfer-encoding: undefined/)
.expect(/body: \n/)
.expect(200)
.end(function(err, res) {
done(err);
});
});
// This testcase is to verify the invoke policy will urlencode the form data
// before sending them to the api server
it('form-urlencoded-1', function(done) {
this.timeout(10000);
// POST application/x-www-form-urlencoded
request
.post('/invoke/encode')
.type('form')
.send({ foo: 'hello' })
.send({ bar: 123 })
.send({ baz: [ 'qux', 'quux' ] })
.expect(/z-method: POST/)
.expect(/z-content-type: application\/x-www-form-urlencoded/)
.expect(200, /body: foo=hello&bar=123&baz%5B0%5D=qux&baz%5B1%5D=quux/)
// foo=hello&bar=123&baz[0]=qux&baz[1]=quux
.end(function(err, res) {
done(err);
});
});
// This testcase is to verify the invoke policy will parse the urlencoded data
// after receiving them from the api server
it('form-urlencoded-2', function(done) {
this.timeout(10000);
request
.get('/invoke/decode')
.expect('Content-Type', 'application/x-www-form-urlencoded')
.expect(200, /Found the parameter 'baz'=qux,quux in the message.body/)
.end(function(err, res) {
done(err);
});
});
// This testcase is to verify the post-flow should urlencode the message.body
// when the content-type is x-www-form-urlencoded
it('post-flow-should-urlencode-the-form-data', function(done) {
this.timeout(10000);
request
.get('/invoke/decode')
.set('X-TEST-POSTFLOW', 'yes')
.expect('Content-Type', 'application/x-www-form-urlencoded')
.expect(200, /^foo=bar&baz%5B0%5D=qux&baz%5B1%5D=quux&corge=$/)
.end(function(err, res) {
done(err);
});
});
it('get', function(done) {
this.timeout(10000);
request
.get('/invoke/basic')
.expect(200, /z-method: GET/)
.expect(200, /z-url: \/\/invoke\/basic/, done);
});
// a HEAD response has no body
it('head', function(done) {
this.timeout(10000);
request
.head('/invoke/basic')
.expect(200, {}, done);
});
// An invalid host will lead to a ConnectionError. By default, the invoke
// policy stops on ConnectionError. A ConnectionError returns with status of
// "500: URL Open error".
it('host-not-found', function(done) {
this.timeout(10000);
request
.get('/invoke/dynHost')
.set('X-TEST-HOSTNAME', 'cannotbevalidcom')
.expect(function(res) {
if (res.res.statusCode !== 500) {
throw new Error('status code should be 500');
}
if (res.res.statusMessage !== 'URL Open error') {
throw new Error('status reason should be "URL Open error"');
}
})
.expect(/"name":"ConnectionError"/)
.expect(/getaddrinfo ENOTFOUND cannotbevalidcom/,
done);
});
// The invoke policy receives a 500 error from the server.
// Any non-2xx response is considered as an OperationError.
// However, by default, invoke only stops on ConnectionError
it('test-500-response-default', function(done) {
this.timeout(10000);
request
.get('/invoke/testStatusCode')
.set('X-CODE', '500')
.expect('x-after-invoke', 'this is the post-invoke header')
.expect(500, /This is a 500 response./, done);
});
// The invoke policy receives a 303 response from the server.
// Any non-2xx response is considered as an OperationError.
// However, by default, invoke only stops on ConnectionError
it('test-303-response-default', function(done) {
this.timeout(10000);
request
.get('/invoke/testStatusCode')
.set('X-CODE', '303')
.expect('x-after-invoke', 'this is the post-invoke header')
.expect(303, /This is a 303 response./, done);
});
// The invoke policy receives a 303 response from the server.
// The invoke policy stops on OperationError
it('test-stop-on-303-response', function(done) {
this.timeout(10000);
request
.get('/invoke/stopOnOperationError')
.set('X-CODE', '303')
.expect(303, /'OperationError' 303: undefined is caught!/, done);
});
// The invoke policy receives a 203 response from the server.
// The invoke policy stops on OperationError
it('test-stop-on-203-response', function(done) {
this.timeout(10000);
request
.get('/invoke/stopOnOperationError')
.set('X-CODE', '203')
.expect(203, /Only non-operationError can reach here!/, done);
});
it('test-ignore-operation-error', function(done) {
this.timeout(10000);
request
.get('/invoke/ignoreAllErrors')
.set('X-CODE', '303')
.expect(303,
/All errors should be ignored, this must be executed after the invoke./,
done);
});
it('test-ignore-connection-error', function(done) {
this.timeout(10000);
request
.get('/invoke/ignoreAllErrors')
.set('X-CODE', '-1')
.expect(function(res) {
if (res.res.statusCode !== 500) {
throw new Error('status code should be 500');
}
if (res.res.statusMessage !== 'URL Open error') {
throw new Error('status reason should be "URL Open error"');
}
})
.expect(500,
/All errors should be ignored, this must be executed after the invoke./,
done);
});
it('auth-OK', function(done) {
this.timeout(10000);
request
.get('/invoke/basic')
.auth('root', 'Hunter2')
.expect(200, /z-method: GET/, done);
});
it('auth-NG', function(done) {
this.timeout(10000);
request
.get('/invoke/basic')
.auth('root', 'test123')
.expect(401, /^Not Authorized/, done);
});
it('compress-data', function(done) {
this.timeout(10000);
// gzip compression header includes a platform bit which means different
// platforms actually produce _slightly_ different gzip compression results
// depending on the platform.
// see path_to_url
const gzHelloWorld = zlib.gzipSync('Hello World').toString('base64');
// when data is compressed, use the chunked encoding
request
.post('/invoke/testCompression')
.set('X-RAW-DATA', 'Hello World')
.expect(/z-content-encoding: gzip/)
.expect(/z-content-length: undefined/)
.expect(/z-transfer-encoding: chunked/)
.expect(new RegExp(`raw: ${gzHelloWorld}`))
.expect(200, /body: Hello World/, done);
});
it('use-chunks-yes', function(done) {
this.timeout(10000);
request
.post('/invoke/useChunks')
.send(data)
.expect(/z-content-encoding: undefined/)
.expect(/z-content-length: undefined/)
.expect(/z-transfer-encoding: chunked/)
.expect(200, /{"msg":"Hello world"}/, done);
});
it('just-in-time', function(done) {
this.timeout(10000);
// request returned before timeout
request
.get('/invoke/timeout5Sec')
.set('X-DELAY-ME', '2')
.expect(200, /z-url: \/\/invoke\/timeout5Sec/, done);
});
it('request-timeouted', function(done) {
this.timeout(10000);
// the request timed out
request
.get('/invoke/timeout5Sec')
.set('X-DELAY-ME', '7')
.expect(/"name":"ConnectionError"/)
.expect(/"message":"The invoke policy is timeouted."/)
.expect(500, done);
});
/// //////////////////// HTTPS servers ///////////////////////
// 8890: The server is "Sarah", whose CA is root
// 8891: The server is "Sandy", whose CA is root2
// 8892: The server is using TLS10, "ProtocolTLS10"
// 8893: The server uses only some ciphers, "LimitedCiphers"
// 8894: The server uses alice and bob as the CA. Incorrect usage?
// 8895: 'Sarah' uses the CA 'root' to authenticate clients
// 8896: 'Sarah' uses the CA 'root2' to authenticate clients
// 8897: 'Sandy' uses the CA 'root2' to authenticate clients
/// //////////////////////////////////////////////////////////
// This is to test if client can skip the validation of server's certificate.
// By default, yes (to be consistent with edge gateway)
it('https-basic', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8890')
.set('X-TLS-PROFILE', 'tls-profile-simple')
.expect(/body/)
.expect(200, done);
});
it('https-without-tlsprofile', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8890')
.expect(200, done);
});
// Use the certificate of Sarah's Root CA to authenticate the Sarah. OK
// Note: the common name of Sarah must be a domain name or localhost. Otherwise,
// you might get an error "Host: localhost. is not cert\'s CN: Sarah".
it('https-server-sarah-OK', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8890')
.set('X-TLS-PROFILE', 'tls-profile-serverSarah-1')
.expect(/url: \/\/invoke\/testTLS/)
.expect(200, done);
});
// Use Sarah's own certificate to authenticate Sarah. NG
it('https-server-sarah-NG', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8890')
.set('X-TLS-PROFILE', 'tls-profile-serverSarah-2')
.expect(/"name":"ConnectionError"/)
.expect(/"message":"Error: unable to verify the first certificate"/)
.expect(500, done);
});
it('cannot-find-tls-profile', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8890')
.set('X-TLS-PROFILE', 'not-found')
.expect(299, /Unexpect \'PropertyError\' Cannot find the TLS profile "not-found"/, done);
});
// openssl s_client -tls1_2 -CAfile root.crt -connect localhost:port
// openssl s_client -tls1 -CAfile root.crt -connect localhost:port
// Both of server and client use the TLS v1.0
it('require-tls10', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8892')
.set('X-TLS-PROFILE', 'tls-profile-require-tls10')
.expect(/url: \/\/invoke\/testTLS/)
.expect(200, done);
});
it('require-tls12-while-server-supports-tls10', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8892')
.set('X-TLS-PROFILE', 'tls-profile-require-tls12')
.expect(/"name":"ConnectionError"/)
.expect(/(wrong version number|write EPROTO)/)
.expect(500, done);
});
/// /cipher mapping table for each TLS versions:
/// /path_to_url#CIPHER_LIST_FORMAT
/// /both of server and client support the cipher 'TLS_RSA_WITH_3DES_EDE_CBC_SHA'
// it('use-cipher-TLS_RSA_WITH_3DES_EDE_CBC_SHA', function(done) {
// this.timeout(10000);
//
// request
// .get('/invoke/testTLS')
// .set('X-HTTPS-PORT', '8892')
// .set('X-TLS-PROFILE', 'use-cipher-TLS_RSA_WITH_3DES_EDE_CBC_SHA')
// .expect(200, done);
// });
//
/// /client requires a cipher which is disallowed by the server
/// /The EPROTO error is due to the "!ECDHE-RSA-AES128-SHA256" in server side.
/// /The cipher is available but is not allowed.
// it('use-cipher-TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256', function(done) {
// this.timeout(10000);
//
// request
// .get('/invoke/testTLS')
// .set('X-HTTPS-PORT', '8893')
// .set('X-TLS-PROFILE', 'use-cipher-TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256')
// .expect(299, /Error: write EPROTO/, done);
// });
//
/// /client requires the PSK cipher that is not available at server
/// /'no ciphers available'
// it('use-cipher-PSK_WITH_CAMELLIA_128_CBC_SHA256', function(done) {
// this.timeout(10000);
//
// request
// .get('/invoke/testTLS')
// .set('X-HTTPS-PORT', '8893')
// .set('X-TLS-PROFILE', 'use-cipher-PSK_WITH_CAMELLIA_128_CBC_SHA256')
// .expect(299, /SSL23_CLIENT_HELLO:no ciphers available/, done);
// });
// The client expects the server to be Sarah and uses the CA 'root' for auth.
// However, the server is Sandy who should be authenticated using 'root2'.
it('unexpected-https-server', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8891')
.set('X-TLS-PROFILE', 'tls-profile-serverSarah-1')
.expect(/"name":"ConnectionError"/)
.expect(/"message":"Error: unable to verify the first certificate"/)
.expect(500, done);
});
// 'sarah' at 8895 is authenticated by 'root' and uses 'root' to authenticate
// its client too. So both of 'alice' and 'bob' will be good
it('mutual-auth-ok', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8895')
.set('X-TLS-PROFILE', 'tls-profile-alice-2')
.expect(/url: \/\/invoke\/testTLS/)
.expect(200, done);
});
it('mutual-auth-ok-2', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8895')
.set('X-TLS-PROFILE', 'tls-profile-bob-2')
.expect(/url: \/\/invoke\/testTLS/)
.expect(200, done);
});
it('mutual-auth-ng', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8895')
.set('X-TLS-PROFILE', 'tls-profile-sandy-2')
.expect(/"name":"ConnectionError"/)
.expect(/"message":"Error: socket hang up"/)
.expect(500, done);
});
// 'sarah' at 8896 is authenticated by 'root' and uses 'root2' to authenticate
// its client too. So only 'sandy' will be good
it('mutual-auth-ok-3', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8896')
.set('X-TLS-PROFILE', 'tls-profile-sandy-2')
.expect(/url: \/\/invoke\/testTLS/)
.expect(200, done);
});
it('mutual-auth-ng-2', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8896')
.set('X-TLS-PROFILE', 'tls-profile-bob-2')
.expect(/"name":"ConnectionError"/)
.expect(/"message":"Error: socket hang up"/)
.expect(500, done);
});
// 'sandy' at 8897 is authenticated by 'root2' and uses 'root2' to authenticate
// its client too. So only 'sandy' will be good
it('mutual-auth-ok-4', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8897')
.set('X-TLS-PROFILE', 'tls-profile-sandy-2')
.expect(/url: \/\/invoke\/testTLS/)
.expect(200, done);
});
it('mutual-auth-ng-3', function(done) {
this.timeout(10000);
request
.get('/invoke/testTLS')
.set('X-HTTPS-PORT', '8897')
.set('X-TLS-PROFILE', 'tls-profile-alice-2')
.expect(/"name":"ConnectionError"/)
.expect(/"message":"Error: socket hang up"/)
.expect(500, done);
});
// to read the data and headers from somewhere other than the context.message
it('test-input', function(done) {
this.timeout(10000);
request
.post('/invoke/testInput')
.send(data)
.expect(/z-method: POST/)
.expect(/z-secret-1: test 123/)
.expect(/z-secret-2: hello amigo/)
.expect(200, /body: This is a custom body message/, done);
});
// to save the result of invoke policy somewhere other than the context.message
it('test-output', function(done) {
this.timeout(10000);
request
.get('/invoke/testOutput')
.expect('X-TOKEN-ID', 'foo')
.expect(202, /You are accepted/, done);
});
// The api server returns a message of length 5 and header 'Content-Length:5'.
// Then a set-variable policy modifies the message without changing the
// Content-Length. Let's see what'll happen.
// It turns out that express will update the Content-Length
it('test-content-length', function(done) {
this.timeout(10000);
request
.get('/invoke/testContentLength')
.expect('Content-Length', '95')
.expect(200, /This is a very long message/, done);
});
// the returned body might be parsed as JSON depending on the content-type
it('test-json', function(done) {
this.timeout(10000);
request
.get('/invoke/testJSON')
.expect(200, /The quantity is 150 and the price is 23/, done);
});
});
```
|
```javascript
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following
// disclaimer in the documentation and/or other materials provided
// with the distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived
// from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Flags: --allow-natives-syntax
function f1(a, i) {
return a[i] + 0.5;
}
var arr = [0.0,,2.5];
assertEquals(0.5, f1(arr, 0));
assertEquals(0.5, f1(arr, 0));
Array.prototype.__proto__[1] = 1.5;
assertEquals(2, f1(arr, 1));
%OptimizeFunctionOnNextCall(f1);
assertEquals(2, f1(arr, 1));
assertEquals(0.5, f1(arr, 0));
```
|
Hiawatha is a city in Linn County, Iowa, United States. It is a suburb located in the northwestern side of Cedar Rapids and is part of the Cedar Rapids Metropolitan Statistical Area. The population was 7,183 at the time of the 2020 census, an increase from 6,480 in 2000.
History
In 1946, Fay Clark, an entrepreneur of several ventures in Linn County north of Cedar Rapids, Iowa, had a vision of houses and a highway running through a new city. In 1950, Clark and 45 other residents signed a petition seeking to become the 17th incorporated town in Linn County. The town would be named after Clark's trailer company. That same year he and Henry Katz of Marion established the Linn County Fire Association to help provide fire protection to rural communities. Clark served as mayor of Hiawatha from 1950 to 1958, and again from 1961 to 1963. He died in 1991 at the age of 84.
Hiawatha residents celebrated the dedication of their new City Hall on May 17, 2008.
Hiawatha celebrated its 60th anniversary in May 2010.
Geography
Hiawatha's longitude and latitude coordinates in decimal form are 42.044409, -91.681025.
According to the United States Census Bureau, the city has a total area of , all land.
The elevation of Hiawatha is above sea level.
Hiawatha's population density is estimated at 1,884 people per square mile, which is considered low for urban areas.
Climate
Demographics
2010 census
As of the census of 2010, there were 7,024 people, 3,071 households, and 1,796 families residing in the city. The population density was . There were 3,310 housing units at an average density of . The racial makeup of the city was 88.9% White, 5.1% African American, 0.2% Native American, 2.2% Asian, 0.2% Pacific Islander, 0.8% from other races, and 2.5% from two or more races. Hispanic or Latino of any race were 2.3% of the population.
There were 3,071 households, of which 29.7% had children under the age of 18 living with them, 42.0% were married couples living together, 11.9% had a female householder with no husband present, 4.5% had a male householder with no wife present, and 41.5% were non-families. 33.3% of all households were made up of individuals, and 8.1% had someone living alone who was 65 years of age or older. The average household size was 2.25 and the average family size was 2.89.
The median age in the city was 37 years. 23.8% of residents were under the age of 18; 9% were between the ages of 18 and 24; 27.7% were from 25 to 44; 26% were from 45 to 64; and 13.5% were 65 years of age or older. The gender makeup of the city was 48.8% male and 51.2% female.
2000 census
As of the census of 2000, there were 6,480 people, 2,859 households, and 1,663 families residing in the city. The population density was . There were 2,979 housing units at an average density of . The racial makeup of the city was 94.24% White, 2.16% African American, 0.31% Native American, 1.53% Asian, 0.37% from other races, and 1.39% from two or more races. Hispanic or Latino of any race were 1.33% of the population.
There were 2,859 households, out of which 29.5% had children under the age of 18 living with them, 44.6% were married couples living together, 10.2% had a female householder with no husband present, and 41.8% were non-families. 33.1% of all households were made up of individuals, and 5.6% had someone living alone who was 65 years of age or older. The average household size was 2.24 and the average family size was 2.89.
Age spread: 23.6% under the age of 18, 12.1% from 18 to 24, 35.4% from 25 to 44, 19.4% from 45 to 64, and 9.6% who were 65 years of age or older. The median age was 32 years. For every 100 females, there were 95.8 males. For every 100 females age 18 and over, there were 95.8 males.
The median income for a household in the city was $40,799, and the median income for a family was $47,135. Males had a median income of $37,277 versus $25,394 for females. The per capita income for the city was $22,664. About 3.4% of families and 4.5% of the population were below the poverty line, including 5.6% of those under age 18 and 1.7% of those age 65 or over.
Parks and recreation
The city of Hiawatha maintains three parks. Each park has free Wi-Fi and a Little Free Library.
Guthridge Park, located on Emmons Street between 7th and 10th
Tucker Park, located on B Avenue
Clark Park, located off N. 18th Avenue
Nature trails
Cedar Valley Nature Trail
Located on Boyson Road in Hiawatha is a trailhead for the Cedar Valley Nature Trail. The trailhead is furnished with picnic tables, restrooms, maps and information about the trail, and parking.
The main trail is a continuous path just under long running northwest from Hiawatha to Evansdale, with the first of the route north from Hiawatha paved with asphalt. The first going south from Evansdale are also paved with asphalt; the remaining trail is crushed and packed limestone. The trail has wildflowers and pockets of native remnant prairie grasses that have been enlarged through the process of burning surrounding brush and non-native plants. The trail is also known to have abundant wildlife and birds to observe. Although a fee was once charged on the southern part of the trail, the entire trail is now free.
Other trails
A connecting segment of trail continues south from the Cedar Valley Nature Trail head through Hiawatha to connect with the Cedar River Trail, a trail in neighboring Cedar Rapids. The city also maintains smaller, unconnected looped walking trails at all three city parks.
Farmers market
The Hiawatha farmers market is located in the 10th Avenue parking lot of Guthridge Park. For the 2011 season, the market will be open from April 24 through October 30 on Sundays from 11:00 am to 2:00 pm (local time). Depending on the day, 25 or more vendors from the area set up their own displays and sell homegrown and homemade products. These include, but are not limited to: fresh vegetables, fruits, and flowers, an assortment of baked products, preserved food products, arts, and crafts.
Government
Hiawatha is governed by the mayor with city council form of government, utilizing several departments, boards, and commissions. The recently built City Hall (dedicated in 2008) is located at 101 Emmons Street, Hiawatha, Iowa 52233. The building was designed to include not only the city government and its various departments but also a community center and public meeting rooms.
Police department
The Hiawatha Police Department is located in the lower level of the City Hall building. The department has 14 full-time sworn officers, two reserve police officers, and special agent "Reso", a K-9 unit. Special agent "Mod" retired from the police department on March 21, 2012, and died on August 28, 2013.
Fire department
The Hiawatha Fire Department is located at 60 10th Avenue, Hiawatha, Iowa 52233. The department's personnel consist of a combination of paid and volunteer firefighters. The Hiawatha Fire Department has the only fire-based ambulance service in Linn County. HFD runs approximately 2,500 fire and EMS calls per year.
Education
Hiawatha is included in the Cedar Rapids Community School District system.
Public schools
Hiawatha Elementary School (home of the Cougars) is located at 603 Emmons Street in Hiawatha and opened its doors in 1958. Currently (see reference for date) the school has approximately 310 students in kindergarten through fifth grade and a staff of 64.
Nixon Elementary School (home of the Bobcats) is located at 200 Nixon Drive in Hiawatha and opened its doors in 1970. The school's philosophy utilizes self-contained classrooms in kindergarten through fifth grade. Currently (see reference for date) Nixon has approximately 330 students and a staff of 58.
Elementary schools: Most portions are zoned to either Hiawatha or Nixon, though some western parts are zoned to Viola Gibson Elementary School
Secondary schools: Harding Middle School and Kennedy High School
Private schools
Essential Montessori School is located at 1350 Blairs Ferry Road Suite A. The school has an enrollment of 46 from prekindergarten to kindergarten.
Transportation
Transit in the city is provided by Cedar Rapids Transit. Route 30 provides bus service connecting the city to the region.
Notable people
Salvatore Giunta (born 1985), first living recipient of the Medal of Honor since the Vietnam War.
Kraig Paulsen (born 1964), Iowa State representative and Republican majority leader for the 84th Iowa General Assembly.
References
External links
Hiawatha Iowa Official Website Home page
HEDCO Website Hiawatha Economic Development Corporation
Business Directory for Hiawatha
City Data Comprehensive statistical data and more about Hiawatha
Hiawatha Advocate Hiawatha's online news source
American Towns News, organizations, restaurants, and things to do
Cities in Iowa
Cities in Linn County, Iowa
Cedar Rapids, Iowa metropolitan area
Populated places established in 1950
1950 establishments in Iowa
|
Hilbert is both a Germanic masculine given name and surname. Notable people with the name include:
Given name:
Hilbert Leigh Bair (1894–1985), American World War I flying ace
Hilbert Schauer (1920–2015), associate justice of the Colorado Supreme Court
Hilbert Schenck (1926–2013), science fiction writer and engineer
Hilbert Shirey (fl. 1980s–2010s), American poker player
Hilbert van der Duim (born 1957), Dutch speed skater
Hilbert Van Dijk (1918–2001), Australian fencer
Hilbert Philip Zarky (1912–1989), American tax attorney
Surname:
Andy Hilbert (born 1981), U.S. hockey player
Anton Hilbert (1898–1986), German politician
Carl Aage Hilbert (1899–1953), Danish Prefect of the Faroe Islands
David Hilbert (1862–1943), German mathematician
Donna Hilbert (born 1946), American poet who also writes short stories, plays, and essays
Egon Hilbert (1899–1968), Austrian opera/theatre director
Ernest Hilbert (born 1970), American poet, critic, and editor
Ernest Lenard Hilbert (1920–1942), American Air Force hero
Garrett Hilbert (born 1987), member of American trick shot conglomerate Dude Perfect
Georges Hilbert (1900–1982), French sculptor
Jaroslav Hilbert (1871–1936), Czech dramatist and writer
Lukas Loules (born Hilbert, 1972), German musician and music producer
Morton Hilbert (1917–1998), professor of public health and environmentalist
Roberto Hilbert (born 1984), German footballer
Rodrigo Hilbert (born 1980), Brazilian actor and model
Stephen Hilbert (fl. 1960s–2010s), American mathematician
Vi Hilbert (1918–2008), Native American tribal elder
German-language surnames
Dutch masculine given names
Masculine given names
Danish masculine given names
Surnames from given names
|
```c
/**
* @license Apache-2.0
*
*
*
* path_to_url
*
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
#include "stdlib/math/base/assert/is_nonpositive_integer.h"
#include <node_api.h>
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>
/**
* Receives JavaScript callback invocation data.
*
* @param env environment under which the function is invoked
* @param info callback data
* @return Node-API value
*/
static napi_value addon( napi_env env, napi_callback_info info ) {
napi_status status;
// Get callback arguments:
size_t argc = 1;
napi_value argv[ 1 ];
status = napi_get_cb_info( env, info, &argc, argv, NULL, NULL );
assert( status == napi_ok );
// Check whether we were provided the correct number of arguments:
if ( argc < 1 ) {
status = napi_throw_error( env, NULL, "invalid invocation. Insufficient arguments." );
assert( status == napi_ok );
return NULL;
}
if ( argc > 1 ) {
status = napi_throw_error( env, NULL, "invalid invocation. Too many arguments." );
assert( status == napi_ok );
return NULL;
}
napi_valuetype vtype0;
status = napi_typeof( env, argv[ 0 ], &vtype0 );
assert( status == napi_ok );
if ( vtype0 != napi_number ) {
status = napi_throw_type_error( env, NULL, "invalid argument. First argument must be a number." );
assert( status == napi_ok );
return NULL;
}
double x;
status = napi_get_value_double( env, argv[ 0 ], &x );
assert( status == napi_ok );
bool result = stdlib_base_is_nonpositive_integer( x );
napi_value v;
status = napi_create_int32( env, (int32_t)result, &v );
assert( status == napi_ok );
return v;
}
/**
* Initializes a Node-API module.
*
* @param env environment under which the function is invoked
* @param exports exports object
* @return main export
*/
static napi_value init( napi_env env, napi_value exports ) {
napi_value fcn;
napi_status status = napi_create_function( env, "exports", NAPI_AUTO_LENGTH, addon, NULL, &fcn );
assert( status == napi_ok );
return fcn;
}
NAPI_MODULE( NODE_GYP_MODULE_NAME, init )
```
|
Nankinian is one of the Finisterre languages of Papua New Guinea. Nankina Wam, Domung Meh, and Yupno Gen are related varieties.
Domung Meh is spoken in Yout village () of Nayudo Rural LLG, while Domung is spoken in Aunon, Ayengket, Bobongat, Dirit, Gabutamon, Kian, Kosit, Maramung, Maum, Sibgou, Swantan, Tapen, and Wokopop villages in Madang Province. Nankina is also spoken in Taip, Yowangowo, Bambu, Meweng, Ayongowo, Gupbayong, Mambak, Sepbawang, Sevan, Kandambo, Gwarawon, Miok, Pivin, Mebu, Tariknan, Youthbo, Mambit, Dakuwe, and Yongem villages in Rai Coast District, Madang Province.
References
Finisterre languages
Languages of Madang Province
|
```c
#ifndef __LINUX_KERNEL_H
#define __LINUX_KERNEL_H
#include <linux/compiler.h>
#ifndef offsetof
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
#endif
#ifndef container_of
#define container_of(ptr, type, member) ({ \
const typeof(((type *)0)->member) * __mptr = (ptr); \
(type *)((char *)__mptr - offsetof(type, member)); })
#endif
#ifndef max
#define max(x, y) ({ \
typeof(x) _max1 = (x); \
typeof(y) _max2 = (y); \
(void) (&_max1 == &_max2); \
_max1 > _max2 ? _max1 : _max2; })
#endif
#ifndef min
#define min(x, y) ({ \
typeof(x) _min1 = (x); \
typeof(y) _min2 = (y); \
(void) (&_min1 == &_min2); \
_min1 < _min2 ? _min1 : _min2; })
#endif
#ifndef roundup
#define roundup(x, y) ( \
{ \
const typeof(y) __y = y; \
(((x) + (__y - 1)) / __y) * __y; \
} \
)
#endif
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
#define __KERNEL_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#endif
```
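The rounding helpers above implement integer ceiling arithmetic: `__KERNEL_DIV_ROUND_UP(n, d)` computes ⌈n/d⌉, and `roundup(x, y)` returns the smallest multiple of `y` that is at least `x`. A quick sketch of the same arithmetic in Python (hypothetical helper names, positive integers assumed):

```python
def div_round_up(n: int, d: int) -> int:
    # Equivalent of __KERNEL_DIV_ROUND_UP: ceiling division for positive ints.
    return (n + d - 1) // d

def roundup(x: int, y: int) -> int:
    # Equivalent of the kernel roundup macro: round x up to a multiple of y.
    return div_round_up(x, y) * y

print(div_round_up(10, 4))  # 3, since ceil(10/4) = 3
print(roundup(10, 4))       # 12, the next multiple of 4 at or above 10
```

The `(n + d - 1)` trick avoids floating point entirely, which is why the kernel uses it in contexts where `ceil()` is unavailable.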
|
```javascript
import Image from 'next/image'
export default function Home() {
return (
<Image
alt="red square"
src="data:image/png;base64,your_sha256_hashHwAFBQIAX8jx0gAAAABJRU5ErkJggg=="
width={64}
height={64}
/>
)
}
```
|
The Aero A.19 was a biplane fighter aircraft designed in Czechoslovakia in 1923 and evaluated by the Czech Air Force against its stablemates, the A.18 and A.20. The A.18 was selected for production, and development of the A.19 was abandoned.
Specifications (A.19)
Single-engined tractor aircraft
Biplanes
1920s Czechoslovakian fighter aircraft
|
Varzob () is a settlement in Varzob District, Districts of Republican Subordination, Tajikistan, in Central Asia. It is the administrative center of Varzob District.
Geography
Varzob is located on the left (east) bank of the river Varzob, about 25 km north of Dushanbe. The village of Begar lies about 2.5 km north of Varzob on the right bank of the river, and the village of Varzobkala lies just 0.75 km south of Varzob on the right bank of the river.
There are seven rivers in the area: the Varzob, the Simiganj, the Sioma, the Seer, the Vakhsh, the Amoo, and the Sorhob.
Notes
Populated places in Districts of Republican Subordination
|
```python
import json
import logging
from copy import deepcopy
from typing import Any, Dict, List, Optional
from zlib import crc32
from ray._private.pydantic_compat import BaseModel
from ray.serve._private.config import DeploymentConfig
from ray.serve._private.utils import DeploymentOptionUpdateType, get_random_string
from ray.serve.config import AutoscalingConfig
from ray.serve.generated.serve_pb2 import DeploymentVersion as DeploymentVersionProto
logger = logging.getLogger("ray.serve")
class DeploymentVersion:
def __init__(
self,
code_version: Optional[str],
deployment_config: DeploymentConfig,
ray_actor_options: Optional[Dict],
placement_group_bundles: Optional[List[Dict[str, float]]] = None,
placement_group_strategy: Optional[str] = None,
max_replicas_per_node: Optional[int] = None,
):
if code_version is not None and not isinstance(code_version, str):
raise TypeError(f"code_version must be str, got {type(code_version)}.")
if code_version is None:
self.code_version = get_random_string()
else:
self.code_version = code_version
# Options for this field may be mutated over time, so any logic that uses this
# should access this field directly.
self.deployment_config = deployment_config
self.ray_actor_options = ray_actor_options
self.placement_group_bundles = placement_group_bundles
self.placement_group_strategy = placement_group_strategy
self.max_replicas_per_node = max_replicas_per_node
self.compute_hashes()
@classmethod
def from_deployment_version(cls, deployment_version, deployment_config):
version_copy = deepcopy(deployment_version)
version_copy.deployment_config = deployment_config
version_copy.compute_hashes()
return version_copy
def __hash__(self) -> int:
return self._hash
def __eq__(self, other: Any) -> bool:
if not isinstance(other, DeploymentVersion):
return False
return self._hash == other._hash
def requires_actor_restart(self, new_version):
"""Determines whether the new version requires actors of the current version to
be restarted.
"""
return (
self.code_version != new_version.code_version
or self.ray_actor_options_hash != new_version.ray_actor_options_hash
or self.placement_group_options_hash
!= new_version.placement_group_options_hash
or self.max_replicas_per_node != new_version.max_replicas_per_node
)
def requires_actor_reconfigure(self, new_version):
"""Determines whether the new version requires calling reconfigure() on the
replica actor.
"""
return self.reconfigure_actor_hash != new_version.reconfigure_actor_hash
def requires_long_poll_broadcast(self, new_version):
"""Determines whether lightweightly updating an existing replica to the new
version requires broadcasting through long poll that the running replicas has
changed.
"""
return (
self.deployment_config.max_ongoing_requests
!= new_version.deployment_config.max_ongoing_requests
)
def compute_hashes(self):
# If these change, the controller will rolling upgrade existing replicas.
serialized_ray_actor_options = _serialize(self.ray_actor_options or {})
self.ray_actor_options_hash = crc32(serialized_ray_actor_options)
combined_placement_group_options = {}
if self.placement_group_bundles is not None:
combined_placement_group_options["bundles"] = self.placement_group_bundles
if self.placement_group_strategy is not None:
combined_placement_group_options["strategy"] = self.placement_group_strategy
serialized_placement_group_options = _serialize(
combined_placement_group_options
)
self.placement_group_options_hash = crc32(serialized_placement_group_options)
# If this changes, DeploymentReplica.reconfigure() will call reconfigure on the
# actual replica actor
self.reconfigure_actor_hash = crc32(
self._get_serialized_options(
[DeploymentOptionUpdateType.NeedsActorReconfigure]
)
)
# Used by __eq__ in deployment state to either reconfigure the replicas or
# stop and restart them
self._hash = crc32(
self.code_version.encode("utf-8")
+ serialized_ray_actor_options
+ serialized_placement_group_options
+ str(self.max_replicas_per_node).encode("utf-8")
+ self._get_serialized_options(
[
DeploymentOptionUpdateType.NeedsReconfigure,
DeploymentOptionUpdateType.NeedsActorReconfigure,
]
)
)
def to_proto(self) -> bytes:
# TODO(simon): enable cross language user config
return DeploymentVersionProto(
code_version=self.code_version,
deployment_config=self.deployment_config.to_proto(),
ray_actor_options=json.dumps(self.ray_actor_options),
placement_group_bundles=json.dumps(self.placement_group_bundles)
if self.placement_group_bundles is not None
else "",
placement_group_strategy=self.placement_group_strategy
if self.placement_group_strategy is not None
else "",
max_replicas_per_node=self.max_replicas_per_node
if self.max_replicas_per_node is not None
else 0,
)
@classmethod
def from_proto(cls, proto: DeploymentVersionProto):
return DeploymentVersion(
proto.code_version,
DeploymentConfig.from_proto(proto.deployment_config),
json.loads(proto.ray_actor_options),
placement_group_bundles=(
json.loads(proto.placement_group_bundles)
if proto.placement_group_bundles
else None
),
placement_group_strategy=(
proto.placement_group_strategy if proto.placement_group_strategy else None
),
max_replicas_per_node=(
proto.max_replicas_per_node if proto.max_replicas_per_node else None
),
)
def _get_serialized_options(
self, update_types: List[DeploymentOptionUpdateType]
) -> bytes:
"""Returns a serialized dictionary containing fields of a deployment config that
should prompt a deployment version update.
"""
reconfigure_dict = {}
# TODO(aguo): Once we only support pydantic 2, we can remove this if check.
# In pydantic 2.0, `__fields__` has been renamed to `model_fields`.
fields = (
self.deployment_config.model_fields
if hasattr(self.deployment_config, "model_fields")
else self.deployment_config.__fields__
)
for option_name, field in fields.items():
option_weight = field.field_info.extra.get("update_type")
if option_weight in update_types:
reconfigure_dict[option_name] = getattr(
self.deployment_config, option_name
)
# If autoscaling config was changed, only broadcast to
# replicas if metrics_interval_s or look_back_period_s
# was changed, because the rest of the fields are only
# used in deployment state manager
if isinstance(reconfigure_dict[option_name], AutoscalingConfig):
reconfigure_dict[option_name] = reconfigure_dict[option_name].dict(
include={"metrics_interval_s", "look_back_period_s"}
)
elif isinstance(reconfigure_dict[option_name], BaseModel):
reconfigure_dict[option_name] = reconfigure_dict[option_name].dict()
if (
isinstance(self.deployment_config.user_config, bytes)
and "user_config" in reconfigure_dict
):
del reconfigure_dict["user_config"]
return self.deployment_config.user_config + _serialize(reconfigure_dict)
return _serialize(reconfigure_dict)
def _serialize(json_object):
return str.encode(json.dumps(json_object, sort_keys=True))
```
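The class above reduces each group of deployment options to a CRC32 digest over a stably serialized form, then compares digests to decide between replacing replicas and reconfiguring them in place. A minimal, self-contained sketch of that idea (hypothetical field names, not the Ray API):

```python
import json
from zlib import crc32

def _serialize(obj) -> bytes:
    # Stable serialization: sorted keys so equal dicts always hash equally.
    return json.dumps(obj, sort_keys=True).encode()

class Version:
    def __init__(self, code_version: str, actor_options: dict, user_config: dict):
        self.code_version = code_version
        self.actor_options_hash = crc32(_serialize(actor_options))
        self.reconfigure_hash = crc32(_serialize(user_config))

    def requires_restart(self, new: "Version") -> bool:
        # Code or actor-level options changed: replicas must be replaced.
        return (self.code_version != new.code_version
                or self.actor_options_hash != new.actor_options_hash)

    def requires_reconfigure(self, new: "Version") -> bool:
        # Only user config changed: replicas can be updated in place.
        return self.reconfigure_hash != new.reconfigure_hash

v1 = Version("abc", {"num_cpus": 1}, {"threshold": 0.5})
v2 = Version("abc", {"num_cpus": 1}, {"threshold": 0.9})
print(v1.requires_restart(v2))      # False: same code and actor options
print(v1.requires_reconfigure(v2))  # True: user config hash differs
```

Hashing each option group separately is what lets the controller pick the cheapest update path instead of always restarting.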
|
```python
from c7n_azure.provider import resources
from c7n_azure.query import ChildResourceManager, ChildTypeInfo
from c7n_azure.utils import ResourceIdParser
@resources.register('servicebus-namespace-networkrules')
class ServiceBusNamespaceNetworkrules(ChildResourceManager):
"""Azure Service Bus Namespace Network Ruleset Resource
:example:
Returns Service Bus Namespace Network Ruleset resources
.. code-block:: yaml
policies:
- name: basic-servicebus-namespace-networkrule
resource: azure.servicebus-namespace-networkrules
"""
class resource_type(ChildTypeInfo):
doc_groups = ['Events']
service = 'azure.mgmt.servicebus'
client = 'ServiceBusManagementClient'
enum_spec = ('namespaces', 'list_network_rule_sets', None)
parent_manager_name = 'servicebus-namespace'
default_report_fields = (
'name',
'location',
'resourceGroup'
)
resource_type = 'Microsoft.ServiceBus/namespaces'
@classmethod
def extra_args(cls, parent_resource):
return {
'resource_group_name': ResourceIdParser.get_resource_group(parent_resource['id']),
'namespace_name': parent_resource['name']
}
```
|
```php
<?php
declare(strict_types=1);
return [
[4.1, '{5.6,4.1,4.1,3,2,4.1}'], // Calculated value was #N/A
[4.1, '5.6,4.1,4.1,3,2,4.1'],
[3, '3,3,4,4'],
[4, '4,3,3,4'], // Calculated value was 3
['#N/A', '1,2,3,4'],
[2, '1,2,2,"3","3","3"'],
['#N/A', '"3","3","3"'],
];
```
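These rows pair an expected result with a comma-separated input list and appear to exercise Excel-style MODE semantics: quoted strings are ignored, ties go to the first value encountered, and inputs with no repeated number yield `#N/A`. A hedged Python sketch of that behaviour (my reading of the fixtures, not the library's implementation):

```python
def excel_mode(values):
    # Keep only numeric values; quoted strings in the fixtures are ignored.
    nums = [v for v in values if isinstance(v, (int, float))]
    counts = {}
    for v in nums:
        counts[v] = counts.get(v, 0) + 1
    best = None
    for v in nums:  # iterate in input order so the first value wins ties
        if counts[v] > 1 and (best is None or counts[v] > counts[best]):
            best = v
    return "#N/A" if best is None else best

print(excel_mode([5.6, 4.1, 4.1, 3, 2, 4.1]))  # 4.1
print(excel_mode([4, 3, 3, 4]))                # 4 (tie: first encountered)
print(excel_mode([1, 2, 2, "3", "3", "3"]))    # 2 (strings ignored)
print(excel_mode([1, 2, 3, 4]))                # #N/A (no repeats)
```

The `[4, '4,3,3,4']` fixture (previously calculating 3) is what pins down the first-encountered tie-break.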
|
```go
/*
path_to_url
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*/
// Code generated by applyconfiguration-gen. DO NOT EDIT.
package v1beta1
import (
v1beta1 "k8s.io/api/admissionregistration/v1beta1"
v1 "k8s.io/client-go/applyconfigurations/meta/v1"
)
// ParamRefApplyConfiguration represents a declarative configuration of the ParamRef type for use
// with apply.
type ParamRefApplyConfiguration struct {
Name *string `json:"name,omitempty"`
Namespace *string `json:"namespace,omitempty"`
Selector *v1.LabelSelectorApplyConfiguration `json:"selector,omitempty"`
ParameterNotFoundAction *v1beta1.ParameterNotFoundActionType `json:"parameterNotFoundAction,omitempty"`
}
// ParamRefApplyConfiguration constructs a declarative configuration of the ParamRef type for use with
// apply.
func ParamRef() *ParamRefApplyConfiguration {
return &ParamRefApplyConfiguration{}
}
// WithName sets the Name field in the declarative configuration to the given value
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, the Name field is set to the value of the last call.
func (b *ParamRefApplyConfiguration) WithName(value string) *ParamRefApplyConfiguration {
b.Name = &value
return b
}
// WithNamespace sets the Namespace field in the declarative configuration to the given value
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, the Namespace field is set to the value of the last call.
func (b *ParamRefApplyConfiguration) WithNamespace(value string) *ParamRefApplyConfiguration {
b.Namespace = &value
return b
}
// WithSelector sets the Selector field in the declarative configuration to the given value
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, the Selector field is set to the value of the last call.
func (b *ParamRefApplyConfiguration) WithSelector(value *v1.LabelSelectorApplyConfiguration) *ParamRefApplyConfiguration {
b.Selector = value
return b
}
// WithParameterNotFoundAction sets the ParameterNotFoundAction field in the declarative configuration to the given value
// and returns the receiver, so that objects can be built by chaining "With" function invocations.
// If called multiple times, the ParameterNotFoundAction field is set to the value of the last call.
func (b *ParamRefApplyConfiguration) WithParameterNotFoundAction(value v1beta1.ParameterNotFoundActionType) *ParamRefApplyConfiguration {
b.ParameterNotFoundAction = &value
return b
}
```
|
```c
// 2017 and later: Unicode, Inc. and others.
/*
******************************************************************************
*
* Corporation and others. All Rights Reserved.
*
******************************************************************************
*
* FILE NAME : testTimezone.c
*
* Date Name Description
* 03/02/2006 grhoten Creation.
******************************************************************************
*/
#include "unicode/putil.h"
#include "unicode/ucnv.h"
#include "unicode/uloc.h"
#include "unicode/ures.h"
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
int main(int argc, const char* const argv[]) {
UErrorCode status = U_ZERO_ERROR;
ures_close(ures_open(NULL, NULL, &status));
if (status != U_ZERO_ERROR) {
printf("uloc_getDefault = %s\n", uloc_getDefault());
printf("Locale available in ICU = %s\n", status == U_ZERO_ERROR ? "true" : "false");
}
if (strcmp(ucnv_getDefaultName(), "US-ASCII") == 0) {
printf("uprv_getDefaultCodepage = %s\n", uprv_getDefaultCodepage());
printf("ucnv_getDefaultName = %s\n", ucnv_getDefaultName());
}
return 0;
}
```
|
```yaml
subject: "Encoding keyword"
description: "__ENCODING__ keyword"
focused_on_node: "org.truffleruby.language.literal.ObjectLiteralNode"
ruby: |
__ENCODING__
ast: |
ObjectLiteralNode
attributes:
flags = 1
object = UTF-8
sourceCharIndex = 0
sourceLength = 12
```
|
SBAC may refer to:
SBA Communications, a United States-based telecommunications company
Smarter Balanced Assessment Consortium, an American K-12 Common Core testing consortium
Society of British Aerospace Companies, a British national trade association
|
Panampatta is a place situated near Pathanapuram in Kollam district, Kerala state, India.
Panampatta is included in Pidavoor Village.
Politics
Panampatta is a part of Pathanapuram assembly constituency in Mavelikkara (Lok Sabha constituency). Shri. K. B. Ganesh Kumar is the current MLA of Pathanapuram. Shri. Kodikkunnil Suresh is the current member of parliament for Mavelikkara.
Geography
Panampatta is a small village in Thalavoor panchayat and a junction on the Pathanapuram–Kottarakkara road, connecting places such as Karyara. Vellangadu bridge is a main landmark of Panampatta, and the Panampatta Akshaya Centre is situated near Vellangadu Junction.
References
Geography of Kollam district
|
Willow Bunch is a former provincial electoral division for the Legislative Assembly of the province of Saskatchewan, Canada, centred on the rural municipality of Willow Bunch. This district was created before the 3rd Saskatchewan general election in 1912. The constituency was dissolved and combined with the Notukeu district (as Notukeu-Willow Bunch) before the 9th Saskatchewan general election in 1938.
It is now part of the Wood River constituency.
A federal electoral district in the same area called "Willow Bunch" existed from 1924 until 1935.
Members of the Legislative Assembly
Election results
Only fragments of the original results tables survive; the recoverable rows, in order, are:
Conservative, William W. Davidson: 825 votes (50.46%); total votes cast: 1,635
Conservative, James Lambe: 1,340 votes (33.16%, −17.30); total votes cast: 4,041
Total votes cast: 5,573
Total votes cast: 5,060
Returned by acclamation
Conservative, William James Gibbins: 4,316 votes (49.39%); total votes cast: 8,739
Conservative, Edgar B. Linnell: 1,445 votes (28.27%, −21.12); Farmer-Labour, Charles Morley W. Emery: 1,219 votes (23.85%); total votes cast: 5,112
See also
Electoral district (Canada)
List of Saskatchewan provincial electoral districts
List of Saskatchewan general elections
List of political parties in Saskatchewan
Willow Bunch, Saskatchewan
References
Saskatchewan Archives Board – Saskatchewan Election Results By Electoral Division
Former provincial electoral districts of Saskatchewan
|
Tytthoscincus batupanggah, also known as the cursed-stone diminutive leaf-litter skink, is a species of skink. It is endemic to Borneo and only known from its type locality Gunung Penrissen in Sarawak, East Malaysia.
Tytthoscincus batupanggah is a small skink measuring in snout–vent length. It has been found in a mixed-dipterocarp forest at above sea level. It is a leaf-litter specialist.
References
batupanggah
Endemic fauna of Borneo
Endemic fauna of Malaysia
Reptiles of Malaysia
Reptiles described in 2016
Taxa named by Aaron M. Bauer
Taxa named by Indraneil Das
Taxa named by Benjamin R. Karin
Reptiles of Borneo
|
```cpp
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// This file has been auto-generated by code_generator_v8.py. DO NOT MODIFY!
#ifndef V8SVGFEFuncAElement_h
#define V8SVGFEFuncAElement_h
#include "bindings/core/v8/ScriptWrappable.h"
#include "bindings/core/v8/ToV8.h"
#include "bindings/core/v8/V8Binding.h"
#include "bindings/core/v8/V8DOMWrapper.h"
#include "bindings/core/v8/V8SVGComponentTransferFunctionElement.h"
#include "bindings/core/v8/WrapperTypeInfo.h"
#include "core/CoreExport.h"
#include "core/svg/SVGFEFuncAElement.h"
#include "platform/heap/Handle.h"
namespace blink {
class V8SVGFEFuncAElement {
public:
CORE_EXPORT static bool hasInstance(v8::Local<v8::Value>, v8::Isolate*);
static v8::Local<v8::Object> findInstanceInPrototypeChain(v8::Local<v8::Value>, v8::Isolate*);
CORE_EXPORT static v8::Local<v8::FunctionTemplate> domTemplate(v8::Isolate*);
static SVGFEFuncAElement* toImpl(v8::Local<v8::Object> object)
{
return toScriptWrappable(object)->toImpl<SVGFEFuncAElement>();
}
CORE_EXPORT static SVGFEFuncAElement* toImplWithTypeCheck(v8::Isolate*, v8::Local<v8::Value>);
CORE_EXPORT static const WrapperTypeInfo wrapperTypeInfo;
static void refObject(ScriptWrappable*);
static void derefObject(ScriptWrappable*);
template<typename VisitorDispatcher>
static void trace(VisitorDispatcher visitor, ScriptWrappable* scriptWrappable)
{
#if ENABLE(OILPAN)
visitor->trace(scriptWrappable->toImpl<SVGFEFuncAElement>());
#endif
}
static const int internalFieldCount = v8DefaultWrapperInternalFieldCount + 0;
static void installConditionallyEnabledProperties(v8::Local<v8::Object>, v8::Isolate*) { }
static void preparePrototypeObject(v8::Isolate*, v8::Local<v8::Object> prototypeObject, v8::Local<v8::FunctionTemplate> interfaceTemplate) { }
};
template <>
struct V8TypeOf<SVGFEFuncAElement> {
typedef V8SVGFEFuncAElement Type;
};
} // namespace blink
#endif // V8SVGFEFuncAElement_h
```
|