| query_halid (string) | query (string) | query_year (string) | query_domain (list) | query_affiliations (list) | query_authorids (list) | pos_halid (string) | positive (string) | pos_year (string) | pos_domain (list) | pos_affiliations (list) | pos_authorids (list) | neg_halids (string) | negative (string) | neg_year (string) | neg_domain (list) | neg_affiliations (list) | neg_authorids (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
00262933
|
But the statistical work was never viewed by the transport company as a systematic tool to be promoted in the long run. It was a one-off way of better fitting the fraud management system to its environment. Control encounters
By observing control encounters I tried to identify some typical scenarios of control and reporting. They enabled me to propose a flexible model of reporting as a sequenced interaction and made its key features clear. Moreover, I underlined some dimensions of control and reporting, especially violence and conflict. Once more, it appeared that this kind of knowledge was of interest to management, and not so much to controllers, who had a tacit understanding of what I explained. The use of this knowledge was actually very restricted, all the more so as the dimensions of conflict and violence were denied. Evaders' interviews
The interviews were very productive, because they brought much new information. They brought in the evaders' point of view, which made it possible to address topics such as their motivations or feelings (fear, anger, shame…). These topics were not accessible through the controllers or through the database entries.
|
2005
|
[
"shs"
] |
[
"1172"
] |
[
"3161"
] |
00262932
|
20 We can note that Michael Lipsky's work on street-level bureaucracy precisely accounts for this kind of decision, which adapts bureaucratic rules (Lipsky, 1983). The evader (who did not accept the payment rule) has to be brought to accept the principle of a fine (whatever it is called: compensation or punishment…). However, controllers are supposed to report. And even if they can sometimes decide not to, they have to account for their work. And the common traditional indicator of control work is the number of fines. 21 This aspect of controllers' work is very important because it explains the instrumental dimension of the relationship established by the control encounter. It calls for another look at the interaction (a strategic one) 22 and raises the question of the tools controllers have at their disposal to manage the relationship. The coercive means to manage the interaction
Control is often seen as a repressive occupation. Such a view is at least partly true. The mission of reporting is, in any case, rather repressive… The question, then, is: to achieve a repressive aim, do controllers use repressive tools? A first answer is yes.
|
2005
|
[
"shs"
] |
[
"1172"
] |
[
"3161"
] |
00291750
|
They need a visa to leave and to enter Lebanon. Many do not have passports -they have only been issued a travel document by the Lebanese authorities -and as a result many other countries refuse to give them visas, or even to let them transit their territories. Abu Taraq, who migrated to Germany with his family in 1994, explains his motivations:
In Lebanon, we do not have any rights, to work, to education, nor to health. What is the future for my children? The Oslo Agreements have forgotten us... At least in Europe they respect us as human beings; we have the same rights as everybody else. My children can get an education, they can work, build a future. What is striking about all of the above excerpts is that they are from interviews with men. Rather than reflecting any methodological bias, this is a reflection of the fact that migration tends to be dominated by young males. Indeed their migration has created a significant sex imbalance in the refugee camps. Young women are finding it harder and harder to marry, and this can make their status and security even more precarious.
|
2002
|
[
"shs"
] |
[
"199917"
] |
[
"2673"
] |
04159574
|
How does one combine a high-end image with such concerns? How is it possible to fulfill all the wide-ranging commitments with respect to seasonality, awareness of food waste, and so on? Some chefs offer porcelain crockery, which "they can collect from the customer's home and return to the restaurant premises" (J. Guèze), while others are thinking about crockery deposit systems to avoid the use of plastic packaging and cardboard boxes. This thought process coincides with the high-end image that they want to convey, and using innovative packaging is a step in that direction. "We are using packaging with a strong CSR [corporate social responsibility] connotation: recyclable and, above all, very design-centric, with an origami inspiration. It can be transported easily, and it's very clean. It is in line with the positioning and vision of the restaurant" (F. Gagnaire). In addition, with home delivery, the delivery person does not know the norms of haute cuisine, the chef's philosophy, or the restaurant's brand image in general, especially if the chef appoints an external service provider. The delivery causes chefs to lose control over how they want dishes to be presented, both verbally and nonverbally. These skills are instilled in restaurant frontline staff but are not passed on to the service providers, who now play a major role: "It's a bit as if a waiter brings you a gourmet plate in a t-shirt and jeans; it kinda kills the dream" (J. Guèze).
|
2022
|
[
"shs"
] |
[
"1043181",
"458547",
"33804",
"88676",
"347722",
"531465",
"1043181",
"1042703"
] |
[
"1145713",
"1231313"
] |
03710025
|
As with Dahl et al. (1999), the data collection was carried out among engineering students. The questionnaire was administered through the Qualtrics platform, dedicated to the creation and distribution of online questionnaires, and completed at the end of the creativity process based on design thinking. According to [START_REF] Calder | Designing research for application[END_REF], a sample of students is appropriate for testing a theory, although it has limitations in terms of external validity and generalizing the results. In total, the sample comprised 177 respondents (44 teams), 79.1% male, with an average age of 20.3 (SD = 0.74). The respondents were taking a bachelor's degree, specializing in various fields including materials science, biotechnology, electronics, and signal processing. Descriptive statistics for the main sample are presented in Appendix 3. Field of study. The teams were asked to respond to the challenge of the company Kicklox, worded as follows: "How should engineers be used around the Kicklox platform, the Uber of engineering?" To facilitate the response to the challenge, several sub-objectives were linked to the problem (i.e. create a strong link with these engineers; ensure the engineer's full investment, particularly in the quality of the content; create a secure environment for the customer), together with constraints (i.e. being available 2 days per week to develop the solution; providing the lowest possible acquisition cost per user, or even offering a cash-redeemable solution; taking account of technical feasibility in particular; clearly defining the user/customer being addressed). One of Kicklox's cofounders gave a presentation of the company and the challenge.
|
2019
|
[
"shs"
] |
[
"89889",
"88676",
"1041636",
"458547",
"1041636",
"1042703",
"1041636",
"458547"
] |
[
"1145713",
"1231313"
] |
01961139
|
The initial correspondence, reflecting Proto-Kiranti (PK) *r, poses the fewest problems, being well known since the work of Robert Shafer (1953: 148-149; see also [START_REF] Driem | The rise and fall of the phoneme /r/ in eastern Kiranti: sound change in Tibeto-Burman[END_REF]. As for the rime, velar finals are generally preserved in Limbu, Yamphu and Bantawa, but often lost elsewhere, with backing and rounding of the vowel (Winter 1987: 731). In Central and West Kiranti, the vocalic systems have often been multiplied under the influence of finals, with the creation of front rounded or centralized vowels in Bahing and in Khaling [START_REF] Michailovsky | Notes on the Kiranti Verb[END_REF]. Fronting is generally inhibited by velar finals, even where these are subsequently lost. Thus, in the 10-vowel system of Bahing, for example, the rime *ak generally has the reflex Ák, or, in contexts where the final is lost, ÁË. This is particularly clear in conjugated forms of the verb. Khaling aa and Kulung ÁË are also characteristic of syllables with final k. The examples below are intended to illustrate the correspondences of PK *ak. Although there is some uncertainty (between a/Á/o), *rak is quite well supported as the first syllable of our ethnonym. The initial of the second syllable shows a regular correspondence between South Kiranti d, Limbu th, and Yamphu zero; the pertinent feature in Limbu is the aspiration, since Limbu does not have an opposition of voicing. However, the expected correspondence in Central and West Kiranti is t, not d [START_REF] Michailovsky | Notes on the Kiranti Verb[END_REF]Winter 1987: 730).
|
2003
|
[
"shs"
] |
[
"406905"
] |
[
"957269"
] |
03149077
|
The PCR was positive, and the CT-scan demonstrated bilateral crazy paving. 48 hours after admission, he complained of chest pain. 12-lead ECG demonstrated inverted T waves in inferior leads. Hs-TNI was at 355 ng/L then 570 ng/L three hours later (N< 17 ng/L). Transthoracic echocardiography showed a left ventricular ejection fraction at 60% without wall motion abnormalities, no diastolic dysfunction and a mild mitral regurgitation. Coronary angiogram performed via radial approach demonstrated a chronic total occlusion of the posterior descending artery with epicardial collateral from the left anterior descending artery (Rentrop 3, panel A). In the mid right coronary artery, a spontaneous dissecting coronary hematoma was observed with an intimal tear (panels B and C). Flow grade was Thrombolysis In Myocardial Infarction (TIMI) 3 in the posterolateral artery. Optical coherence tomography (OCT) was performed in the right coronary artery and confirmed the spontaneous dissecting coronary hematoma with an intimal rupture (panels D, E and F). A conservative management was decided.
|
2020
|
[
"sdv"
] |
[
"194495",
"139739"
] |
[
"744032",
"779118",
"779119"
] |
03149070
|
Two periods were defined: before the lockdown period (weeks 2-10) and during the lockdown period (weeks 11-14). Figures were plotted using Graphpad Prism 7.04 software. Results
The weekly numbers of myocardial infarctions were roughly comparable before the lockdown period in 2020 and in 2018-2019. After lockdown began, they dropped to a much lower level in 2020 versus 2018-2019 (Fig. 1A). Table 1 indicates that the cumulative incidence of myocardial infarctions during weeks 2-10 in 2020 differed from that in 2018-2019 by less than 10%, but markedly decreased, by 31.0%, during lockdown. However, the numbers of births remained stable over the study periods, without a substantial difference between 2020 and 2018-2019 (Fig. 1B). Lockdown had almost no effect on the numbers of births (Table 1). Discussion
The upheaval induced by COVID-19 has many non-viral consequences and our multicentre study is the first one to tackle the issue of its effect on myocardial infarctions. The present study strongly suggests a decrease in the number of admissions for myocardial infarction during lockdown. Although we do not have a long follow-up to determine whether this trend will continue, this is an important warning for the medical community and authorities.
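The 31.0% figure above is a plain relative-change computation; a minimal Python sketch, where the weekly counts are illustrative placeholders rather than the study's actual data:

```python
# Relative change of a cumulative incidence versus a baseline period.
# The numeric values below are made up for illustration; only the -31.0%
# result mirrors the order of magnitude reported in the text.

def percent_change(baseline, observed):
    """Return the relative change of `observed` vs. `baseline`, in percent."""
    return 100.0 * (observed - baseline) / baseline

# Hypothetical cumulative incidences during the lockdown weeks (11-14).
baseline_2018_2019 = 100  # mean of the two reference years
lockdown_2020 = 69

print(percent_change(baseline_2018_2019, lockdown_2020))  # -31.0
```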
|
2020
|
[
"sdv"
] |
[
"194495",
"139739"
] |
[
"779119",
"744032",
"779118"
] |
04283890
|
Figure 1. Experimental tasks and model framework. (A) In the saccade trials, subjects executed a saccade to a 20° rightward target. In the pre-exposure phase, the target was extinguished at saccade onset. In the exposure and post-exposure phases, the target either stepped 6° inward (inward condition), 6° outward (outward condition) or stayed at its initial position (no step condition). In the pre-saccadic localization trials, subjects localized a 12 ms flash with a mouse cursor while holding gaze at the fixation point. In the post-saccadic localization trials, subjects performed a saccade to a 20° rightward target and then localized the 12 ms pre-saccadic flash with a mouse cursor. The mouse cursor was a blue line pointer. The yellow circle illustrates gaze location but was not present in the stimulus display. (B) The target with the physical distance P1 is represented at the location V1 on the visuospatial map. An inverse model maps V1 onto a motor command M.
|
2023
|
[
"sdv"
] |
[
"354310",
"354310"
] |
[
"1284175",
"810985",
"735972",
"1284180"
] |
03278345
|
34 Yang and Deng reported a one-pot enantioselective sequence starting with an organocatalysed Michael addition of cyclobutanone 31 onto 2-nitrovinylindole 32 (Scheme 10). 35 This reaction led to intermediates 33 and epi-33 which, upon action of the boron trifluoride etherate Lewis acid, underwent a cyclisation to the five-membered ring. The latter could spontaneously undergo a fragmentation and deliver the corresponding medium-size indolic systems in yields up to 84%, as a mixture of two diastereomers, cis- and trans-34 respectively, and with excellent enantioselectivities. The stereoselectivities have been rationalised: the diastereoselectivity could originate from the protonation step of the amide enolate and the enantioselectivity from the Michael addition step catalysed by a chiral bifunctional aminocatalyst. The authors not only demonstrated the broad substrate scope and the scalability of the process but also the high synthetic potential of the obtained embedded scaffolds. Another approach to cyclohepta[b]indoles and other useful seven- and even eight-membered carbocyclic platforms relies on an intramolecular alkyne-de Mayo reaction (Scheme 11). 36 A photochemically induced cascade process starting from 35 in a polar protic solvent provided the desired adduct 36 in excellent yields. The mechanism to the seven-membered ring presumably proceeds through the intermediate tricyclic cyclobutene cis-S 0 -37 via a retro-Mannich reaction. As already stated, the formation of challenging larger ring systems can benefit from the strain release of small-size cyclic compounds. In 2016, Li and co-workers showed that the Scheme 10 Synthesis of medium-size indolic system (2016).
|
2021
|
[
"chim"
] |
[
"186403",
"186403",
"186403",
"186403"
] |
[
"1268105"
] |
02395349
|
While metal-catalyzed hydroboration using other B-H sources (catechol boranes [START_REF] Brown | Modern Rhodium-Catalyzed Organic Reactions[END_REF] and tertiary amine borane complexes 16 ) has been mechanistically described by kinetic or DFT studies, the B-H bond metal activation of NHC-boranes has not been investigated yet. For a complete understanding of this activation process, complementary experiments 9 and a theoretical treatment are presented in this paper. We discuss a pertinent mechanism of NHC-borane intramolecular hydroboration promoted by a cationic rhodium species (Scheme 1c). The crucial role of the NHC-borane/alkene substrate as a bidentate ligand and the origin of enantioselectivity are highlighted. Computational details
This work focuses on the mechanism of intramolecular enantioselective hydroboration. In general, enantioselectivity can be due to (small) differences between diastereomeric transition states, notably when a specific mechanistic step (corresponding to a specific transition state) is clearly relevant for the kinetics of the reaction (Curtin-Hammett's conditions). In other cases, especially with large catalytic systems and for complex pathways, several transition states may become kinetically relevant. 17 For both scenarios, accurate energy calculations are desirable. Furthermore, when the substrate is achiral and the reaction is enantioselective, the chirality transfer must be due to the interaction between the substrate and substituents of the bulky chiral ligand. Non-covalent interactions may become crucial for the energetics.
|
2019
|
[
"chim"
] |
[
"186403",
"186403",
"186403"
] |
[
"1268105"
] |
04235958
|
1 Thus, it can be used to obtain structural information or to study reaction mechanisms. 2 In total synthesis, deuterium-labelled compounds have been used to modify the reaction selectivity. [START_REF] Atzrodt | Deuterium-and Tritium-Labelled Compounds: Applications in the Life Sciences[END_REF] H/D substitution also plays a major role in the development of new drugs and it serves, for instance, to increase their metabolic stability. 4 Interestingly, H/D substitution has also been reported to transform an achiral molecule into a chiral one. Chiral isotopologues have attracted the attention of organic chemists and spectroscopists because both their synthesis and the detection of their chiroptical properties are challenging. Several interesting examples have been reported in the literature (compounds A-F in SCHEME 1) and different chiroptical tools have been used to characterize them. 5,[START_REF] Miwa | Asymmetric Synthesis of Isotopic Atropisomers Based on Ortho-CH3/CD3 Discrimination and Their Structural Properties[END_REF] It should be noted that the vast majority of chiral isotopologues studied concern molecules with asymmetric carbons (compounds B, C, D and F). Conversely, chiral isotopologues with an inherent chiral structure are much less common and only a few examples have been reported so far (see for example compounds A and E). SCHEME 1 Examples of chiral isotopologues reported in the literature. As a new example, we report the synthesis of a deuterium-labelled syn-cryptophane-B (SCHEME 2). Unlike the overwhelming majority of cryptophanes that have been prepared, syn-cryptophane-B (syn-1) is achiral with a C 3h symmetry.
|
2023
|
[
"chim"
] |
[
"663",
"24493",
"341936",
"186403",
"186403",
"194495",
"24493"
] |
[
"756339"
] |
03770903
|
A prototype lowpass filter is then obtained [START_REF] Parks | Digital Filter Design[END_REF]. As these approximations always satisfy the LC-ladder realisability conditions, a prototype LC-ladder filter is synthesised. Using element and frequency transformations, other standard filters may be obtained [START_REF] Baher | Synthesis of Electrical Networks[END_REF]. However, the resulting impedances of the transformed elements may not represent components of practical interest. In [START_REF] Rossignol | Filter Design under Magnitude Constraints is a Finite Dimensional Convex Optimization Problem[END_REF], it is shown how Problem 1 can be formulated as an LMI optimisation problem. By adding the realisability conditions, one can formulate the LC-ladder filter design problem as an LMI optimisation problem. In the next section, it is shown how this approach can be extended to ladder filters with other lossless-passive elements. To achieve this, a generalised variable T(s) is introduced. The resulting design problem remains an LMI optimisation problem. III.
|
2018
|
[
"spi"
] |
[
"527407",
"408749",
"408749",
"408749",
"527407"
] |
[
"171579",
"1228",
"1271",
"173133"
] |
01984417
|
However, those for LC-ladder filters are rather simple. It appears that sufficient conditions are that s21(s) is T-bounded-real, with T(s) = s, and is a stable all-pole function, i.e. s21(s) = 1/g(s) with g a Hurwitz polynomial [START_REF] Baher | Synthesis of Electrical Networks[END_REF]. In the traditional approach, solutions of Problem 1 are calculated using the Butterworth or the Chebyshev approximations. A prototype lowpass filter is then obtained [START_REF] Parks | Digital Filter Design[END_REF]. As these approximations always satisfy the LC-ladder realisability conditions, a prototype LC-ladder filter is synthesised. Using element and frequency transformations, other standard filters may be obtained [START_REF] Baher | Synthesis of Electrical Networks[END_REF]. However, the resulting impedances of the transformed elements may not represent components of practical interest. In [START_REF] Rossignol | Filter Design under Magnitude Constraints is a Finite Dimensional Convex Optimization Problem[END_REF], it is shown how Problem 1 can be formulated as an LMI optimisation problem. By adding the realisability conditions, one can formulate the LC-ladder filter design problem as an LMI optimisation problem. In the next section, it is shown how this approach can be extended to ladder filters with other lossless-passive elements.
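As an aside on the traditional approach mentioned above, the normalized Butterworth lowpass prototype can be written down in closed form: its poles are equally spaced on the unit circle in the left half-plane. A minimal Python sketch of that classical textbook formula (the order in the usage line is arbitrary; this is not the LMI method of the paper):

```python
import cmath
import math

def butterworth_poles(n):
    """Left-half-plane poles of the normalized (cutoff 1 rad/s)
    order-n Butterworth lowpass prototype."""
    return [cmath.exp(1j * math.pi * (2 * k + n + 1) / (2 * n))
            for k in range(n)]

poles = butterworth_poles(3)
# Every pole has unit magnitude and a negative real part.
```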
|
2018
|
[
"spi"
] |
[
"408749",
"527407",
"408749",
"408749",
"527407"
] |
[
"171579",
"1228",
"1271",
"173133"
] |
03190972
|
As the vibration is equal to 6 times the working frequency of the device, the solution to remove it was to work at higher frequencies. This would consequently increase the electrical impedance of the coils. To keep reasonable operational voltages, each phase's coil connection was changed from the series connection of the original design to a parallel connection. Experimentally, the working frequency was set to 660 Hz, which made the aluminum plates resonate at 3960 Hz and provided a more comfortable manipulation. For the measurements, a force sensor was mounted on one of the aluminum plates and a digital scope simultaneously recorded the closed-loop current of the electronic drivers and the force sensor signal. An empirical thrust-intensity relation is shown in Fig. 6. Each point corresponds to the mean of ten measurements, and their standard deviations are included in Fig. 6. Detailed measurement values with percent relative standard deviations can be found in Table I. Around 2.2 A the electronic driver starts to limit its output current. This can be due to the electronic driver solution adopted for this proof of concept. The time constant of the interface was also measured empirically in Fig. 7.
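The percent relative standard deviations reported alongside Table I follow the usual definition (100 × sample standard deviation / mean); a minimal Python sketch, with made-up force readings standing in for the ten measurements per point:

```python
import statistics

def percent_rsd(samples):
    """Percent relative standard deviation: 100 * sample std / mean."""
    mean = statistics.fmean(samples)
    return 100.0 * statistics.stdev(samples) / mean

# Ten hypothetical force readings (arbitrary units) for one current setpoint.
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.0, 10.0]
rsd = percent_rsd(readings)  # a few percent for this spread
```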
|
2017
|
[
"spi"
] |
[
"96164",
"96164",
"96164",
"96164",
"96164"
] |
[
"739419",
"745212",
"745154"
] |
01560854
|
Data analysis
We employed partial least squares (PLS) as our analysis approach and utilized the tool smartPLS [START_REF] Ringle | Smart PLS 2.0 M3[END_REF]. PLS is a second-generation regression method that combines confirmatory factor analysis with linear regression, which makes it possible to run the measurement and structural models simultaneously. Table 2 shows item-wise averages and loadings of each construct in the model. For each construct the assessment of convergent validity, or internal consistency, is also included through the composite reliability coefficient [START_REF] Fornell | Evaluating structural equation models with unobservable variables and measurement error[END_REF]. Convergent validity indicates the extent to which the items of a scale that are theoretically related are also related in reality. As we can see from Table 2, all items have significant path loadings greater than the 0.7 threshold recommended by Fornell and Larcker [START_REF] Fornell | Evaluating structural equation models with unobservable variables and measurement error[END_REF]. All the constructs have composite reliability values that exceed the threshold recommended by Nunnally [START_REF] Nunnally | Psychometric theory. 2nd Edition[END_REF]. Testing for discriminant validity involves checking whether the items measure the construct in question or other (related) constructs. Discriminant validity was verified with correlation analysis as recommended by Gefen and Straub [START_REF] Gefen | A practical guide to factorial validity using PLS-Graph: Tutorial and annotated example[END_REF].
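For reference, the Fornell–Larcker composite reliability mentioned above can be computed directly from standardized item loadings; a minimal Python sketch (the loading values in the usage line are hypothetical, not taken from the study):

```python
def composite_reliability(loadings):
    """Fornell-Larcker composite reliability:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each error variance is 1 - loading^2 for standardized loadings."""
    s = sum(loadings)
    error = sum(1.0 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Four hypothetical items, each loading 0.8 on the construct.
cr = composite_reliability([0.8, 0.8, 0.8, 0.8])  # about 0.88
```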
|
2011
|
[
"info"
] |
[
"93061"
] |
[
"1012511"
] |
01560844
|
Moodle is an open source course management system and has become very popular among educators for creating dynamic online course websites for students. Moodle can be used to conduct online courses or to augment face-to-face courses (hybrid courses). This study was conducted in an internationally acknowledged, multidisciplinary scientific university in Finland. The university has seven faculties. The university has been using Moodle since 2007 as its platform for creating course pages online. Data was collected via a web-based survey from the students of the university who use Moodle in hybrid courses. A list of students' email addresses was collected from the Moodle support team in the university. A total of 1100 email invitations were sent to randomly selected students of the university who had been registered in Moodle as student users. One reminder was sent to increase the response rate after a gap of one week. The survey ran for approximately two weeks.
|
2011
|
[
"info"
] |
[
"93061"
] |
[
"1012511"
] |
01405060
|
Type of construction work. These steps are similar to the ones defined by Group 2, but the main theme for Group 3's design was simplicity for the citizen. For instance, they focused on letting office clerks decide whether or not a submitted application should be categorized as "permit application" or "notification" such that the citizen would not need to decide. This contrasts with the solutions suggested by Groups 1 and 2, which suggested that the user (citizen) should decide. Summary. From the overview of the design process, we see that the redesign strategies varied between top-down and bottom-up approaches and that the three groups addressed a different number of usability problems. The main themes of the redesign proposals also differed between the three groups, but all applied a wizard approach of three to four steps. The wizard steps of Groups 2 and 3 were similar, while Group 1 selected an alternative order and also designed an extra step. All steps in the redesign solutions deviated from the order in the original pdf form. Problem Understanding
This subsection describes our findings on the developers' perception of the problems of the system. In the following we describe the numbers and categories of identified strengths and weaknesses and the collective list of these as prioritized by the five participants. Categories of Strengths and Weaknesses.
|
2014
|
[
"info"
] |
[
"300821",
"300821",
"473975",
"300821",
"300821"
] |
[
"994709",
"994710",
"994711",
"994712"
] |
01646718
|
We say that a type connective is positive if its right introduction rule is non-invertible, and negative otherwise: (→, ×, 1) are negative, and (+, 0) are positive. It is easy to decide equivalence of the simply-typed λ-calculus with only connectives of one polarity: we previously remarked that it is easy to define canonical forms in the negative fragment STLC(→, ×, 1), but it is equally easy in the positive fragment STLC(+, 0). It is only when both polarities are mixed that things become difficult. A key result of Zeilberger (2009, Separation Theorem, 4.3.14) is that a focusing-based presentation of the simply-typed λ-calculus is canonical in the effectful setting where we assume that function calls may perform side-effects -at least using the specific reduction strategy studied in CBPV (Levy 1999). Two syntactically distinct effectful focused program expressions are observationally distinct -the canonicity proof relies on two distinct error effects, to distinguish evaluation order, and an integer counter to detect repeated evaluation. The fact that any λ-term can be given a focused form comes from a completeness theorem, the analogue of the completeness of focusing as a subset of logical proofs. However, this syntax is not canonical anymore if we consider the stronger equivalences of pure functional programming, where duplicated or discarded computations cannot be observed. Let us write P, Q for positive types, types that start with a positive head connective, and N, M for negative types, that start with a negative head connective. In a λ-term in focused form with types of both polarities -see a complete description in Scherer (2016, Chapter 10) -a non-invertible phase can be of two forms which we shall now define. It can start with a positive neutral (p : P), which is a sequence of non-invertible constructors of the form σi for positive types; they commit to a sequence of risky choices to build the value to return.
|
2017
|
[
"info"
] |
[
"29479"
] |
[
"170100"
] |
01646064
|
In programming terms, the fact that the right implication rule is invertible corresponds to an inversion principle on values: without loss of generality, one can consider that any value of type A → B is of the form λx. t, and any value of type A1 × A2 is of the form (t1, t2). This is strictly true for closed values in the empty context, but it is true modulo equivalence even in non-empty contexts, as is witnessed by the η-expansion principles. If a value (t : A → B) is not a λ-abstraction, we can consider the equivalent term λx. t x. But it is not the case that any value of type A + B is of the form σi t, as our example X + Y ≃ Y + X demonstrated. Inspired by focusing, we look back at our grammar of βη-normal forms: it is not about constructors and destructors, it is about term-formers that correspond to invertible rules and those that do not. To gracefully insert sums into this picture, the non-invertible σi should go into the neutrals, and case-splitting should be a value. [START_REF] Scherer | Which simple types have a unique inhabitant? [END_REF] introduce focusing in more detail, and present a grammar of focused normal forms which is, lightly rephrased, as follows:
values t ::= λx.
|
2017
|
[
"info"
] |
[
"29479",
"56052"
] |
[
"170100"
] |
01366688
|
One should also consider the differences in level of control between a learner using the FORGE tools and a direct testbed user for which a FIRE facility was envisioned. This is especially relevant when considering troubleshooting possible software and hardware failures that are often unavoidable when using state of the art research equipment and immature technologies. When a learner has only access via a web interface, a series of watchdog programs and actions should be defined to recover the experiment state in case there is a deviation from the expected experimentation path. Even when giving learners direct access to the experimentation machines, at least a series of recovery scripts or instructions should be provided, since absolute knowledge of the underlying system and its quirks cannot be expected. Another aspect that we recommend (apart from what is stated above and below) is to focus on a multi-platform approach and easy integration with existing eLearning platforms. Educators desire a good integration within the platforms they are already using. By supporting the creation of widgets that use their FIRE facility, one supports the inclusion of FIRE functionality via the widget in different Learning Management Systems or other digital media (such as eBooks). For example, this has been applied by the inclusion of FORGE widgets into the Moodle-based legacy eLearning LMS of Universidad do Brasilia and by the coupling of the Central Authentication Service (CAS) mechanism for student accounts of Ghent University. This allowed the learners seamless access to the lab, while also maintaining both user authorization and authentication. All interactions of the learner with the widgets and underlying FIRE facility should be collected using Learning Analytics, from the initial reservation of resources to the actual interactions during the lab.
All learners should be uniquely identified so the full learning path can be analysed and where possible, technically and legally, the learner should be coupled to his/her real-life identity and university account if applicable.
|
2016
|
[
"info"
] |
[
"389034",
"389034",
"118333",
"302729",
"302729",
"300765",
"466439",
"466439"
] |
[
"9674",
"976595",
"858104"
] |
01594084
|
It appears that it would be very difficult to reach this objective for the Caribbean country producers. By contrast, the transitional tariff-rate quota regime could benefit West African countries where production costs are lower and where some multinationals (Dole and Del Monte) now run large plantations. West African countries have welcomed the new EU import banana regime. However, the quota C level could limit their future exports to the EU. Furthermore, as their historical import rights are smaller than expected exports, licenses would have to be purchased to export additional bananas. There is no certainty that the tariff-only regime will enter into force on 1 January 2006. The setting of the appropriate tariff is likely to be a point of considerable discussion until the deadline. The banana industry in ACP countries, notably in Caribbean states, is clearly at a competitive disadvantage with respect to LA suppliers. An EU policy that combines a simple tariff on dollar banana imports with direct aid to preferred suppliers presents several advantages relative to a multiple tariff-rate quota regime with cross-subsidization of non-preferred suppliers through allocation of import licenses within the preferred suppliers' quota. It reduces distortionary impacts and eliminates the quota rent problem.
|
2003
|
[
"shs"
] |
[
"52709",
"39083",
"52709",
"39083"
] |
[
"735969"
] |
02850915
|
TC may have the effect of reducing the quantity utilised of some inputs. Equally, TC may increase the utilisation of other inputs. From a policy perspective, it may be interesting to know whether TC has a tendency to reduce labour use in agriculture. This information may be helpful to plan intersectoral shifts in the labour force into other sectors and to develop retraining programmes to ensure that the labour force is gainfully employed in other occupations. Based on Hicks's original definition and assuming a two-input, one-output linearly homogeneous technology, technical change is said to be neutral if it leaves unchanged the rate of substitution between input pairs. However, as noted by [START_REF] Blackorby | Extended Hicks Neutral Technical Change[END_REF], "to compare situations before and after technical change, something must be held constant. Exactly what is to be held constant has been the subject of some debate and constitutes the crux of the issue at hand". If factor endowments are held constant, technical change is measured along a ray where factor proportions remain the same. For agricultural technologies, at both the firm and the farm levels, it seems more useful to define neutrality holding factor price ratios constant [START_REF] Binswanger | The Measurement of Technical Change Biases with Many Factors of Production[END_REF]. The dual measure of technical change biases he proposes is:
B_it = ∂ log S_i / ∂t, with B_it > 0 if TC is input i-using.
|
1993
|
[
"shs"
] |
[
"37696",
"37696",
"305143"
] |
[
"735969"
] |
00751180
|
Inferential processing may apply to the result of other types of processing. E.g., an agent can infer a proposition from what she 'sees', i.e. from the result of applying visual processing to visual cues. For quelque, what counts is the processing. More precisely, quelque requires that the corresponding existential proposition be reached through inferential processing. (11)
C-inf A form [quelque] x [R] [S]
is appropriate only under interpretations where the epistemic agent infers that
[∃] x [R] [S]. In most cases, inferential processing leads to conclusions that are weaker than those reached through perceptual processing, which accounts for the contrast in (12). (12) a. Yolande a dû epist ouvrir la porte. En fait, je me souviens, je l'ai vue 'Yolanda must epist have opened the door. Actually, I remember I saw her' b. ?
|
2007
|
[
"shs"
] |
[
"51028",
"1053"
] |
[
"859670"
] |
01215546
|
FIGURE 5 ABOUT HERE
The labyrinth of the Chartres Cathedral (source: authors). This image-object was often present in Gothic cathedrals. In France, only two remain visible, at the Chartres and Amiens cathedrals. At Chartres, the pilgrim coming into this symbolic space is destined to live a three-step experience (over the 261.5 metres of the labyrinth!) which was a way to do the Jerusalem or Compostela pilgrimage [START_REF] Attali | Chemins de sâgesse[END_REF]. Firstly, he or she will think that the way to the centre will be easy. Then, it may seem that the path moves him/her more and more towards and around the extremities of the labyrinth. A feeling of being lost sets in. Then, just as the pilgrim becomes desperate, his/her path leads to the centre. One must imagine a pilgrim, tired and exhausted.
|
2015
|
[
"shs"
] |
[
"389243",
"1032",
"185880"
] |
[
"914586",
"921507"
] |
00671690
|
The History of the "putting out" systems [START_REF] Kieser | Why organization theory needs historical analyses -and how this should be performed[END_REF] could be compared to current outsourcing and issues of de/centralisation. Putting out was a complex network of contracts of manufacture, usually analysed through labour process analysis (workers' control of product and process, division of labour, factory systems, technical superiority, matching of technology with skills, family lives) during the industrial revolution in Western societies, especially the UK. Historical material shows that putting out was a consequence, rather than the cause, of a division of work that was already in existence across rural communities in the North West of England in the textile industry. The centralisation of production was triggered by the need to fill the capacity of large-scale machinery, but putting out systems were far more effective than the centralized factory. Factory owners were forced to compromise as they were unable to find a technology for decentralised production. One could see parallels with the contracting out of workers through increasingly mobile ICTs, which takes place within countries and globally across borders, as opposed to just within regions. It may be possible to contrast and compare across cases, to highlight features particular to each historical context in order to gain some unexpected insights into current practices. While we are not suggesting that History repeats itself, informed historical analyses could serve to reflect on current thinking and critique existing theories of IT-enabled work design, for instance the consequences of offshoring on communities in both Southern and Northern parts of the world (see Howcroft and Richardson, 2010). The historiography of influential ideas and thinkers on action research and change management could bring insights into the topic of participatory design and empowerment through ICTs.
[START_REF] Cooke | Writing the left out of management theory: The historiography of the management of change[END_REF] looked at the work of Kurt [START_REF] Lewin | Action research and minority problems[END_REF], who is noted for the development of action research in organizational studies.
|
2012
|
[
"shs"
] |
[
"185880",
"389243"
] |
[
"921507",
"914586"
] |
04027124
|
For this mathematical task in the chemistry course, the praxeology is not the praxeology that would be expected in the mathematics course. TC confirmed in the interview that the students were not able to solve this exercise. We observe in this example several issues associated with the didactical praxeology for teaching T_nsv (which belongs to the praxeology at the discipline level T_bmt). Firstly, the "reminder" can in fact correspond to new knowledge. Here the property can be proven with secondary school knowledge, but it requires a complicated proof. Moreover, students are not familiar with vector projections at secondary school. Secondly, for what TC identifies as a mathematical exercise, the kind of justification expected is very different from what would be expected in a mathematics course.
DISCUSSION
Are the didactical types of task (table 2) and the associated praxeologies specific for the target public of non-specialist students? In this section we discuss our results in order to answer this question. Our aim was not to compare the three teachers; nevertheless we also present some hypotheses about the differences between the didactical praxeologies they developed for the same types of tasks.
|
2022
|
[
"shs"
] |
[
"1041771",
"199013",
"238177"
] |
[
"2438",
"1329187",
"8750"
] |
03655658
|
La forma ensayo[END_REF][START_REF] Alter | Chris Marker[END_REF][START_REF] Alter | The Essay Film after Fact and Fiction[END_REF]. This article aims to carry out an unprecedented in-depth analysis of Sans soleil's "thinking in act" (Moure 2004, 37), considering the essay film as a filmic form that, through the subjectivity of the filmmaker, generates a properly audiovisual thinking process, which arises from the relationships established among the elements of the sound image and the visual image. Continuing the essay film theoretical developments of Josep Maria Català (on "parataxic thinking" (2014, 209), focused on the juxtaposition of different elements) and Laura Rascaroli (on "interstitial thinking" (2017, 190), centred on the interstice that arises from that juxtaposition), I will analyse Marker's essay film and its cinematic thinking process as a materialisation of Gilles Deleuze's time-image and crystal-image (1989). To do so, I will use the concept of sentence-image defined by Jacques Rancière as the materialisation of the essay film's thinking in act: "The sentence is not the sayable and the image is not the visible. By sentence-image, I intend the combination of two functions that are to be defined aesthetically, that is, by the way in which they undo the representative relationship between text and image" (2009, 46). Thus, the sentence-image, which generates cinematic thinking, oscillates "between two poles, dialectical and symbolic [...] between the image that separates and the sentence which strives for continuous phrasing" (58). By creating different sentence-images, Marker develops a thinking process that forces the spectator to constantly transform the actual image/virtual image relationship of the film, concepts that Deleuze takes from Bergson to apply to the analysis of the time-image and the crystal-image. The first offers a direct image of time: "It is no longer time that depends on movement; it is aberrant movement that depends on time.
The relation, sensory-motor situation → indirect image of time is replaced by a non-localizable relation, pure optical and sound situation → direct time-image" (1989, 41). The second achieves the indiscernibility between actual image and virtual image: "the coalescence of an actual image and its virtual image, the indiscernibility of two distinct images" (127).
|
2022
|
[
"shs"
] |
[
"1063811"
] |
[
"748938"
] |
04162605
|
Jiménez's reflection on the nature of filmic material now extends to the position of the spectator: "To make a film is to mask; hide a part of oneself, so that it emerges for others, on those who see, listen to." The autobiographical account begins with the first childhood memories in the Andes, where Jiménez lived until she was six years old. The early childhood memory of Lima is then associated with the memory of the mother. This is how the central device of the film begins: the revisiting of the physical spaces of her memories and, in some cases, the recreation of the experiences lived in them. Thus, the earliest memory of the mother, the taxi ride to the ballet, is narrated from the present physical position in this space. The daughter hands her mother her school reports: "I know that if I am number 1 instead of 37, my mother will love me again. I'm going to try. But I don't know why this idea hurts me." The film turns the autobiographical memory into a filmic revisitation and also into a kind of psychoanalytic regression in which the child character takes the floor. Jiménez confirms the mutation of the memory, the transformative capacity of these recreations already enunciated regarding the Andes: "From now on, when I think of my pain in the absence of my mother's love, it will be the images of this film that will come to mind."
|
2023
|
[
"shs"
] |
[
"1063811"
] |
[
"748938"
] |
01954577
|
Log-linear regressions were performed using Stata 9 [START_REF] Statacorp | Stata Statistical Software: Release 9[END_REF], by regressing the quantity of land entered into HLS (qland) on the payment rate per contract (avepr) and the average distance to the three main cities (avedist). The HLS data sample is truncated, as successful HLS entrants are mostly selected from a population of farmers enrolling into the (O)ELS part of the Environmental Stewardship Scheme, and only operating HLS contract data were available. Both truncated and OLS regressions on the log-transformed variables for the given sample led to similar results, so only the OLS results (log-linear model) are reported in Table 2. Under a given budget constraint and controlling for the weighted environmental benefit per hectare (distance to cities), the quantity of land entered is hypothesised to decrease for higher average payment rates (hypothesis 1). A negative coefficient for the average payment rate per contract is consequently expected in the regression analysis. With land closer to cities having a higher environmental value per hectare, for constant payment rates, the quantity of land entered is hypothesised to decrease as the distance from the main cities increases. A negative coefficient for the average distance to main cities is thus expected in the regression analysis (hypothesis 2). The adjusted R2 value is relatively high (49%) for cross-sectional data, possibly reflecting the fact that the sample is drawn from the same area with similar characteristics. All coefficients display the expected negative signs, and both the coefficient for average payment rates and that for the average distance to main cities were found significant at 1%. No heteroskedasticity was detected (Breusch-Pagan / Cook-Weisberg test: Chi2 test statistic (1) = 0.21; p-value of 0.65).
|
2010
|
[
"shs"
] |
[
"300739",
"300739",
"4177"
] |
[
"170787"
] |
01391484
|
The obtained formula is valid for any right-hand side in equation (16a). For instance, additional forcing such as an Ekman stress could be taken into account. In equation (17), the solenoidal component of the velocity, ∇⊥ψ, corresponds to the usual geostrophic velocity multiplied by a low-pass filter (17b). The irrotational (ageostrophic) component of the velocity, ∇ψ, dilates the anticyclones (maximum of pressure and negative vorticity) and shrinks the cyclones (minimum of pressure and positive vorticity) at small scales. Indeed, according to equation (17c), the divergence of the velocity corresponds to the vorticity Laplacian divided by k_c^2. Naturally, this structure is reminiscent of the Ekman model where divergence and vorticity would be related by a double vertical derivative:
δ = (E_k^2/2) ∂_z^2 ζ, where δ = ∇·u, ζ = ∇⊥·u, (18)
and E_k is the thickness of the Ekman layer. The turbulent diffusion involved in equation (17c) is rather horizontal, due to the strong stratification assumption (see (10)). In the proposed stochastic model, the divergent component and the low-pass filter of the system (17) are parameterized by the spatial cutoff frequency k_c, which moves toward larger scales when the diffusion coefficient a_H increases. If both the vorticity and the divergence can be measured at large scales, the previous relation should make it possible to estimate the cutoff frequency k_c by fitting the terms of equation (17c). Then, the horizontal diffusion coefficient, a_H, or the variance of the horizontal small-scale velocity (at the time scale ∆t), a_H/∆t, can be deduced.
|
2017
|
[
"spi",
"phys"
] |
[
"486012",
"300022"
] |
[
"952791",
"853729"
] |
01391420
|
With such a velocity, the expression of the material derivative is changed. To make this change explicit, we introduce the stochastic transport operator, D_t. The material derivative generally coincides with this operator, especially for tracer transports. Otherwise, the difference between these operators has a simple analytic expression. The stochastic transport operator involves an anisotropic and inhomogeneous diffusion, a drift correction and a multiplicative noise. These terms are specified by the statistics of the sub-grid velocity. The diffusion term generalizes the Boussinesq assumption. Moreover, the link between the three previous terms ensures many desired properties for tracers, such as energy conservation and a continuous increase of variance. For a passive tracer, the PDEs of the mean and variance fields are derived. The unresolved velocity transfers energy from the small-scale mean field to the variance.
|
2017
|
[
"spi",
"phys"
] |
[
"486012",
"300022"
] |
[
"952791",
"853729"
] |
02008002
|
At such RBER values, one could expect that the reduction would become very high or reach infinity. This is not the case, as retention errors may still occur and trigger refresh operations even with the proposed method. The reductions could be improved by increasing the value of α_DAMP in (4) at the cost of a smaller tolerated retention RBER. The number of refresh-triggered erase operations and, implicitly, the time required for the execution of such operations, is reduced to a larger extent than the time spent for refresh-triggered read and write operations. This means that the figures reported for the reduction of the read and write operations can be used as a lower bound for the reduction of the time spent for all three types of refresh-triggered operations.
Fig. 9 Reduction of the number of refresh-triggered erase operations compared to a systematic scheme with a fixed refresh frequency. Each curve stops at the maximum tolerated RBER. The considered parameters are the same as those used in Fig. 5.
VI. CONCLUSIONS
An approach was proposed to improve the tolerated raw bit error rate (RBER) in NAND flash-based SSDs via an estimation of the remaining retention time. This estimation can be performed each time a flash memory page is read and relies on the number of detected retention errors and the calculated retention age, i.e., the elapsed time since data was programmed.
|
2019
|
[
"spi",
"phys"
] |
[
"577943",
"577943",
"577943",
"487992"
] |
[
"1121850",
"172470"
] |
01332895
|
The paper shows that EDF has a zero competitive factor but nevertheless is optimal for online non-idling settings.
INTRODUCTION
Energy harvesting is a technology that captures otherwise unused ambient energy and converts it into electrical energy that can be used immediately, or later thanks to a storage unit [START_REF] Priya | Energy Harvesting Technologies[END_REF]. This approach extends the life of batteries (or eliminates them entirely) and decreases maintenance. A variety of techniques are available for energy harvesting, including solar and wind power, ocean waves, piezoelectricity, thermoelectricity, and physical motion. Energy harvesting is a perfect match for wireless devices and wireless sensor networks that otherwise rely on battery power. Some of the main applications include operating as a power source for human wearable electronics, supplementing battery storage devices, etc. Another key application that is being investigated in great detail is miniature self-powered sensors in medical implants for health monitoring and embedded sensors in structures such as bridges and buildings for remote condition monitoring. Levels of harvested energy may vary significantly from application to application. Therefore, sparing usage of available energy is of utmost importance. The system we target consists of three components (see Figure 1): a single processing unit with unique voltage and frequency, an energy harvester and a rechargeable energy storage. We address the scheduling problem that arises in an energy harvesting system with real-time constraints, where tasks have to meet deadlines.
|
2014
|
[
"info"
] |
[
"21439",
"21439"
] |
[
"883818",
"17150"
] |
00822557
|
EDF is consequently the algorithm of choice under normal functioning, since any feasible task set is guaranteed to be successfully scheduled by EDF. However, the feasibility analysis problem turns out to be less straightforward, because any computing system can be subject to unpredictable situations that stop the scheduler from guaranteeing all the deadlines. In order to make this scheduling algorithm resilient to exceptions, which are mainly failures and overload, the algorithm must be combined with specific techniques, first to recover from failures and second to cope with transient overload.
C. Overload Management
Several approaches have been proposed to address deadline misses in firm real-time systems. In the (m,k)-firm model, at least m jobs out of any k consecutive jobs from the same task must meet their deadlines for correct functioning [START_REF] Hamdaoui | A dynamic priority assignement technique for streams with (m, k)-firm deadlines[END_REF]. The elastic task model is an attractive model for adapting real-time systems in the presence of overload [START_REF] Buttazzo | Elastic Task Model for Adaptive Rate Control[END_REF]. The method is to reduce the load by enlarging activation periods. Tasks' periods are considered as springs and can change to adapt the QoS so as to keep the system underloaded. The Skip-Over model can also be used to handle overload conditions [START_REF] Koren | Skip-over algorithms and complexity for overloaded systems that allow skips[END_REF]. Koren and Shasha look at the problem of uniprocessor overload by authorizing occasional deadline violations in a controlled way. A periodic task τ_i is characterized, besides its basic parameters, by a skip parameter s_i.
|
2013
|
[
"info"
] |
[
"21439",
"21439",
"21439"
] |
[
"883818",
"17150"
] |
01676179
|
Following discussions, one of the sketches was selected as the base sketch to work from (DP 2) and some of the features on the list DA 4 were removed as not being necessary (DP 3). In the final meeting, the aim was to come up with a final design. The desired elements were reviewed and then a layout was created (DA 6). It was observed that one of the students was designated to draw the design while the others gave suggestions and comments. Initially the participants wanted all the desired elements to show up on the homepage, but this would have resulted in a cluttered look. At this point, they went back to reviewing existing related apps and websites and, based on existing different designs, managed to create their final design sketch (DP 4, DA 7). As was seen in study 1, constraints existed which had an effect on the decision-making process. The most evident constraint was that of time (C 1): towards the end of each hourly meeting there was an obvious pressure to achieve something, which led to ideas being accepted or discarded hurriedly in order to reach a resolution. This particularly affected DP 3 and was directly responsible for DP 4, which led to a final result based more on the review of existing solutions than on all of the previous work undertaken. The second constraint was the skill level of the participants (C 2), which meant that they looked at superficial aspects of the design only (no discussions of technical aspects) and, having created the personas and scenarios (DA 3), they used these only in the creation of the feature list DA 4, but otherwise never made use of them again.
|
2017
|
[
"info"
] |
[
"243421",
"82150"
] |
[
"1025906",
"1017271"
] |
04267389
|
Despite a more manageable training process compared to GANs, diffusion models necessitate a multi-step sampling procedure during inference, extending the processing time. This becomes particularly problematic for real-time applications like network traffic generation, where the demand is for the rapid generation of tens of thousands of flows per second, especially in high-throughput settings. This situation underscores the need for optimization techniques that can expedite the inference process of diffusion models while preserving generative quality. Dimensionality of traffic. Generating network traffic data introduces unique challenges stemming from the intrinsic structure of the data. For instance, both input and output lengths can vary, requiring a model capable of handling an inconsistent number of packets in each flow. Additionally, the high dimensionality of each packet, particularly when payloads are included, can complicate the training process and necessitate significant computational resources. Finally, network traffic flows can encompass up to tens of thousands of packets, further escalating the task's complexity. Traditional machine learning models might struggle with this sheer scale of data, underscoring the need for tailored solutions for network traffic data synthesis. Generative foundation model beyond traffic generation.
|
2023
|
[
"info"
] |
[
"129172",
"129172",
"301301",
"529665",
"1084804",
"129172"
] |
[
"740518"
] |
03135284
|
An analysis of Facebook's edge network shows that less-developed regions exhibited larger performance degradations [START_REF] Böttger | How the internet reacted to covid-19 -a perspective from facebook's edge network[END_REF]. Network latencies were approximately 30% higher during the lockdown in Italy [START_REF] Feldmann | The lockdown effect: Implications of the covid-19 pandemic on internet traffic[END_REF]. According to an NCTA report, networks in the United States saw less congestion [START_REF]NCTA: COVID-19: How Cable's Internet Networks Are Performing: METRICS, TRENDS & OBSERVATIONS[END_REF]. Due to decreased user mobility, cellular network patterns have shifted [START_REF] Lutu | A characterization of the covid-19 pandemic impact on a mobile network operator traffic[END_REF]: the authors found a decrease in the average user throughput as well as fewer handoffs. Feldmann et al. [START_REF] Feldmann | The lockdown effect: Implications of the covid-19 pandemic on internet traffic[END_REF] observed that the fixed-line Internet infrastructure was able to sustain the 15-20% increase in traffic that happened rapidly during a short window of one week. Our work differs from and builds on these previous studies in several ways: First, this study extends over a longer time frame, and it also uses longitudinal data to compare traffic patterns during the past six months to traffic patterns in previous years. Due to the nascent and evolving nature of COVID-19 and the corresponding ISP responses, previous studies have been limited to relatively short time frames, and have mainly focused on Europe. Second, this work explores the ISP response to the shifting demands and traffic patterns; to our knowledge, this work is the first to begin to explore ISP and service provider responses.
Application Measurements during COVID-19.
Previous work has also studied application usage and performance, such as increases in web conferencing traffic, VPN, gaming, and messaging [START_REF] Feldmann | The lockdown effect: Implications of the covid-19 pandemic on internet traffic[END_REF].
|
2021
|
[
"info"
] |
[
"129172",
"46584",
"7118",
"432896",
"129172"
] |
[
"740518"
] |
00582826
|
An enumeration is now possible, together with an identification/segmentation of these trees according to objective criteria such as canopy size, average colour and the local density of the coconut tree fields. Finally, a ground-truth validation is performed in order to estimate the detection rate and error for each coconut tree class type, leading to a precise extrapolation of the global number of trees.
DATA
IKONOS optical data are widely available over the whole Tuamotu archipelago, and their high spatial resolution (about one metre at ground level) is sufficient to achieve our objective. The study focuses on the atoll of Tikehau, which is well known to specialists and easily accessible from Tahiti, as a validation study area before extending the method to the rest of the Tuamotu atolls. The Tikehau data set was acquired by IKONOS2 in July and August 2003 and is already ortho-rectified and registered in the WGS84 projection. As the complete mosaic of the atoll of Tikehau has a size of 22032 by 15614 pixels, the original image is cut into sub-images, each one locating a motu (a small island constituting an atoll). A motu is then selected in order to validate the proposed method.
TREE FIELDS CLASSIFICATION
The coconut tree crown segmentation process must be applied in coconut field areas to avoid false alarms. In the images, several structures can be distinguished, such as the sea, the sand, the coral and some dwellings, as well as the vegetation (coconut trees and other atoll vegetation types). First, it is necessary to generate high-vegetation masks before applying the segmentation process. Due to the lack of a near-infrared band (not available in our database), it is not possible to compute the well-known NDVI vegetation index.
|
2008
|
[
"info"
] |
[
"389984",
"2411",
"389984",
"254642",
"254642"
] |
[
"879106",
"1325537",
"1164575",
"743580"
] |
02879271
|
We therefore consider this to be the optimal case. However, (s_1 s_2)^{1/2} = 4dB can also be achieved by very different choices of parameters, e.g., s_1 = 7dB and s_2 = 1dB. In this case we explicitly show that no Wigner-negativity can be generated remotely with only photon subtraction, i.e., condition (6) is not satisfied. However, by implementing a local Gaussian transformation R = S^{-1} in mode g, we can fulfil (9) and reach a significant amount of Wigner-negativity, equal to the optimal case W_min ≈ -0.135/2π. This example shows that the main role of the local Gaussian transformation R is to balance the noise in modes f and g. This explains why the symmetric setup with s_1 = s_2 = 4dB is the optimal case. Impure two-mode states do not only arise due to losses; they could also originate from entanglement to additional modes. To explicitly explore this case, we now subtract a photon from a mode in a larger multimode state. In particular, we consider CV graph states [5][6][7][8][9], which form the backbone of measurement-based quantum computing in CV [44], and have tractable entanglement properties. Recently, EPR steering was experimentally observed in such a system [33]. These states are Gaussian, with a covariance matrix that is built in accordance with a graph G as a blueprint.
|
2020
|
[
"phys"
] |
[
"541692",
"541692"
] |
[
"749023"
] |
03037730
|
To get the isomorphism [START_REF] Crespi | Suppression law of quantum states in a 3D photonic fast Fourier transform chip[END_REF], all that remains to be done, is to define the action of U on the vacuum:
U |0⟩_{G⊕K} = |0⟩_G ⊗ |0⟩_K. (43)
With these definitions, and with [START_REF] Dittel | Totally destructive interference for permutation-symmetric many-particle states[END_REF][START_REF] Ou Z Y, Rhee | Photon bunching and multiphoton interference in parametric down-conversion[END_REF], we can now understand the isomorphism [START_REF] Crespi | Suppression law of quantum states in a 3D photonic fast Fourier transform chip[END_REF] in a much more elegant way. It is also insightful to revisit the single-mode spaces F_B(C) and F_F(C) in the light of second quantisation. First of all, it should be emphasised that the single-mode space only has a single creation (and annihilation) operator a†. As we stressed before, the mathematical framework is essentially defined by the calculus of creation and annihilation operators. For the bosonic single-mode Fock space, we find that [START_REF] Crespi | Integrated multimode interferometers with arbitrary designs for photonic boson sampling[END_REF] reduces to [a, a†] = 1, which is exactly the commutation relation that describes the ladder operators of a harmonic oscillator. For the fermionic case, we find that (39) describes an operator with properties {a, a†} = 1 and (a†)^2 = 0. This is exactly the recipe for the Pauli operator σ+, given by the matrix
σ+ = [0 1; 0 0], (44)
which solidifies the connection between fermionic systems and spin chains. This concludes our description of how second quantisation is used to describe states. However, the full potential of the formalism stems from its possibility to also describe observables, as we will see in the next section.
|
2020
|
[
"phys"
] |
[
"541692"
] |
[
"749023"
] |
00656774
|
The calculated configuration of these states is as much as 80-90% pure πd 5/2 ⊗ νd 3/2. Noteworthy is the fact that both the USD [START_REF] Brown | [END_REF] and USDA/USDB [16] interactions predict the J = 4 state to be a β-decaying isomeric state, partly connected to the ground state by a delayed M3 transition. In all interactions, the J = 3 state is found to be unbound, at an excitation energy of about 1.7 MeV. Taking the value of 0.80 (12) MeV for the neutron emission threshold, the J = 3 state is predicted to be unbound by about 0.9 MeV. Consequently, it is reasonable to discard the two possibilities of J = 4 and J = 3 for the observed excited state at 657 (7) keV, the former being possibly a long-lived isomer, the latter being likely unbound. We therefore ascribe the observed peak at 657 (7) keV to the decay of the J = 2 excited state to the J = 1 ground state. Besides the 'normal' positive-parity states, low-lying negative-parity states could be present in 26 9 F 17. Indeed, a 3/2- intruder state has been discovered at 765 keV above the 3/2+ ground state in the 27 10 Ne 17 isotone [5][6][7]. From the recent work of Ref. [8], it is confirmed that the 3/2- state has a large p 3/2 component.
|
2012
|
[
"phys"
] |
[
"119",
"506920",
"119",
"119",
"119",
"119",
"506920",
"388739",
"506920",
"506920",
"506920"
] |
[
"755248",
"755406",
"755220",
"903877",
"741727"
] |
03339021
|
So, take courage and prepare yourselves for battle in the morning. Do not fear their numbers, for God has the power to deliver the many into the hands of the few. No strength of soldiers can equal a just cause. Remember that you fight not because you want to but because it is necessary. You go to battle not for glory nor dominion but for survival and your lives. Only the sword can open our road to life: we must either win or die. But is it not more glorious to die in battle by the sword of a soldier than in one's home by the dagger of a spy? I would rather die in battle than in a city or in a prison. Nobody shall kill me with impunity. Here, here we must fall, where our sword can avenge us.
|
2021
|
[
"shs"
] |
[
"59299"
] |
[
"15311"
] |
03337393
|
Also, in complex sentences, the original subject or object is often given instead of a pronoun. Names of persons have been retained in the original language 1 (with certain exceptions, see the Index of Persons). It may seem strange to English-speaking readers to read "Emperor Friedrich" instead of "Emperor Frederick", but most will now accept "King Louis" instead of "King Lewis", which was used formerly. The practice of using the original form of names has been followed in a spirit of cultural internationalism, for which the editor requests the reader's tolerance. The same courage, however, was not shown concerning names of places: well-known places like Rome have been given in English. This leads to somewhat inconsistent forms like "Duke Philippe of Burgundy". In this area, complete consistency appears to be really difficult. Texts from the Bible are quoted from the Douai-Reims edition, sometimes in a form slightly modified to fit Piccolomini's text. Texts from classical authors are quoted from the Loeb edition, also sometimes modified. Rare is the translation in which no unrecognised errors have survived, and this applies, of course, to the present translation, too: the reader's generous benevolence is solicited!
|
2021
|
[
"shs"
] |
[
"59299"
] |
[
"15311"
] |
01857756
|
It is also important to note that compliance is not governed by a belief in the legitimacy or validity of the norms embodied by protocols, but is driven instead by social desirability, i.e. when an individual produces the expected behavior, it is because s/he is adjusting to a social norm (in this instance, the expectations of superiors). It is important therefore to consider the reasons that may account for noncompliance by examining naïve knowledge, a potential obstacle hindering the implementation of hygiene protocols. These issues pertain more specifically to the articulation of scientific knowledge and naïve knowledge and the dynamics subtending the relation between these two kinds of knowledge [START_REF] Moscovici | Social representations and social explanations: from the « naive » to « amateur[END_REF]. Two specific groups of healthcare staff were targeted in this research: nurses and healthcare assistants. Nurses and healthcare assistants are the two groups most frequently in contact with patients and may therefore be said to play a determining role in the provision of healthcare and the transmission of hospital-acquired infections. However, it is important to note that the work performed by nurses and healthcare assistants (i.e. staff practices) and their level of training (staff knowledge) are not comparable. It is hypothesized that social representations of hygiene are likely to differ between the two groups. Study 1: Questionnaires assessing Representations of Hygiene among Nurses and Healthcare Assistants
Method and Design
The study was based on a verbal association task included in questionnaires. The aim was to highlight the key concepts structuring representations of hygiene. Population
114 nurses and 35 healthcare assistants were interviewed as part of this study.
|
2018
|
[
"shs"
] |
[
"477907",
"477907",
"103953",
"57629"
] |
[
"12693",
"901761",
"740522"
] |
01904917
|
Observing the flow of information through the circulation of messages involves looking at the modes of stakeholder participation. They are materialized in information-communication practices. To answer this question, the analysis is based on the classification of the accounts. In order to characterize Twitter accounts, we used some of the attributes proposed by [START_REF] Juanals | Analysing cultural events on twitter[END_REF]: "relayed", "relaying", "mentioned" and "passing". As pointed out by [START_REF] Juanals | Analysing cultural events on twitter[END_REF], the value of this index is not significant in itself; it simply provides a means of comparing accounts. We identified six passing accounts that had a significant score [START_REF] Juanals | Categorizing air quality information flow on twitter using deep learning tools[END_REF]. The analysis of these passing accounts makes it possible to identify some of their characteristics. These are all accounts of organizations, with the exception of one influencer. It is remarkable that these key influential accounts do not share their communities of accounts. From the whole corpus, the data were partitioned into restricted subcorpora built according to the criterion of the type of stakeholder (organizational or individual).
|
2018
|
[
"shs",
"info"
] |
[
"145342",
"408942",
"1057"
] |
[
"654",
"461"
] |
00610032
|
Two main ideas underlie our approach: one is to provide cartographic representations of standards, the other to assist the navigation of an end-user through the corpus of standards. Considering that there is no universal representation independent of the goals and the organizational context, we aim to provide both graphic and textual representations, and several tools enabling comparison between several standards. It must be emphasized that all the representations are interconnected and that the platform provides specific interfaces allowing the end-user to navigate between them. Furthermore, this navigation will be assisted by applying specific knowledge based on the NaviText model [START_REF] Couto | NaviTexte, a Text Navigation Tool[END_REF]. Textual and Graphic Representation
The glossary of the standardized domain is the main textual tool. For each term, semantic and usage variations in the selected corpus are provided and enriched links (see section 5.2 below) can be followed; at any moment, the textual contexts of the same term in two standards can be compared by accessing them in one or several standards. This very simple tool is extremely useful to preserve conceptual coherence during the writing process of a new standard by using the same word to refer to an identical concept or on the contrary by choosing a new word to highlight the creation of a new concept. Graphic representations complete the glossary. As explained in section 3.5, we consider that conceptual maps (or local ontologies) provide a useful level of abstraction, while at the same time keeping and foregrounding the relations between concepts and qualifying their semantics. For example, relations could be linked to the different phases of the PDCA cycle which governs all the standards.
|
2011
|
[
"shs"
] |
[
"1057",
"1057"
] |
[
"654",
"461"
] |
03282789
|
In Switzerland, data as infrastructure and data governance play a key role in national strategic papers such as the "Digital Switzerland Strategy" (cf. [START_REF] Klievink | Digital Strategies in Action -a Comparative Analysis of National Data Infrastructure Development[END_REF], [START_REF]Schweizerische Eidgenossenschaft: Strategie "Digitale Schweiz[END_REF]) and the renewed "E-Government Strategy 2020-2023". Even though Switzerland is not an early adopter regarding the digital transformation of the public sector [START_REF] Neuroni | E-Government und smarter Staat: Die Schweiz auf halbem Weg[END_REF], data-centric public sector transformation is gaining political awareness and importance. To realize the SDG or the Swiss E-Government Strategy, a successful implementation of the OOP requires the transfer and re-use of sensitive or personal data between government agencies across borders, involving actors on different levels of a political system. Significant efforts towards the development of a technical infrastructure as well as organizational frameworks are currently under way in several research projects. In an analysis of drivers and barriers for OOP implementation in the SCOOP4C project, trust is stressed as an underlying condition [START_REF] Roustaei | Gap analysis report of challenges, needs and benefits of the OOP4C analysis[END_REF], [START_REF] Wimmer | Roadmap for future areas of actions, and policy recommendations[END_REF], [START_REF] Wimmer | Vision of the once-only principle for citizens, including key enablers and major barriers[END_REF]. In the TOOP project, trust is addressed with a technical approach by defining a trust architecture as part of the system design [START_REF] Pavleska | Cybersecurity Evaluation of Enterprise Architectures: The e-SENS case[END_REF], [START_REF] Grandy | Generic Federated OOP Architecture[END_REF]. In a broader perspective, TOOP also links the question of trust to the organizational culture of government agencies.
However, a systematic and scientifically grounded analysis of the organizational challenges is currently lacking. To investigate and elicit the requirements of interoperable data and information sharing along a structured approach, the European Interoperability Framework (EIF) provides a conceptual model for public services and considers data-related services as a basic component of service provision [START_REF]European Commission: New European Interoperability Framework -Promoting seamless services and data flows for European public administrations[END_REF]. The SCOOP4C and TOOP projects rely on this EIF to structure their investigations of the barriers, enablers and architecture for a comprehensive OOP implementation.
|
2020
|
[
"shs",
"info"
] |
[
"147310",
"487376",
"487376"
] |
[
"993332",
"1016625",
"1104839"
] |
02107699
|
It assumes that a given crop only yields one product. It assumes that water serves only one use:
the evapotranspiration of the crop that will be sold. Finally, it assumes that only climatic and agronomic variables determine the quantity of water that is necessary to produce a crop. Such hypotheses rarely resist scrutiny. Agribusinesses with reliable and sizeable infrastructure may benefit from a supply of water on request when operating in very favorable conditions. Palestinian smallholders rely instead on shared springs or farmer-managed shallow wells. In the Mediterranean area, spring flow varies widely through the year. This constrains the amount of water a farmer can access, as does the social organization allowing the farmer to access this spring. All Palestinian springs used in irrigation are shared according to "water turns". These are measured in terms of time periods during which the full flow of the spring is usually channeled towards a farmer's plot. Similarly, farmers relying on wells need to share with their neighbors, which constrains their access.
|
2017
|
[
"shs",
"sde"
] |
[
"107303",
"107303"
] |
[
"174372"
] |
02103773
|
Section 2 first sketches an overview of the various literatures that can contribute to the theoretical framework necessary to study the transformation of irrigation in a multi-scalar manner. Section 3 then examines the nature of pioneer fronts and how they need to be conceptualized. The critical point for this article is that new land control creates new frontiers. A pioneer front is usually defined as a space where agriculture is being extended over previously uncultivated land. We re-examine this definition using cases around the world, especially Africa, and argue that a more precise definition of pioneer fronts should refer to the transformation of our interactions with the environment that are linked to the transformation of power relations within society. A pioneer front involves an in-depth reconfiguration of farmers' interaction with land and water that goes beyond turning to high yield varieties or an increased use of fertilizers. Within a pioneer front, land tenure and water tenure are deeply modified. The modalities of access to both land and water are transformed. Appropriation modalities are transformed. Section 4 then explores case studies of groundwater pioneer fronts and wastewater pioneer fronts presently occurring in the West Bank.
|
2018
|
[
"shs"
] |
[
"107303",
"107303"
] |
[
"174372"
] |
04008698
|
Though the NWP was devised by the PML-N government under the prime ministership of Shahid Khaqan Abbasi, it was Prime Minister Imran Khan of the PTI who first held a meeting of the Council, in October 2018 (National Water Council, 2018). Since then, not a single meeting of the Council has been held, which confirms that the issue has been de-prioritised and de-securitised. FINDINGS AND CONCLUSION
Water scarcity was successfully securitised by two successive governments around the 2018 general elections period. The actors involved in the securitising process were at both the societal and state levels. They created an urgency about the issue, thus ensuring that the provinces which usually held reservations about the construction of new dams on the Indus River and its tributaries consented to the new megaprojects. Construction of the Diamer Basha dam was specifically securitised by the efforts of CJP Saqib Nisar, who launched a crowd-funding campaign. Though the campaign was destined to gather funds for the construction of the dam, it did more than that: it raised awareness among the general population about the urgency of the issue. The situation was such that even the poor sections of society contributed to the dam fund by sending merely PKR 10 via SMS from their mobile devices; cellular companies collected the donations through this simple method and submitted them to the Dam Fund. Different institutions of the state contributed to the fund by offering part of their salaries. Everyone in the electronic and social media was talking about the water scarcity issue and the construction of the Diamer Basha dam.
|
2023
|
[
"shs",
"sde"
] |
[
"1052438",
"472159",
"1052438",
"472159"
] |
[
"1232248"
] |
01322603
|
The case study of the Austrian nationwide public access defibrillation (ANPAD) programme presented in this paper offers an exemplar of the process of co-creation. Here the Austrian Red Cross (ARC) took the lead role in organizing a co-creation network, acting on behalf of citizens and organizing an innovation network capable of creating both the demand and the supply side of a sustainable market for the production and safe application of portable automated external defibrillators (AEDs) by laypeople. This process involved, first, a raising of awareness regarding the need for portable defibrillators, amongst the general public and also politicians. The ARC acted as a representative of users in its dealings with medical professionals, politicians, and private sector businesses. It organized AED training in every first aid training in Austria, worked with research hospitals engaged in establishing an evidence base, worked with firms located in Austria to produce AED devices, and with large businesses to have portable AEDs installed on their premises. The remainder of this paper is organized as follows. Section 2 identifies overlapping areas within the existing social innovation and service innovation literatures. It identifies a common domain of interest, and how these fields of research can usefully be integrated. Section 3 examines the concept of co-creation, which usefully brings together the different strands of literature discussed in section 2. Building on this, section 4 details the theoretical multi-agent co-creation framework that will be used to analyse the dynamics of co-creation in social innovations.
|
2016
|
[
"shs"
] |
[
"407023",
"196396",
"300498",
"1188",
"457940"
] |
[
"2460"
] |
01203646
|
In the reputational world, the brand benefit gained by the developer of the innovation is an immediate output. Indeed, the developer is not only seen as an innovator, but also as a professional sensitive to ecological problems, equity and fairness. However, even in the short term the environmental data platform has much broader effects, too, in terms of reputation. It increases the attractiveness of the platform and its developers and thus improves possibilities to 'market' concrete activities in the area of sustainability. In the longer term, other actors in addition to the original developer, namely public bodies and private companies participating in the application and further development of the platform, gain visibility for their sustainability efforts. Simultaneously, environmental sustainability as an important value becomes more visible and determinant in the society at large. All in all, the reputational "world" is however more tightly linked to specific actors than the other "worlds". To summarize, our case illustrates that the relational, responsibility and reputational "worlds" are equally important as the technological and market views for understanding complex system innovations that include service aspects. In addition, our case indicates that the impacts generated in the different "worlds" are often interdependent and complementary to each other. For instance, some changes in the relational and responsibility "worlds" are prerequisites to effects generated in the technical and financial spheres.
|
2015
|
[
"shs"
] |
[
"33065",
"33065",
"33065",
"1188",
"33065"
] |
[
"2460"
] |
03352020
|
Results
The descriptive statistics are estimated on a daily basis and reported in Table 1. It is noted that the Bitcoin spreads are positively skewed with fat-tailed distributions. This implies that the spread measures have right-skewed distributions, with a long tail to the right of the mean. Conversely, the Bitcoin return is negatively skewed with a fat-tailed distribution. The negative skewness for the return indicates a left-skewed distribution, with a long tail to the left of the mean value. The fat-tailed distributions, or high kurtosis values, of the spread proxies and the return indicate extreme values in the corresponding dataset. On a monthly basis, the fluctuations in the Bitcoin spreads and yields are graphed in Figure 1 and Figure 2, respectively. It is clearly noted that the liquidity cost and returns are time-varying in the Bitcoin market. It matters to unveil whether the liquidity cost is an appropriate measure to estimate yields. For the pre-pandemic period, the regression relationship is quantified in Table 2. On the same trading day, the Bitcoin returns are positively and significantly associated with the liquidity cost, estimated by the ES and CBML measures.
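As an illustrative sketch (synthetic data, not the paper's Bitcoin series; all names are made up), the sample skewness and excess kurtosis discussed above can be computed as follows:

```python
import numpy as np

def skewness(x):
    """Sample skewness: third standardized moment."""
    d = x - x.mean()
    return (d ** 3).mean() / x.std() ** 3

def excess_kurtosis(x):
    """Sample excess kurtosis: fourth standardized moment minus 3."""
    d = x - x.mean()
    return (d ** 4).mean() / x.std() ** 4 - 3.0

rng = np.random.default_rng(0)
spread = rng.lognormal(sigma=1.0, size=5000)  # long right tail, like the spreads
ret = -spread                                 # mirrored: long left tail, like returns

print(skewness(spread), skewness(ret), excess_kurtosis(spread))
```

A positive skewness for `spread`, a negative one for `ret`, and a positive excess kurtosis together reproduce the qualitative pattern reported in Table 1.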
|
2021
|
[
"shs"
] |
[
"463709"
] |
[
"1091662"
] |
01182801
|
Bandit approaches perform similarly to the greedy deterministic method. As the number of active atoms increases, the bandit approaches succeed better in recovering the extreme component of the gradient, while the deterministic approach is slightly less accurate. Note that for any value of k, the randomized strategies suffer more than the other strategies in recovering the support of the true vector w. From a running-time point of view, again, we note that the deterministic and non-iid successive halving bandit approaches seem to be the most efficient methods. The gain in running time compared to the exact-gradient OMP is slight but significant, while it is larger when comparing with the successive reject algorithm. D. Sparse Approximation with CoSaMP
To the best of our knowledge, there are very few greedy algorithms that are able to leverage stochastic gradients. One of these algorithms has been introduced in [START_REF] Nguyen | Linear convergence of stochastic iterative greedy algorithms with sparse constraints[END_REF]. In this experiment, we want to evaluate the efficiency gain achieved by our inexact gradient approach compared to this stochastic greedy algorithm. Our objective is to show that the approach we propose is empirically significantly faster than a pure stochastic gradient approach. For the different versions of the CoSaMP algorithm, we have set the stopping criterion as follows.
|
2016
|
[
"scco",
"info"
] |
[
"23832",
"388932",
"388932"
] |
[
"174806",
"5004"
] |
03277680
|
While their approach is competitive in terms of the quality of generated data, it is hardly tractable for large-scale datasets, due to the multiple (up to 1000 in their experiments) discriminator trainings. Instead of considering adversarial training, some DP generative model works have investigated the use of distances between distributions. [START_REF] Harder | Differentially private mean embeddings with random features (dp-merf) for simple & practical synthetic data generation[END_REF] proposed a random-feature-based maximum mean embedding distance for computing the distance between empirical distributions. Cao et al. (2021) considered the Sinkhorn divergence for computing the distance between true and generated data and used gradient clipping and noise addition for privacy preservation. Their approach is thus very similar to DP-SGD in its privacy mechanism. Instead, we perturb the Sliced Wasserstein distance by smoothing the distributions to compare. This yields a privacy mechanism that benefits from subsampling amplification, as its sensitivity does not depend on the number of samples, and that preserves its utility, as the smoothed Sliced Wasserstein distance is still a distance. Differential Privacy with Random Projections
The Sliced Wasserstein distance leverages the Radon transform to map high-dimensional distributions into 1D distributions. This is related to projection on random directions, and the sensitivity analysis of those projections on unit-norm random vectors is key. The first use of random projection for differential privacy was introduced by [START_REF] Kenthapadi | Privacy via the johnson-lindenstrauss transform[END_REF].
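A minimal (non-private) sketch of the Sliced Wasserstein distance itself, assuming equal-size samples: project both point clouds on random unit-norm directions and average the closed-form 1D Wasserstein distances obtained by sorting. The paper's mechanism additionally smooths the sliced distributions before comparison, which is not shown here; all names are illustrative.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=100, seed=0):
    """Monte-Carlo estimate of the Sliced Wasserstein-2 distance
    between two equal-size point clouds X, Y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit-norm directions
    xp = np.sort(X @ theta.T, axis=0)  # sorted 1D projections = quantiles
    yp = np.sort(Y @ theta.T, axis=0)
    return np.sqrt(np.mean((xp - yp) ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
Y = rng.normal(loc=2.0, size=(500, 3))
print(sliced_wasserstein(X, X), sliced_wasserstein(X, Y))
```

The distance is zero on identical samples and grows with the mean shift between the two clouds, as expected of a metric on distributions.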
|
2021
|
[
"scco",
"info"
] |
[
"458139",
"389520",
"458139"
] |
[
"174806",
"5004"
] |
03724129
|
The acyclicity and single-rootedness come at the cost of using inverse relations. Any role, core or non-core, can be reversed by adding -of to its name and changing the direction of the relation. Apart from avoiding cycles, inverse roles also serve to highlight the focus of a sentence by making sure that the central concept is the root of the AMR graph. The AMR Bank is a manually produced corpus of AMR annotations in English. Only a portion of it (namely the Little Prince corpus and the BioAMR corpus) is freely available. The rest of the AMR Bank can be obtained by a (paid) license from the Linguistic Data Consortium. AMR was designed with English in mind and does not aim to be a universal semantic representation framework. That being said, there have been attempts to use the framework for other languages, notably Chinese, in the Chinese AMR (CAMR) Bank. While powerful in its ability to abstract from surface representation, there are a number of phenomena that the framework does not cover: tense, plurality, definiteness, and scope, to name some of the more prominent ones. Some of these issues have been addressed: [START_REF] Bos | Separating argument structure from logical structure in AMR[END_REF] proposes an extension to deal with scope in AMR, while [START_REF] Donatelli | Annotation of tense and aspect semantics for sentential AMR[END_REF] proposes to augment AMR with tense and aspect.
|
2022
|
[
"scco",
"info"
] |
[
"150772",
"150772",
"150772"
] |
[
"740210",
"747",
"2082"
] |
02090938
|
This result seems intuitive. If only species 1 (or 2) disappears, there remain 2(E + J) attributes. But if only species 3 disappears, the number of remaining attributes decreases to a lower 2E + J. In Appendix D.1, however, we show that the property emphasized in Proposition 2 is fragile. More precisely, it holds only when ecological interactions are not too strong (even if ecological interactions are not a source of heterogeneity). The Influence of Ecological Interactions
Incorporating this dimension in the model is an attempt to account for the complexities of the web of life. For instance, the interactions between two species can be unilateral, e.g. species 1 impacts species 2 but not vice versa, or bilateral, e.g. species 1 impacts species 2 and species 2 impacts species 1. In a two-species system, there are 2² = 4 interaction possibilities to consider. As soon as one contemplates a three-species ecosystem, however, there are 3³ = 27 potential pairwise interactions between species (not even taking into account the added complexity that could be introduced by varying the intensity of each of these ecological interactions). It is evident that the number of interaction possibilities quickly explodes with the number of species in the system.
|
2019
|
[
"shs",
"sdv"
] |
[
"532853",
"526949",
"422966"
] |
[
"22112"
] |
01591987
|
15 To model species interactions, we follow [START_REF] Courtois | Conservation priorities when species interact: the noah's ark metaphor revisited[END_REF]. We model each species i as having an autonomous survival probability q_i, which is the survival probability of species i in an ecosystem free of species interactions and without any management activity. The autonomous survival probability is a measure of the robustness of a species: a low survival probability characterizes species on the brink of extinction, while a high survival probability characterizes healthy species such as spreading ones. As a result of the interactions that occur between species, the survival probability of each species i also depends on the survival probabilities of all other species through interaction parameters r_{i,j≠i}, with r_{i,j≠i} ∈ ℝ. Finally, the decision-maker can choose to target the survival probabilities of the invasive species present in the ecosystem. The amount of effort she invests in controlling invasive species k is denoted x_k, and we denote by x̄_k the maximum control effort, constrained by P_i ∈ [0, 1], ∀i. 16 The resulting survival probabilities in our stylized two-native two-invasive species ecosystem read as:

(4)   P_k = q_k − x_k + Σ_{j≠k} r_{kj} P_j ,   P_l = q_l + Σ_{j≠l} r_{lj} P_j ,

with the additional constraint:

(5)   x_k ∈ [0, x̄_k] ∀k .

System of equations (4) describes the stationary law of evolution of the survival probabilities of the native and invasive species composing the ecosystem. 17
16 An algorithm that computes x̄_k is available upon request. 17 "Stationary" here refers to the fact that it can be interpreted as the steady state of an explicit dynamic system.
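A minimal numerical sketch of the stationary system (4), with made-up parameter values: stacking the probabilities gives the linear system (I − R)P = q − x, where R collects the interaction parameters r_{kj} (zero diagonal) and x is nonzero only for the controlled invasive species.

```python
import numpy as np

# Hypothetical two-native (indices 0, 1) two-invasive (indices 2, 3) ecosystem.
q = np.array([0.9, 0.8, 0.6, 0.5])        # autonomous survival probabilities q_i
x = np.array([0.0, 0.0, 0.3, 0.2])        # control effort on the invasives only
R = np.array([[ 0.0 ,  0.1 , -0.2 , -0.1 ],   # r_{ij}: effect of species j on i,
              [ 0.1 ,  0.0 , -0.1 , -0.2 ],   # zero diagonal (no self-interaction)
              [-0.05,  0.0 ,  0.0 ,  0.1 ],
              [ 0.0 , -0.05,  0.1 ,  0.0 ]])

# Stationary survival probabilities: P = q - x + R P  <=>  (I - R) P = q - x.
P = np.linalg.solve(np.eye(4) - R, q - x)
print(P)
```

With these (small) interaction parameters I − R is invertible and each P_i satisfies its equation in system (4) by construction.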
|
2017
|
[
"shs"
] |
[
"133405",
"2583",
"199934",
"422966",
"320813"
] |
[
"22112"
] |
04299879
|
Controlling for the influence of life expectancy (Table 4, model 4) for departments of residence causes a change in the sign of the life-expectancy coefficient, which remains significant at the 10% level (Table 5, model 2): when life expectancy increases, height decreases. This phenomenon could be explained by the fact that non-fatal illnesses may be more important than fatal illnesses when it comes to determining height discrepancies between departments and districts. The regressions in Tables 4 and 5, calculated as they are on the basis of observations at the individual level, serve as a fine filter for our analysis of the influence of one or another explanatory factor. However, when one takes into account the vast quantity of individual observations, the total variance in height is more a matter of genetics than of socio-economic factors. The adjusted R²s are therefore very weak, even if the coefficients of each variable considered individually are of great interest. 38 This figure seems quite large; Baten and Komlos, no doubt on the basis of other, unspecified hypotheses, claim that the life-expectancy increase correlated with a 1-cm height increase is only 1.2 years in length (Baten and Komlos 1998). In contrast, extrapolating from our results, one would conclude that over the course of the Industrial Revolution, life expectancy increased by more than one hundred years! Nevertheless, diachronic data lend plausibility to our calculation, at least in the case of nineteenth-century France: an increase in conscripts' mean height of 0.75 cm between the 1790-1799 and the 1820-1829 birth cohorts is paralleled by a 7.2-year increase in female life expectancy.
Literacy is not just a proxy for per capita income. For one thing, it certainly helps to reduce the number of unwanted pregnancies and thus improves children's gross nutrition (Weir 1993, 1997). For another, in certain cases, the positive influence of instruction is observed even after one has controlled for income (Meyer and Selmer 1999; Steckel 1998): the positive influence of educational level would thus reflect not only the children's but also their parents' improved grasp of nutrition issues, enabling them to fight more successfully against diseases by means of health practices and paramedical care that were better adapted than were those of the illiterate conscripts.
|
2013
|
[
"shs",
"sdv"
] |
[
"866"
] |
[
"874783"
] |
03491595
|
They do not frequently offer written information (1.9%) but consult a fertility specialist (68.9%) or refer patients directly (78.2%). SRM were asked about fertility management after conservative treatment of EC/AH, but 23 to 30% of them did not answer the questions (Supplementary Table 2). Eighteen (34%) of them had at least been involved with young patients with EC/AH after conservative treatment. If the patient's tubes and the spermogram were normal, 30 physicians (56.6%) considered that the patient could wait for a spontaneous pregnancy to occur after conservative treatment for EC/AH. If fertility treatment was planned, either directly or after waiting this amount of time for a spontaneous pregnancy, most of the SRM (56.6%; 30 physicians) chose IVF. For IVF, the main stimulation protocols chosen were an antagonist (28%) or an agonist (22%) protocol. For 54.7% of SRM, several pregnancies can be allowed as long as the disease does not relapse, but for 18.9% of SRM a systematic hysterectomy must be performed after the first delivery. During fertility treatment, monitoring for recurrence of EC/AH was preferentially performed by hysteroscopy and endometrial biopsy every 3 months (22.6%), or based on ultrasound evaluation of the endometrium (39.6%). Discussion
The present study reports the findings of a survey of French gynecologists and their knowledge of, and attitudes toward, FP in EC/AH. Despite average knowledge and attitude scores, most GS considered and gave advice to patients about FP before EC/AH treatment.
|
2020
|
[
"sdv"
] |
[
"300156",
"300156",
"557826"
] |
[
"1113617"
] |
03489307
|
In a survey of more than 600 young women with early-stage breast cancer, 29% reported that concern about infertility influenced their treatment decisions [START_REF] Partridge | Webbased survey of fertility issues in young women with breast cancer[END_REF]. Hence, the American Society for Reproductive Medicine (ASRM) and the American Society of Clinical Oncology (ASCO) have put forth official guidelines recommending that patients be educated about the effect of cancer treatment on fertility and fertility preservation options [START_REF]Ethics Committee of the American Society for Reproductive Medicine. Fertility preservation and reproduction in cancer patients[END_REF][START_REF] Loren | Fertility preservation for patients with cancer: American Society of Clinical Oncology clinical practice guideline update[END_REF]. The European Society of Gynecological Oncology (ESGO) decided in 2007 to launch the Task Force for Fertility Preservation in Gynecologic Cancer. This task force was developed to promote knowledge of infertility induced by the treatment of gynecologic cancers among healthcare workers and the public, through national and international collaboration among oncologists and reproductive specialists [START_REF] Denschlag | Fertility-sparing approaches in gynecologic cancers: a review of ESGO task force activities[END_REF]. Strategies for fertility preservation prior to chemotherapy depend on the time required, the woman's age, the risks and efficacy of each method, and the individual preference of the patient [START_REF] Von Wolff | Practical recommendations for fertility preservation in women by the FertiPROTEKT network. Part II: fertility preservation techniques[END_REF]. In the present study, the analysis of the preservation of fertility was based on the cryopreservation of embryos and oocytes, which are the two established methods of fertility preservation.
In October 2012, ASRM published an official guideline stating that mature oocyte cryopreservation should no longer be considered experimental and can be recommended with appropriate counseling to patients receiving gonadotoxic therapies for cancer [START_REF]Mature oocyte cryopreservation: a guideline[END_REF]. In our database, no oocyte cryopreservation was found for fertility preservation before 2011.
|
2019
|
[
"sdv"
] |
[
"221529"
] |
[
"1113617"
] |
02627272
|
Solutions for volumetric titrations were bought from Fluka (0.10 M NaOH and 0.10 M HNO3) and Roth (0.010 M NaOH) and used directly in the potentiometric titrations. Solutions prepared from nitric acid (Merck Suprapur) and sodium hydroxide (0.1 M standard, Merck) were used to adjust the pH when necessary. Potassium thiocyanate, hydrochloric acid, potassium chloride and ammonium acetate were all from Merck. Extraction and purification of HA sample
Peat samples were collected in the Mogi River region of Ribeirão Preto, São Paulo State, Brazil. The humic substances were extracted following the IHSS procedure for soil organic matter [START_REF] Botero | Peat humic substances enriched with nutrients for agricultural applications: competition between 10/11[END_REF] . The alkaline extracted (AE) soil HA was taken before the HCl/HF treatment whereas the fully purified (FP) HA underwent the full purification procedure. In brief, the humic matter purification procedure for soils consists in separating the insoluble humin from the soluble humic and fulvic acids using a 1/10 mass ratio of 0.1 M NaOH for 4 h under an inert atmosphere. This step is followed by acidification to pH 1 using 1.0 M HCl to separate the soluble FA from the acid-insoluble HA. The HA is resuspended in 0.1 M NaOH and precipitated in 0.1 M HCl/0.3 M HF to destroy the remaining mineral phase. Then, the solid is dispersed in water to form a slurry, and transferred to a Visking dialysis tube where it is subsequently dialysed against distilled water.
|
2017
|
[
"sdv"
] |
[
"466264",
"237201",
"1005035",
"1005035",
"466264",
"237201",
"496852",
"1005035",
"237201"
] |
[
"17385",
"1027290",
"181021",
"17384"
] |
00539228
|
Although use of perception systems should enhance driver awareness, as represented by the triangular shape in the figure, occlusion will mask several potential risks, e.g. the distracting vehicle (DV), the powered two-wheeled vehicle (PTW), etc. Further, if there is an Intrusion Vehicle (IV) arriving at a prohibited speed, it will be difficult for the driver of the SV to know that the IV will be travelling too fast to brake at the stop line on time. When the IV enters the SV sensor field of view, it will likely be too close already. However, if each vehicle can transmit its position, speed and other data, then by associating this information with a digital map representing the road geometry and other contextual information, an extended digital representation of the vehicle's immediate environment could be built, as shown in Figure 4. Thus an application running in the SV can analyse and identify the possible risks, informing the driver beyond what current sensors could provide. The figure shows the risk vehicles in red. By knowing the speed at which they are travelling, their distance to the intersection at the time of the query, plus the state of the SV, it is possible to warn or even act in the SV. This is the Safety Margin concept deployed in the SAFESPOT project. Thus, by sharing vehicle state information and projecting it on the road geometry, it is possible to extend the driver's situational awareness. The fundamental functions for a V2V safety system would consist of a Wireless Communications dynamic network, a Digital Map and a Localisation system.
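The stop-line risk check described for the IV can be sketched with simple kinematics. This is an illustrative sketch only; the names, data structures and the deceleration value are assumptions, not the SAFESPOT implementation.

```python
# Minimal sketch of the SV-side risk check: given the state broadcast by an
# approaching Intrusion Vehicle (IV) and its map-matched distance to the stop
# line, decide whether the IV can still brake to a stop in time.
# (Hypothetical names; max_decel is an assumed comfortable-braking limit.)

from dataclasses import dataclass

@dataclass
class VehicleState:
    speed: float                 # m/s, from the V2V broadcast
    dist_to_stop_line: float     # m, from map-matching the broadcast position

def iv_can_stop(iv: VehicleState, max_decel: float = 6.0) -> bool:
    """True if the IV's braking distance fits before the stop line.

    Braking distance at constant deceleration a: d = v^2 / (2a).
    """
    braking_distance = iv.speed ** 2 / (2.0 * max_decel)
    return braking_distance <= iv.dist_to_stop_line

# An IV arriving at 20 m/s (72 km/h) with only 25 m to go cannot stop,
# so the application would warn the SV driver.
print(iv_can_stop(VehicleState(speed=20.0, dist_to_stop_line=25.0)))
```

In a real system the threshold would also account for the IV driver's reaction time and the communication latency; the kinematic core stays the same.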
|
2010
|
[
"info"
] |
[
"133641",
"133641",
"44462",
"133641",
"133642"
] |
[
"883149"
] |
00539237
|
Figure 1 shows the statistics associated with accidents in the Europe of 27. In the Europe of 27 (2004), 43% of injury-related accidents occurred at intersections. Out of the overall number of fatalities, 21% occurred at intersections, along with 34% of the seriously injured [START_REF]CARE, IRF, IRTAD, TRACE, and National Statistics Databanks[END_REF]. The design of a system that is to address intersection safety needs to identify the context under which these accidents occur. This should give indicators on the type of intersections where accidents occur, the type of vehicles involved, the time of day, the age distribution of the drivers involved, weather conditions, etc. This information is used as input to the V2V system to be designed. As the ESV application is to be extended to intersection safety involving all types of vehicles, it takes these statistical results into account as part of the design. For example, 80% of intersection accidents occur in rural areas, representing a low percentage of fatalities; by contrast, fatalities inside urban areas amount to 42%. The road structure and geometry are another source of information. Table 1 summarises the context within which accidents occur at road intersections.
|
2010
|
[
"info"
] |
[
"133641",
"133641",
"44462",
"133642"
] |
[
"883149"
] |
00790107
|
It casts camera optimization problems mostly conducted in 6D into searches inside a 2D space on a manifold surface. Interestingly, our model can easily be extended to integrate most of the classical visual properties employed in the literature [START_REF] Ranon R | Accurately Measuring the Satisfaction of Visual Properties in Virtual Camera Control[END_REF]. For example, the size of key subjects (or their distance to the camera) can be expressed as the set of viewpoints on the manifold which are at a given distance from the camera (which resolves as 0, 1 or 2 lines on the manifold surface). In a similar way, vantage angle properties (e.g. see the front of a subject) represent sub-regions of the manifold. By reducing the search space to a manifold where the on-screen locations of subjects are exact, we obviously restrict the generality of the technique. However, the benefits in terms of computational cost greatly favor our approach. Though the solution for two subjects appears easy to formulate with vector algebra, it has not been reported before, and the model serves as an expressive base on which to build more evolved techniques. The techniques presented in the paper have the potential to replace most of the previous formulations related to camera control with a simpler and more efficient approach, and open great possibilities to include more evolved on-screen composition techniques in a large range of applications in computer graphics.
Figure 1: Heatmap representing the quality of on-screen composition for two subjects (white points) for a region of camera configurations (top view of the 3D scene).
|
2012
|
[
"info"
] |
[
"155296",
"155296"
] |
[
"3671",
"853625"
] |
01457232
|
The conclusions of the study are robust with respect to the small uncertainties in the characterisations. CONCLUSION
This study allows us to establish an environmental hierarchy between recycling solutions for aluminium cables. Whatever the electricity mix used by the recycling plant, the MTB mechanical recycling process is the most environmentally friendly. Additionally, the LCA was conducted in order to help the company highlight environmental hotspots of the system and design new solutions to decrease the environmental impact of the aluminium produced [START_REF] Grimaud | Reducing Environmental Impacts of Aluminium Recycling Process Using Life Cycle Assessment[END_REF]. On the one hand, the study demonstrates huge environmental benefits for recycled aluminium in comparison with primary aluminium. On the other hand, the results show the harmful environmental influence of melt refining in comparison with the mechanical recycling process. The LCA revealed that the closed product loop option (considering aluminium cables) has a lower environmental impact than the other recycling scenario using mixed aluminium scraps. This performance has already been demonstrated for aluminium cans [START_REF] Niero | Circular economy : to be or not to be in a closed product loop ? A Life Cycle Assessment of aluminium cans with inclusion of alloying elements[END_REF]. To conclude, recycling, when driven without loss of quality, is a relevant alternative to mining.
|
2016
|
[
"spi",
"sde"
] |
[
"164351",
"483036",
"483040",
"483036",
"164351",
"164351"
] |
[
"1723",
"177001"
] |
01461568
|
Fig. 3 presents aluminium recycling as modelled in the Ecoinvent dataset. The modelling is divided into five steps: four mechanical separation steps (in red in the figure) and one thermal step (in blue). Scenario 3: MTB Cables Recycling
Fig. 4 shows all the steps taken into account in the modelling of scenario 3. For this scenario, the transport distance taken into account is 540 km for old scrap and 510 km for new scrap from various cable manufacturers. The intrinsic aluminium quality reaches at least 99.6% aluminium purity (average quality check during the period 2012-2014). An intensive inventory analysis was developed during an internal survey conducted in collaboration with the EVEA consulting firm at the MTB Recycling plant during autumn 2014. Foreground data are based on measurements and on stakeholder interviews. Background data come from Ecoinvent 3.1 or relevant literature.
|
2016
|
[
"spi",
"sde"
] |
[
"164351",
"164351",
"164351"
] |
[
"177001",
"1723"
] |
04184768
|
This wetland system is designed to reduce the forecasted water shortage in Melbourne by providing appropriate quality water for substitution of potable water for non-drinking purposes, such as toilet flushing, laundering, and gardening. RTM&C System
The fundamental architecture of the RTM&C system consists of sensors, actuators, communication devices, and a web server. The water level and water quality sensors (RTM on-site hardware) are already installed at key points of the wetland to provide important information on the system's health and capacity in real time. These key points include the sedimentation pond, the inlet and outlet of the main wetland, and the stormwater harvesting pond. The actuators (RTC on-site hardware) have been installed at the control points to adjust the valve position, and hence to flexibly hold and release water into and out of the wetland. The cabinet setup with multiple layers of power protection is shown in Figure 1(b). The control points include the inflow and outflow pipes of the main wetland, as well as the underground baseflow bypass between the harvesting pond and the downstream creek. The communication devices are essential to connect the on-site hardware to the centralised web server. The web server is the "brain" of the entire system; it consists of a centralised database to collect and store real-time data from the field and weather forecasts from the weather agency (i.e., BOM in Australia). The collected data are used as input to the decision-making process of the RTC strategies, resulting in a valve position target, which is then sent to the actuators for execution.
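The sense, decide, actuate cycle implied by the architecture above can be sketched as follows. All names and thresholds here are hypothetical illustrations, not the deployed decision logic (which runs on the central server and also ingests BOM forecasts).

```python
# Sketch of one pass of the RTC decision step: given a sensed water level and
# a rain forecast, compute a valve opening target in [0, 1] for the outflow.
# (Hypothetical heuristic: pre-release storage ahead of forecast rain,
# otherwise hold water when below the target level.)

def decide_valve_position(water_level: float, rain_forecast_mm: float,
                          target_level: float = 1.0) -> float:
    """Return a valve opening in [0, 1] for the wetland outflow."""
    if rain_forecast_mm > 20.0:          # large event forecast: draw down storage
        return 1.0
    if water_level > target_level:       # above target: release the excess
        return min(1.0, water_level - target_level)
    return 0.0                           # below target: hold water

# One loop iteration: read sensors, decide, send the target to the actuator.
print(decide_valve_position(water_level=1.3, rain_forecast_mm=5.0))  # partial release
```

The real system closes this loop continuously, with the valve position target sent from the web server to the on-site actuators.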
|
2023
|
[
"spi",
"sde"
] |
[
"500277",
"1081708",
"306322"
] |
[
"981707",
"1125292"
] |
02570676
|
In addition, as described in [START_REF]Privacy and Security Risk Evaluation of Digital Proximity Tracing Systems -The DP-3T Project -21[END_REF]:
"In decentralized systems in which infected people share their identifier, there is an easier way for an attacker to learn, when she was in close proximity to an infected person, without creating multiple accounts. The attacker can simply match the set of infected identifiers against each of her recorded Bluetooth identifiers to determine when she was in contact with an infected person and use this information to reveal the identity of the infected." Therefore, an adversary is able to identify all diagnosed users he has been close to during a time window corresponding to a period of contagiousness. The sharing or publication of this information can lead to the stigmatization and harassment of all diagnosed users. • In the centralized approach, in contrast, when the user is notified that she was in close proximity to an infected person, this user only knows that at least one encountered person has been diagnosed. Although a user is able to re-identify the infected individual if she has met only one person, this re-identification task is much harder otherwise. For example, one way to carry out this attack would be to create an instance of the application (registered on the server) for each encountered person, which is much more costly to deploy. Therefore, risk IR 1 "Identify infected individuals" has a very large scalability in the decentralized approach. To make a clear distinction between the scalability of this attack in the two approaches, we revisit the definition of this risk in the proposed taxonomy by using the following definition:
• IR 1-1: Identify all infected individuals among encounters, when the adversary is able to find diagnosed users among all persons he has encountered during a period corresponding to a contagious period. The attacker proceeds by collecting pseudonyms of each person encountered, and then correlating this list of pseudonyms with the list of infected users' pseudonyms published by the authority to determine when she was in contact with an infected person and use this information to reveal the identity of the infected.
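The linkage attack IR 1-1 described above amounts to a set intersection between the attacker's observation log and the published infected identifiers. A minimal sketch with hypothetical data (real systems use rotating Bluetooth pseudonyms, not fixed strings):

```python
# Sketch of the decentralized-design linkage attack: the attacker matches the
# published infected identifiers against her own time-stamped log of received
# Bluetooth identifiers to learn when she met an infected person.

def find_infected_contacts(observation_log, published_infected_ids):
    """observation_log: list of (identifier, timestamp) pairs recorded by the
    attacker's device; returns the encounters involving infected users."""
    infected = set(published_infected_ids)
    return [(ident, t) for ident, t in observation_log if ident in infected]

log = [("id_a", "Mon 09:12"), ("id_b", "Mon 12:40"), ("id_c", "Tue 18:05")]
hits = find_infected_contacts(log, published_infected_ids={"id_b"})
print(hits)  # the timestamp lets the attacker recall who she met then
```

The cheapness of this matching step is exactly why the risk scales so easily in the decentralized approach: no extra accounts or server interaction are needed.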
|
2020
|
[
"info"
] |
[
"206120",
"206120",
"206120",
"206120",
"206120",
"206120",
"206120"
] |
[
"6722",
"908",
"868662",
"5208",
"833548",
"170349",
"552"
] |
02611265
|
The protocol works as follows:
• When Bernard goes to Germany, his App broadcasts, at each epoch j, HELLO_FR,j messages as defined in Section 4. • When Bernard meets a German user, let us say Uta, at epoch i:
-Uta stores the (HELLO_FR,i, time) pair in her LocalProximityList. -Bernard stores the (HELLO_DE,i, time′) pair in his LocalProximityList (where HELLO_DE,i is the HELLO message broadcast by Uta at epoch i). • If Uta is later tested and diagnosed COVID-positive:
-Uta uploads her LocalProximityList to the German back-end server. -The German back-end server obtains the (HELLO_FR,i, time) pair and processes it as follows: * It parses HELLO_FR,i to retrieve ecc_FR (8 bits), ebid_X (64 bits), time_X (16 bits) and mac_X (40 bits). * It decrypts ecc_FR, using K_G, to recover the message country code, CC_FR. Since CC_FR is the country code for France, the (HELLO_FR,i, time) pair is forwarded to the French back-end server. * The French server processes it as described in Section 6. • Similarly, if Bernard is later tested and diagnosed COVID-positive in France:
-Bernard uploads his LocalProximityList to the French back-end server. -The French back-end server obtains the (HELLO_DE,i, time′) pair and processes it as follows: * It parses HELLO_DE,i to retrieve ecc_DE (8 bits), ebid_X (64 bits), time_X (16 bits) and mac_X (40 bits).
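The HELLO field layout quoted above (ecc: 8 bits, ebid: 64 bits, time: 16 bits, mac: 40 bits, i.e. 16 bytes in total) can be parsed with plain byte slicing. This is an illustrative sketch of that parsing step, not the actual ROBERT implementation:

```python
# Parse a 16-byte HELLO message into its fields, following the bit widths
# stated in the text: ecc (1 byte), ebid (8 bytes), time (2 bytes), mac (5 bytes).

def parse_hello(msg: bytes):
    assert len(msg) == 16, "ecc(1) + ebid(8) + time(2) + mac(5) bytes"
    return {
        "ecc":  msg[0:1],                           # encrypted country code
        "ebid": msg[1:9],                           # ephemeral Bluetooth identifier
        "time": int.from_bytes(msg[9:11], "big"),   # coarse timestamp
        "mac":  msg[11:16],                         # truncated authentication code
    }

# Build a dummy message (arbitrary example values) and parse it back.
hello = bytes([0x33]) + b"\x01" * 8 + (0x1234).to_bytes(2, "big") + b"\xff" * 5
fields = parse_hello(hello)
print(fields["time"])  # 4660 (= 0x1234)
```

Byte order is assumed big-endian here; the server would additionally verify mac_X and decrypt ecc before routing the pair to the right national back end.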
|
2020
|
[
"info"
] |
[
"206120",
"206120",
"206120",
"206120",
"206120",
"206120",
"206120"
] |
[
"868662",
"908",
"6722",
"5208",
"833548",
"170349",
"552"
] |
01599665
|
To carry out a real conversation, the process cannot take too long. Therefore, "How fast can one type with the eyes?" is as interesting and important a research question as "How fast can one type with the typewriter?" was for decades. Previous work
Results from past experiments with text entry by gaze have been collected in Tables 1 (experiments with a soft keyboard) and 2 (experiments with other techniques). We have included only longitudinal experiments where the participants came back to the lab on several days and thus had a chance to improve their performance through experience. The data in the tables is teased out from the publications. For [START_REF] Wobbrock | Longitudinal Evaluation of Discrete Consecutive Gaze Gestures for Text Entry[END_REF] the exact numbers were not reported and are therefore estimated from the graphs in the paper. The same holds for the MSD rate in [START_REF] Pedrosa | Filteryedping: Design Challenges and User Performance of Dwell-Free Eye Typing[END_REF]. Several papers reported on more than one study. For [START_REF] Räihä | An Exploratory Study of Eye Typing Fundamentals: Dwell Time, Text Entry Rate, Errors, and Workload[END_REF], there are results for the learning phase (denoted by lp in Table 1) and the advanced phase (denoted by ap).
|
2015
|
[
"info"
] |
[
"301029"
] |
[
"1017253"
] |
02348785
|
This means that the coverage zone of BPSK 1/2 is much smaller compared to the other modulations. In this way, the ratio of neighbors communicating through low-throughput links decreases significantly. In the pedestrian case (Figure 9), the contact duration distribution shows a more spread-out behavior than Luxembourg's. This is due to the lower relative speeds between human beings compared with the vehicular case, which leads to longer contact durations. As such, this yields a more spread-out distribution for the free-space and two-ray models and a completely different shape for the log-distance model. In fact, for the log-distance model, the highest contact duration probability is found between 25 and 30 seconds. The distribution of contact capacities in the Stockholm scenario, depicted in Figure 9b, has several similarities with the plots of the Luxembourg case: the fixed-rate plots follow the same shape as the distribution of contact duration, while the adaptive plot looks like a decreasing exponential, except for the log-distance model. The unusual shape of the contact capacity distribution for the log-distance model with a step-wise adjusted modulation scheme is explained by carefully observing the behavior of the nodes, which act as pedestrians. Indeed, due to the much shorter communication range in the case of log-distance, there are only two sorts of contacts: either the contact is very short, leading to poor capacity, or the contact is long with little distance between the two nodes.
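Under a step-wise adaptive modulation scheme, a contact's capacity is in essence the distance-dependent data rate integrated over the contact duration. A minimal sketch with hypothetical rate steps and ranges (the actual modulation set and thresholds differ):

```python
# Sketch of contact capacity under step-wise rate adaptation: sum the
# distance-dependent rate over periodically sampled inter-node distances.
# Rate steps and range thresholds below are illustrative assumptions.

def rate_mbps(distance_m: float) -> float:
    """Step-wise rate adaptation: higher rates at shorter range."""
    if distance_m < 50:
        return 27.0
    if distance_m < 150:
        return 12.0
    if distance_m < 300:
        return 6.0
    return 0.0        # out of communication range

def contact_capacity_mbit(distances, dt_s: float = 1.0) -> float:
    """Sum rate*dt over distance samples taken every dt_s seconds."""
    return sum(rate_mbps(d) * dt_s for d in distances)

# A short, close contact can still carry more data than a longer, distant one.
close = contact_capacity_mbit([20, 25, 30])               # 3 s at close range
far   = contact_capacity_mbit([280, 290, 280, 290, 280])  # 5 s near the edge
print(close, far)
```

This is why the adaptive-rate capacity distribution decouples from the contact duration distribution: duration alone no longer determines how much data a contact can carry.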
|
2019
|
[
"info"
] |
[
"251992",
"541705",
"251992",
"54302",
"541705"
] |
[
"955086",
"8297"
] |
01321387
|
The key point for an operator is to design a global strategy to select which nodes act as seeders and which ones as leechers, in order to reduce the total dissemination cost. We formulate this question as a stochastic control problem that we solve using an application of Pontryagin's Maximum Principle. We provide a mathematical framework to devise the optimal strategy for opportunistic offloading under a generic cost model. First, we show that an optimal solution exists; then, from this policy, we extract some insights to develop heuristics. Finally, we discuss the advantages of the proposed model compared to the classic seeder-only model. We demonstrate that separating seeders/leechers leads to better incentive strategies in the most demanding cases of content with a large span of delivery delays. I. INTRODUCTION
Device-to-device (D2D) communications are a well-timed strategy for operators to face the ever-increasing mobile data demand by offloading part of the traffic from their cellular infrastructure. Motivated by the delay-tolerance and redundancy of some types of content, operators may send data only to a subset of requesting users (seeders), which act as opportunistic forwarders to help propagate content using D2D communications. The combination of two complementary channels (cellular and D2D) provides extra capacity, helping reduce the impact of redundant traffic.
|
2016
|
[
"info"
] |
[
"251992",
"389034",
"54302"
] |
[
"8297",
"955086"
] |
01273153
|
DVFS can make a significant difference in both performance and energy consumption. Although not available on the manycore processors we evaluated, it is available on the Xeon E5 and GPU platforms. Therefore, for these platforms we always show two measurements. The first, Xeon E5 (2.4 GHz) and Tesla K20 (758 MHz), represent the experimental results when the frequencies are optimized for performance, i.e., using the maximum working frequencies. The second, Xeon E5 (1.6 GHz) and Tesla K20 (705 MHz), correspond to the optimal energy consumption setting, which for this kernel was 1.6 GHz and 705 MHz on Xeon E5 and Tesla K20, respectively. Figure 5 compares the time-to-solution and energy-to-solution across the processors using a problem size of 2 GB (180³ grid points) and 500 time steps. For these experiments we used the optimal number of threads on each platform. With the exception of Xeon Phi (on which the best results were obtained with 224 threads), the thread count was equal to the number of physical cores of each processor. As shown in Figure 4, our solution for Xeon Phi keeps scaling considerably well past the 57 physical cores. To the best of our knowledge, GPUs are among the most energy efficient platforms currently in use for seismic wave propagation simulation. Yet, our proposed solution on MPPA-256 achieves the best energy-to-solution among the analyzed processors, consuming 78 %, 77 %, 88 %, 87 % and 86 % less energy than Tesla K20 (758 MHz), Tesla K20 (705 MHz), Xeon Phi, Xeon E5 (2.4 GHz), and Xeon E5 (1.6 GHz), respectively.
|
2016
|
[
"info"
] |
[
"119004",
"118356",
"18404",
"18404",
"43688",
"1042443"
] |
[
"914915",
"6046"
] |
02378951
|
The function of each node and the overall relationship of this thesis are:
• VALIDATE: executes the validation process on the platform to check whether the input settings are correct. Note that before starting a DSE process, some parameters need to be set by the designer (these parameters will be described in the next chapter). This button evaluates whether these input parameters are valid. If there is no error, the button turns purple with a "VALIDATED" status, as in Figure 3.11. If not, a notification box reports the cause of the error and the button turns red with an "INVALIDATED" status. • GENERATE: executes the generation process. Based on the platform model, this process translates the model into a programming language and includes it in a DSE process; the language we use for the core of the DSE process is Python. This process ends with a notification box, and the button turns purple with a "GENERATED" status (Figure 3.11).
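The VALIDATE behaviour described above (check the designer's settings, then report either a "VALIDATED" status or the cause of the error) can be sketched as a simple check routine. The parameter names below are hypothetical placeholders, not the tool's actual settings:

```python
# Sketch of the VALIDATE step: inspect the input settings and return a
# (status, message) pair that the GUI could display in its notification box.

def validate_settings(settings: dict) -> tuple[str, str]:
    """Return ("VALIDATED" | "INVALIDATED", message) for the input settings."""
    required = ("platform_model", "num_cores", "frequency_mhz")
    for key in required:
        if key not in settings:
            return "INVALIDATED", f"missing parameter: {key}"
    if settings["num_cores"] < 1:
        return "INVALIDATED", "num_cores must be >= 1"
    return "VALIDATED", "input settings are correct"

status, msg = validate_settings(
    {"platform_model": "m1", "num_cores": 4, "frequency_mhz": 400})
print(status)  # VALIDATED
```

The GENERATE step would only be enabled once this check returns the "VALIDATED" status.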
|
2019
|
[
"spi"
] |
[
"185974",
"255534"
] |
[
"13594"
] |
01533664
|
Construction of a continuous viscosity solution to a truncated equation. Let m ≥ 1; we first truncate the initial data as we did for f_m in the proof of Theorem 2.1 by considering
u_{0m}(x) = min{ u_0(x) + (1/m) φ(x), m }.   (40)
Since u_0 ∈ E_µ(R^N), we get
|u_{0m}(x)| ≤ C_m,   (41)
|u_{0m}(x) - u_{0m}(y)| ≤ L_m |x - y|.   (42)
Moreover, u_{0m} still satisfies [START_REF] Ciomaga | On the strong maximum principle for second-order nonlinear parabolic integro-differential equations[END_REF] with the constant C_0 + µ, and u_{0m} → u_0 locally uniformly in R^N. We then introduce the truncated evolution problem (2) with H_{mn} (respectively f_m) defined by (33) (respectively (31)) for m, n ≥ 1 and with the initial data defined by (40). The classical comparison principle (see Theorem 4.2) holds for bounded discontinuous viscosity sub- and supersolutions of
u_t - F(x, [u]) + ⟨b(x), Du⟩ + H_{mn}(x, Du) = f_m(x)   in Q_T,   (43)
with the initial data u_{mn}(x, 0) = u_{0m}(x). Notice that u^±_{mn}(x, t) = ±(C_m + (C_m + C_H) t) are respectively a super- and a subsolution of (43) satisfying the initial conditions
u^-_{mn}(x, 0) = -C_m ≤ u_{0m}(x) ≤ C_m = u^+_{mn}(x, 0). Then, by means of Perron's method, we obtain the existence and uniqueness of a bounded continuous viscosity solution u_{mn} of (43) such that |u_{mn}| ≤ C_{m,T}, independently of n. We refer to the classical references [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF] for the details. 2. Convergence of the solution of the truncated equation to a continuous solution of (2).
|
2019
|
[
"math"
] |
[
"75"
] |
[
"13594"
] |
00374703
|
Before the implementation of the control law (25) in the electropneumatic system, co-simulation was used. This technique consists in jointly using the software developed by the researchers in modeling and the software dedicated to system control. Thus, the physical model of the electropneumatic system (1) was treated by AMESim, and the control law (25) was developed in Simulink. Satisfactory simulation results were obtained. Then, the control law was implemented using a dSPACE 1104 controller board with the dedicated digital signal processor. The measured signals, all analog, were run through signal conditioning before being read by the 16-bit analog/digital converter. Two pressure sensors are used; their precision is equal to 700 Pa (0.1% of the extended measurement) and their combined non-linearity and hysteresis is equal to ±0.1% of the extended measurement. The cylinder velocity is determined by analog differentiation and low-pass filtering of the position output given by an analog potentiometer (its precision and repeatability are equal to 10 µm and its linearity is 0.05% of the extended measurement). The acceleration information is obtained by numerically differentiating the velocity. In order to ensure the system convergence, the gains must only be positive.
|
2008
|
[
"spi"
] |
[
"31070",
"31070",
"31070",
"31070"
] |
[
"859512",
"171914",
"8048",
"839367"
] |
01470317
|
We also compared the financial policies: investment policy, capital structure and payout policies. Table 4 presents the results of these tests. Table 3 shows that the financial performance of firms where the government is a shareholder is not significantly different from that of private firms. The public firms' return on equity, return on sales and stock price performance are smaller, but not significantly so. This similar performance is partly explained by the fact that private and public financial policies are quite close (Table 4). Indeed, payout policies, debt structure and the ratio of investment expenditures to assets are comparable. We only observe significant differences in return on assets (ROA), Tobin's Q and employee productivity. We also notice greater investment expenditures on sales in public firms, and a different level of debt. However, we think the differences in debt and Tobin's Q are not significant:
-The difference in the level of debt is only significant when comparing with the second sample, and it concerns short-term debt, as the long-term debt ratio is similar. So we assume that the capital structure does not differ significantly between public and private firms. -Moreover, differences in Tobin's Q only appear when comparing public firms to the first sample.
|
2009
|
[
"shs",
"qfin"
] |
[
"108098"
] |
[
"977041"
] |
02057673
|
The puzzle has become more complex with the growing importance of share repurchases. This paper surveys research on payout policies, focusing on the firm's point of view. It sets out some answers to why firms pay out. We will not focus on explanations of share price reactions or investment strategies linked to payout policies. We will only focus on what happens inside the firm. To follow this path, we have chosen to classify concepts depending on who takes the decision in the firm: shareholders or managers. If there is no agency conflict, decisions will be in accordance with shareholders' wishes. Otherwise, managers will influence payout policies. Of course, shareholders' wishes are not homogeneous, and some shareholders may be more influential than others. In French firms, the major agency conflict is not between managers and shareholders, but between controlling shareholders and minority shareholders.
|
2006
|
[
"shs"
] |
[
"108098",
"172354"
] |
[
"977041"
] |
03468357
|
For convenience, we group these control variables into two. The first group covers domestic real economic factors such as growth, income level, economic structure, and investment or saving rate. The second group includes monetary variables, financial structure and global factors. These control variables are first individually considered and then combined in our estimation. The choice of these two groups of economic factors is mainly motivated by the conventional literature, and the main purpose is for them to serve as control variables for the robustness testing of our core findings. Some of the coefficients of these control variables may offer more intuitively expected signs, while others can be ambiguous in theory. For instance, a higher income level, as measured by per capita PPP, should lift corporate leverage, as it is typically associated with a deeper financial system and greater repayment and servicing capacities. Also, higher economic growth should directly help ease the corporate debt/GDP ratio, other things being equal. Its coefficient therefore should be expected to be negative. In addition, a higher investment rate may suggest a greater need for external financing, hence we may expect a positive coefficient.
|
2019
|
[
"shs",
"qfin"
] |
[
"1082618",
"1063734"
] |
[
"1119489",
"749495"
] |
03066218
|
As a consequence, its width can be changed at will, thus speeding up or slowing down the erase process. The triangular barrier also allows for much lower electric fields during the erase process, thus increasing the durability of the device. Finally, QDs in III-V semiconductors can be manufactured to store holes instead of electrons by employing the type II band alignment. Since holes have a larger effective mass than electrons, their storage time is much longer for the same localization energy. III. QD-FLASH CELLs
A QD flash cell is a modulation-doped field effect transistor (MODFET) in which a layer of QDs has been embedded between the 2-dimensional hole gas (2DHG) and the gate. Fully functioning prototypes were manufactured 8 years ago using InAs QDs embedded in GaAs or Al0.9Ga0.1As [6], thus demonstrating the feasibility of the QD-Flash concept. The charge state of the QDs is controlled by the gate voltage and read-out is done using the 2DHG. The structure of the QD-Flash is sketched in Fig. 1. Fig. 1: Sketch of a QD-Flash
The logic state "0" is realized in a QD-Flash when the QDs are not occupied by holes. Conversely, the logic state "1" is realized when holes are localized in the QDs.
|
2019
|
[
"phys"
] |
[
"86624",
"559375",
"1296",
"1067467"
] |
[
"741251"
] |
04187851
|
The detection angle corresponds to the angle between the surface and the direction of the photoelectrons and was varied between 25° (close to grazing incidence, surface sensitive) and 75° (nearly normal incidence, bulk sensitive). All the spectra presented here were obtained for a 45° angle. The surface morphology was analyzed with a Zeiss Supra 55 scanning electron microscope (SEM) using an InLens detector, i.e. by measuring backscattered electrons along the beam direction. The incident electron beam was set to 1 keV for better surface sensitivity. In the case of 2D-BN, the InLens intensity increases with the 2D-BN thickness [START_REF] Sutter | Thickness determination of few-layer hexagonal boron nitride films by scanning electron microscopy and Auger electron spectroscopy[END_REF]. Raman spectra were measured using a Horiba Scientific LabRAM HR confocal spectrometer with a 473 nm laser spot (power ~10mW). The beam was focused to a size smaller than 1 µm by a 100x objective. Results and discussion
When growing 2D-BN by PAMBE, the flux of injected species reaching the sample surface plays a key role in the film stoichiometry, thickness and morphology. The impact of the atomic boron flux was first analyzed by varying the B-cell temperature T_B from 1700 to 1850°C, resulting in an estimated B flux ratio of 11 [START_REF] Paule | A Langmuir determination of the sublimation pressure of boron[END_REF] between the highest and lowest flux (see Table 2). The N2 plasma cell between the metallic coupled 2D-BN layer at the interface and the uncoupled components for films thicker than one monolayer, consistent with the already reported values [START_REF] Preobrajenski | Ni 3d-BN π hybridization at the h-BN ∕ Ni(111) interface observed with core-level spectroscopies[END_REF].
|
2023
|
[
"spi",
"phys"
] |
[
"1066983",
"1067464",
"1066983"
] |
[
"741251"
] |
03727283
|
If we assume that the gas transfer occurs only through the thin films (border-blocking assumption), we can thus rewrite equation (6.6) as follows:
dA/dt = β h/d = β (1 - 2 r_PB/d) = β [1 - 2 (ε/(α d))^{1/2} A^{1/4}] (6.9)
where we corrected the ideal area growth rate β by the actual portion of film available for gas transfer. We remark however that equation (6.9) considers only the film reduction due to the surface Plateau borders, while it neglects the further area reduction due to the presence of the vertical ones at each bubble vertex. We can now think of introducing a critical bubble area A c at which the two surface Plateau borders merge and thus the thin vertical films vanish. This happens when r PB = d/2, thus from equation (6.7) we obtain:
A_c = α² d² / (16 ε²) (6.10)
and introducing this relation in equation (6.9) we can write:
dA/dt = β [1 - (A/A_c)^{1/4}] (6.11)
As we can see from this relation, once the foam reaches the critical area A_c the coarsening rate goes to zero, thus leading to an unphysical arrest of coarsening due to the film disappearance. In the proximity of A_c one should thus reintroduce the gas transfer through the Plateau borders in order to describe the coarsening rate correctly. In the samples that we are considering in this section, the liquid fraction is ε = 10% and the gap is d = 2 mm, thus from (6.10) we obtain a critical area of roughly 58 mm², as the geometrical prefactor α can be calculated to be approximately 1.52 for a rather dry foam [START_REF] Gay | Rapid Plateau border size variations expected in three simple experiments on 2D liquid foams[END_REF]. From the mean area evolution shown in figure 6.14 (a) we see that the foam samples do not reach this critical area; we are thus far from the film-vanishing regime. To simplify the notation, we can make equation (6.11) dimensionless by introducing a dimensionless time t̄ = βt/A_c and a dimensionless area Ā = A/A_c, so that it becomes:
dĀ/dt̄ = 1 - Ā^{1/4} (6.12)
We can solve this differential equation under the initial condition Ā(t̄ = 0) = Ā₀, obtaining the following solution:
t̄ = ∫_{Ā₀}^{Ā} dĀ′/(1 - Ā′^{1/4}) = 4(Ā₀^{1/4} - Ā^{1/4}) + 2(Ā₀^{1/2} - Ā^{1/2}) + (4/3)(Ā₀^{3/4} - Ā^{3/4}) + 4 ln[(1 - Ā₀^{1/4})/(1 - Ā^{1/4})] (6.13)
This gives an implicit relation t̄(Ā) that we can compare with our experimental results. From our experiments we estimate the value A₀ by fitting A(t) with a power law function, and we take β to be roughly 3.6·10⁻⁴ mm²/s, the effective diffusion coefficient estimated in quasi-2D drained foams made of the same Fairy solution [START_REF] Guidolin | Controlling foam ageing in viscoelastic media[END_REF]. In figure 6.15 we compare the experimental curve Ā(t̄) with the one predicted by equation (6.13).
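The implicit solution (6.13) can be cross-checked against a direct numerical integration of the dimensionless ODE (6.12). The sketch below uses illustrative values of Ā₀ and Ā (not the experimental ones) and a simple forward-Euler scheme:

```python
# Numerical check of the implicit coarsening solution (6.13), assuming the
# dimensionless ODE (6.12): dA/dt = 1 - A**(1/4) with A(0) = A0 < 1.
# The values A0 = 0.05 and A1 = 0.5 are illustrative only.
import math

def t_of_A(A, A0):
    """Implicit solution t(A) of dA/dt = 1 - A**(1/4), from eq. (6.13)."""
    q = lambda x: x ** 0.25
    return (4 * (q(A0) - q(A))
            + 2 * (math.sqrt(A0) - math.sqrt(A))
            + (4 / 3) * (A0 ** 0.75 - A ** 0.75)
            + 4 * math.log((1 - q(A0)) / (1 - q(A))))

def A_of_t(t_end, A0, dt=1e-5):
    """Forward-Euler integration of dA/dt = 1 - A**(1/4)."""
    A, t = A0, 0.0
    while t < t_end:
        A += dt * (1 - A ** 0.25)
        t += dt
    return A

A0, A1 = 0.05, 0.5
t1 = t_of_A(A1, A0)       # time predicted by the closed form
A_num = A_of_t(t1, A0)    # area reached by direct integration at that time
print(abs(A_num - A1))    # should be small (Euler truncation error)
```

The two routes agree up to the truncation error of the Euler scheme, which supports the reconstruction of the antiderivative in (6.13).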
|
2022
|
[
"phys"
] |
[
"1051087"
] |
[
"1150810"
] |
00664528
|
AdaBoost considers each curve as a weak classifier and iteratively selects relevant curves to increase the authentication accuracy. We demonstrate these ideas on a subset taken from the FRGC v2 (Face Recognition Grand Challenge) database. The proposed approach increases authentication performance relative to a simple fusion of the scores from all curves. Introduction
In order to meet the needs of security, a growing international concern, biometrics is presented as a potentially powerful solution. Biometrics aims to use behavioral and/or physiological characteristics of people to recognize them or to verify their identities. In particular, fingerprint and iris-based systems have shown good performance. However, they require the cooperation of users, who may find them intrusive. Since face recognition is contactless and less restrictive, it emerges as a more attractive and natural biometric for security applications. In the last few years, face recognition using the 3D shape of the face has emerged as a major research trend due to its theoretical robustness to lighting conditions and pose variations. However, the problem remains open on the issue of robustness of these approaches to facial expressions [START_REF] Amor | New experiments on icp-based 3D face recognition and authentication[END_REF].
|
2011
|
[
"info"
] |
[
"111636",
"110943",
"111636",
"144103",
"111636",
"144103",
"81932",
"110943"
] |
[
"919425",
"18887",
"170389",
"906382"
] |
00726088
|
I. INTRODUCTION
Since facial biometrics is natural, contact-free, non-intrusive, and psychologically supported, it has emerged as a popular modality in the biometrics community. Unfortunately, the technology for 2D image-based face recognition still faces difficult challenges, such as pose variations, changes in lighting conditions, occlusions, and facial expressions. Due to the robustness of 3D observations to lighting conditions and pose variations, face recognition using shapes of facial surfaces has become a major research area in the last few years. Many of the state-of-the-art methods have focused on the variability caused by facial deformations, e.g. those due to facial expressions, and have proposed methods that are robust to such shape variations. At the same time, gender classification is emerging as an interesting problem that can be a useful preprocessing step for face recognition. Gender is similar to other soft biometric traits, such as skin color, age, eye color, and so on, used by humans to distinguish their peers. Most existing work on gender classification uses 2D images to extract distinctive facial features like hair density and the inner morphology of the face, but 3D shape has not yet been used extensively for gender classification. Several works in psychology have shown that gender has close relationships with both 2D information and 3D shape [START_REF] Bruce | Sex discrimination: how do we tell the difference between male and female faces? [END_REF] [START_REF] O'toole | Sex classification is better with three-dimensional head structure than with image intensity information[END_REF], which motivates the use of 3D shapes for gender classification.
|
2012
|
[
"info"
] |
[
"111636",
"110943",
"111636",
"144103",
"111636",
"144103",
"81932",
"110943"
] |
[
"919425",
"18887",
"170389",
"906382"
] |
00166017
|
In this article, we propose a new protocol to achieve delay guarantees in wireless multihop networks. With this study, we show that it is possible to design an efficient measurement-based admission control protocol for the delay parameter. The proposed protocol, called DEAN (Delay Estimation in Ad Hoc Networks), is based on an a priori estimation of the average end-to-end delay. This estimation is derived from a simple model of IEEE 802.11 nodes and from an accurate evaluation of each link's collision probability. By combining this estimation with accurate admission controls, the estimated delay is guaranteed after a new flow starts. Such guarantees depend mainly on a strong correlation between the estimated delay and the available bandwidth, as well as on an efficient estimation of the available bandwidth. The latter is estimated with the ABE (Available Bandwidth Estimation) protocol, which provides an accurate evaluation [START_REF] Sarr | Improving Accuracy in Available Bandwidth Estimation for 802.11-based Ad Hoc Networks[END_REF]. Moreover, our protocol DEAN is not costly in terms of overhead since it reuses the control packets required for the estimation of the available bandwidth provided by ABE and thus does not add any overhead. Finally, extensive simulations show that our protocol DEAN is very efficient at providing delay guarantees. The remainder of this paper is organised as follows: Section 2 presents related work.
|
2007
|
[
"info"
] |
[
"2372",
"35418"
] |
[
"833995",
"6601"
] |
00870689
|
BCA corrects for the output loss in CGE models but less so in sectoral models. The explanation seems to be that in PE models a higher output loss is due to a drop in demand for CO2-intensive materials, a loss which is mitigated by BCA. The features of BCA (coverage, level of adjustment, etc.) are of the highest importance for WTO consistency, feasibility, and political acceptability. The purpose of the meta-regression was also to assess their impact on competitiveness and leakage. In the meta-regression, the inclusion of all sectors in the scheme appears to be the most efficient feature to reduce the leakage ratio, followed by the inclusion of export rebates and an adjustment level based on foreign carbon content. Yet one can guess, in the case of a hypothetical BCA implementation, that political and juridical aspects will be the more determinant, and that only a "light" version (adjustment based on best available technologies, probably without the inclusion of indirect emissions) is likely to see the light of day. Besides, the importance of the coalition size and the abatement target is statistically confirmed and quantified: the smaller the abating coalition and the more stringent the cap, the bigger the leakage ratio. Policy features providing where- and what-flexibility (the possibility of offsets and extension to all greenhouse gases) reduce the leakage ratio. Finally, this meta-analysis confirms the importance of Armington elasticities in the leakage ratio estimation, a result crucial in terms of uncertainty analysis.
|
2013
|
[
"shs"
] |
[
"135977",
"135977"
] |
[
"3394",
"1130"
] |
01137932
|
For others, a-b means that the parameter takes more than two different values within an article, and that there are b values taken by the parameter in total. Theoretically, the bigger the abatement, the higher the leakage in absolute terms (tons of carbon emissions). As the leakage ratio is the leakage in absolute terms divided by the abatement, and the latter increases as well, there is an indeterminacy in the relationship between the abatement and the leakage ratio. In the meta-regression model, the correlation is positive, but the statistical significance is weak (a p-value below 0.1 is reached only for the no-BCAs sample), which may be attributable to the small variability of this parameter. In Alexeeva-Talebi et al. (2012b) (which was not included in our study because there were no BCAs), the correlation is negative (leakage of 32%, 29% and 27% for Europe abating respectively 10%, 20% and 30% of its emissions). In [START_REF] Böhringer | Alternative designs for tariffs on embodied carbon: A global cost-effectiveness analysis[END_REF], however, the relationship is positive (leakage of 15.3%, 17.9% and 21% for Europe abating respectively 10%, 20% and 30% of its emissions). Concerning the policy parameters, authorizing permit trading (linking) within the coalition is not statistically significant. In the two studies that explicitly change this parameter in the different scenarios [START_REF] Lanzi | Alternative approaches for levelling carbon prices in a world with fragmented carbon markets[END_REF][START_REF] Springmann | A look inwards: carbon tariffs versus internal improvements in emissions-trading systems[END_REF], permit trading diminishes leakage to a small extent. It is therefore the lack of variability between studies that may explain this non-significance (about half of the articles have permit trading in all their scenarios and the other half in none of their scenarios).
Conversely, extending carbon pricing to all GHG sources is statistically significant, especially when BCAs are implemented (decreasing the leakage ratio by 6 percentage points).
|
2014
|
[
"shs"
] |
[
"135977",
"148117",
"441569",
"135977"
] |
[
"3394",
"1130"
] |
01778383
|
The collective experience of forwards gives a clear advantage during phases of collective combat. The art of working together, sharing the action either on offence or defence, is the essence of rugby. The collective investment and shared effort in all forwards' actions is crucial, whether in rucks to keep the ball, synchronisation during line-outs, mauls for placement, or collective push and orientation in scrums. Containing and guiding teammates during scrums starts with a collective link, placement and work throughout the push. This element of the game combines physical skill and a strong complicity, acquired over the years. Collectively adapting to adverse scrums, providing a common effort, directing scrum pressure, and meeting together in a difficult situation require shared knowledge and combined action. This action knowledge is central to forwards' play and is apparently acquired more slowly. This may be why teams winning the World Cup have forwards with a collective experience significantly higher than those which do not win. We show that some factors like size and experience might be predictors of success. However, it is probable that there are other factors that explain why only four countries have ever won the Rugby World Cup. Indeed, winning teams in a Rugby World Cup may also owe their victory to their nation's economic, historical, political and technological investment in this sport. [START_REF] Guillaume | Success in developing regions: world records evolution through a geopolitical prism[END_REF]
CONCLUSION
We show that forwards and backs are becoming heavier from one World Cup to the next.
|
2012
|
[
"shs"
] |
[
"441096",
"415984",
"301664",
"441096",
"415984",
"441096",
"415984",
"303623",
"439907",
"441096",
"415984",
"301664"
] |
[
"1027328",
"19038",
"1031020",
"1041396"
] |
00607757
|
In these conditions, character displacement is analogous to evolutionary branching, without the need for positive assortative mating to evolve. We determine how other ecological and environmental conditions influence the probability of stable coexistence of the two populations. We finally discuss how the suitable conditions we found for character displacement are likely to be met in natural populations, and in particular in the Galápagos finches populations. Models
Secondary contact scenario
We consider an initial resident (ancestral) population of N 0 individuals, monomorphic with ecological trait u 0 . Due to a simple "quantitative genetics" rule for trait inheritance (see Section 2.2), the population is no longer monomorphic after a few generations. We let the resident population reach its ecological equilibrium, determined by the interaction with its dynamic food resources. We choose an ecological model such that the trait of the population converges under directional selection to a singular point u * [START_REF] Geritz | Evolutionarily singular strategies and the adaptive growth and branching of the evolutionary tree[END_REF] where the mutant invasion gradient vanishes (assuming 0 < u * < 1). Depending on our choice of parameter values, selection becomes either stabilizing or disruptive at this point. In the first case, the singular point is a fitness maximum called a "continuously stable strategy" (CSS): all mutants in a resident population at u * have a negative fitness, so that they cannot invade the resident population. Selection thus keeps the population at u * .
|
2011
|
[
"math",
"sdv"
] |
[
"31591",
"31591",
"102",
"31591"
] |
[
"183783"
] |
00447327
|
Third, the connectivity of subpopulations via migration is assumed constant over time, except in [START_REF] Whitlock | The effective size of a subdivided population[END_REF] and [START_REF] Whitlock | Fixation probability and time in subdivided populations[END_REF]. However, all components of the landscape are dynamic simultaneously in natural populations. For example, external factors can cause variations of connections between demes, to the point where connectivity either falls to its minimum (unconnected demes, e.g. vicariance) or rises to its maximum (fusion of demes, e.g. postglacial secondary contacts) [START_REF] Young | Morphological and genetic evidence for vicariance and refugium in Atlantic and Gulf of Mexico populations of the hermit crab Pagurus longicarpus[END_REF]. Climatic variations as well as volcanic events can cause sea level changes resulting in separations and fusions of islands [START_REF] Cook | Species richness in Madeiran land snails, and its causes[END_REF]. Repeated changes of the water level causing fragmentations and fusions of lakes are known in the Great African Lakes [START_REF] Owen | Major low levels of Lake Malawi and their implication for speciation rates in cichlid fishes[END_REF][START_REF] Delvaux | Age of Lake Malawi (Nyasa) and water level fluctuations. Tech. rep[END_REF][START_REF] Galis | Why are so many cichlid species[END_REF][START_REF] Stiassny | Cichlids of the rift lakes[END_REF]. At a different spatiotemporal scale, the number and size of populations can vary because of dispersal and recolonization events (establishment of new colonies and their later fusion) [START_REF] Deheer | Colony genetic organization, fusion and inbreeding in Reticulitermes flavipes from the midwestern US[END_REF][START_REF] Vasquez | Intraspecific aggression and colony fusion in the Argentine ant[END_REF]. All aspects of the spatial structure of a population can change because of new ecological interactions, e.g. 
the emergence or extinction of a predator or parasite [START_REF] Batzli | Dynamics of small mammal populations: a review[END_REF]. Contemporary fragmentation of habitat due to human action is also constantly changing the landscape [START_REF] Davies | Human impacts and the global distribution of extinction risk[END_REF].
|
2009
|
[
"math",
"sdv"
] |
[
"31591",
"31591",
"102"
] |
[
"183783"
] |
03121911
|
We have hence characterised a new reason for the failure of spatial spread of suppression drives, in the form of opposing demographic advection. This phenomenon was expected given previous work on spatial dynamics of alleles (as reviewed in [START_REF] Dhole | Gene drive dynamics in natural populations: The importance of density dependence, space and sex[END_REF]), but we clarify the conditions under which it occurs. Other models of spatial spread, and in particular individual-based models, had already identified some reasons why the spatial spread of a suppression drive may fail. If the drive suppresses the local population too much and if the density of the target population is spatially heterogeneous, the drive may go extinct locally, with the eradication of a local subpopulation before it can spread to other locations [START_REF] North | Modelling the spatial spread of a homing endonuclease gene in a mosquito population[END_REF]. Strategies relying on the eradication of the target population are also limited by the potential recolonization of emptied locations by wild-type individuals [START_REF] Champer | Suppression gene drive in continuous space can result in unstable persistence of both drive and wild-type alleles[END_REF][START_REF] North | Modelling the spatial spread of a homing endonuclease gene in a mosquito population[END_REF][START_REF] North | Modelling the potential of genetic control of malaria mosquitoes at national scale[END_REF] (such recolonizations can also be observed in our stochastic simulations). Finally, the evolution of resistance to the drive itself, which already hinders the success of gene drives in well-mixed populations [START_REF] Unckless | Evolution of resistance against crispr/cas9 gene drive[END_REF], also affects their spatial spread [START_REF] Beaghton | Requirements for Driving Antipathogen Effector Genes into Populations of Disease Vectors by Homing[END_REF].
Our model was derived under limiting assumptions, including a 100% homing rate, and either homing taking place very early in development or the drive being dominant. Gene drives currently being designed in laboratories do not exactly match these assumptions. While we are pessimistic that analytical results can be obtained when these assumptions are relaxed, future numerical or computational (individual-based) studies will be useful to assess the generality of our findings. The results of individual-based simulations of the spatial spread of underdominance gene drive systems [START_REF] Champer | Population Dynamics of Underdominance Gene Drive Systems in Continuous Space[END_REF] are encouraging.
|
2021
|
[
"math",
"sdv"
] |
[
"441569",
"193738",
"521754",
"542077"
] |
[
"20905",
"12882"
] |
02120491
|
For example, relatives could make an investment in a P2P-FIT-RET project at the birth of a child (in the child's name). This investment would come to maturity and be paid back with interest on the child's 20 th birthday, similar to a Registered Educational Savings Plan (RESP) 4 . A similar investment could be made with the intention of using the repayment as a means of supplementing retirement income. To ensure that investors are repaid, a waterfall payment scheme could be combined with an Escrow account model 5 . As the solar PV panels generate income, the amount due to the lenders would be funneled into a holdings account (to earn interest) as the primary flow (Figure 4). Only when the required monthly (or yearly) payments are made into this fund would the person with the solar PV system on their roof receive payment from the panels (the overflow or secondary flow) that month (or year). All of the models introduced in Sec. 3.1 could be modified to include investment for FIT-RETs around the world. Since all of the P2P portals have a web interface, opening access to members globally should be possible. Section 5 will discuss how the FIT-RET can be modeled as an investment and a micro-entrepreneurial activity. P2P Framework Requirements for Success
Modifications of loan conditions are needed to take full advantage of the earning potential, and will require long-term investment on the part of the investors.
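The waterfall-plus-escrow payment logic described above can be sketched as a simple allocation rule. The function and the amounts below are purely hypothetical illustrations of the scheme, not part of any existing P2P portal or of the cited models:

```python
def waterfall(revenue, due, escrow_balance=0.0):
    """Split one period's PV revenue (hypothetical sketch): the amount owed
    to lenders goes into the escrow account first (primary flow); only the
    surplus reaches the system owner (secondary/overflow flow)."""
    to_escrow = min(revenue, due)       # lenders are paid before the owner
    to_owner = revenue - to_escrow      # owner gets whatever overflows
    return escrow_balance + to_escrow, to_owner

# Month with enough revenue: lenders paid in full, owner gets the overflow.
escrow, owner = waterfall(revenue=120.0, due=100.0)
print(escrow, owner)   # 100.0 20.0

# Shortfall month: everything goes to the escrow, owner receives nothing.
escrow, owner = waterfall(revenue=80.0, due=100.0)
print(escrow, owner)   # 80.0 0.0
```

The key design point is the strict priority ordering: the owner's payment is residual, which is what makes the lenders' repayment stream senior.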
|
2011
|
[
"shs"
] |
[
"3557",
"3557",
"480742"
] |
[
"922049"
] |
02119708
|
ABS and PLA are both thermoplastics that can be injection-molded, each with their own benefits: ABS is rigid and durable, while PLA is plant-based and can be recycled and composted. The melting temperatures of PLA and ABS allow for safe extrusion, while being high enough to ensure shape retention. Distributed recycling is also being developed to recycle post-consumer products into filament for a 3-D printer, which could further reduce the cost and resources required for distributed manufacturing [12]. The use of 3-D printers allows for shapes previously impossible under conventional manufacturing methods (e.g. injection molding), along with the ability to manipulate the inside of an object in multiple ways, such as fill composition or adding internal parts. The ability to manipulate shapes internally during production has the potential to reduce additional machining during processing. Holes, voids, and other features within an object that were impossible using methods similar to injection molding previously had to be made using tools such as drill presses. These features can now be created during the design step and automatically produced using the RepRap. The ability to change fill composition allows more complicated shapes to be produced with structural integrity using less material. This property, combined with the reduction in the embodied energy of transportation made possible by distributed manufacturing, allows for the possibility that it could be less energy- and emission-intensive than conventional manufacturing. However, questions remain about the environmental benefits of distributed manufacturing due to the potential for increases in the overall embodied energy of manufacturing caused by the reduction in scale. This preliminary study explores these questions by probing the technical potential of using a distributed network of RepRaps to produce goods.
|
2013
|
[
"chim",
"sde"
] |
[
"186714"
] |
[
"922049"
] |
01854291
|
Since large firms' worker weights are large, the assertion of Kalantzis et al. (2012) is consistent with our finding here. The behavior and distribution of the individual dispatched worker ratio are shown in Figure 4. The average individual dispatched worker ratio is slightly higher than the aggregate ratio. The average ratio ranges roughly between six and ten percent. The median values are lower than the averages, with a spread between three and six percent, which is almost the same as that of the aggregate ratios. The range between the 25th and 75th percentiles is around or less than ten percentage points and is much smaller than that for part-time workers. Additionally, the range shrinks slightly after 2007. As a result, the variation pattern of the average individual ratio is quite similar to that of the aggregate ratio. Dispatched worker acceptance is not as heterogeneous as part-time employment. However, note that firms that do not use dispatched workers account for a large fraction of the total, specifically, about half.
|
2018
|
[
"shs"
] |
[
"24516",
"478540"
] |
[
"1035335",
"18477"
] |
00357773
|
The stronger the SOI, the smaller l so . At B = 0, the interference of time reversed paths leads to a reduction of the backscattering probability below its classical value [START_REF] Bergmann | [END_REF], an effect called weak anti-localization, if l so ≪ l ϕ (strong SOI). It manifests itself as a positive (rather than a negative) magnetoresistance at small fields around B = 0 [7]. Weak anti-localization was experimentally observed by Bergmann in thin metallic films [8]. As the strength of SOI is increased, a transition from weak localization to weak anti-localization is observed. Weak anti-localization was subsequently observed also in semiconductor heterostructures [9,10]. A smaller zero-field anti-localization resistance minimum superimposed on a larger weak localization peak was seen in the magnetoresistance of an inversion layer of InP [9], and an n-type GaAs/AlGaAs heterostructure [10]. A fully developed anti-localization minimum was observed by Chen et al. in the magnetoresistance of an InAs quantum well [11]. Koga et al. demonstrated the transition from a zero-field weak localization maximum to a weak anti-localization minimum by tuning the symmetry of an InGaAs quantum well (QW) with a metallic top-gate [12]. Weak anti-localization is expected to be particularly pronounced in the case of p-type GaAs heterostructures due to the strong SOI in these systems. Experimental studies of weak anti-localization in Be-doped (100) p-type GaAs heterostructures are reported in Refs.
|
2008
|
[
"phys"
] |
[
"548219",
"1296",
"548219",
"548219",
"548219",
"1150306",
"1150306"
] |
[
"760824"
] |
00357350
|
We note that the temperature dependence of the conductance is pronounced for all magnetic fields investigated. The curves in Fig. 2 (a) are not vertically offset. Rather, the background conductance changes from about 1/70 kΩ -1 to 1/37 kΩ -1 when the temperature is increased from 65 mK to 340 mK. In the same temperature range the resistance around B = 0 changes from 90 kΩ to 35 kΩ, which is consistent with the data of Fig. 3 (b). We conclude that there are two different contributions to the temperature dependence of the resistance, one which is present for the entire magnetic field regime investigated and another one which is particularly pronounced around B = 0. These experimental features are linked to the presence of In in the contact material. Motivated by the temperature and magnetic field dependence of the observed effects, we discuss in the following possible relations to type II superconductivity in the In/Zn/Au contact pads. Proximity effects extending between semiconductor contacts [12] have been investigated in InAs-Nb systems where great care was taken to optimize the interface between the superconductor and the semiconductor. Indium ohmic contacts were deposited on an n-type Al-GaAs heterostructure at a distance of 1 µm and the flow of a supercurrent was demonstrated [13] and explained in the framework of phase-coherent Andreev reflections [14]. Once the mobility of the electron gas was reduced by electron-beam irradiation, a zero-bias dip in the differential conductance was observed, which was strongly reduced as the magnetic field was increased above 40 mT.
|
2008
|
[
"phys"
] |
[
"1296"
] |
[
"760824"
] |
01922760
|
PANEL CONSTRUCTION
This section briefly describes the general method used to construct all the panels used in this paper. Figure 1 schematically describes the main elements and accessories needed. The choices for the panel setup and its constitutive components were guided by simplicity, with only eight main components (parts A to H) and simple building steps (see Figure 2). Precise descriptions and additional technical details for all needed parts (A to L) are provided in [START_REF] Robin | A plane and thin panel with representative simply supported boundary conditions for laboratory vibroacoustic test[END_REF]. The frame is made of parts A, B, E and F, all made of steel. The panel (part H) and supporting blades (parts C and D) are made of aluminium. Note that steel is chosen for the frame partially for reasons of cost, but mainly to ensure a small 'panel-and-edges to frame' mass ratio so that the frame nearly behaves as a rigid and massive foundation from the panel's point of view. A mass ratio of approximately 0.09 is finally obtained for panel A, described in section 5.1 (with a weight of 20.7 kg for the frame, and 1.8 kg for the panel and edges). Figure 2 then gives visual instructions for assembly, to be followed from left to right and top to bottom. Note that a main advantage of this setup is that the 'panel-and-blades' part can be easily disassembled from and reassembled onto the frame.
|
2018
|
[
"phys"
] |
[
"110548",
"110548",
"110548",
"12568",
"12568",
"12568",
"12568",
"31116",
"31116"
] |
[
"174243",
"173419",
"16476",
"735233",
"736260",
"19932"
] |
01873991
|
Seoi-nage assured this dominant position for a long time [START_REF] Sterkowicz | Differences in the specific movement activity of men and women practicing judo (Based on the analysis of the judo bouts during the 1996 Olympic games)[END_REF]. Two significant events characterized this category: the first was the return in force of Ashi-waza, and the second was the disappearance of Koshi-waza, which posed the problem of its effectiveness in competitions. The introduction of new refereeing rules [START_REF]Refereeing new rules[END_REF] resulted in a decline in the activity of most technical groups in Nage-waza, except for Ashi-waza. The activity of medalists in Ne-waza improved, as evidenced by the increasing frequency of Osae-komi-waza and Kansetsu-waza. This increase in no way altered its share, which remained small compared to that of Nage-waza [START_REF] Sterkowicz | Techniques used by judoists during the world and Olympic tournaments 1995-1999[END_REF]. The global technical repertoire, which showed the number of techniques mastered in Nage-waza by the medalists, was large [START_REF] Boguszewski | Technical fitness training of judokas-finalists of top world tournaments in the years 2005-2008[END_REF]. The present analysis confirmed the increasing use of Kokusai-shiai-waza (innovative techniques). Finding alternative solutions to defensive systems seemed to be a concern within this weight category [START_REF] Inman | Classification of innovative international competition techniques[END_REF]. This creativity concerned Te-waza, Sutemi-waza, and to a lesser degree Ashi-waza. Te-waza techniques offered many opportunities for creativity thanks to the multiple hand placements, which explained the considerable number of variations attempted in competition.
|
2014
|
[
"shs"
] |
[
"569170"
] |
[
"176253"
] |
03190816
|
Various studies showed this dominant tendency of Ashi-waza [START_REF] Miller | Throwing Technique and Efficiency in the 2013 British Judo Championships[END_REF][START_REF] Sacripanti | The increasing importance of Ashi Waza, in high level competition. (Their Biomechanics, and small changes in the form)[END_REF][START_REF] Pereira Martins | Techniques utilised at 2017 Judo World Championship and their classification: comparisons between sexes, weight categories, winners and non-winners. Ido Movement for Culture[END_REF]. From a strategic standpoint, the use of Ashi-waza allows the judoka not to get too close to the opponent, maintaining a specific safety distance. Counterattacks are difficult because of the ban on hand grips below the belt. Compared to the others, it is also the least risky group with which to attack the opponent. [START_REF] Sacripanti | The increasing importance of Ashi Waza, in high level competition. (Their Biomechanics, and small changes in the form)[END_REF] corroborated the difficulty opponents have in defending against Ashi-waza techniques. These arguments justify the medalists' choice of this group. The suggested diagnosis has shown the value of the technical-tactical indices of medalists. Coaches could use them in preparing their judokas for future competitions (Adam et al., 2013).
|
2021
|
[
"shs"
] |
[
"569170"
] |
[
"176253"
] |
01133392
|
From our initial sample of 1,194 developers, we identify those who belong to the same development teams (i.e. those who hold commit rights on the same projects and have contributed at least one commit to those projects). We are able to identify 270 such developers, working together on 131 distinct projects. Out of the 131 teams that we identify in our sample, 93 have 2 developers, 23 have 3 developers, 12 have 4 developers, and one team each has 5, 6 and 8 developers. Based upon our above classification of developers into four cooperative types, we start by describing how diverse those 131 development teams tend to be. We compute a Herfindahl index of concentration of types at the team level. We then take one minus this quantity in order to get an indicator that grows from zero to one as teams tend to be more diverse in terms of the cooperative types of their members:
D = 1 - Σ_{t=1}^{4} p_t²   (2)
where p_t represents the proportion of developers who are of cooperative type t in the development team considered. Figure 5 features the distribution of this indicator of diversity of cooperative types across all 131 development teams. We can see that the distribution shows two modes: one at zero (i.e. perfect homophily at the team level), and the other at 0.5, so that a significant fraction of teams are actually comprised of developers with different types. In a second step, we test for homophily at the team level. For each developer i, we compute the proportion of the other members j of his team that are of his cooperative type. We then subtract from this proportion the proportion of developers who are of that particular type in the whole underlying population of developers.
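The diversity indicator above can be sketched as follows (a minimal illustration; function and variable names are ours, not the paper's):

```python
from collections import Counter

def type_diversity(member_types):
    """D = 1 - sum over types of squared proportions, i.e. one minus a
    Herfindahl index of concentration of cooperative types in a team."""
    n = len(member_types)
    return 1.0 - sum((c / n) ** 2 for c in Counter(member_types).values())
```

A perfectly homophilous two-developer team gives D = 0, while a two-developer team with two different types gives D = 0.5, matching the two modes of the distribution described above.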
|
2014
|
[
"shs"
] |
[
"93713"
] |
[
"932306"
] |
02568253
|
Germany is perhaps the most distinguished example of this energy policy trend. One day after the nuclear catastrophe in Fukushima in March 2011, the German government decided, with the support of the quasi-totality of the German population, to accelerate the phase-out of the nuclear fleet by 2022, a policy which had been discussed since the beginning of the 2000s. comprised about 45% of the total production in 2011 (Figure 1). The shutdown of eight nuclear plants with a combined capacity of about 8.4 GW reduced the electricity production from this type of energy from around 140556 GWh (22.5%) in 2010 to 107971 GWh (18%) in 2011. This closure also reduced the market share of the big four generators. Nonetheless, they still account for about 73% of generating capacity according to the Monitoring Report 2013, Developments of the Electricity and Gas Markets in Germany, Federal Network Agency and Federal Cartel Office, 2013 (FNA and [START_REF] Fna | Developments of the Electricity and Gas Markets in Germany[END_REF]). Given the large amount of available interconnection capacity between Austria and Germany, these two markets are considered to comprise one electricity market, diluting the market share of the big four by approximately 10%. As regards electricity wholesale prices, there was a significant increase in the German spot market in 2011 compared with the previous years (2009 and 2010): from 37 €/MWh in 2009 to 51 €/MWh in 2011 (+37%) on average, before a slight decrease in 2012 (Figure 2). Source: EPEX Spot
It is difficult to draw conclusions about the nature of the increase in spot prices during this period without a quantitative analysis. In fact, the Energiewende policy of replacing nuclear power with extra fossil-fuel capacity and vastly expanding highly-subsidized renewables has two different impacts on wholesale power prices. On the one hand, the extra fossil-fuel generation was expected to increase the wholesale spot prices due to its higher fuel costs.
|
2016
|
[
"shs"
] |
[
"559342"
] |
[
"184589"
] |
02568268
|
Intermittent generators, however, would not benefit from these high prices since they occur when their output is low. In contrast, when high demand coincides with high renewable output (this is particularly true for solar), the merit order effect will drive prices down during these periods, lowering the marginal revenue of renewables (the market value of renewables). In an electricity system where intermittent generation comprises a small share of total output, the high variability of renewables will have little impact on the average base price or on the market value of renewables, so the gap between them is small. However, if the share of intermittent generation is significant, this gap can become substantial, as illustrated in Figure 2. Measuring the merit order effect in this context is therefore of high importance. In the next section, we attempt to evaluate the magnitude of this effect. LITERATURE REVIEW ON QUANTITATIVE ANALYSIS OF THE MERIT ORDER EFFECT
The merit order effect has recently been discussed in a number of articles about renewable energy. Two broad methods to estimate the merit order effect of renewables have been used in the literature: electricity market modelling and econometric analysis of historical time series data. Using electricity market modelling requires precise calibration of costs and, especially, the definition of reasonable scenarios. The many assumptions bound to the models can undermine the certainty of the conclusions.
|
2020
|
[
"shs"
] |
[
"163511",
"451480"
] |
[
"184589"
] |
00982736
|
This controversy between a presiding judge and a top Belgian civil servant illustrates the tensions that crop up when, as a consequence of having adopted managerial logics and tools, it would seem the exclusive nature of justice were being contested. Typical of management is the fact that organization is done with an eye to cost, efficiency and the quality of output. Such values, unheard of in the world of justice, have taken on increasing importance, though not without stirring up resistance, bringing about changes at three levels: organizational, professional and institutional. Through diverse strategies, a managerial type of reasoning has progressively found its way into the justice system, particularly by reinforcing accountability, developing forms of evaluation and controlling magistrates, setting up indicators of productivity and workloads, introducing limited mandates and compulsory mobility, and changing common expectations concerning judges, both in matters of deadlines and in the way citizens are received. But assessing, measuring and comparing are precisely what trivializes the missions of Justice, casting doubt on the ways it operates. There is general agreement about the need to modernize and get the best out of the judicial system, so legal professionals and political actors tend to call attention to managerial logics, all the more as they are already being implemented in a good number of Western countries (Sibony, 2002; Fabri et al., 2005; Vigour, 2005; Cavrois et al., 2002; Breen, 2002). What typifies such logics is the widespread use of a vocabulary and procedures that until recently were quite alien to the judiciary: human resources, quality management, clients… Such notions and tactics are gradually being introduced, mixed in with the logics of action with which the legal professions are familiar.
In order to show the concrete forms this ongoing process has taken, and the opposition it has stirred up, we will be stressing the expectations concerning the legal professions in Belgium and how they have evolved, in particular with regard to the magistracy and chief magistrates. We wish to point out that introducing managerial logics into the judiciary has transformed its classical rationality as well as the ethos of the legal professions. What M. Weber termed "ethos" in The Protestant Ethic and the Spirit of Capitalism corresponds to a mind-set that confers a specific orientation to action and shapes social and professional praxis through tangible ways of relating to the world and the particular conceptions of rationalization it institutes.
|
2009
|
[
"shs"
] |
[
"28721"
] |
[
"181149"
] |
03138649
|
We adapted the Flush+Reload attack of the Mastik toolkit [START_REF] Yarom | Mastik: A Micro-Architectural Side-Channel Toolkit[END_REF] from the x86 Instruction Set Architecture (ISA) to the RISC-V ISA. In particular, the rdtime instruction was used instead of the rdtscp instruction. The cache flush instruction is not officially defined in RISC-V; nevertheless, we found that in Orca, when opcode is set to MISC-MEM, along with funct3 set to REGION and funct7 set to CACHE-FLUSH, a cache region is flushed. This special flush instruction was used instead of clflush of the x86 ISA. Our Detection Module focuses on instructions that access the timer Control and Status Register (CSR), including rdtime, and the cache flush instruction described above. By looking for the timer/timer or timer/flush attack pattern, we successfully detected this Flush+Reload attack. The synthesis of our Detection Module shows a maximum frequency of 271 MHz. In the fully implemented design, it occupies 235 registers and 400 LUTs. Static synchronization logic occupies an additional 793 registers and 256 LUTs. V. CONCLUSION AND FUTURE WORK
In this paper, we discussed the feasibility of dynamic monitoring using reconfigurable hardware to detect cache timing attacks.
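The timer/timer and timer/flush pattern search performed by the Detection Module can be sketched in software. The trace representation, instruction names and window size below are our assumptions, not the paper's hardware implementation:

```python
# Hypothetical sketch: flag a trace in which a timer read (rdtime) is
# closely followed by another timer read or by the cache flush
# instruction -- the signature of a Flush+Reload probe loop.
SENSITIVE = {"rdtime": "timer", "cache_flush": "flush"}

def detect_attack_pattern(trace, window=50):
    """Return True if a timer/timer or timer/flush pair occurs within
    `window` instructions of each other."""
    last_timer = None
    for i, instr in enumerate(trace):
        kind = SENSITIVE.get(instr)
        if kind is None:
            continue
        if last_timer is not None and i - last_timer <= window:
            return True  # timer closely followed by timer or flush
        if kind == "timer":
            last_timer = i
    return False
```

A flush with no recent preceding timer read does not trigger the detector, since both target patterns start with a timer access.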
|
2020
|
[
"info"
] |
[
"389097",
"389097",
"389097"
] |
[
"737263",
"175135",
"9967"
] |
02949624
|
For that purpose, FPGAs offer numerous logic, routing and memory resources to the user. As the price of this high level of flexibility, FPGAs usually require larger circuits and suffer from much lower frequencies than a hardwired implementation of the same logic [START_REF] Kuon | Measuring the Gap Between FPGAs and ASICs[END_REF]. Using reconfigurable hardware along with hardwired processors is not a new research topic [START_REF] Compton | Reconfigurable computing: a survey of systems and software[END_REF]. Reconfigurable hardware benefits from highly parallel execution capabilities to speed up the processor's calculations, and can be reconfigured to implement different algorithms. It has been successfully used in many fields such as image processing and communication. Regarding the security domain, reconfigurable hardware has been proposed for cryptography acceleration and secret protection, and for power and communication monitoring against hardware attacks [START_REF] Gogniat | Reconfigurable Hardware for High-Security/ High-Performance Embedded Systems: The SAFES Perspective[END_REF]. However, to the best of our knowledge, no research work has proposed the use of reconfigurable hardware to monitor the software running on a processor for CSCA detection. REHAD Architecture
Overall architecture
The REHAD architecture is shown in Fig. 1. This architecture is composed of a main processor core, a detection module made up of reconfigurable hardware, interconnected by three communication channels made up of static hardware, and a trusted software kernel located in the processor. The detection module aims to analyze data provided by the processor core in real-time, and provides hardware relevant information to the trusted software kernel for further decision. Furthermore, the detection module can be reconfigured to adapt to new threats or attacks.
|
2020
|
[
"info"
] |
[
"389097",
"389097",
"389097"
] |
[
"737263",
"175135",
"9967"
] |
02873622
|
The simplest type of thimac is called a TM, as shown in Fig. 1. The flow of things in a TM refers to the conceptual movement among five operations (stages). The stages of the TM can be described as follows. Arrive: A thing flows to a new machine (e.g., packets arrive at a buffer in a router). Accept: A thing enters a TM. For simplification, we assume that all arriving things are accepted; hence, we can combine arrive and accept as the receiving stage. Release: A thing is marked as ready to be transferred outside the machine (e.g., in an airport, passengers wait to board after passport clearance). Process (change):
A thing changes its form but not its "identity" (e.g., a node in the network machine processes a packet to decide where to forward it). Create: A new thing is born in a machine (e.g., a logic deduction system deduces a conclusion). Transfer: A thing is inputted into or outputted from a machine.
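As an illustration only (our sketch, not the paper's formalism), the five stages can be traced for the router-buffer example:

```python
# Toy trace of a TM's stages: packets arrive and are accepted (the
# combined receiving stage), are processed (the forwarding decision
# changes their form, not their identity), marked released, and
# transferred out of the machine.
def run_tm(packets):
    log, out = [], []
    for p in packets:
        log += [("arrive", p), ("accept", p)]   # all arrivals accepted
    for p in packets:
        log += [("process", p), ("release", p)]
        out.append(p)
        log.append(("transfer", p))
    return out, log
```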
|
2020
|
[
"info"
] |
[
"463144"
] |
[
"1069014"
] |
00747723
|
Once deployed, the DSPL operates an execute, monitor, evaluate, adapt control loop. Our focus in this paper is only on the decision-making evaluate element that takes the result of monitoring as input and triggers adaptations as output. The other elements can be provided by: I. an adaptive architecture such as that provided by the OpenCom component model [START_REF] Coulson | A generic component model for building systems software[END_REF] and GridKit middleware [START_REF] Hughes | An experiment with reflective middleware to support grid-based flood monitoring[END_REF], or by the MADAM middleware [START_REF] Khan | Architectural Constraints in the Model-Driven Development of Self-Adaptive Applications[END_REF]; II. a means to monitor claims, by collecting data about the system and its environment and interpreting it in terms of whether it supports or refutes the claims [START_REF] Welsh | Towards Requirements Aware Systems: Run-time Resolution of Design-time Assumptions[END_REF]. Constraint Modeling
A constraint is a logical relationship among several unknowns (or variables), each one taking a value in a given domain of possible values, where a domain is a set of possible values that a variable can take. Constraint programming is a programming paradigm in which constraints between variables are defined declaratively and a solution is found using a solver. A constraint program is defined as a triple (X, D, C), where X is a set of variables, D is a set of domains and C is a set of constraints restricting the values that the variables can simultaneously take. Classical constraint programming deals with finite domains for the variables, which are usually mapped to ordinal values such as integers. The impact on a softgoal of a particular operationalization is represented in the constraint program by integers in the range from 0 (--) to 4 (++). Elements that take Boolean values (See Figure 2) are represented as the integers 0 and 1. Solving constraints involves first reducing the variable domains by propagation techniques [START_REF] Schulte | Efficient constraint propagation engines[END_REF] that will eliminate inconsistent values within domains, and then finding values for each constrained variable in a labeling phase.
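To make the triple (X, D, C) concrete, here is a minimal sketch with illustrative variables that are not taken from the paper's models: a softgoal impact on the 0 (--) to 4 (++) scale and a Boolean operationalization encoded as 0/1, solved by a brute-force labeling phase (real solvers interleave propagation to prune the domains first):

```python
from itertools import product

# Illustrative constraint program (X, D, C); names are our own examples.
X = ["perf_impact", "use_cache"]
D = {"perf_impact": range(0, 5),   # softgoal impact: 0 (--) .. 4 (++)
     "use_cache": (0, 1)}          # Boolean element encoded as 0/1
C = [lambda a: a["use_cache"] == 1 or a["perf_impact"] <= 2,
     lambda a: a["perf_impact"] >= 3]

def solve(variables, domains, constraints):
    """Brute-force labeling: enumerate the Cartesian product of the
    domains and keep assignments satisfying every constraint."""
    solutions = []
    for values in product(*(domains[x] for x in variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            solutions.append(assignment)
    return solutions
```

Here the second constraint forces perf_impact into {3, 4}, which in turn forces use_cache to 1 via the first constraint.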
|
2012
|
[
"info"
] |
[
"89875",
"74131",
"74131",
"74131",
"17018"
] |
[
"752533",
"177531",
"10585"
] |
00707543
|
If type is "requires", the corresponding constraint is: A ⇒ ad. If type is "excludes", the corresponding constraint is: A * ad = 0. This means that if A is selected (equal to 1), ad must not be selected (must be equal to 0) and vice-versa. Currently, we do not take into account other types of asset dependencies (like parent or child). The conversion algorithm has two main phases presented in the following pseudo-code (Algorithm 1). First, the algorithm navigates through the decision model and then through the asset model. In both cases, we gather the relevant information of decisions and assets and translate them into constraints in CP. Relevant information means information affecting the variability as described above; for example, a description attribute does not affect the variability of the product line model. Our algorithm for converting DOPLER variability models is implemented as an Eclipse plug-in that uses the API of the DOPLER tool suite [START_REF] Dhungana | Integrated tool support for software product line engineering[END_REF]. FORMAL VERIFICATION OF DOPLER MODELS
The automated verification of DOPLER variability models has the goal of finding defects and their sources using automated and efficient mechanisms. As the manual verification of variability models is error-prone and tedious, we propose an automated solution. Our approach offers a collection of operations which are applied on a DOPLER model and return the evaluation results intended by the operation.
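The two dependency translations from the previous section can be sketched with assets as 0/1 variables (our illustration; the actual conversion feeds these constraints to a solver):

```python
# "requires": A => ad  (if A is selected, ad must be selected too).
# "excludes": A * ad = 0  (A and ad can never both be selected).
def requires_ok(A, ad):
    return (not A) or bool(ad)

def excludes_ok(A, ad):
    return A * ad == 0
```

For example, selecting asset A without its required asset ad violates the implication, while the excludes constraint only fails when both assets are selected at once.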
|
2011
|
[
"info"
] |
[
"74131",
"107396",
"97984",
"97984",
"74131",
"74131"
] |
[
"752533",
"10585",
"177531"
] |
03379755
|
In Section III, we present our methodology for obstacle detection based on convolutional autoencoders. Section IV is devoted to the evaluation methodology. Experimental results are presented in Section V. Finally, the conclusion and future work are given in Section VI. II. RELATED WORKS
Existing works in the domain of this paper can be divided in two parts:
A. Unsupervised models for anomaly detection
In the literature, there is a large number of works that use unsupervised models for anomaly detection. Deterministic models, such as [START_REF] Sakurada | Anomaly detection using autoencoders with nonlinear dimensionality reduction[END_REF], propose an autoencoder for anomaly detection on nonlinear data. The authors in [START_REF] Ke | Anomaly detection of Logo images in the mobile phone using convolutional autoencoder[END_REF] use a convolutional autoencoder to detect anomalies in logo images on mobile phones. They identify the input image as negative when its reconstruction error exceeds a predefined threshold. The authors in [START_REF] Chow | Anomaly detection of defects on concrete structures with the convolutional autoencoder[END_REF] exploit convolutional autoencoders to detect defects in concrete. Their work relies on pixel-level thresholding, where the mean value of the anomalous class is supposed to be as high as possible.
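The thresholding rule common to these works can be sketched as follows (a generic illustration, not any cited paper's exact pipeline): an input is flagged when its autoencoder reconstruction error exceeds a threshold calibrated on normal data.

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """Mean squared error between an input and its reconstruction."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return float(np.mean((x - x_hat) ** 2))

def is_anomalous(x, x_hat, threshold):
    """Flag the input as anomalous when the error exceeds the threshold."""
    return reconstruction_error(x, x_hat) > threshold
```

Because the autoencoder is trained only on normal data, anomalous inputs reconstruct poorly, so their error lands above the calibrated threshold.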
|
2021
|
[
"info"
] |
[
"1067790",
"547473",
"547473",
"1067790",
"1066983",
"1066999"
] |
[
"1295502",
"8531",
"921561",
"747913"
] |
03450168
|
Cloudlets and Aircraft
MACE uses CORE as its network emulator, which emulates each network instance as a Linux namespace serving as a minimal container. Each aircraft client application runs inside these namespaces, and they communicate via veth network interfaces, with connectivity controlled by the distance between the nodes. The cloudlets are also emulated in such namespaces, with a server running the UAS endpoint and interfacing with instances of etcd running in the same namespaces. The emulated scenario is run over an area of one square kilometre. Mobility
The Random Waypoint mobility model was adopted for the experiment. In this model, each aircraft receives a random waypoint and a random velocity to simulate a mission's objective. The movement of the aircraft is emulated in MACE, and the real-time position is injected directly into the network emulator so that it is reflected in the network connectivity. The position is made available to the applications running in the virtual aircraft via UNIX sockets. UAS Broadcasts
For the payload reporting, a client running in each aircraft broadcasts a JSON object containing the position and additional data via IPv4 UDP sockets using the emulated ad-hoc wireless links. The payload also includes a unique message ID, a timestamp, an aircraft ID, velocity and status.
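The payload and send path can be sketched as follows; the exact field names and the port number are our assumptions, not the paper's wire format:

```python
import json, socket, time

def build_payload(msg_id, aircraft_id, position, velocity, status):
    """Serialize the broadcast fields listed above into a JSON string."""
    return json.dumps({
        "msg_id": msg_id,
        "timestamp": time.time(),
        "aircraft_id": aircraft_id,
        "position": position,       # e.g. [x, y] within the emulated area
        "velocity": velocity,
        "status": status,
    })

def broadcast(payload, port=5005):
    """Send the JSON payload over an IPv4 UDP broadcast socket."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    try:
        s.sendto(payload.encode(), ("255.255.255.255", port))
    finally:
        s.close()
```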
|
2022
|
[
"info"
] |
[
"380071",
"531214",
"380071"
] |
[
"738221",
"866375",
"734750"
] |
03278760
|
The tests were performed with the same parameters as stated on their website and shown in Table 4. It is possible to see that the results are below the baseline, which is expected considering the high latency configured in CORE for the links. Reducing the latency to 300 µs instead of 1300 µs increased the average queries per second to 11172, and the average latency was reduced to 85 ms. Mobility was then added with the random walk model provided by a third-party library. As seen in Table 5, with mobility there is a considerable decrease in performance, with lower throughput and higher latency. The mobility can also be controlled by an external agent related to the specific application domain. To test this, the emulator was connected to an open-source UAV flight simulator. Paparazzi [START_REF] Hattenberger | Using the Paparazzi UAV System for Scientific Research[END_REF] is an autopilot developed for fixed and rotary wing UAVs, and when using Paparazzi, all the UAVs are controlled by the ground station via radio commands. However, Paparazzi also comes with a flight simulator in which the radio link between the UAVs and the ground station is replaced by UDP sockets communicating via pprzlink. MACE also includes a proxy for pprzlink that can capture all packets exchanged between the simulated UAVs and the ground station. As a result, the emulator can capture, in real time, the simulated GPS position of the UAVs and update the emulated topology.
|
2021
|
[
"info"
] |
[
"380071",
"531214",
"380071"
] |
[
"738221",
"866375",
"734750"
] |
03925654
|
To reflect this feature, we define the reward function as follows:

Reward R = 1 / (1 + W_t · T_op^q + W_m · M_op^q)   (1)

where W_t and W_m are the time and monetary weights provided by the user, and T_op^q and M_op^q are the time and monetary costs for executing the current operator op in query q.
According to this reward function, the query is executed based on the user's preference, which is either the user wanting to spend more money for a better query execution time or vice versa. We call these two preferences Weights. The two weights defined by the user form the Weight Profile (wp), a two-dimensional vector in which each dimension is a number between 0.0 and 1.0. Notice that the user only needs to specify one dimension of the weight profile; the other dimension is computed automatically as 1 - Weight. Details can be found in our previous work [START_REF] Wang | Adaptive Time-Monetary Cost Aware Query Optimization on Cloud DataBase[END_REF]. 3 The SLA-Aware Reinforcement Learning-Based Multi-Objective Query Re-Optimization Algorithm (SLAReOptRL)
An SLA is a contract between cloud service providers and consumers, mandating specific numerical target values which the service needs to achieve. Considering an SLA in query processing is important for cloud databases. If an SLA violation happens, the cloud service providers need to pay a penalty to their users in a form such as money or CPU credits.
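The reward function (1) and the weight profile described in the previous section can be sketched as follows (the cost values in the examples are illustrative):

```python
def weight_profile(time_weight):
    """The user sets one dimension; the other is computed as 1 - weight."""
    return (time_weight, 1.0 - time_weight)

def reward(wp, time_cost, money_cost):
    """Reward R = 1 / (1 + W_t * T + W_m * M) for one operator."""
    w_t, w_m = wp
    return 1.0 / (1.0 + w_t * time_cost + w_m * money_cost)
```

A purely time-oriented profile (W_t = 1, W_m = 0) makes the reward depend only on execution time, and symmetrically for a purely money-oriented one.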
|
2022
|
[
"info"
] |
[
"240165",
"240165",
"1003581"
] |
[
"1090052",
"999621",
"1019835"
] |
01205522
|
This corroborates the intensity-driven mechanisms of absorption. Nevertheless, burst-mode affects the nonlinear propagation as shown in Fig. 1b. The filamentary-type LAZ is longer and has a smaller diameter in ns burst-mode, as a result of a more uniform energy absorption compared with single-pulse machining (see Fig. 1b). We have analyzed the energy deposition inside the glass by considering thermal effects during laser irradiation. The theoretical numerical model is based on nonlinear light absorption and nonlinear propagation, accounting for the generation of free electrons and the optical Kerr effect. The model allows us to determine the spatial laser energy absorption, which serves as the heat source for the subsequent thermal diffusion and heat accumulation. Simulations show that for a time delay of 25 ns the absorbed energy does not dissipate before the next pulse arrives. Instead, the accumulated heat causes a temperature-induced band-gap collapse [START_REF] Saito | Absorption edge in silica glass[END_REF] and a transient positive refractive index variation at the laser wavelength. On the other hand, temperature-induced volume expansion can lead to a decrease of the refractive index. Such an expansion takes place in a time characterized by the viscoelastic relaxation time τ_s = η/G, where η is the viscosity and G is the bulk modulus of glass.
|
2015
|
[
"phys"
] |
[
"598",
"240533",
"598",
"598",
"598",
"598",
"21150",
"240533",
"331609",
"598"
] |
[
"860549",
"642",
"738728"
] |
01205517
|
The 1030 nm laser beam was focused by a 10x objective in the glass with an input energy of 1.3 µJ per pulse. The theoretically predicted dimensions of the zone at the softening temperature (dashed line) are compared to experimental measurements (solid line) in Fig. 1(a), with guides for the eyes. We report a threshold-like behavior for the onset of permanent material modifications for RR between 100 and 200 kHz, which is well reproduced by our model. The discrepancies appearing for RR higher than 300 kHz are probably due to the pulse-to-pulse evolution of material properties [START_REF] Ziman | Principles of the Theory of Solids[END_REF]. Such a temporal coupling between subsequent laser pulses will be investigated in a future work. A typical form of the experimental modifications (RR 500 kHz) is presented in Fig. 1(b). The "comet-like" shape of the affected area is also observed in our simulations and can be attributed to nonlinear propagation effects and spatio-temporal deformations of the driving pulse.
Fig. 1 (a) Theoretically predicted and experimentally observed dimensions of the modified structure induced by a train of 500 laser pulses as a function of the repetition rate (RR). (b) Experimental modification obtained with 500 pulses at 500 kHz and with an energy of 1.3 µJ per pulse, and definition of the dimensions under investigation. The laser pulses were shot from left to right.
|
2015
|
[
"phys"
] |
[
"598",
"598",
"598",
"21150",
"598",
"240533",
"240533",
"331609",
"598",
"598",
"598"
] |
[
"738728",
"642",
"860549"
] |
00070974
|
The scan was performed perpendicularly to the silicon leads patterned on the oxide sample (Fig. 10) at a speed of 1 m/s. The lines drawn by the laser beam consist of a series of micronic silicon dots, whose separation depends on the laser scan speed and laser power. It can be easily varied from 0 to 1 µm.
Figure 11 shows the result of an experiment carried out to connect two silicon bumps by a Si NW. In the bottom right of the image, one can see the silicon crystalline step (bright contrast) on the oxide surface (dark contrast). Gold colloids are mainly present along the silicon step. A discontinuous silicon line has been drawn perpendicularly to this contact lead (from bottom right to top left). This line is composed of micronic silicon bumps separated by about 0.5 µm. We can observe that some Si NWs start only on the bump directly linked to the crystalline silicon leads, i.e., on the only area where gold colloids are present. No Si NWs are observed on the next bump, grown on the silicon oxide layer, i.e., in the absence of colloids. Furthermore, the NW growth is directed along the laser beam scan axis, which is a very promising result for obtaining connected Si NWs. IV.
|
2006
|
[
"phys"
] |
[
"458",
"458",
"458",
"458",
"458",
"458"
] |
[
"19072",
"175837",
"172363"
] |
01361055
|
The final trap is a far-off-resonance optical dipole trap made from two horizontal crossing beams. The first one, at 1064 nm with a waist of 48.5 µm (radius at 1/e²), provides a strong radial confinement, while the second one, at 1550 nm with a waist of 150 µm, is used to provide a weak longitudinal confinement (44 Hz). The final evaporation is performed by lowering the power of the 1064 nm beam down to 56 mW, such that the most energetic atoms fall under gravity. We obtain almost pure condensates with up to 4×10^4 atoms. The radial trap is then recompressed up to a power of 117 mW to form an elongated trap, whose frequencies are measured through parametric oscillations to be 195 Hz × 195 Hz × 44 Hz. The final step to produce solitons consists of modifying the scattering length by changing the magnetic field value. This is done in two steps: first to 507 G in 150 ms, approaching the zero-crossing from the positive side, and then to 501.3 G in 400 ms, where the scattering length a = -1.5(2) a_0 is then negative (see Figure 1). The condensate then shrinks and forms the solitons. The ramp times are relatively long compared to the inverse of the longitudinal trapping frequency, preventing the condensate from being excited. Figure 2 shows the propagation of solitons in the 1064 nm optical trap when the longitudinal confining beam is switched off. The longitudinal potential has been characterized in detail. It has an anti-trapping curvature (i × 1.9 Hz) which mainly originates from the bias magnetic field curvature.
|
2016
|
[
"phys"
] |
[
"390498",
"390498",
"390498",
"390498",
"390498",
"390498",
"390498"
] |
[
"18957",
"1269973",
"738660"
] |
01424619
|
Experimentally, atomic bright soliton scattering has only been studied in the regime of negligible interaction energy, where the behavior resembles that of non-interacting particles [49][50][51]. In this paper, we report the study of the nonlinear scattering, in a disordered potential, of an atomic bright soliton in the regime where the interaction energy is of the order of the center-of-mass kinetic energy [52]. As the experiment is repeated, we find that the atoms tend to be collectively either reflected or transmitted. More precisely, the histogram of the reflected fraction shows two distinct peaks at low (close to 0) and high (close to 1) reflected fractions, in contrast with the bell-shaped histogram observed for non-interacting particles. This behavior is a signature of the nonlinear behavior of solitons in scattering. We find that Gross-Pitaevskii simulations are sufficient to account for our observed double-peaked histogram, because of their strong sensitivity to small fluctuations of the experimental parameters, and in particular of the soliton velocity. We argue, however, that, in similar conditions, mesoscopic quantum superpositions might be observable provided that the number of atoms is significantly smaller than in the experiments reported here. Our experiment starts with a 39K condensate in the |F = 1, m_F = -1⟩ state, produced by evaporative cooling in an optical trap [53] close to the 561 G Feshbach resonance [54]. A soliton, containing 5500(800) atoms, is then created by ramping the magnetic field close to the scattering length zero crossing at 504.4 G [52,54]. The atoms then have a negative mean-field interaction energy, which binds them together.
|
2017
|
[
"phys"
] |
[
"390498",
"390498",
"390498",
"390498",
"390498",
"390498",
"390498"
] |
[
"18957",
"1269973",
"738660"
] |
04275597
|
Although various studies have been conducted on heat sink design and optimization, a clear classification and analysis of these optimization practices within the available literature are still lacking. Therefore, the primary objective of this paper is to present a comprehensive review of the design and structural optimization of heat sinks for efficient single-phase liquid cooling of electronic devices, with a specific emphasis on addressing the challenges posed by heterogeneous heating. The rest of the paper is organized as follows. Section 2 will commence by enumerating the detrimental consequences of electronics overheating resulting from multiple heat sources, incorporating several illustrative examples. Subsequently, in Section 3, an extensive survey of the state-of-the-art will explore different practices and advancements related to heat sink design and structural optimization. Finally, Section 4 will provide conclusions, future research perspectives, and challenges to overcome. Overheating Problem of Electronics under Multiple Peak Heat Flux
In this section, we present several cases where electronic devices exhibit multiple-peak heat flux. We provide an overview of the packing/arranging structure and the resulting shape of the heat flux in each case. Examples of such devices include Lithium-ion battery packs for electric vehicles, arrays of multiple light-emitting diodes (LEDs), power electronics (such as Insulated-gate bipolar transistors and diodes: IGBTs), multi-chip modules (MCMs), and multi-junction high concentrator photovoltaics (HCPVs), among others. It is essential for these devices to operate normally within a typical acceptable temperature range.
|
2023
|
[
"phys"
] |
[
"1088615",
"1088615",
"1088615",
"1088615",
"1088615"
] |
[
"179555",
"745225",
"739950",
"748935"
] |
01601421
|
Additional information is annotated onto this reference structure: the quantity of emails sent/received by an individual is denoted by the size of the node, and the color of an edge depicts the frequency of peer-to-peer correspondence. Similarly, the aim of eArchivarius [START_REF] Leuski | eArchivarius: Accessing Collections of Electronic Mail[END_REF] is to highlight existing communities of people. The eArchivarius tool visualizes and organizes collections of emails in various ways; one example demonstrates a cluster-based visualization, where each sphere glyph represents a person and the more emails two people exchange, the closer the glyphs become. Colors can represent various attributes such as the topic (where the confidence of an email being correctly classified is conveyed by the intensity of the color). Thread Arcs [START_REF] Kerr | THREAD ARCS: An Email Thread Visualization[END_REF] is another graph-based tool that visualizes relations between emails. In this representation the threads are chains of emails where each one (except the root) is a reply to another belonging to the chain. Arcs link each 'child' (an email replying to another) to its 'parent' (the email the child replies to), showing the connections among them and the progress of a conversation. The user can interact by selecting any email in the thread. The aforementioned related work focuses on intra-email visualization; however, the focus of our work in this paper is on visualizing the personal email archive, especially by displaying emails on time-related plots. So far, there has not been much research investigating methods to visualize this archive.
|
2005
|
[
"sdv"
] |
[
"112120",
"112120",
"112120"
] |
[
"182402"
] |
01607401
|
We have developed a prototype system (see figure 3) where we connect to a data source through a web service, then extract relevant details (e.g. attributes of the global schema that are of interest to the user). This information is then passed to the mapping stage where there are two different sets of rules; asset rules to transform data attributes from the global schema to geometry and other rules to map uncertainty data. The output is a 2D map displayed on the web. The user interacts with the display and the web server pulls the data and responds with a new map. Our prototype system displays data using layers; a method often used in GIS and with which utility data users are familiar. Our initial work uses two methods to visualize the positional accuracy of assets: the blurring effect and colour bands. Blurring (see figure 4) provides users with qualitative information about the spatial accuracy of underground assets. The more blurred a polyline, the less accurate its position. However, we are aware of the perceptual issues related to, for example, the number of levels of blurring that the user is able to distinguish. In addition, blurring in 2D with the presence of occlusion can be problematic. Over-plotting may result in certain areas looking less blurred or more certain than they are in reality.
|
2001
|
[
"sdv"
] |
[
"74407",
"74407"
] |
[
"182402"
] |
00342613
|
The associations for HCL were greater and significant for cereal, corn and vine growing (3.5 [1.1-11.3], 7.6 [2.1-28.1] and 8.5 [1.6-44.6], respectively). No difference between the DLCL and FL NHL subtypes was observed, irrespective of crop.
With regard to animal husbandry, pig breeding was related to HL (3.8 [1.3-11.1]) and sheep breeding to the FL subtype (5.6 [1.7-18.6]).
No other significant association was observed. Occupational exposure to pesticides
Table 3 shows the associations between pesticides and the main LN subgroups. Overall, the ORs associated with exposure to pesticides were 1.5, 2.1 and 3.5 for NHL, HL and MM, respectively. The association was significant for MM. Only 2 cases and no controls were exposed to insecticides only; 9 cases and 2 controls were exposed to fungicides only; 3 cases and 3 controls were exposed to herbicides only. Most of the subjects (40 cases and 26 controls) were exposed to all three pesticide categories. Overall, significant associations between MM and the use of insecticides, fungicides and herbicides, and between HL and the use of fungicides and insecticides (borderline significance), were observed.
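As a rough illustration of how such odds ratios and their 95% confidence intervals are obtained, the sketch below computes a Woolf (log-OR) interval from a 2×2 exposure table; the counts are invented for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts only (not the study's data):
or_, lo, hi = odds_ratio_ci(12, 30, 4, 40)     # OR = 4.0
```

A CI whose lower bound exceeds 1 (as in the bracketed intervals above) marks a significant positive association.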
|
2009
|
[
"sdv"
] |
[
"83616",
"2805",
"94233",
"82656",
"202936",
"42647",
"99728",
"42641",
"94233",
"518500",
"42641",
"506113",
"2805",
"506113",
"2805"
] |
[
"752156"
] |
00871613
|
In this paper, in order to investigate the sustainability of telehealth integration, we propose a System Dynamics (SD) simulation model for early design decision making. The proposed model aims to link influencing factors such as the number of people aged over 60, the demographics of medical staff and the healthcare cost. The simulation model is set to the 2010 conditions in a French region named Picardie and run for a period of 30 years, until 2040. We propose to integrate scenario-based simulation in order to provide decision support for investigating potential solutions to reduce healthcare cost and save valuable medical time. 2 Literature review on system dynamics and healthcare
Complex systems are systems whose size, dependence on context, and variety and interdependence of elements make them unpredictable and therefore difficult to manage. Making decisions within and concerning those systems is always difficult due to the unanticipated results that can occur. As healthcare systems are complex, decision making in them is often difficult. For example, making decisions on the closure or conversion of hundreds of hospital beds, on the introduction of new medical references for reimbursement in the State Medical Classification, on authorizing new drugs, or on the incorporation of telehealth technology in the current healthcare system are complex issues. Thus, providing decision support for healthcare managers can help them choose between different policies. System dynamics has been identified as an interesting tool in the field of healthcare system modeling and prediction.
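A minimal sketch of the stock-and-flow logic behind such an SD model, with hypothetical coefficients (aging inflow, mortality, care demand and unit cost) and simple Euler integration over the 2010-2040 horizon; none of the numbers come from the Picardie model itself:

```python
# Minimal system-dynamics sketch (Euler integration): an aging
# population stock drives care demand, which drives healthcare cost.
# All coefficients are hypothetical and for illustration only.
def simulate(years=30, dt=1.0):
    elderly = 350_000              # stock: people aged 60+ (hypothetical)
    history = []
    for step in range(int(years / dt)):
        aging_in = 12_000          # inflow: people turning 60 per year
        deaths = 0.025 * elderly   # outflow: mortality fraction (hypothetical)
        elderly += (aging_in - deaths) * dt
        demand = 1.8 * elderly     # auxiliary: consultations per year
        cost = 55.0 * demand       # auxiliary: annual cost (EUR)
        history.append((2010 + step, elderly, cost))
    return history

hist = simulate()
```

Scenario analysis then amounts to re-running `simulate` with altered coefficients (e.g. a telehealth scenario lowering the unit cost) and comparing trajectories.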
|
2013
|
[
"spi",
"sdv",
"info"
] |
[
"300342",
"11769",
"11769",
"11769",
"11769"
] |
[
"50",
"5565",
"5517",
"7422"
] |
00871615
|
Using system dynamics simulations allows us to see not just events, but also patterns of behavior over time. Sometimes the simulation looks backward, to historical results. At other times it looks forward into the future, to predict possible future results. Understanding patterns of behavior, instead of focusing on day-to-day events, can offer a radical change in perspective. It shows how a system's own structure is the cause of its successes and failures. This structure is represented by a series of causally linked relationships. The implication is that decisions made within an organization have consequences, some of which are intentional and some are not. Some of these consequences will be seen immediately while others might not be seen for several years. System dynamics simulations are good at communicating not just what might happen, but also why. This is because system dynamics simulations are designed to correspond to what is, or might be happening, in the real world.
|
2013
|
[
"shs",
"spi"
] |
[
"11769",
"11769",
"11769",
"11769",
"11769"
] |
[
"50",
"5565",
"5517",
"7422"
] |
00921893
|
The electrocardiogram (ECG) signal is conventionally used to measure the cardiac cycle. However, in some circumstances, obtaining an uncorrupted ECG signal on small animals in the presence of RF pulses and gradient switching is challenging. To monitor respiratory motion, an air cushion associated with a pressure sensor is commonly used, but the system suffers from bulkiness. For many applications, the physiological gating information can also be derived from an MR navigator signal. However, a compact device that can simultaneously provide respiratory and cardiac information, for both prospective gating and physiological monitoring, is desirable. This is particularly true since small volume coils or dedicated cardiac RF coil arrays placed directly against the chest wall are required to maximize measurement sensitivity. An optical device designed to synchronize MRI acquisitions with small animals' respiratory and heart motion was developed using a transmit-receive pair of optical fibers. The suitability of the developed device was assessed on mice (n=10) based on two sets of experiments with dual cardiac and respiratory synchronization. In the first set, images acquired with prospective triggering using the optical-based signal, the ECG and the pressure sensor during the same experiment were compared with one another. The second set compared the prospective technique using the optical-based device and ECG to a retrospective technique.
|
2014
|
[
"spi",
"sdv",
"info"
] |
[
"530749",
"530749",
"182223",
"301510",
"530749"
] |
[
"19097"
] |
02173336
|
App Direct and 1-Level-Memory for Storage
Latest Xeon processors may also be configured in 1-Level-Memory mode (1LM), which puts DDR back as the main volatile memory, as shown in Figure 3. The Memory Mode part of NVDIMMs is no longer usable. The App Direct part is exposed as a Persistent Memory Region (called region in the remainder of this paper) that may be used as a disk (e.g. /dev/pmem1). However, this disk is directly byte-accessible by the processor. Contrary to usual disks, there is no need to copy disk blocks into memory (the kernel page cache) before actually accessing those bytes. This mode is called DAX (Direct Access) in Linux and Windows. It enables mapping the actual backend data directly into application virtual memory and using loads and stores. This avoids intermediate copies and page-cache allocations.
Figure 3: 1-Level-Memory mode (1LM) uses DDR as the main memory while NVDIMMs are exposed as a persistent memory region that is usually used as storage. 1-Level-Memory has several interesting use cases for HPC applications, ranging from local disks as burst buffers [START_REF] Henseler | Architecture and Design of Cray DataWarp[END_REF], to recovering memory contents after a fault thanks to persistence.
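The direct byte access that DAX enables can be sketched with an ordinary memory mapping; here a temporary file stands in for a pmem-backed region so the sketch runs anywhere (on a real DAX mount over /dev/pmem1 the same loads and stores would reach the NVDIMM with no page-cache copy, typically with a MAP_SYNC mapping):

```python
import mmap
import os
import tempfile

# Stand-in for a pmem region: an ordinary 4 KiB file. On a DAX
# filesystem the identical mmap gives direct byte access to the
# NVDIMM with no intermediate page-cache copy.
path = os.path.join(tempfile.mkdtemp(), "pmem_standin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # one page of backing store

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)  # shared mapping (MAP_SHARED)
    mem[0:5] = b"hello"                # store: plain byte writes
    mem.flush()                        # on DAX: persistence via CPU flushes
    data = bytes(mem[0:5])             # load: plain byte reads
    mem.close()
```

The key point of the passage is that these reads and writes are ordinary memory accesses, not block I/O.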
|
2019
|
[
"info"
] |
[
"409750",
"409750"
] |
[
"1244",
"176348"
] |
03599360
|
Memory migration could be a solution to avoid capacity issues when important buffers are not used during the same application phase. hwloc is already able to migrate memory buffers between memory targets that may be selected using our new API. However, this operation is quite expensive in operating systems [START_REF] Broquedis | Structuring the execution of OpenMP applications for multicore architectures[END_REF]. Hence, it should likely be avoided unless the application behavior changes significantly between phases, either by using different buffers, or by using the same buffers with different access patterns. VIII. CONCLUSION AND FUTURE WORK
As the memory subsystem of computing platforms becomes increasingly complex, there is a need for portable software solutions for managing data buffers in HPC applications. The variety of emerging heterogeneous memory architectures, where normal memory may be combined with high-bandwidth and/or non-volatile memory, requires abstract ways to compare these technologies before deciding where to allocate. In this article, we presented an interface that manages this complexity by exposing hardware characteristics. Our approach focuses on first identifying the existing memory kinds in the platform and then exposing their abilities through a number of convenient attributes, such as bandwidth, latency, and capacity. This identification and characterization step was missing in existing approaches, which hindered productivity and portability by requiring users to benchmark every new platform and guess their memory organization.
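The selection logic such an interface enables might look like the following sketch. The memory kinds and attribute values are illustrative, not measurements, and this is not the hwloc API itself, only the kind-ranking idea it supports:

```python
# Hypothetical attribute table for the memory kinds of one node; the
# numbers are invented for illustration.
kinds = {
    "DDR": {"bandwidth_gbs": 100, "latency_ns": 90,  "capacity_gb": 256},
    "HBM": {"bandwidth_gbs": 800, "latency_ns": 120, "capacity_gb": 16},
    "NVM": {"bandwidth_gbs": 40,  "latency_ns": 300, "capacity_gb": 1024},
}

def best_kind(attr, minimize=False, min_capacity_gb=0):
    """Pick the memory kind optimizing one attribute, subject to a
    capacity floor (e.g. the buffer must fit in the target)."""
    candidates = {k: v for k, v in kinds.items()
                  if v["capacity_gb"] >= min_capacity_gb}
    return (min if minimize else max)(
        candidates, key=lambda k: candidates[k][attr])

bw_best = best_kind("bandwidth_gbs")                      # bandwidth-hungry buffer
lat_best = best_kind("latency_ns", minimize=True)         # latency-sensitive buffer
big_bw = best_kind("bandwidth_gbs", min_capacity_gb=64)   # large buffer: HBM too small
```

With such attributes exposed per platform, the application no longer needs to benchmark and guess the memory organization itself.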
|
2022
|
[
"info"
] |
[
"409750",
"409750"
] |
[
"1244",
"176348"
] |
01233888
|
In the Operational phase, the gateway chooses the action π * (s) = (m, n) (the redundancy level) based on the current state s and the optimal deterministic policy π * . VI. PERFORMANCE EVALUATION
A. Simulation Settings
In this section, we evaluate the performance of DA-VBC based on the expected total reward defined in [START_REF] Kong | Decentralized coding algorithms for distributed storage in wireless sensor networks[END_REF] to find the optimal policy π * (represented in the curves by π mdp ). To the best of our knowledge, ours is the first work to dynamically optimize the redundancy level of packets stored in a WSN, so we do not have another similar method to compare DA-VBC with. Therefore, we compare it with the performance of our benchmark BS [START_REF] Liu | In-network storage for virtual broker-based publish/subscribe in wsns[END_REF], the previous original VBC scheme [START_REF] Oliveira | Virtual broking coding for reliable in-network storage on wsans[END_REF], and with two heuristic-based policies. The heuristics are: the policy π Cost, which always chooses the state with the lower cost, and the policy π Benef, which always chooses the state with the greater packet delivery. We also show the impact of different application weight factors assigned to the two metrics (π mdpCost2 and π mdpBenef2 ) used to calculate the expected total reward of the DA-VBC MDP-based optimal policy. We perform the experiment using the Network Simulator version 2 (NS-2), where each solution is executed 100 times with a simulation time of 300 s. We assume the scenario depicted in Figure 6, with four publishers and four subscribers. The node in the center represents the gateway defined in our system model (responsible for finding the optimal policy applied in the WSN) and the 8 nodes around it represent the broker nodes.
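The MDP machinery behind π* can be sketched with textbook value iteration. The states, transitions and rewards below are invented for illustration (redundancy levels (m, n) as actions, a reward trading delivery benefit against storage cost, echoing the weighted metrics in the text); this is not the DA-VBC model itself:

```python
# Textbook value iteration: V(s) = max_a sum_s' P(s'|s,a)[R(s,a) + g*V(s')].
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-8):
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(sum(p * (R[s][a] + gamma * V[s2])
                           for s2, p in P[s][a].items())
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    policy = {s: max(actions,
                     key=lambda a: sum(p * (R[s][a] + gamma * V[s2])
                                       for s2, p in P[s][a].items()))
              for s in states}
    return V, policy

# Invented instance: two channel states, two redundancy levels (m, n).
states = ["lossy", "stable"]
actions = [(1, 2), (2, 4)]
drift = {"lossy": {"lossy": 0.7, "stable": 0.3},
         "stable": {"lossy": 0.2, "stable": 0.8}}
P = {s: {a: drift[s] for a in actions} for s in states}
# Reward = delivery benefit minus redundancy cost (hypothetical values):
R = {"lossy": {(1, 2): 1.0, (2, 4): 4.0},
     "stable": {(1, 2): 3.0, (2, 4): 2.0}}
V, pi = value_iteration(states, actions, P, R)
```

The resulting deterministic policy picks high redundancy only when the channel is lossy, which is the behavior the cost/benefit weighting is meant to tune.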
|
2015
|
[
"info"
] |
[
"3210",
"40831",
"117182",
"3210"
] |
[
"10016",
"922579"
] |
01437186
|
The general problem of automatic diagnosis in an ITS [START_REF] Delozanne | Mathematical Learning Environments: a French Viewpoint[END_REF] is to infer the information of the learner's model from what is observed of their behaviour, which means analysing and interpreting the data collected during the interaction. In the PÉPITE project, we base our work on a rigorous didactic and cognitive study which has been validated academically, institutionally and experimentally [START_REF] Grugeon | Issues linked to the transition between didactic institutions : the case of algebra in the transition from vocational high schools to general high schools[END_REF]. The didactical analysis
This research in the didactics of mathematics starts from the hypothesis of knowledge building: The students have built up pieces of knowledge sometimes different from the reference knowledge. Consequently, the productions of the students present coherences and regularities which correspond to their personal knowledge. One of the results of this study is a tool enabling us to interpret the students' productions in order to find the starting point to modify their knowledge. This tool combines a series of pencil & paper tasks with a multidimensional analysis grid allowing us to interpret the student's production to establish their profile in elementary algebra. The pencil & paper tasks
Three types of pencil & paper tasks are proposed to the students during a test: technical exercises to determine numeric calculating and formal manipulating procedures, recognition exercises to determine how students identify and interpret algebraic expressions in the algebraic writings frame or linked with other semiotic frames, and modelling exercises to identify which type of algebraic treatment is used, how students translate problems into the algebraic frame and how they use the tools adapted to solve the problems. The students' answers to the exercises are, in this didactical work, analysed by hand by the teacher with the multidimensional analysis grid. The multidimensional analysis grid
This grid is made of six components: arithmetic / algebra relationship, operationality of formal manipulation of algebraic expressions, interpretation of semiotic representations in relation to algebra, production of semiotic representations in relation to algebra, role of algebra, and rationality in algebra. A set of criteria is associated with each of these analysis components. During the correction of the tests, for each answer given by the student the teachers award global values defined by the different criteria of the analysis grid.
|
1997
|
[
"info"
] |
[
"117974",
"117974",
"65",
"117974"
] |
[
"3077",
"13873",
"1105838"
] |
01458041
|
We then introduce the research objectives of the PÉPITE project and the general architecture of the system. We describe each prototype we have implemented and its validation. We point out that the difficulties in designing and implementing such software are not only a diagnosis problem, as is well known in the AIED community, but first an interface design problem. Finally, we discuss the methodology of validation of PÉPITE and our research results. Educational basis
We begin by presenting what teachers need to know about students; we then present our theoretical framework about mathematical learning and our model of competence in algebra, on which our work is based. This section ends with the presentation of the paper-and-pencil diagnosis tool we built. What do teachers want to know? Assessment systems are very often short-item tests consisting of questions that can be answered in less than one minute each. Such systems give a description of a student's state of knowledge in terms of success/failure rates. A more popular approach in the AIED community bases assessment on student modelling [START_REF] Wenger | Artificial Intelligence and Tutoring Systems, Computational and Cognitive Approaches to the Communication of Knowledge[END_REF].
|
2000
|
[
"info"
] |
[
"117974",
"117974",
"117974",
"65"
] |
[
"13873",
"3077",
"1105838"
] |
01452107
|
The energy chains differ in the energy carrier and the structure of the factory system. For example, compressed air is usually generated in-house whereas electricity comes mainly from outside. In the context of energy efficiency, closed energy cycles (e.g. by energy recovery) are preferred, as they minimize energy losses. Energy Flow Systems
The classification of the energy flow systems is based on the flow items. Therefore, based on the necessary energy forms, the energy carriers of the processes are determined. The principal flow system from provision to emission can be set up for every energy carrier. However, it is also possible to separate the energy flow systems by energy forms (for physical description) or classes such as solids, liquids or gases. Figure 1 summarizes the energy flow functions, system elements and systems. Use Case
The application of the Flow System Theory for modeling energy flow systems of a factory is illustrated in this section. The "Experimental and Digital Factory" (EDF) [START_REF] Horbach | Building Blocks for Adaptable Factory Systems[END_REF] is chosen as a simplified example. The EDF consists of various manufacturing, logistics and information systems of an item-based production.
|
2013
|
[
"info"
] |
[
"94194",
"94194"
] |
[
"1000003",
"875392"
] |
03974367
|
But the Copenhagen COP also delivered another major "top-down" innovation: the goal of limiting global temperature increase to 2°. The 2° global limit has a long and complex history [START_REF] Randalls | History of the 2°C climate target[END_REF] (Aykut and Dahan 2011) [START_REF] Cointe | 2°: the history of a policy-science nexus[END_REF]. Since the 1970s, the 2° figure has appeared in climate change talks in various contexts: as the mean estimate of climate sensitivity, 5 the reference value for the first cost-benefit studies, and the proposed threshold for dangerous climate change at the first international conferences. In 1990, Article 2 of the Climate Convention specified an "ultimate objective": "to achieve (…) stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system" - without specifying the level in question. However, until the late 1990s, the issue of the long-term goal remained at the margins of climate debate, despite the adoption of 2° as a long-term objective by the European Union. It was in the 2000s that the long-term goal - first formulated in terms of concentrations of greenhouse gases - took on its central role in the IPCC reports and in climate negotiations. In 2005, Europe launched the proposal of a 2° target for a global agreement, at first unsuccessfully. But in 2007, the Bali COP proposed the adoption of a "shared vision" including a long-term goal. The 2° target crystallized through the continuous interaction between scientific experts and policymakers, initially associated with an atmospheric concentration of 450 parts per million (ppm) of CO2.
The 2° target came to dominate in the months leading up to Copenhagen due to its ability to connect emissions and impacts and the ease with which it can be communicated, but also because this figure is sufficiently vague to allow several interpretations and is "less accurate and less clearly measurable than concentrations, which affords it an ambiguity that is very useful in the negotiation process" [START_REF] Cointe | 2°: the history of a policy-science nexus[END_REF].
|
2017
|
[
"shs"
] |
[
"1321"
] |
[
"742515"
] |
03974376
|
They were looking at what the climate biases were, but this still wasn't the place where we would talk about the question of where these biases come from... But it's not an easy question, we don't know how to do that. You get the impression that there's a lack of activity between these two communities. (L2, pers. comm, December 2013) Some researchers insist on the need to strengthen links and coordination between the process and climate areas. But the problem is deeper and touches on a fundamental characteristic of climate modeling: the difficulty of attributing the characteristics of simulations to components of the model, because of the multitude of processes and interactions at many scales. This problem is nicely summarized in a WCRP white paper: "model development is hindered by a lack of understanding of how a poor representation of cloud scale processes and cloud scale dynamics contribute to model biases in the large scale circulation features and influence future projections." 7 This "gap between processes and large-scale climate" (in the climatologist's parlance) has been characterized by philosophers as a consequence of the "epistemic opacity" of computer simulation (Humphreys, 2004: 147) or their "confirmation holism" (Lenhard and Winsberg, 2010: 254). As an instance of this epistemic opacity, even when a new parameterization successfully reproduces an aspect of the current climate, there is no guarantee that the same will be true for the future climate, because the influence of the numerous physical processes may vary in unknown ways over different timescales. That is why, according to certain modelers, improving parameterizations is a "gamble".
|
2017
|
[
"shs"
] |
[
"1321"
] |
[
"742515"
] |
01280882
|
The three critical transitions that we identified were: (1) its transfer from its original location in Cannes to Annecy (1958-1960), (2) the opening up of the festival to commercial animation (1981-1983) and (3) the event's shift from a biennale to an annual cycle (1997-1998). Findings
The overall development of the Annecy festival can be described as shifting from serving as a prestige showcase in the late 1950s, toward functioning as an artistic gathering dedicated to auteur animation in the 1960s and 1970s, toward a broad film festival and market open to commercial animation in the 1980s and 1990s, and eventually, since the late 1990s, to a more broad-spectrum industry event serving as a creative marketplace for animation industry products and projects. Each of these forms catered to specific audiences and was connected to a particular type of internal organization, as summarized in Figure 1. During the first transition (1958-1960), the festival organization shifted from a small group of Paris-based curators toward a more complex, hybrid organization composed of a Paris-based festival management and a local, Annecy-based volunteer organization. The second critical transition (1981-1983) was marked by the disruption of ties with the Paris-based festival managers and the creation of a small, purely local organization based in the city of Annecy with strong volunteer support. The third transition (1997-1998) saw this local organization grow into a larger, more professional entity that served as a hub for industry-level exchanges beyond the festival itself. In the following subsections, we present a narrative account for each of these three transitions, focusing on the environmental conditions, actors and decision processes involved, and their outcomes in terms of event characteristics and internal organization. Shift in Location: From Cannes to Annecy (1958-1960) In the 1950s, the general public mainly considered animation as a form of children's and family entertainment, epitomized by Disney movies, which had defined animation as a 'cartoon' medium since the late 1920s [START_REF] Wells | Animation: Genre and Authorship[END_REF].
To present a more artistic perspective on animation, the French film critic and animation director André Martin organized a week of artistic animated short films as a side event at the 9th Cannes International Film Festival in 1956, with the help of the national association for the promotion of cinema (Association pour la diffusion du cinéma, AFCD), and the support of the French national film board (Centre national de la cinématographie, CNC). For the first time, this event convened a large number of highly respected animation artists (e.g. Norman McLaren, Jiri Trnka, Alexandre Alexeieff, Paul Grimault and John Hubley) to take stock of the varied artistic creation in the animation field, and presented a program of animated films from such artists.
|
2015
|
[
"shs"
] |
[
"1032",
"81341",
"432899"
] |
[
"866633",
"954183"
] |
01644729
|
Our approach predicts the migration duration statically by a preliminary analysis of the VMs' load. We model the memory dirty-page generation as a two-stage process based on the observation of common workloads. Liu et al. [START_REF] Liu | Performance and energy modeling for live migration of virtual machines[END_REF] propose a migration performance model based on the memory dirty-page transfer algorithm implemented in Xen. They consider both a static and a refined dirty-page rate built on historical observations, and assume that the Writable Working Set should be transferred in one round, thereby determining the VM downtime. In contrast, we model the dirty-page rate using a two-stage approach based on KVM behavior and we consider a preset maximum downtime for each VM migration. They also do not tackle migration scheduling and network topology, which are the main contributions of this paper. The CloudSim simulator [START_REF] Calheiros | CloudSim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms[END_REF] provides a model to estimate the migration duration, but the model relies on the assumptions of Beloglazov et al. [START_REF] Beloglazov | Energy Efficient Resource Management in Virtualized Cloud Data Centers[END_REF] discussed previously. Hirofuchi et al. [START_REF] Hirofuchi | Adding a Live Migration Model into SimGrid: One More Step Toward the Simulation of Infrastructure-as-a-Service Concerns[END_REF] implemented the pre-copy migration algorithm in the SimGrid simulator. They reproduce the memory dirty-page generation behavior using a single rate, but with an unusual linear correlation with the CPU usage. In contrast, we define the dirty-page generation rate as a two-stage process, according to live VM memory observations and independently of the CPU usage.
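A simplified version of the pre-copy duration models discussed here can be sketched as follows, using a single dirty-page rate for clarity (the approach described in the text uses a two-stage rate) and invented parameters:

```python
def precopy_duration(mem_bytes, bw, dirty_rate, max_downtime, max_rounds=30):
    """Simplified pre-copy live-migration model: each round re-sends
    the pages dirtied during the previous round, until the remainder
    can be sent within the allowed downtime. Single dirty-page rate
    for clarity; mem_bytes, bw (bytes/s) and dirty_rate (bytes/s)
    are inputs, not measurements."""
    remaining, total = mem_bytes, 0.0
    for _ in range(max_rounds):
        t = remaining / bw           # time to send this round
        total += t
        if t <= max_downtime or dirty_rate >= bw:
            break                    # last round fits in the downtime
        remaining = dirty_rate * t   # pages dirtied meanwhile
    return total

# Illustrative numbers only: 4 GiB VM, 1 GiB/s link,
# 100 MiB/s dirty rate, 50 ms allowed downtime.
GiB = 1 << 30
d = precopy_duration(4 * GiB, 1 * GiB, 100 * (1 << 20), 0.05)
```

The rounds form a geometric series with ratio dirty_rate/bw, which is why the dirty-page rate and the available (topology-dependent) bandwidth dominate the predicted duration.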
|
2017
|
[
"info"
] |
[
"455237",
"478607",
"451999",
"451999"
] |
[
"16780",
"1334"
] |
01271665
|
Recently, some migration models have been developed and integrated into simulators to evaluate VM placement algorithms properly [START_REF] Hirofuchi | Adding a Live Migration Model Into SimGrid, One More Step Toward the Simulation of Infrastructure-asa-Service Concerns[END_REF]. While these models reproduce migrations finely, they are only devoted to simulation purposes and are not used to compute scheduling decisions. We propose here a migration scheduler that considers the network topology, the migration routes, the VM memory usage and the dirty-page rates to compute precise migration durations and infer better schedules. We implemented our scheduler on top of BtrPlace [START_REF] Hermenier | BtrPlace: A Flexible Consolidation Manager for Highly Available Applications[END_REF], an extensible version of Entropy [START_REF] Hermenier | Entropy: a Consolidation Manager for Clusters[END_REF] that allows the scheduling decision capabilities to be enriched through plug-ins. To assess the flexibility of our scheduler, we also implemented constraints to synchronize migrations, to establish precedence rules, and to respect power budgets, as well as an objective that minimizes energy consumption. We evaluated our model's accuracy and its resulting benefits by executing migration scenarios on a real testbed including a blocking network, mixed VM memory workloads and collocation settings. Our model predicted the migration durations with at least 94% accuracy and an absolute error of 1 second, while BtrPlace vanilla was only 30% accurate. This gain of precision led to wiser scheduling decisions. In practice, the migrations completed on average 3.5 times faster than with BtrPlace vanilla. Thanks to a better control of migrations and power-switching actions, we also reduced the power consumption of a server decommissioning scenario according to different power budgets.
|
2015
|
[
"info"
] |
[
"241684",
"241684",
"241684"
] |
[
"16780",
"1334"
] |
00537439
|
A booklet (available on the web site [START_REF] Marchal | A Math lab on vectors: complet report[END_REF]) with the explanation and a brief reminder of the interface is provided. The rules are given: how to enter answers, what is required to validate an exercise, and the necessity to show the result to the teacher before going to the next exercise. Pupils have to take notes in order to write a report after the lab. The possibility to save the sessions so that the pupils can keep a copy of their work is mentioned. -Second part: Explanation of the first exercise. Individual help during the realization. -Third part: Explanation of the second and the third part of the lab session when two thirds of the class have achieved the first exercise. -Fourth part: Debriefing. Reminder of the report to be returned to the teacher, together with an opinion sheet. -After the session: The pupils have to fill in a report justifying what they did and why, mentioning the laws they used.
|
2006
|
[
"info"
] |
[
"707",
"2952",
"79",
"59688",
"2396"
] |
[
"2294",
"838492",
"839229",
"20288"
] |
00514500
|
Explanations provide us with a declarative view of the computation and we plan to use their tree structure to adapt declarative diagnosis [START_REF] Shapiro | Algorithmic Program Debugging[END_REF] to constraint programming. In [START_REF] Tessier | Declarative diagnosis in the CLP scheme[END_REF] a framework for declarative debugging was described for the CLP scheme [START_REF] Jaffar | Semantics of constraint logic programs[END_REF]. Symptom and error are connected via some kind of proof tree using clauses of the program. The diagnosis amounts to search for a kind of minimal symptom in the tree. In [START_REF] Tessier | Declarative diagnosis in the CLP scheme[END_REF], the solver was only seen as a (possibly incomplete) test of unsatisfiability (well-behaved solver of [START_REF] Jaffar | Semantics of constraint logic programs[END_REF]) so constraint solving was not fully taken into account. But, for CLP in finite domains, constraint solving involves domain reduction for which we have defined in this paper another kind of proof tree: explanation trees. In a future work we plan to integrate these two kinds of proof trees in order to have finer connections between symptom and error. c
© 2003 Published by Elsevier Science B.V.
Example 3.1. Arc consistency is a simple and particular case of hyper-arc consistency. Let c ∈ C with var(c) = {x, y}.
|
2002
|
[
"info"
] |
[
"1625",
"1625",
"1625"
] |
[
"4110"
] |
00459189
|
Declarative means that the user has no need to consider the computational behavior of the programming system; they only need declarative knowledge of the expected properties of the program. This paper is an attempt to adapt declarative diagnosis to constraint programming thanks to a notion of explanation tree. Constraint programs are not easy to debug because they are not algorithmic programs [START_REF] Meier | Debugging constraint programs[END_REF], and tracing techniques have proved limited for them. Moreover, it would be incoherent to use only low-level debugging tools when, for these languages, the emphasis is on declarative semantics. Here we are interested in a wide field of applications of constraint programming: finite domains and propagation. The aim of constraint programming is to solve Constraint Satisfaction Problems (CSPs) [START_REF] Tsang | Foundations of Constraint Satisfaction[END_REF], that is, to provide an instantiation of the variables that is a solution of the constraints. The solver moves towards the solutions by combining two different methods. The first one (labeling) partitions the domains. The second one (domain reduction) shrinks the domains by eliminating values that cannot be correct according to the constraints. In general, labeling alone is very expensive, and domain reduction alone only provides a superset of the solutions.
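The two methods described in this passage can be illustrated with a toy fragment of Python (not the paper's system): domain reduction for a single constraint x < y, followed by labeling. The helper `revise_lt` is a hypothetical name introduced here.

```python
# Toy illustration of the two solver ingredients described above, for one
# constraint x < y over finite domains. Not the paper's solver.

def revise_lt(dx, dy):
    """Domain reduction for x < y: remove values no partner value supports."""
    dx2 = {v for v in dx if v < max(dy)}   # x needs some y strictly above it
    dy2 = {v for v in dy if v > min(dx)}   # y needs some x strictly below it
    return dx2, dy2

dx, dy = {1, 2, 3, 4}, {1, 2, 3}
dx, dy = revise_lt(dx, dy)               # dx == {1, 2}, dy == {2, 3}

# Reduction alone only yields a superset of the solutions: (2, 2) is still in
# dx x dy but violates x < y. Labeling (enumeration) extracts the solutions.
solutions = [(x, y) for x in sorted(dx) for y in sorted(dy) if x < y]
print(solutions)  # [(1, 2), (1, 3), (2, 3)]
```

The example also shows why the two methods are combined: reduction prunes cheaply, then labeling only has to enumerate the small remaining domains.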
|
2003
|
[
"info"
] |
[
"1625",
"1625",
"1625"
] |
[
"4110"
] |
00136179
|
1. Detect: Sm = skeleton of the mask M; Bm = the external boundary of M after dilation; Ig = gradient of Is; Es = edges of Is (after the previously mentioned post-processing).
2. Set Ig(Es) to max(Ig) (pixels belonging to the edges will have the highest gradient values).
3. Modify Ig so that it has minima only at Sm and Bm.
4. Apply the watershed.

The algorithm gives a unique region as the object around the skeleton, bounded by the detected edges. Experimental Results
We tested our system on 20 bridge and 20 roundabout objects.

| Step | Object | Correct | Acceptable | Incorrect |
|---|---|---|---|---|
| Segm. | Bridge | 15 | 3 | 2 |
| Segm. | Round-about | 10 | 5 | 5 |
| Match. | | | | |
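The marker-based watershed steps listed above can be sketched with SciPy's `watershed_ift` on a toy gradient image; the data and marker labels here are invented for illustration and only stand in for Sm, Bm, Ig and Es from the algorithm.

```python
import numpy as np
from scipy import ndimage as ndi

# Toy gradient image Ig: weak gradient inside the object, high values on its
# edges Es (step 2 analogue: edge pixels carry the highest gradient values).
Ig = np.full((9, 9), 10, dtype=np.uint8)
Ig[2:7, 2:7] = 1                      # object interior: weak gradient
Ig[2, 2:7] = Ig[6, 2:7] = 30          # horizontal edge pixels
Ig[2:7, 2] = Ig[2:7, 6] = 30          # vertical edge pixels

# Step 3 analogue: minima (markers) only at the skeleton Sm (object, label 1)
# and at the dilated external boundary Bm (background, label 2).
markers = np.zeros_like(Ig, dtype=np.int16)
markers[4, 4] = 1                     # Sm: a point of the mask skeleton
markers[0, :] = markers[-1, :] = 2
markers[:, 0] = markers[:, -1] = 2    # Bm: outer boundary

# Step 4: flood the gradient image from the two markers. The object is the
# unique region grown around the skeleton marker, bounded by the edge pixels.
labels = ndi.watershed_ift(Ig, markers)
object_region = labels == 1
```

`watershed_ift` requires a `uint8`/`uint16` input and integer markers; the high-gradient ring keeps the background flood from entering the object region.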
|
2006
|
[
"info"
] |
[
"16654"
] |
[
"836897"
] |
00644982
|
Indeed, it was soon recognized that panoramic images may ease photogrammetric work; the first panoramic camera may have been one developed by Puchberger in 1843, i.e. just a few years after the invention of modern photography [START_REF] Mcbride | A timeline of panoramic cameras[END_REF]. Multi-camera systems were developed at least as early as 1884, initially mainly if not exclusively for aerial imaging. The earliest work known to me (no effort was made for an exhaustive bibliographic search) is that of Triboulet, who, as reported in [START_REF] Tissandier | La photographie en ballon[END_REF], experimented from 1884 on with a multi-camera system consisting of 7 cameras attached to a balloon: one camera looked downwards and 6 cameras were equally distributed around the balloon's circumference (the system thus resembles the popular Ladybug sensor). In addition to hardware for the acquisition of images and complementary measurements, photogrammetry progressed significantly through the development of equipment and procedures to exploit the acquired images, see e.g. an early survey in [START_REF] Laussedat | Recherches sur les instruments, les méthodes et le dessin topographiques[END_REF]. Epipolar and Multi-view Geometry
Epipolar geometry seems to have been first uncovered by Hauck in 1883 [START_REF] Hauck | Neue Constructionen der Perspective und Photogrammetrie (Theorie der trilinearen Verwandtschaft ebener Systeme) -1st article[END_REF]. In the same paper as well as in follow-up papers [START_REF] Hauck | Theorie der trilinearen Verwandtschaft ebener Systeme. die orientirte Lage -2nd article[END_REF][START_REF] Hauck | Theorie der trilinearen Verwandtschaft ebener Systeme. die dreibündigeindeutige Verwandtschaft zwischen drei ebenen Punktsystemen und ihre Beziehungen zur quadratischen und zur projectiv-trilinearen Verwandtschaft -3rd article[END_REF][START_REF] Hauck | Theorie der trilinearen Verwandtschaft ebener Systeme. die trilineare Beziehung zwischen drei einstufigen Grundgebilden -4th article[END_REF][START_REF] Hauck | Theorie der trilinearen Verwandtschaft ebener Systeme. zusammenfassung und wichtige Specialfälle -5th article[END_REF], trilinear relationships of points and lines seen in three images were also described.
|
2011
|
[
"info"
] |
[
"108956"
] |
[
"1630"
] |
00525670
|
The method was developed for a sensor based on a parabolic mirror, but its adaptation to other sensors is straightforward. 3D reconstruction is done using geometrical constraints provided by the user, that are simple in nature (coplanarity, perpendicularity and parallelism) and may be easily provided without any computer vision expertise. The major drawback of single-view 3D reconstruction is of course that only limited classes of objects may be reconstructed and that the reconstruction is usually incomplete. The major advantages however are that it is a quick way of obtaining 3D models, that it is rather easy to implement and to use and that due to user interaction and the small size of the problem the reconstruction process becomes very reliable, compared to more automatic multi-view systems. Also, using geometrical constraints on the scene structure is always a good idea in order to obtain realistic 3D models. 3D models from single images might be used to register perspective views of scene details in order to obtain high resolution global 3D models. One advantage of our method compared to other approaches is that a wider class of objects can be reconstructed (especially, there is no requirement of disposing of two or more ideal points for each plane). The simultaneous reconstruction of several planes and several points that forms the starting point of our method makes it likely that errors are nicely spread over the whole 3D model, compared to more sequential approaches. Please contact the author for getting a paper version with color figures. Figure 1 :
Figure 1: The paraboloidal mirror. (a) Calibration: the dotted line shows the circle (see text).
|
2000
|
[
"info"
] |
[
"2399"
] |
[
"1630"
] |
02274143
|
The attributes of the students considered for adaptive feedback are the cognitive learning style, student's performance, and knowledge level. The cognitive apprenticeship approach to learning involves the guidance of a student to achieve cognitive and meta-cognitive skills with the help of an expert. This principle was selected for this research because it clearly supports feedback, thus providing an avenue to demonstrate the use of a dynamic knowledge base approach to adaptive feedback. Algebraic Concept Relations for Adaptive Feedback in Physics
In this article, we focus on the cognitive apprenticeship (CA) pedagogical principle and the physics domain. In the CA principle, the main aim of designing teaching methods is to help students acquire and use cognitive and meta-cognitive strategies for discovering, utilizing and managing knowledge [START_REF] Collins | Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. Knowing, learning, and instruction: Essays in honor of Robert Glaser[END_REF]. The acquisition and use of these strategies depend solely on the interaction between the student's current knowledge, the social and physical environment of the problem-solving, and the intricacies of the problem-solving. The challenge is to represent the attributes of the CA principle that affect the type, timing, goal, and sequence of feedback, relating them to the characteristics of the domain and student models. The representation of the characteristics of all 3 models relating to adaptive feedback is shown. An algebraic concept relation which provides adaptive feedback in physics is proposed. The main objective is to observe the relationship between a student's performance and the adaptive feedback provided.
|
2018
|
[
"info"
] |
[
"567440",
"567440",
"567443",
"567440",
"567442"
] |
[
"1053004"
] |
03694417
|
Defining processes and data at the global level of a spacecraft will fail. Instead, applying the concept of divide and conquer, the identification of processes and data models is first done at discipline level, for which relevant experts can support the local digitalisation of the discipline that they know. In a second step, putting together the data models shows that some data are shared between disciplines. For example, the system product tree is used by Product Assurance, the centre of gravity from the CAD tool is used by the control system, etc. It is essential that the two disciplines have the same understanding of the shared data: same meaning (dry mass, wet mass), same referential, same units, same structure, same financial reference (for cost data), etc. This means that each data item must have a unique semantics for everyone. Defining the common semantics of data is done at the level of an ontology, which is commonly discussed and agreed in the OSMoSE group [12]. Indeed, a set of data can have a physical representation (e.g. XML), a logical representation (e.g. a meta-model), and a conceptual representation (an entity-relation concept, called here ontology). The ontology language selected is ORM [12], [14]. ORM makes it possible to define the agreed entity-relationship model of the data set, as well as (thanks to the specific mechanism of "derivation") any semantically equivalent expression preferred by one or another stakeholder. This semantic equivalence makes it possible to automatically generate a physical representation of the data set appropriate for each stakeholder, as well as the tool that translates one format into another. This achieves what is called "semantic interoperability" (Figure 2).
|
2022
|
[
"info"
] |
[
"85250"
] |
[
"975435"
] |
02263427
|
This covers multiple domains such as avionics, data handling, payload data processing, microelectronics, control, and software. They address in particular:
- consolidation of the reference architecture (for data handling, AOCS, software)
- modeling of the architecture (e.g. in AADL)
- elaboration of reference specifications (for computers, Remote Terminal Units, software execution platform and functional chains, software and hardware communication)
- onboard communications services and the use of electronic datasheets for interface definition
- interface standardization (for bus protocols, sensor/actuator functional interfaces, payload interfaces, space-ground interfaces)
- prototyping of some building blocks.
The SAVOIR portfolio is organized according to the technology levels: basic technologies, components, modules, equipment, systems. Basic technologies and components are considered enabling technologies. Reference specifications apply more to modules and equipment. The building blocks are provided either with only a reference specification or with an implementation at various Technology Readiness Levels. The SAVOIR Advisory Group is supported by sub-working groups:
- SAVOIR-S/A-I/F: This working group addresses the electrical interface of the sensors and actuators used for the attitude control and guidance of the spacecraft. It has concluded with the selection of Spacewire and 1553 as bus standards, and has recommended the standardization of an RS422 interface protocol, which was then defined in subsequent R&D activities.
- SAVOIR-SAFI: This working group is in charge of the standardization of the functional interface with sensors and actuators. It intends to prepare the use of interface mechanisms based on Electronic Data Sheets as defined in the CCSDS SOIS standards.
- SAVOIR-FAIRE: This working group is in charge of the on-board software reference architecture. It has proposed an architecture (named COrDeT) based on the segregation of the application software (independent from the execution context and expressed with components: Platform Independent Model) and the execution platform (providing services to components such as real-time scheduling, communication and specific services).
|
2012
|
[
"info"
] |
[
"85250"
] |
[
"975435"
] |
01276635
|
Nearer to the object paradigm and to UML, [START_REF] Górski | Object oriented safety monitor synthesis[END_REF] define "critical attributes" of the system objects and study the effects of potential harmful variation of these attributes. This leads first to the notion of critical sub-systems (which can be components) and second to the identification of hazardous states based on statecharts ([START_REF] Nowicki | Object oriented safety analysis of an extra high voltage substation bay[END_REF]). The approach is the same as the previous ones and consists of identifying parts of the system that might cause some damage. But the links between the system objects themselves are even more difficult to evaluate. Among analytical methods allowing fault forecasting, FMECA ([START_REF]1629A. Procedures for performing a Failure Mode, Effects and Criticality Analysis[END_REF]) is certainly the most used during functional analysis. Nevertheless, it can be applied to software components and to their links [START_REF] Yacoub | A methodology for architectural-level risk analysis[END_REF]. This approach is similar to the study of electronic components but does not take into account the major object concepts such as classes or methods. [START_REF] Bitsch | Requirements on methods and techniques in perspective to approval process for railway systems[END_REF] suggests using this technique by analyzing the objects' methods as in a functional analysis, and thus identifying effects on the system. In a case study on a car design, [START_REF] Johannessen | Hazard analysis in object oriented design of dependable systems[END_REF] use the UML use cases to specify requirements and perform an FMECA based on these diagrams. However, the link with UML is limited because the use cases identified, such as "car stability during braking", correspond to non-functional requirements and thus cannot be used to identify objects.
|
2003
|
[
"info"
] |
[
"32848"
] |
[
"17727"
] |
02910983
|
It has been successfully applied to study the scale inhibition efficiency of Cu and Zn cations. Maximum inhibition efficiency was reached with 5 mg/L Cu²⁺ and 4 mg/L Zn²⁺.
2. In addition, the FCP method showed that these inhibitors are very efficient at concentrations greater than 1 mg/L. The optimal concentration of the Zn²⁺ ion was 4 mg/L and that of the Cu²⁺ ion was 5 mg/L, for which no calcium carbonate precipitation occurred. Trace amounts of inhibitor can substantially block the CaCO3 nucleation process.
3. FCP and SQCM results are in good agreement. They are very sensitive for detecting the presence of inhibiting substances in water at very low concentrations. Indeed, similar concentrations used in SQCM tests showed that the surface coverage of deposits on a pre-calcified surface was reduced by the presence of these inhibitors.
4. The deposits formed by the FCP method essentially consist of vaterite in the absence of inhibitor, aragonite in the presence of copper ion, and calcite with addition of zinc ion.
Figure captions
Panalytical employing Cu-Kα radiation (λ = 1.54 Å).
|
2019
|
[
"chim"
] |
[
"219323",
"541955",
"541955",
"1048418",
"541955",
"1048418",
"541955"
] |
[
"842301",
"740303"
] |
01836183
|
The set-up was tested with different water samples which were brought to a given supersaturation coefficient by degassing the dissolved CO₂. The prediction of the scaling propensity of water was then possible through the relationship between the scaling rate on a pre-calcified surface and the supersaturation coefficient. In addition, the kinetics of CaCO₃ deposit on the pre-calcified SQCM surface was found to be slower for natural water than for synthetic water (same calcium concentration). Furthermore, the activation energy for scale deposit in synthetic water was found to be 22 kJ·mol⁻¹, which may be related to the diffusion of ions and/or CaCO₃ nuclei in solution. Introduction
Scale deposit on solid surfaces causes serious problems in many industries using or processing water, e.g. in cooling systems (Abd-El-Khaled and Abd-El-Nabey, 2013) or in heat exchangers [START_REF] Yang | Investigation of induction period and morphology of CaCO 3 fouling on heated surface[END_REF]. Indeed, the thickness of the scale deposit may limit thermal exchanges or the water flow in pipes. The use of chemicals is a common approach to control scale deposit. However, those chemicals may have an important impact on the environment, e.g. in the eutrophication process. On the one hand, scale deposit can occur when water becomes supersaturated, i.e. when the product of ion activities, [Ca²⁺] × [CO₃²⁻], exceeds the solubility product Ksp of CaCO₃ [START_REF] Xyla | The precipitation of calcium carbonate in aqueous solutions[END_REF]. On the other hand, scale deposit may take place due to an external cause, e.g. the introduction of a substrate or seed crystals [START_REF] Donnet | Contribution of aggregation to the growth mechanism of seeded calcium carbonate precipitation in the presence of polyacrylic acid[END_REF].
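As a numerical illustration of this supersaturation criterion (the activity values below are invented, and Ksp ≈ 3.3 × 10⁻⁹ is only the approximate solubility product of calcite at 25 °C, used here as an assumed constant):

```python
# Illustrative sketch of the supersaturation criterion described above.
# Assumed example values; not measurements from the study.
K_SP_CACO3 = 3.3e-9   # approximate solubility product of calcite at 25 °C

def supersaturation_coefficient(a_ca, a_co3, ksp=K_SP_CACO3):
    """Omega = ([Ca2+] * [CO3 2-]) / Ksp; Omega > 1 means supersaturated water."""
    return (a_ca * a_co3) / ksp

omega = supersaturation_coefficient(a_ca=2e-3, a_co3=1e-5)
print(omega > 1)   # True: scale deposit can occur for these assumed activities
```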
|
2018
|
[
"chim"
] |
[
"541955",
"538577",
"529736",
"538577",
"541955"
] |
[
"842301",
"740303"
] |
02930349
|
A summary of the key structural features is collated in Table 1. Several trends emerge from examination of these data. The Nimine-Ncarbazolate-N'imine angle increases with the size of the Ae element, so that more room is created to accommodate increasingly large metals. Conversely, the bite angle Nimine-Ae-N'imine narrows substantially when the metal becomes bigger, because the hosting of the metal by the bis(imine) claw then gradually becomes more difficult. The further consequence of these distortions is a gradual increase of the distance between the metal and the best average N=Cimine-C1-C9a-Ncarb-C8a-C8-C'imine=N' backbone plane, an additional adjustment of the geometrical pattern required to maintain the Ae-Ncarbazolate bond. The Ae-Ncarbazolate interatomic distances rise steadily from Mg to Ba, but these variations essentially match the increase of the ionic radii of the metals.
Beyond its utility for barium, demonstrated in our earlier communication through the preparation of complexes 1-5, we have hence shown here that {Carb DiPP }H is a versatile platform that can be applied to the other large alkaline earths, calcium and strontium, and also to their smaller congener, magnesium. Overall, the preparation of stable Ae complexes has so far been successfully extended to various alkaline-earth amides, alkyls, halides, silyls and stannyls. In this respect, and also because it is readily available on large scales within a matter of days, {Carb DiPP }H can be compared in efficiency to the ubiquitous β-diketimine {BDI DiPP }H that has proved so spectacularly successful with calcium and magnesium. Unlike {BDI DiPP }H, the new {Carb DiPP }H has so far failed to deliver isolable "simple" Ae-hydrides, even if the available experimental data (compounds 15 and 16) suggest that these species must be formed at least transiently.
|
2020
|
[
"chim"
] |
[
"194938",
"194938",
"194938",
"194938",
"194938",
"194938"
] |
[
"754009",
"929139",
"750528",
"12522"
] |
03182770
|
The paper is structured as follows. The second section describes the adopted methodology, including the presentation of the industrial impregnation process, the design of the studied samples, the magnetic measurement method and the experimental protocol that was followed. Then, results related to the effect of the impregnation process on the iron losses and the normal magnetization curves are given. Moreover, thanks to thermo-mechanical finite element simulations, these effects are linked to the thermal retraction of the resin on the magnetic circuit. Finally, the main conclusions of the study and future work are presented. Methodology and experimental protocol
The studied industrial impregnation process
Usually, large electrical machines, once the magnetic circuits are built and wound, are impregnated with a dielectric resin that is polymerized under heat treatment. As mentioned in the introduction, the aims of this process are mainly to ensure thermal properties, electrical insulation and reduction of the machine vibrations during operation. In this study, the considered industrial impregnation process involves two cycles, as presented in Figure 1. Each cycle includes a vacuum pressure impregnation phase followed by a heat treatment phase. The impregnation phase consists of placing the machine in a tank, which is subjected to a defined level of vacuum, and applying the resin.
|
2021
|
[
"spi"
] |
[
"13338",
"544873",
"211915",
"13338",
"211915",
"343049",
"343049"
] |
[
"4802",
"17796"
] |
03182770
|
Indeed, the way the resin will diffuse, and consequently its impact on the magnetic properties, is directly linked to the geometry of the core. Conclusions
The presented study deals with a first investigation of the effect of an industrial impregnation process on the magnetic properties of a lamination stack. In particular, the changes in magnetic properties have been related to the way the resin diffuses within the magnetic circuit. For this, two groups of magnetic cores have been studied: a first group with ring cores that are wrapped in a non-porous adhesive tape, implying that the resin will only be distributed on the outer surface, and a second group without tape, implying that the resin will diffuse within the magnetic core. Experimental results show that in both cases the impregnation process has a significant impact on the magnetic properties. In particular, they show that the more the resin diffuses within the magnetic circuit, the more the magnetic properties deteriorate. Finally, the results of this study demonstrate the value of investigating the impact of an impregnation process on the magnetic properties of laminated magnetic cores. One must note that the level of impact in the presented results is valid for the studied geometries, namely magnetic circuits of small sizes. Indeed, for larger magnetic circuits, in particular in the case of electrical machines, it is necessary to investigate whether and how the resin will diffuse over the entire magnetic circuit. However, this first investigation of the effect of the impregnation process on magnetic properties opens many perspectives.
|
2021
|
[
"spi"
] |
[
"13338",
"544873",
"211915",
"13338",
"211915",
"343049",
"343049"
] |
[
"4802",
"17796"
] |
01396210
|
Finally, the tone observed at 1.5 kHz in the sideline microphone spectra during the CEPRA19 experiments, which corresponds to the one registered on the skin with the unsteady pressure measurements (see Figure 12 (top left)) and studied by D. Casalino [START_REF] Casalino | Facing rim cavities fluctuation modes[END_REF], is well recovered. Nevertheless, in the upstream direction, the level of this tone is overestimated, with an almost perfect agreement in pure sideline (θ = 90°) and a suppression of the tone downstream of the landing gear. Interestingly, the agreement with measurements in terms of frequency band is quite different between the acoustic spectra and the wall pressure spectra (see Figures 10 and 12). The global trend is a shift towards higher frequencies for the acoustic spectra with respect to the wall pressure ones. Two comments are necessary. On the one hand, wall pressure spectra correspond to rather local information about the flow, whereas acoustic spectra are calculated from a surface integration that takes into account the whole landing gear skin. On the other hand, there is no obvious relationship in terms of broadband frequency content between wall pressure spectra and far-field acoustics. In the end, the difficulty of establishing necessary and sufficient criteria in the simulation to target an acoustic frequency band in the far field arises. The acoustic criteria on the grid cell size around the landing gear are necessary to ensure the acoustic propagation in the near field, but broadband noise in the far field is rather controlled by the accuracy of the flow simulation. On this point, the computational strategy based on the ZDES and mesh refinement for the attached boundary layer and high-curvature areas seems successful.
|
2017
|
[
"spi"
] |
[
"252026",
"252026",
"404148",
"252026"
] |
[
"172243"
] |
02661239
|
STEC strains were incubated in the digestive contents of sheep fed the HW diet supplemented with the probiotic S. cerevisiae I-1077 or L. acidophilus BT-1386. The pH of the rumen contents of sheep fed S. cerevisiae was slightly higher than the pH of the rumen contents of sheep fed L. acidophilus (pH 5.87 versus pH 5.65), and the values dropped to pH 4.68 and 4.38, respectively, after 24 h of incubation with the STEC strains, while the survival rate of STEC varied between 1.48 and 0%. Very low levels of STEC cells were recovered for five STEC strains (four AR strains and one AS strain) in the presence of S. cerevisiae, but STEC cells were never recovered in the presence of L. acidophilus, even after a culture enrichment step. In the rumen contents containing S. cerevisiae, high concentrations of ethanol were found (Table 2), and in the samples containing L. acidophilus, the lactate concentration was increased, whereas the concentrations of the three major VFAs (acetate, propionate, and butyrate) remained close to those in samples without probiotics. In contrast to the results for rumen contents, probiotic supplementation did not inhibit STEC growth in the jejunal and cecal contents (data not shown). In jejunal samples there was an increase in the concentration of major VFAs in the presence of probiotics (Table 2), which remained less than 25 mM. The concentration of lactate in L. acidophilus incubations was also higher than the concentration in the absence of probiotics, but it remained less than 20 mM. In cecal contents, no change in the concentration of fermentation products was seen in the presence of probiotics. Since E. coli O157:H7 did not survive for 24 h during incubation in the rumen fluid but survived for at least 12 h, we performed a growth kinetic analysis in the presence of probiotics. As described above, the O157:H7 strain survived for about 12 h in the rumen fluid, and then the concentration of viable cells began to decrease (Fig. 5). 
No viable cells were detected after 18 h. In the presence of L. acidophilus, the cell concentration decreased more rapidly, and no viable cells were detected after 15 h of incubation; in the presence of S. cerevisiae no viable cells were detected after 16 h.
To determine whether probiotic yeast could help alleviate the induction of acid resistance mechanisms in the rumen fluid, we preincubated the STEC strains in rumen fluid harvested from animals fed the probiotic yeast before the acid challenge.
|
2010
|
[
"sdv"
] |
[
"31340",
"37712",
"31340",
"420960",
"129892",
"31340"
] |
[
"1203138"
] |
01601365
|
First, a trial was performed in an experimental farm with well controlled conditions, where the herd was randomly split into two groups, one being supplemented and the other one being a control. Secondly, a field trial involving five farms where periods of supplementation alternated with control periods was carried out. MATERIALS AND METHODS
Trial 1: University farm trial with contemporaneous group comparison. Trial design: The study was conducted with 40 Holstein dairy cows in the experimental farm of the University of Bologna, Italy. The lactating herd was divided into 2 groups, homogeneous for milk yield, fat and protein content, parity and body weight. Criteria to select cows undergoing the experiment were, among others, parity number <4 and 30< days of pregnancy <160. Animals received the same diet in the form of a Total Mixed Ration (TMR) that was formulated to meet the requirements of high-producing cows according to [START_REF] Nrc | Nutrient Requirements for Dairy Cattle[END_REF]. The diet was based on hay (43% of the TMR) and did not include any silage, in accordance with the operating instructions to produce Parmigiano Reggiano cheese. The experimental product (Saccharomyces cerevisiae CNCM I-1077, Lallemand Animal Nutrition, Blagnac, France) was provided at the dose of 2×10¹⁰ CFU of live yeast/head/day in a premixture to be included in the diet at 100 g/head/day. The control group received 100 g/head/day of a placebo which consisted of the carrier (limestone) used in the premixture. The duration of the trial was 63 days, including a 14-day observation period with both groups receiving the same diet without yeast supplementation, followed by a 42-day period with one group receiving the yeast supplementation and the other group the placebo.
|
2016
|
[
"sdv"
] |
[
"511289",
"460848",
"511289",
"460848",
"119997",
"511289",
"31340",
"508107"
] |
[
"1203138"
] |
03657128
|
Lastly, since endoscopic and histological data were not collected in the three databases, residual confounding by these parameters cannot be entirely ruled out. We adjusted for the number of C-reactive protein tests ordered, while we did not adjust for the number of fecal calprotectin tests ordered. Indeed, fecal calprotectin was not used in the early part of our inclusion period and is not reimbursed in France. In conclusion, this study based on three large population-based claims databases of patients with UC in both the US and France provides strong support that clinically relevant real-world effectiveness outcome measures can be calibrated by comparing RCTs and their emulations in RWD. These measures can be further used to assess the effectiveness of other treatments in patients with similar diseases. Author Contributions:
J.K. and S.S. wrote the manuscript; J.K., R.J.D., M.S., L.B., S.C.K., and S.S. designed research; J.K. and S.S. performed research; J.K. analyzed data.
(MarketScan) 2004-2018 and Optum's Clinformatics® Data Mart Database (Optum) 2005-2019, and the French administrative health database 2009-2018 (Système National des Données de Santé, SNDS). Patients enrolled in the MarketScan and Optum databases are representative of a commercially insured population in the U.S.; the SNDS covers 95% of the
|
2022
|
[
"sdv"
] |
[
"454982",
"130467",
"130467",
"130467",
"542132",
"130467",
"130467"
] |
[
"748817",
"826133",
"781522"
] |
02331287
|
One of the first published efforts to detect review spam utilized the fact that spammers duplicate their reviews, either for the same or for different products [START_REF] Jindal | Analyzing and detecting review spam[END_REF]. Similarly, the spam scoring proposed in [START_REF] Lim | Detecting product review spammers using rating behaviors[END_REF] was based on the cosine similarity between reviews. Furthermore, Wang et al. [START_REF] Wang | Review graph based online store review spammer detection[END_REF] developed a review graph to capture the interactions among reviews, reviewers and stores. Thus, the honesty of reviews could be calculated. Interestingly, this approach did not use any review text information. In contrast, the approach proposed in [START_REF] Lau | Text mining and probabilistic language modeling for online review spam detecting[END_REF] was based on text features only. Li et al. [START_REF] Li | Learning to identify review spam[END_REF] examined the effect of several feature categories on review spam identification, including content, sentiment, product or profile features. Review metadata were integrated with relational features in SpEagle, a unified framework to rank reviews [START_REF] Rayana | Collective opinion spam detection: Bridging review networks and metadata[END_REF]. Unusual temporal patterns of correlated review ratings were also used to detect spam attacks [START_REF] Xie | Review spam detection via temporal pattern discovery[END_REF]. These patterns make real-time detection of abnormal events possible [START_REF] Ye | Temporal opinion spam detection by multivariate indicative signals[END_REF][START_REF] Li | Bimodal distribution and co-bursting in review spam detection[END_REF].
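The duplicate-review signal mentioned above (cosine similarity between reviews) can be sketched in a few lines; this bag-of-words version is a simplification for illustration, not the cited papers' exact feature set.

```python
import math
from collections import Counter

def cosine_similarity(review_a, review_b):
    """Cosine similarity of two reviews' bag-of-words count vectors."""
    va = Counter(review_a.lower().split())
    vb = Counter(review_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)           # Counter returns 0 if absent
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b)

# A review duplicated across products scores 1; unrelated text scores near 0.
dup = cosine_similarity("great product fast shipping", "great product fast shipping")
other = cosine_similarity("great product fast shipping", "battery died after a week")
print(dup, other)  # 1.0 0.0
```

A spam score could then flag reviewer pairs whose similarity exceeds some threshold, in the spirit of the duplicate-detection work cited above.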
|
2019
|
[
"info"
] |
[
"494772",
"494772"
] |
[
"1033474",
"992409"
] |
01821042
|
A Naïve Bayes classifier was proposed by [START_REF] Wang | Don't follow me: Spam detection in Twitter[END_REF] to detect spam on Twitter. Features related to tweet content and user behaviour were identified and used in machine learning by [START_REF] Benevenuto | Detecting spammers on twitter[END_REF]. A hybrid approach for identifying spam profiles was proposed by [START_REF] Aswani | Detection of spammers in twitter marketing: a hybrid approach using social media analytics and bio inspired computing[END_REF], combining social media analytics with a firefly algorithm using chaotic maps for spam detection in Twitter marketing. In addition to spam message detection, recent studies have also considered the alternative task of social spammer detection. A large Twitter dataset was used in [START_REF] Shen | Discovering social spammers from multiple views[END_REF] to demonstrate that feature distributions differ between spammers and legitimate users. These feature distributions were used in a social spammer detection framework that integrated this information with a social regularization term incorporated into a classification model. In [START_REF] Gogoglou | Early malicious activity discovery in microblogs by social bridges detection[END_REF], so-called "social bridges" were identified to detect spammers on Twitter. These bridges are reported to be the major supporters of malicious users, and a graph-topology-based classifier was used to detect such bridge linkages. Another way to tackle the issue of detecting spammers on Twitter was described in [START_REF] Bindu | Discovering spammer communities in twitter[END_REF]: a multilayer social network was defined, and the identification of spammers was based on the existence of overlapping community-based features of users, such as structural behaviour and URL characteristics, represented in the form of hypergraphs.
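A minimal sketch of the Naïve Bayes spam classifier mentioned above, assuming a multinomial model over word counts with add-one smoothing; the class and training examples are illustrative, not from the cited Twitter study.

```python
import math
from collections import Counter

class NaiveBayes:
    """Multinomial Naive Bayes over word counts, with add-one smoothing."""

    def fit(self, texts, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, text):
        def log_prob(c):
            total = sum(self.word_counts[c].values())
            # log prior + sum of smoothed log likelihoods
            lp = math.log(self.class_counts[c] / sum(self.class_counts.values()))
            for w in text.lower().split():
                lp += math.log((self.word_counts[c][w] + 1) / (total + len(self.vocab)))
            return lp
        return max(self.classes, key=log_prob)
```

Trained on a handful of labelled messages, the classifier picks the class whose smoothed log posterior is highest.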
|
2018
|
[
"info"
] |
[
"312887",
"312887"
] |
[
"1033474",
"992409"
] |
03727266
|
The homogeneous controller guarantees finite-time stabilization of the system state at zero without overshoot in the first component. The main advantage of the finite-time controller [START_REF] Longchamp | Influence of zero locations on the number of step-response extre[END_REF] is the simplicity of its parameter tuning. The parameter r in [START_REF] Longchamp | Influence of zero locations on the number of step-response extre[END_REF] defines the maximum norm of the initial state x_0 for which the settling time of the system is bounded by an arbitrary fixed number T > 0.
Remark 1. The controller (21) is uniformly bounded:
$|u_h(x)|^2 \le r^2 \lambda_{\max}\left(P^{-\frac{1}{2}} K^T K P^{-\frac{1}{2}}\right), \quad \forall x \in \mathbb{R}^n. \qquad (41)$
The positive cone³ Ω ⊂ Σ⁻ defines a set of initial states of the system (1) for which the linear control (16) stabilizes the system without overshoot in the first coordinate. In the case of the homogeneous non-overshooting control (21), such a positively invariant set is the d-homogeneous cone⁴ Ω_r ⊂ Σ⁻. Since |u_lin(x)| = |u_h(x)| for ‖x/r‖_d = e^s, the linear and homogeneous controllers have the same maximum magnitude on the homogeneous ball
$B_r = \{x \in \mathbb{R}^n : \|x/r\|_d \le e^{s}\}. \qquad (42)$
Theorem 1 implies that the positively invariant compact set Ω_r ∩ B_r of the homogeneous control system is larger than the positively invariant set Ω ∩ B_r of the linear control system, despite both controllers having the same magnitude on B_r. The differential equations (19) and (37) for the linear and homogeneous barrier functions are slightly different.
|
2022
|
[
"info"
] |
[
"551852",
"300717"
] |
[
"735471"
] |
01297538
|
The study uses a difference-in-differences method with a recursive bivariate probit model to evaluate the existence of changes in trends for the employment of females in relation to that of males due to the policy, conditional on any preexisting trends. It shows a significant effect from the reductions of the Social Security employer contributions and estimates an average elasticity of employment with respect to these reductions of 0.08 for women with long-term disabilities. Other studies have also used the difference-in-differences method to examine the influence of the disability insurance programs on individuals' professional careers. Along the same lines, [START_REF] Staubli | The Impact of Stricter Criteria for Disability Insurance on Labor Force Participation[END_REF] studies, in the case of Austria, the effect of linking the eligibility to disability insurance to return to work and disability enrolment. They identify a statistically significant positive effect for this type of measure on labour market participation (in the private sector) for disabled men aged 55 and over (an increase of 1.6 to 3.4 points). The method is based on a study of differences over time and according to a gender-age mix. Marie and Vall Castello (2012) look into the impact of establishing a more generous disability insurance programme on the rate of labour market participation for claimants suffering from partial disability (who can combine disability benefits and job income) for individuals aged 55 and over (the only ones eligible for such benefit increases). Using Spanish data, the authors find that a 36% increase in the amount of disability pensions reduces the probability of being employed by 8%. A second category of studies has to do with the impact of disability on labour-market outcomes, including professional careers. 
[START_REF] Jones | Disability, Employment and Earnings: An Examination of Heterogeneity[END_REF] estimates the influence of different characteristics of disability (type, origin, duration, severity) on the probability of being employed and on labour-market earnings, using an ad hoc module of the 2002 UK Labour Force Survey.
|
2016
|
[
"shs"
] |
[
"188024",
"74242"
] |
[
"669"
] |
01297822
|
They confirm the impact of the vagaries of a career path on retirement behavior revealed above. They also allow us to underline how different careers -here summarized in a typology- lead to different models of retirement behavior. For the precarious and the unstable, the more unemployment has affected the career, the later the retirement, while, for the precarious, having known periods of sickness tends to increase the probability of obtaining a full-rate pension by age. A woman who has pursued a "precarious" career (Cluster 1) or, even more, a "non-working young" career (Cluster 2), tends to postpone her retirement in comparison with a man. Most of those in the "precarious" cluster have low wages, lower than the second decile of the wage distribution of the 1940 generation insured at the same age. So, after an intermittent and low-paid working life, if the pension is their only income, their financial security is certainly poorly ensured. Postponing their retirement allows them to expect a higher pension. The non-working young population is likely to be in a different situation. Their early withdrawal from the labor force could be explained by the existence of additional incomes (professional activity outside the private sector, spouse's incomes, capital returns) of which we have no knowledge. Like the precarious, the non-working young tend to put off their retirement, but once married, they can be less assertive in their decision.
|
2011
|
[
"shs"
] |
[
"188024",
"74242",
"219273"
] |
[
"669"
] |
03525544
|
First, Salkinson believed that Biblical Hebrew was the appropriate kind of Hebrew for books of a 'scriptural' nature, such as the New Testament and Paradise Lost that he claimed was 'based from start to finish on the Holy Books'. 32 Hence, in criticizing a previous Hebrew translation of the New Testament, Salkinson wrote: 'It abounds unduly with Aramaean forms, vulgar expressions, and Rabbinical idioms; all of which appear to the lover of genuine Hebrew incompatible with the character of inspired Scripture'. 33 Second, Salkinson thought Jews had a special appreciation for Biblical Hebrew and might therefore be induced to read Christian books written in this kind of language. He thus wrote in a letter to Franz Delitzsch:
We must remember that our New Testament is intended chiefly for our unconverted brethren. Therefore it may be of some service to have it in a style which the Jews have not yet forgotten to appreciate, that is, the Biblical Hebrew. [START_REF] Delitzsch | In Self-Defence: Critical Observations on My Hebrew New Testament[END_REF] As for Salkinson's translations of Shakespeare's plays and Tiedge's Urania, his motivations for biblicizing in them stem chiefly from the fact that Salkinson produced these translations at the request of his Jewish acquaintances in Vienna, whose special appreciation for biblical language he knew. Hence, it was Smolenskin who had requested that Salkinson translate Othello and Romeo and Juliet, as Smolenskin himself stated in his introductions to both translations:
And how my heart rejoices, when I remember that it was I who stirred the translator's spirit to undertake this task. (Introduction to Ithiel) 35 I have already told you that you are the only man who can bring this precious object into the treasure house of our language, and I have already asked and begged you to do so. (Introduction to Ram and Jael) 36 Furthermore, in a letter to Jellinek, which serves as an introduction to the translation of Urania, Salkinson wrote: 'You read the book Urania in your youth, and you found its words to be good
and right. You did not mind that the author was of a different covenant, and when I came to see you, you did not tell me: "What is there between you and me?
|
2018
|
[
"shs"
] |
[
"1003020"
] |
[
"1123100"
] |
01238895
|
This application reads a file and sends it back along with a request counter. The handler function, lines 5 to 10, receives the input stream of requests. The count variable at line 3 counts the requests, and needs to be saved between two message receptions. The template function formats the output stream to be sent back to the client. The app.get and res.send functions, lines 5 and 8, interface the application with the clients. Between these two interface functions is a chain of three functions to process the client requests: app.get → handler → reply. This chain of functions is transformed into a pipeline, expressed in the high-level fluxional language in listing 2. The transformation process between the source and the fluxional code is explained in section 3. The execution is illustrated in figure 2. The dashed arrows between fluxions represent the message streams as seen in the fluxional application.
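The chained message-passing style described above can be sketched as follows. Each stage is a function that forwards its result to the next one, and `count` is the per-fluxion state preserved between two message receptions. This is a Python illustration of the idea only; the names `handler` and `reply` mirror the text, and the original Node.js framework is not reproduced.

```python
def make_pipeline():
    count = 0  # state saved between two message receptions

    def reply(message):
        # Stands in for the res.send interface function.
        return message

    def handler(request):
        nonlocal count
        count += 1
        # Forward the formatted message to the next stage of the chain.
        return reply(f"request #{count}: {request}")

    return handler

handler = make_pipeline()
```

Each call to `handler` plays the role of one message arriving on the input stream; the counter survives across calls because it lives in the pipeline's closure.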
|
2016
|
[
"info"
] |
[
"445109",
"218317",
"218317"
] |
[
"171263"
] |
01132776
|
Continuations are functions called asynchronously once a result is available. As we will see later, Promises are designed as placeholders for a unique outcome. Iterators and Listeners are invoked multiple times, resulting in multiple outcomes. Only continuations are equivalent to Promises; therefore, we focus on continuations in this paper. Callbacks are often mistaken for continuations; the former are not inherently asynchronous, while the latter are. In a synchronous paradigm, the sequentiality of the execution flow is trivial: an operation needs to complete before the next one executes. In an asynchronous paradigm, parallelism is trivial, but the sequentiality of operations needs to be made explicit. Continuations are the functional way of providing this control over the sequentiality of the asynchronous execution flow.
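The explicit sequencing that continuations provide can be sketched in continuation-passing style. This is a synchronous Python stand-in (the I/O is faked), so only the control-flow shape is illustrated; the function names are assumptions, not an API from the paper.

```python
def read_file(path, continuation):
    """Invoke the continuation once the (notionally async) result exists."""
    data = f"contents of {path}"  # stands in for an asynchronous I/O result
    continuation(data)

def then_upper(data, continuation):
    """A second processing step, again taking an explicit continuation."""
    continuation(data.upper())

results = []
# Sequencing made explicit: read, then transform, then collect.
read_file("a.txt", lambda d: then_upper(d, results.append))
```

Each step names its successor explicitly, which is exactly the control over sequentiality that the paragraph attributes to continuations.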
|
2015
|
[
"info"
] |
[
"218317",
"218317",
"406723"
] |
[
"171263"
] |
00817504
|
We follow a similar procedure as in the initialization step (see §2.1). We extract the FAST corners and compute their BRIEF descriptors, and we keep only those features that are at a distance of at least minDist = 15 pix from the current tracked features. This prevents "redundant" points and guarantees a more uniform distribution of the features in the image. Then we consider the set of all the 3D points that have been reconstructed so far and that are visible from the camera but no longer associated with any tracked feature. The 3D point visibility is computed by considering the current position and orientation of the camera and its field of view, according to the calibration data. We project these 3D points onto the left image: if the projected image point is within a distance repThreshold of a newly detected feature, we assign that feature to that 3D point and add it to the set of tracked features. In order to speed up the search process, we use the new left detected features to build a KD-Tree and then use the FLANN algorithm to query the tree with a projected point in order to find its closest point among the new left detected features. It is worth noting that this re-association algorithm may generate false associations since, e.g., occlusions among 3D points are not taken into account. However, the system is overall robust to false re-associations: when falsely re-associated points are tracked in the next frame, they are discarded by the RANSAC process of the pose estimation algorithm, which is robust to outliers.
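The re-association step above (assign each projected 3D point to its nearest newly detected feature, if within repThreshold) can be sketched with a brute-force nearest-neighbour search standing in for the KD-Tree/FLANN query; the data layout here is an assumption for illustration.

```python
def reassociate(projected, detected, rep_threshold):
    """For each projected 3D point (id -> (x, y)), assign the closest
    newly detected feature (id -> (x, y)) lying within rep_threshold.
    Brute-force search stands in for the KD-Tree/FLANN query."""
    pairs = []
    used = set()
    for pid, (px, py) in projected.items():
        best = min(
            ((fid, (fx - px) ** 2 + (fy - py) ** 2)
             for fid, (fx, fy) in detected.items() if fid not in used),
            key=lambda t: t[1],
            default=None,
        )
        if best and best[1] <= rep_threshold ** 2:
            pairs.append((pid, best[0]))
            used.add(best[0])  # each feature serves at most one 3D point
    return pairs
```

A false association (an occluded point grabbing a nearby feature) is possible here too, matching the caveat in the text; downstream RANSAC is what makes that tolerable.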
|
2013
|
[
"info"
] |
[
"388575"
] |
[
"5848"
] |
02550949
|
INTRODUCTION
Recently there has been renewed interest in exploring Less-than-Best-Effort (LBE) access in the Internet research community and standards bodies. LBE, also known as the Scavenger class of traffic, came into existence almost a decade ago with work carried out at Internet2 [START_REF] Qbss | Internet2 QBone initiative[END_REF]. Recently, P2P and other bulk traffic have been pointed out as some of the root causes of the BufferBloat problem [START_REF]BufferBloat: What's Wrong with the Internet? [END_REF], due to large customer premise equipment (CPE) router queues. This problem, mainly caused by routers buffering packets for a long period instead of dropping them, impacts real-time traffic, which is becoming increasingly pervasive today [START_REF]Cisco Global mobile data traffic forecast update[END_REF]. As a result, the Internet Engineering Task Force (IETF) has started focusing on LBE congestion methods [START_REF] Welzl | A survey of Lower-than-Best-Effort Transport Protocols[END_REF] to transmit background data. In particular, a recent paper [START_REF] Sathiaseelan | The free Internet: a distant mirage or near reality? [END_REF] proposes the use of LBE access to provide free Internet access. The idea is to leverage the unused capacity to carry signaling or non-commercial traffic with an LBE protocol. This paper follows this idea and aims to explore the performance of Low Extra Delay Background Transport (LEDBAT) [START_REF] Shalunov | Low extra delay background transport (LEDBAT)[END_REF] over large bandwidth × delay product (LBDP) networks.
|
2013
|
[
"info"
] |
[
"103314",
"82720",
"300765",
"36235",
"82720",
"467490"
] |
[
"772271",
"184088"
] |
02191751
|
This section aims to weigh up the impact of emulation systems for the sake of evaluating QoE over SATCOM. We do not claim to explain all the issues observed, but attempt to provide a comprehensive summary, which we also present in Table I. A. Comparison of Approaches
As with all emulation systems, both Mininet and OpenSAND rely on hypotheses and simplifications in order to approach reality as closely as possible. While complexity does not necessarily guarantee result fidelity, specialized satellite emulators may better model some satellite features like resource allocation, frame (de)encapsulation, attenuation, etc. For instance, we saw that a satellite emulator like OpenSAND provides better MOS estimation than Mininet above the saturation point. Of course, a real satellite access offers unrivaled realism while allowing users to benefit from advanced satellite features like Performance-Enhancing Proxies (PEPs). However, without a partnership with a space agency, accessing a satellite infrastructure may be difficult. Mininet is an interesting solution, easy to use, requiring only a single machine to virtualize the network plane. Since this software is not a satellite network emulator, users can only create simple satellite topologies using Mininet's API. Limitations arise when users require more advanced features like resource allocation, modulation changes, etc. On the contrary, OpenSAND is a mature software that can be used to emulate a complete end-to-end satellite communication system.
|
2019
|
[
"info"
] |
[
"199705",
"467490",
"307314"
] |
[
"184088",
"772271"
] |
03623310
|
The design approach we propose integrates an FSM, suitable for interactive scenarios where user interactions in the scene affect scenario progress. A loose link between condition and interaction means that the evaluation conditions of the scene are analyzed based on the results of user interactions in the scene. These could be, for example, the results of answering questions, the time it took to answer them, or the total time spent in the scene. Thus, we have just introduced an overview of the design approach for the interactive scenario of serious games based on panoramic videos. This approach is suitable for helping the author develop his game by describing scenarios using storyboards. It should be noted that, for this part of the architecture, the multithreading scenario is purely mechanical, meaning that the system has a finite set of states and that state changes are based only on the results of real-time user interaction. In the future, a larger architecture based on adaptive learning models [START_REF] Arsovic | E-learning based on the adaptive learning model: case study in Serbia[END_REF][22] will be developed, using the resources of this design approach in combination with artificial intelligence techniques, for example machine learning and deep learning, to analyze profile data and user behavior and offer appropriate suggestions.
Conclusion
This article proposes a new design method that enables authors to build serious-gaming applications based on interactive 360° videos. The proposed model follows scenario-based design, and is the result of a careful analysis of different game architectures, considering the particular properties of 360° video and additional interactions. Encapsulating the video segment within the scene ensures the integrity of the story in the video.
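The finite-state behaviour described above (state changes driven only by the results of real-time user interaction) can be sketched as a transition table. The scene names and interaction results here are illustrative assumptions, not taken from the authors' engine.

```python
# (current_state, interaction_result) -> next_state
TRANSITIONS = {
    ("intro", "correct"): "scene2",
    ("intro", "wrong"): "intro",      # failed evaluation replays the scene
    ("scene2", "correct"): "end",
}

def run(events, state="intro"):
    """Drive the scenario FSM with a stream of interaction results;
    unknown (state, event) pairs leave the state unchanged."""
    for e in events:
        state = TRANSITIONS.get((state, e), state)
    return state
```

Because every transition is keyed on an interaction result, the scenario progress is entirely determined by what the user did in each scene, which is the "purely mechanical" behaviour the text describes.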
|
2022
|
[
"info"
] |
[
"1042359",
"1042359",
"1042359"
] |
[
"175214",
"170893"
] |
01628421
|
The basic solution concept for a static game is called Nash equilibrium and corresponds to a state where no player has an incentive to change his strategy, assuming the other players do not change theirs. Note that various other solution concepts have been proposed in the literature; among them, the mixed Nash equilibrium [Nash, 1951] is one of the most popular. A best response for player i corresponds to the situation where the player plays his best option given the strategies the other players have already fixed. Definition 2 (Best Response). A strategy profile s is a best response for player i if and only if
$\forall s_i' \in A_i,\ u_i(s) \ge u_i((s_i', s_{-i})).$ Definition 3 (Pure Nash Equilibrium). A strategy profile s is a Pure Nash Equilibrium (or PNE) if and only if s is a best response for all players. We call NE(G) the set of pure Nash equilibria of a game G.
Example 2 (Example 1 continued). The WLC game has 3 PNE. The first one happens when all players choose not to come to the party (wlc).
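Definitions 2 and 3 above translate directly into a brute-force enumeration of pure Nash equilibria: keep every profile for which no player has a profitable unilateral deviation. The coordination-game example below is illustrative and is not the WLC game from the paper.

```python
from itertools import product

def pure_nash_equilibria(actions, utility):
    """Enumerate pure Nash equilibria of a game given each player's
    action set and a utility function utility(player, profile)."""
    equilibria = []
    for profile in product(*actions):
        # Profile is a PNE iff it is a best response for every player.
        if all(
            utility(i, profile) >= utility(i, profile[:i] + (a,) + profile[i + 1:])
            for i, acts in enumerate(actions)
            for a in acts
        ):
            equilibria.append(profile)
    return equilibria
```

On a two-player coordination game (payoff 1 when both pick the same action, 0 otherwise), the enumeration returns exactly the two matching profiles.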
|
2017
|
[
"info"
] |
[
"431513",
"388594"
] |
[
"986257",
"941793"
] |
01628432
|
However, it is well known that the problem is hard [START_REF] Papadimitriou | The complexity of Finding Nash Equilibria, chapter 2[END_REF]. In addition, maximizing a social welfare function is a desirable but more complex property to compute, since it involves a comparison with the whole set of equilibria. In order to quantify the efficiency of an equilibrium, concepts like Pareto efficiency, Price of Stability and Price of Anarchy are usually used. All these concepts are computationally intensive, even for relatively small games. Constraint Games [START_REF] Nguyen | A complete solver for constraint games[END_REF] are a new framework in which utilities are represented by Constraint Optimization Problems. Not only does this give compact models, but the models are also very intuitive and readable. We have built a solver based on Constraint Programming which is orders of magnitude faster than the current state-of-the-art, Gambit [START_REF] Richard D Mckelvey | Gambit: Software tools for game theory[END_REF]. This solver is based on tree search, and the players' preferences are implemented as global constraints with a dedicated filtering algorithm. In this paper, we propose to compute optimal and Pareto-optimal Nash equilibria. By adding constraints to express the social welfare function, the classical branch & bound optimization of Constraint Programming is able to find optimal equilibria of a game efficiently. The Price of Anarchy and the Price of Stability can be computed by finding the maximally efficient centralized situation and comparing it respectively with the minimally and maximally efficient equilibria.
|
2017
|
[
"info"
] |
[
"388594",
"431513",
"431513",
"388594"
] |
[
"941793",
"986257"
] |
00511902
|
Then specific methods should be defined, and a new collection becomes specific by adding Traits from the library of Traits. In Figure 3, the collection BCOrderedCollection is composed of the traits TOrderedAdding, TOrderedAccessing, TOrderedEnumerating, TOrderedUpdatable, TOrderedCreation, TOrderedCollection, TOrderedRemoving, TOrderedCopying and TSequenceableTesting. To specify the main collections (OrderedCollection, Set, SortedCollection, Dictionary, Interval and Array), we created traits representing the behaviors defined by the protocols proposed in the "Pharo by Example" book [BDN + 09]. We created 9 different categories of traits, presented in Table 1. Each of these categories can (but need not) be defined for each main collection. Consider the case of Dictionary: Dictionary is a subclass of HashedCollection but needs behaviors from SequenceableCollection to be indexable. If we put all the behavior of one class in one trait, we have to cancel some methods not used in Dictionary.
Methods: primary vs. secondary
In Smalltalk, traits do not have state. Our design supports this separation between traits and object state access.
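The separation described above (behaviour in traits, state in the composing class) can be sketched with Python mixin classes standing in for Smalltalk traits. The trait names echo those in the text; the Python rendering is an analogy, not the authors' Smalltalk design.

```python
class TOrderedAdding:
    """Trait-like mixin: behaviour only, no state of its own."""
    def add(self, item):
        self._items.append(item)

class TOrderedAccessing:
    """Trait-like mixin providing indexed access."""
    def at(self, index):
        return self._items[index]

class BCOrderedCollection(TOrderedAdding, TOrderedAccessing):
    def __init__(self):
        # State lives in the composing class, never in the traits,
        # mirroring the stateless-trait design described in the text.
        self._items = []
```

The mixins compile against `self._items` without defining it, so every composing class must supply the state, which is exactly the traits/state split the paragraph describes.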
|
2010
|
[
"info"
] |
[
"82140",
"82140",
"82140",
"82140"
] |
[
"875126",
"8846",
"939",
"955"
] |
00394480
|
In this case, f_0 can't be precomputed, so we need two different products: one by R^T KR for ∆f and another by R^T K for f. In practice, it is more efficient not to build an assembled stiffness matrix; its heavy construction could be amortized by a lighter computation of the conjugate gradient iterations, but in the case of interactive animations, the number of iterations is generally too small. It is preferable to store R, B and D separately and to process each element independently. For each element, we first compute R∆u, then BR∆u, and so on up to R^T B^T DBR∆u. Figure 4 shows that a minimum of 50 iterations is necessary to amortize the cost of the assembly in this example, which is far too many for an interactive simulation. Here f_0 = R^T Kx_0, with f = R^T K(Rx - x_0) = R^T KRx - f_0, as shown in
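The algebraic identity used above, f = R^T K(Rx − x_0) = R^T K R x − f_0 with f_0 = R^T K x_0, can be checked numerically on a toy 2D element. The rotation angle, stiffness matrix and positions below are arbitrary assumed values, chosen only to exercise the identity.

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

theta = 0.3
R = [[math.cos(theta), -math.sin(theta)],   # element rotation (toy 2D case)
     [math.sin(theta),  math.cos(theta)]]
K = [[2.0, 0.5],                            # assumed toy stiffness matrix
     [0.5, 1.0]]
x0 = [1.0, 0.0]                             # rest position
x  = [1.2, 0.1]                             # current position

# f0 can be precomputed once, then f = R^T K R x - f0 ...
f0 = matvec(transpose(R), matvec(K, x0))
Rx = matvec(R, x)
f_split = [matvec(transpose(R), matvec(K, Rx))[i] - f0[i] for i in range(2)]
# ... which must equal the direct evaluation f = R^T K (R x - x0).
f_direct = matvec(transpose(R), matvec(K, [Rx[i] - x0[i] for i in range(2)]))
```

Both evaluations agree to floating-point precision, since the split form only distributes the linear operator R^T K over the difference.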
Discussion and results
Robustness
Large displacements or user manipulations sometimes result in degenerate configurations such as flat or inverted elements. Such cases are not properly speaking physical, but it is important to be able to face them to guarantee the stability of the simulator. The polar decomposition applied to an inverted element computes a left-handed local frame. The element tends to recover its initial shape in this frame, converging to a reversed shape. This can be solved by flipping the sign of an axis, but this requires the computation of the determinant to detect a change of sign resulting from the inversion.
|
2005
|
[
"info"
] |
[
"707",
"2396",
"388249",
"2396"
] |
[
"3680",
"904",
"4829"
] |
00443064
|
This allows us to perform plausible animations using a reduced number of elements. An extreme case is presented in Figure 5, where an object in the form of a 'c' is animated under gravity using a single boundary element. As expected, using an averaged stiffness, both parts of the object have the same properties, and the empty part is as stiff as the full part. In contrast, using our precomputed non-uniform law, the stiffness takes into account where the matter is, resulting in a more realistic behavior.
Influence of the Hexahedra Positioning
The accuracy of the employed co-rotational hexahedral elements has already been tested in classical uses, where elements exactly fit the object. They give quite good results compared to theoretical results, certified numerical results and real data [45]. In this section, we want to evaluate the behavior of non-uniform elements. Remember that the bounding elements using non-uniform properties only appear at object borders. Although they do not represent a large percentage of elements in practice, they permit the simulation of realistic deformations at low cost.
Fixed Beam
The deviation of a fixed beam subject to gravity has already been measured for finite elements [45], and provides very satisfactory results.
|
2010
|
[
"info"
] |
[
"44938",
"44938",
"388249"
] |
[
"3680",
"4829",
"904"
] |
04327355
|
We first thank the two organizations that made the workshop possible: ESF COST Action IC0904 -Towards the Integration of Transectorial IT Design and Evaluation, and SIGCHI Cyprus. We of course thank our session chairs, Andri Ioannou, Jan Gulliksen, and Ebba Thora Hvannberg. We thank our invited speakers, Alistair Sutcliffe and Davide Rocchesso. We thank our sponsors, the European Science Foundation, the Cyprus Tourism Organization, the Cyprus University of Technology, the University of Leicester, and European University Cyprus, for giving us the support required for the smooth hosting of this workshop. A special thank you to Marco Winkler, who designed the front and back cover of the proceedings, and more! Finally, we thank our program committee. The program committee is the following:
Table of Contents Session: User Evaluations of e-Learning Domain
Guiding design and evaluation of e-learning systems
Introduction
In the e-learning context, a major challenge for designers is to develop software tools and environments that can engage novice learners and support their learning even at a distance. Towards this end, there should be a synergy between the learning process and the learner's interaction with the software. As for any interactive system, usability is a primary requirement. If an e-learning system is not usable, the learner spends more time learning how to use the software than learning the content. Besides being usable, an e-learning system must address other challenges; e.g., it must be effective in meeting pedagogical objectives.
|
2008
|
[
"info"
] |
[
"1065149",
"469497",
"300751"
] |
[
"1068057",
"1073608"
] |
00381913
|
Thus, the acceptance test may be difficult to implement, as block behaviors may be very different. We propose to extend this conventional design pattern with behavioral models attached to each block. These models will be used to generate detectors that raise error signals as soon as the behavior of a block does not correspond to its specification, i.e., implementing Early Error Detection (EED). Then block execution is interrupted and the next block is activated. The figure below represents the structure of such a fault-tolerant component.
Figure 1. Design pattern for the execution of degraded operational modes
As noted in [START_REF] Randel | The Evolution of the Recovery Block Concept[END_REF], the efficiency of such a design pattern highly depends on the quality of the error detection services. We will first provide a motivating example of the application of Early Error Detection and then explain how it is integrated into the design pattern.
Specification of real-time behaviors
The allowed behavior of each block is described via a timed automaton [START_REF] Alur | A theory of timed automata[END_REF]. Such an automaton defines the allowed executions as a set of traces. These traces describe the sequences of allowed events together with their timing constraints.
|
2009
|
[
"info"
] |
[
"389097",
"389097",
"389097"
] |
[
"736574",
"735189",
"3041"
] |
00546934
|
Thus, it is possible to define the condition under which control states are still alive through a conjunction of inequations defining the upper bound of each clock. In our example, this results in:
• Adding to all nodes a deadline on x at 20, and adding it also on each transition.
• Updating the constraint on the transition on locking_get by requiring that x be lower than 10 when firing.
When comparing the new automaton with the original specification, it appears that the transformation has made explicit all conditions on clocks that used to be implicit². In most cases, a hand-made timed-automaton transformation is not feasible, since it is a very burdensome way to specify behaviors. Yet such a transformation can be inferred automatically from a well-established transformation: timed abstraction. The time abstraction splits up control states as soon as the enabled and valid transitions differ, according to the clock values. In our approach, we used the tool KRONOS [START_REF] Tripakis | Analysis of timed systems using time-abstracting bisimulations[END_REF] to perform this transformation. The aim was to allow local decisions, in the sense that steps 2 and 3 of Figure 5 can be done in a greedy way. Time abstraction adds clock upper bounds to control states, and integrates the reachability test and the guards into timing bounds put on each state.
|
2010
|
[
"info"
] |
[
"162010",
"459",
"459"
] |
[
"736574",
"3041",
"735189"
] |
00910296
|
Moreover, a pattern lattice can contain many patterns that are irrelevant for an expert. Projections of PSs "simplify" the computation to some degree and allow one to work with a reduced description. In fact, projections can be considered as constraints (or filters) on patterns respecting certain mathematical properties, ensuring that the concepts in the projected lattice correspond to the original ones [START_REF] Ganter | Pattern Structures and Their Projections[END_REF]. A projection ψ : D → D is an operator which is monotone (x ⊑ y ⇒ ψ(x) ⊑ ψ(y)), contractive (ψ(x) ⊑ x) and idempotent (ψ(ψ(x)) = ψ(x)). A projection preserves the semilattice operation ⊓. Under a projection ψ, a PS (G, (D, ⊓), δ) becomes the projected PS ψ((G, (D, ⊓), δ)) = (G, (D, ⊓), ψ ∘ δ). The concepts of a projected pattern structure have a "similar" concept in the initial pattern structure [START_REF] Ganter | Pattern Structures and Their Projections[END_REF].
Pattern Structures Techniques
As a PS is an abstract mathematical object, any software aiming at realizing PSs should either provide several predefined PSs, such as PSs based on intervals or graphs, or give the user the opportunity to add arbitrary PSs to the software. Our goal is to process any PS, and in this case one should decide how an arbitrary semilattice can be introduced by a user. In some cases it is not possible to enumerate all elements of a semilattice.
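The three defining properties of a projection given above (monotone, contractive, idempotent) can be checked on a concrete semilattice. Below, the semilattice is frozensets ordered by inclusion with intersection as the meet, and ψ(x) = x ∩ FOCUS; this is a toy instance chosen for illustration, not one of the paper's pattern structures.

```python
FOCUS = frozenset({"a", "b", "c"})

def psi(x: frozenset) -> frozenset:
    """Projection on the semilattice of frozensets: intersect with FOCUS."""
    return x & FOCUS

def is_projection(psi, samples):
    """Check monotonicity, contractivity and idempotence on sample elements."""
    return all(
        psi(x) <= x                                           # contractive
        and psi(psi(x)) == psi(x)                             # idempotent
        and all(psi(x) <= psi(y) for y in samples if x <= y)  # monotone
        for x in samples
    )
```

Intersection with a fixed set satisfies all three properties, so this ψ is a legitimate projection in the sense of the definition, and it visibly "reduces the description" by discarding attributes outside FOCUS.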
|
2013
|
[
"info"
] |
[
"205125",
"237796"
] |
[
"949065",
"949066"
] |
03642963
|
This special issue of Global and Planetary Change presents a series of papers dealing with the global carbon cycle and its changes over the Quaternary. This publication is organised by the Carbon Commission of the International Union for Quaternary Research (INQUA). The Carbon Commission was created in 1995 at the XV INQUA Congress held in Berlin. Its objective is to gain a better understanding of the global carbon cycle by establishing the carbon budget of its surface components during the Pleistocene and Holocene epochs. It focuses mainly on land reservoirs, such as vegetation, soils, peats, rivers, underground waters, and includes the continental shelf. The aim is to quantify the contribution of these continental reservoirs to the glacial-interglacial changes of ocean and atmosphere carbon budgets. The commission involves a series of working groups dealing with the various reservoirs and/or processes as well as with modelling. This volume results from two symposia organised by the Carbon Commission in 1999 during the European Union of Geosciences 10th meeting (EUG 10) held in Strasbourg, France, and the XVI INQUA Congress held in Durban, South Africa. Most papers in the volume deal with the past carbon cycle both in the marine and terrestrial environments. However, a few papers do not analyse past changes, but attempt to describe some specific features of the modern carbon cycle, which are today still poorly known quantitatively, such as karst dissolution, tropical soil carbon, the isotopic budget of methane-producing lake sediments and carbon degassing from the lithosphere.
|
2002
|
[
"sdu"
] |
[
"154689"
] |
[
"178393"
] |
04345084
|
This color difference has previously been reported by [START_REF] Viers | Chemical weathering in the drainage basin of a tropical watershed (Nsimi-Zoetele site, Cameroon): Comparison between organic-poor and organic-rich waters[END_REF] and [START_REF] Oliva | The effect of organic matter on chemical weathering: Study of a small tropical watershed, Nsimi Zoetele, Cameroon[END_REF], and is attributed to differences in DOC content (Table 18.2). For the hydrologic year 1998/1999, the mean annual DOC concentration was 0.31 mg L⁻¹ in the spring and 14.3 mg L⁻¹ in the stream at the outlet. The mean DOC river fluxes were between 4700 and 9300 kg km⁻² yr⁻¹ (Table 18.2). The values corresponding to the southern part of the Nyong watershed are higher than those observed for the eastern part of the basin (Mbalmayo station). But this difference may be due to a higher drainage intensity in the southern part, rather than to differences in the vegetation and soil covers. The organic-rich waters exhibit a very high ionic imbalance between the cationic sum (Sc) and the anionic sum (Sa), due to an important anionic deficit, as was reported by [START_REF] Probst | Dissolved major element exported by the Congo and the Ubangui rivers during the period 1987-1989[END_REF] for the Congo River. The mean annual deficit averaged 44 to 84% depending on the stations. This imbalance is mainly due to organic anions that are not taken into account in the measurement of alkalinity [START_REF] Fillion | Natural organic matter contribution to throughfall acidity in French forests[END_REF]. Consequently, there is a good relationship between ionic deficit (in µeq L⁻¹) and DOC content (in mg L⁻¹) for the Mengong River (Figure 18.2). The slope of the regression lines obtained for different stations varies between 4.3 and 9.1 µeq mg⁻¹ C; the correlation coefficients R calculated for all catchments are > 0.91.
|
2005
|
[
"sdu"
] |
[
"24486"
] |
[
"178393"
] |
02994869
|
The Huveaune River is one of the most polluted rivers in the Mediterranean Basin [START_REF] Kanzari | Distributions and sources of persistent organic pollutants (aliphatic hydrocarbons, PAHs, PCBs and pesticides) in surface sediments of an industrialized urban river (Huveaune), France[END_REF] with several sources of pollution including a very high degree of urbanization as well as the presence of both heavy industry and intense agricultural activity. Marseille's Cortiou WWTP is the largest in the region (Kanzari et al., 2014). Some pollutants in Mediterranean sediments are thought to have discharged via the Huveaune River after passing untreated through the Cortiou sewage outlet discharge during episodes of high rainfall [START_REF] Kanzari | Distributions and sources of persistent organic pollutants (aliphatic hydrocarbons, PAHs, PCBs and pesticides) in surface sediments of an industrialized urban river (Huveaune), France[END_REF]. In our study, the highest organic carbon values in sediments were found at Huveaune (HV) station. The high log Kow of DnOP, DEHP, and TEHP compounds underlines their tendency to be adsorbed on particulate matter. F2 can be explained by the interaction of PAE and OPE compounds of similar environmental sources and behavior: both are urban-based anthropogenic in origin (such as car wash detergents/cleaning materials/sewage) and are linked to the high particulate matter transported by the rivers. Factor 3 represents 12% of the total variance and relates to BBzP, which exhibits the highest concentrations at Estaque (EQ) and Huveaune (HV) stations. Over 90% of BBzP is used for the plasticizing of PVC or other polymers with consumer and industrial applications, such as floorings, sealants, adhesives, caulking, coatings and paints [START_REF] Koch | Di-n-butylphthalate and butylbenzylphthalate -urinary metabolite levels and estimated daily intakes: pilot study for the German environmental survey on children[END_REF].
Factor 3 can be therefore linked to industrial applications of this additive, which finds its way into urban effluent. The other factors accounted for ≤10% of the total variance and were not considered as driving factors of the occurrence of PAEs and OPEs in the area.
|
2020
|
[
"sdu"
] |
[
"191652",
"65641",
"191652",
"65641",
"191652",
"106834",
"191652",
"191652",
"191652",
"191652"
] |
[
"19835",
"1353396",
"171888",
"2881"
] |
01139473
|
Only very recently the study of what happens after the modulational instability develops has been performed for Klein-Gordon [20] and FPU-lattices [21,13]. From these analyses it turned out that these high-frequency initial conditions lead to a completely new dynamical behavior in the transient time preceding the final energy equipartition. In particular, the main discovery has been the presence on the lattice of sharp localized modes [21,20]. These latter papers were the first to make the connection between energy relaxation and intrinsic localized modes [22], or breathers [START_REF]Energy Localisation and Transfer, Advanced Series in Nonlinear Dynamics[END_REF]. Later on, a careful numerical and theoretical study of the dynamics of a β-FPU model was performed [START_REF] Cretegny | [END_REF]. It has been shown that moving breathers play a relevant role in the transient dynamics and that, contrary to exact breathers, which are periodic solutions, these have a chaotic evolution. This is why they have been called chaotic breathers. Following these studies, Lepri and Kosevich [25] and Lichtenberg and coworkers [26,27] have further characterized the scaling laws of relaxation times using continuum limit equations. Zabusky et al. have recently simulated numerically the behavior of the one-dimensional, periodic α-FPU model with optical and acoustic initial excitations of small but finite and large amplitudes. Using beautiful color representations [28] of the numerical results, they find nearly recurrent solutions, where the optical result is due to the appearance of localized breather-like packets.
|
2007
|
[
"nlin",
"phys"
] |
[
"13",
"46351",
"222576",
"525957"
] |
[
"3333"
] |
00199375
|
It is important to emphasize that the parameter q allows for an interpolation between the "window" function which is constant over a finite support (obtained for q → -∞), the ordinary Gaussian distribution (q = 1), and distributions having power-law tails (1 < q < 3). The function G_q(x) has therefore quite a large fitting spectrum and might be applied for the study of strongly correlated systems. Independently of their applicability to statistical mechanics, q-Gaussians have impressive mathematical properties and became rapidly popular. They have been proposed to describe numerous experimental or numerical results: velocity distributions of classical rotators or galaxy clusters, turbulent flows, cellular aggregates or the temperature fluctuations in the cosmic microwave background... Unfortunately, in the absence of firm grounds, physicists have distributed themselves between enthusiasts and skeptics. At this point in time, it is therefore important to distinguish whether the formalism suggested by (1) and (2) can be the basis for developing a real predictive theory or if it is "just" a nice idea and a powerful fitting function. Two important questions are particularly pressing here: (i) does the q-Gaussian law describe the details of some physical problems and, more importantly, (ii) is anyone able to provide analytical predictions of the value of the q-index in terms of the microscopic parameters of the physical system. A particularly interesting paper in this respect is the one by Henk Hilhorst and Gregory Schehr [START_REF] Hilhorst | A note on q-Gaussians and non-Gaussians in statistical mechanics[END_REF]. It is an important step for statistical physics since the authors are able to show by explicit calculations that, in two examples of random variables, previously put forward as candidates for being q-Gaussian distributed, the probability distributions of the sums turn out to be analytically different, although they closely resemble them numerically.
The first example is presumably the simplest imaginable instance of a strongly correlated system [START_REF] Thistleton | Generalized Box-Muller method for generating q-Gaussian random deviates[END_REF], namely the scaled sum Σ_j u_j / N, where the N variables u_j are identically distributed on a finite domain, but with strong mean-field correlations. For this example, Thistleton et al. conjectured from numerical fits that the distribution of the sums is q-Gaussian.
|
2007
|
[
"phys"
] |
[
"13"
] |
[
"3333"
] |
01159293
|
Though the SFA does a reasonable job of providing the energy cutoff, these kinds of trajectories are absent in the SFA, and are a direct consequence of the ion's Coulomb field and the organization of the dynamics by the periodic orbits O and O± and the invariant manifolds of O. CONCLUSION
In summary, we have shown that even in the high laser intensity regime where the strong field approximation is expected to hold, the Coulomb field significantly impacts the dynamics by allowing delayed recollisions to bring high energy to the core region. We reported the effectiveness of Coulomb focusing in a one-dimensional model, which is usually attributed to higher dimensional models. We found that while the SFA gives adequate estimates of the maximum possible return energy in the strong field regime, it misses the behavior of the electron for times beyond the first laser cycle, when the Coulomb field causes trajectories to continue to recollide with energies near 3.17 U_p. We have unraveled the dynamical organization of these delayed recollisions by looking at specific periodic orbits and their invariant manifolds. The delayed recollision trajectories that we focused on are important not only because of the high energy they may recollide with, but also because they spend potentially many laser cycles near the core. Thus, they have many opportunities to exchange energy with the ion and the electromagnetic field (due to the acceleration imparted on such electrons by the Coulomb force). We expect that the delayed recollisions are the mechanism underlying HHG and thus explain the observation of the appearance of the plateau and ~3.17 U_p high-harmonic cutoff only after the first laser cycle, as in Ref. [17].
|
2015
|
[
"nlin",
"phys"
] |
[
"179898",
"261046",
"407872",
"179898",
"261046"
] |
[
"966852",
"828797"
] |
01057979
|
Al-Mahdī wrote a new letter to the qāḍī ordering him to make a public confession about the injustice of his judgment and to issue a fresh decision in favour of the defendant. If the qāḍī refused to comply, the caliph would have him decapitated. The qāḍī had no choice but to revise his judgement. 28 Here the caliph did not act as an appeal institution. He did not revise the judgement nor issue a new verdict, but rather forced the qāḍī to do so. He interfered with ʿUbayd Allāh's practice and his threatening order meant he considered the judge to be a subordinate who had to comply with his superior's will. When qāḍīs and jurists claimed autonomous judgeship
Resistance to caliphal interventions ʿUbayd Allāh b. al-Ḥasan al-ʿAnbarī's case reflects a new conception of justice in the early
Abbasid period. Procedural rules had been extensively discussed by jurists during the last Umayyads and, despite secondary disagreements (ikhtilāf), Muslim jurists had now achieved an almost unified vision of the judicial process. If early qāḍīs had sometimes been chosen amongst individuals whom later biographical literature would not recognise as scholars, respected jurists were now commonly appointed as qāḍīs. ʿUbayd Allāh was himself a prominent scholar of the ancient Baṣran legal trend.
|
2014
|
[
"shs"
] |
[
"57701",
"218986"
] |
[
"754170"
] |
00827952
|
The crystallization of the Shīʿī opposition around ʿAlid leaders such as the Ḥasanid Ibrāhīm ibn ʿAbd Allāh (d. 145/763), who revolted in Basra under al-Manṣūr, [START_REF] Kennedy | The Early Abbasid Caliphate. A Political History[END_REF] jeopardized the caliphate in Iraq. The ʿAbbāsids could not afford to impose any legal reform against the wishes of the local elites, lest opposition to their dynasty be strengthened. Dealing with local issues
The centralization of the judiciary was a long process, and the method of selecting the qāḍīs did not change immediately. The first ʿAbbāsid caliphs still consulted local people before appointing a judge and summoned to Baghdad delegations (wafd, pl. wufūd) made up of notables. The sources give several examples of delegations coming from Basra to meet the caliphs al-Mahdī and al-Rashīd (r. 170-93/786-809). These delegations usually consisted of five or six notables and scholars from the city, representing different legal and theological trends. 54 The caliph received them and asked whom they wanted as qāḍī. In 167/783-4, the governor of Basra, Muḥammad ibn Sulaymān, chose a delegation of six local notables and sent them to the caliph al-Mahdī.
|
2013
|
[
"shs"
] |
[
"57701",
"218986"
] |
[
"754170"
] |
03707207
|
We also compare the results to those that we obtain from a standard classification. The data
We consider two consumption surveys, performed by Statistiques Canada, in 1986 and 1990, with about 10 000 households which were not the same ones from one survey to the other. The consumption structure is known through a 20-function nomenclature, described in Table 5.1. Each household is represented by its consumption structure, expressed as percentages of the total expenditure. The two surveys have been gathered in order to define classes including individuals which belong to one or the other year. So it will be possible to observe the dynamic evolution of the household groups which have similar consumption structures. One can see that for any classification method, the classes contain in almost equal proportion data of the two surveys. It seems that there is no temporal effect on the groups, which simplifies the further analyses. So from now on, we continue the analysis by using the SOM classification, followed by the grouping into 10 macro-classes. Crossing with qualitative variables
To understand better the factors which determine the macro-classes, and to allow their identification, we use a graphic representation of some qualitative variables, that were not present in the classification.
|
1999
|
[
"shs"
] |
[
"25201",
"18791",
"25201",
"18791",
"25201",
"18791",
"25201",
"18791"
] |
[
"1463"
] |
03942199
|
Control samples that were grown in RPMI-1640 medium that was supplemented with DMSO showed no difference compared to samples that were grown in RPMI-1640 medium alone. Therefore, Mel56 was selected for further analyses. Similar results were obtained in growth assays using nutrient rich growth medium (YPD) or synthetic growth medium with varying carbon sources (YNB with glycerol or galactose) confirming that the Mel56 antifungal properties were independent of the growth medium. Determination of Mel56 Minimum Inhibitory Concentration
To determine the MIC of Mel56 in C. albicans, broth double dilution assays were performed. As C. albicans frequently displays strain-specific phenotypes [START_REF] Marakalala | Differential adaptation of Candida albicans in vivo modulates immune recognition by dectin-1[END_REF], Mel56 antifungal activity was monitored in C. albicans clinical isolates MC99 and MC102 and the laboratory reference strains SN250, DAY185, and DAY286. In addition, Mel56 antifungal activity was assessed in the fluconazole-resistant C. albicans clinical isolate 3147. Mel56 at a concentration of 8 µg/mL was comparable to MIC values that were obtained from samples that were treated with fluconazole (methods) and sufficient to completely inhibit the growth of C. albicans strains MC99, MC102, 3147, DAY185, DAY286, and SN250 compared to the DMSO-treated control groups following 24 h of growth (Table 1). The similarity of the MIC values for C. albicans strain 3147 compared to the other C. albicans strains suggests that the physiological basis for azole resistance in this strain is independent of Mel56 susceptibility. To verify that Mel56 is fungicidal in C. albicans, yeast strains MC102, MC99, and SN250 were labeled with the viability dye PI. Labeling was initially performed by pooling samples directly from the microplate following 24 h of Mel56 exposure.
|
2023
|
[
"chim"
] |
[
"302931",
"302931",
"307202",
"543256",
"307202",
"543256",
"302931"
] |
[
"1034029"
] |
04268974
|
The p values for TCGA data were obtained from the UALCAN analysis tool. The non-parametric Wilcoxon signed-rank test was used for paired sample comparison. Kaplan-Meier curves were evaluated using the log-rank test. For calculation of IC50s, data were transformed to log2, normalized, and non-linear regression log(inhibitor) vs. response (variable slope, four parameters) was performed. For the transformation of SEM to SD, SEM was multiplied by the square root of the sample size n. Tukey's multiple comparison test was applied for multiple comparisons. Two-tailed values of * p < 0.05, ** p < 0.01, *** p < 0.001, and **** p < 0.0001 were considered significant. Lowercase letters denoted statistical significance, as stated in the associated figure legends. All data are expressed as the mean ± SD or SEM. Conclusions
Our results, obtained in silico and in vitro, constitute the initial step in an innovative direction to develop pharmacological strategies to treat liver cancer. This is particularly valuable owing to the lack of available tools for efficaciously treating advanced nonsurgically resectable HCC.
|
2023
|
[
"sdv"
] |
[
"143868",
"143868",
"58016",
"143868",
"576800",
"576800",
"576800",
"207869",
"58016",
"143868",
"576800"
] |
[
"1034029"
] |
04066946
|
3.2.1. 15N and 13C NMR. In diluted acidic media, chitosan contains glucosamine units both in the protonated and neutral amine forms. With a weak acid, such as acetic acid, a fraction of the amine groups will be protonated, whereas all amine groups will be protonated with a strong acid, such as hydrochloric acid. The films cast from chitosan solutions could thus contain charged and neutral amine groups. 15N CP-MAS spectra allow for a direct detection of each monomer unit: the acetylated amine at about 100 ppm, the amine between 0 and 15 ppm (Figure 2a). They also discriminate between charged and neutral amine groups, detected respectively at 15 and 0 ppm. It is experimentally observed that the films directly cast from chitosan solutions in HCl are charged, while the original powder and the films suspended in dilute NaOH solutions and rinsed are neutral. 13C CP-MAS NMR spectra show 'snapshots' of the distribution of chemical shifts present in the sample. 13C CP-MAS spectra were recorded on the original powder, films prepared with acetic acid and HCl, and neutralized films (see Figure 2b for representative spectra and supporting information for all spectra). Powder and neutral films all exhibit the same 13C CP-MAS spectrum.
|
2011
|
[
"chim"
] |
[
"225146",
"225146",
"225146",
"39648",
"39648",
"39648",
"1157167"
] |
[
"1238848"
] |
00636150
|
Two types of lexico-syntactic operations are studied: the expansion of an existing term by the addition of one or more modifier words (information retrieval - efficient retrieval of information); the substitution of a word in a term, either in the modifier position (coronary heart disease - coronary lung disease) or in the head position (mutant motif - mutant strain). We call COMP the subset of relations that affects modifier words in a term and CLAS the subset that affects the head word in a term. Clustering is based on COMP and CLAS relations and CPCL, a graph-based algorithm which implements a variant of hierarchical clustering. Let us refer to this principle of clustering as "clustering by lexico-semantic similarity" (LSS). COMP relations are used in an initial phase to form connected components and CLAS relations are used in the second phase to form clusters of such components in a hierarchical process. The particularity of CPCL is to compute at each iteration the local maximal similarity values in the graph of non-null similarity relations. Average link clustering is then performed on the resulting subgraph. Evaluation metrics
For the OTC task, we need a measure that focuses on cluster quality (homogeneity) vis-à-vis an existing partition (here the GENIA categories) and that is also adapted to the comparison of methods producing a great number of clusters (hundreds or thousands) and of very differing sizes. Pantel & Lin's editing distance [START_REF] Pantel | Clustering by Committee[END_REF] appears as the most suitable for this task. We focus on two of the elementary operations in their measure: "merges" which is the union of disjoint sets and "moves" that applies to singular elements.
|
2006
|
[
"shs",
"info"
] |
[
"70904"
] |
[
"180321"
] |
00681931
|
The academic community was disconnected from their professional institutions (archives, libraries, museums) in terms of research. Only the documentation sector was somewhat associated with the field's constitution. The consequence was that "the academic recognition of the discipline was done on the borderline of its already established cognitive institutionalization: only a little proportion of this heritage including professional, militants and scholars would join the academic constitution of IS, leaving by the roadside a part of the public institutions (libraries, museums, archives) and some researchers who had invested time on IS research." [START_REF] Palermiti | Dynamiques de l'institutionnalisation sociale et cognitive des sciences de l'information en France[END_REF]. As [START_REF] Palermiti | Dynamiques de l'institutionnalisation sociale et cognitive des sciences de l'information en France[END_REF] pointed out, the fundamental issue raised by this demarcation is the reluctance by the government to let the state-funded universities handle the training of its elite. This is one of several "exceptions françaises": the distinction made between universities, which are obliged to accept practically all freshmen candidates, and the "Grandes Ecoles", which are very selective and function by entrance examination (concours 23). Students who are lucky to be admitted into these special schools become "pupils of the State". Upon completion of their training, they are practically guaranteed a job placement in a public or private institution where they become top executives. However, the European Union, through the Bologna process, may well signify the end of such "exceptions". There have been insistent noises from neighbouring European countries that such schools put some French pupils in a more privileged position than their European counterparts.
|
2012
|
[
"shs"
] |
[
"70904"
] |
[
"180321"
] |
03414725
|
Decision making under uncertainty
In almost every human-AI collaboration the final decision is made by humans. For this reason it is of utmost importance to understand how humans decide. Human decision making [START_REF] Bell | Decision making: Descriptive, normative, and prescriptive interactions[END_REF] is studied in a number of different fields such as mathematics [START_REF] Gilboa | Maxmin expected utility with non-unique prior[END_REF][START_REF] Schmeidler | Subjective probability and expected utility without additivity[END_REF], behavioural economics [START_REF] Kahneman | Prospect theory: An analysis of decision under risk[END_REF][START_REF] Tversky | Advances in prospect theory: Cumulative representation of uncertainty[END_REF] and psychology [START_REF] Stanley Budner | Intolerance of ambiguity as a personality variable 1[END_REF]. The existing literature can be divided into work that focuses on normative theory, i.e., how decisions should be made with logical consistency, descriptive models, i.e., how people make decisions, and prescriptive approaches that try to help people make better decisions. Further, decision making problems can be divided by whether a decision maker has access to outcome probabilities, i.e., a decision task associated with risky uncertainty, or is lacking such information, i.e., a decision problem with ambiguous uncertainty. Depending on whether uncertainty is risky or ambiguous [START_REF] Knight | Risk, uncertainty and profit[END_REF], humans tend to exhibit different behaviour, generally favoring risky options over ambiguous ones [START_REF] Camerer | Recent developments in modeling preferences: Uncertainty and ambiguity[END_REF][START_REF] Slovic | Who accepts savage's axiom?[END_REF][START_REF] Curley | Psychological sources of ambiguity avoidance[END_REF]. Expected utility theory [START_REF] Von Neumann | Theory of games and economic behavior[END_REF] attempts to explain human choice behaviour using utility functions that take into account the objective value of a choice. Prior work has recognized the discrepancy between how people should choose under an expected value maximization principle and their actual choices [START_REF] Rieger | Cumulative prospect theory and the st. petersburg paradox[END_REF]. It has been found that choices might deviate from the optimal one for a number of reasons, including risk aversion [START_REF] Pratt | Risk aversion in the small and in the large[END_REF][START_REF] Arrow | Aspects of the theory of risk-bearing[END_REF][START_REF] Kahneman | Choices, values, and frames[END_REF]. A widely known example of this is the St. Petersburg paradox, where players are reluctant to bet on a game with infinite expected value [START_REF] Rieger | Cumulative prospect theory and the st. petersburg paradox[END_REF].
|
2020
|
[
"shs",
"info"
] |
[
"1079909",
"1079909"
] |
[
"1115813"
] |
03389169
|
CEU Press. 2004; WEDEL, Janine R. Rethinking Corruption in an Age of Ambiguity. The Annual Review of Law and Social Science. 2012; HOUGH, Dan. Corruption, Anti-Corruption and Governance. London: Palgrave Macmillan. 2013; KATZAROVA, Elitza. The Social Construction of Global Corruption From Utopia to Neoliberalism. London: Palgrave Macmillan. 2019.
|
2020
|
[
"shs"
] |
[
"94080",
"301897"
] |
[
"1050718"
] |
03402515
|
While adopted partly to facilitate the detection of abuses, these instruments also seek to act on parliamentarians' conduct, creating incentives and risks, and making them think about interests that might influence them and gradually interiorise formalised rules. Policy instruments are however never neutral objects, and technical or functionalist perspectives tend to hide the political aspects of policy instruments. 208 As we will see in the rest of the dissertation, these instruments carry representations about the problem at hand but also about the role of the State, about the relation between the branches of government as well as the nature of political representation, which correspond to the contexts in which they were initially developed. Recognizing that instrumentation of ethics is not politically neutral makes it all the more important to understand the common trajectory that these countries are embarked upon. A convergence of instruments: a common move towards public interest registers and codes of conduct
In addition to revealing the move towards a new form of regulation of behaviour, approaching the convergence of conflict of interest regulation through the analytic lens of policy instruments allows for a more detailed analysis of policy change identifying subtle forms of policy convergence, 209 since it makes the concrete recipes of policies visible. 210 Indeed, when existing studies conclude that countries' responses to pressure for more and stricter ethics regulations are significantly different, 211 our approach allows us to partly refute such a finding. Seeing instruments as constitutive of a policy, one can see that there is an instrumental convergence of conflict of interest regulation in Britain, France and Sweden, the three countries having adopted a public interest register and a code of conduct for their parliamentarians, as illustrated in Figure 4. Source: the axis represents the number of instruments (among those analysed in this research) adopted to regulate conflict of interest. It does not provide indication as to the type of instrument (register or code) as the objective is to illustrate the instrumental convergence, meaning the adoption of the same instruments to regulate conflicts of interest in parliament. France originally conceived of responding to problems associated with parliamentary ethics principally through the lens of incompatibilities and a posteriori repression of abuse.
|
2020
|
[
"shs"
] |
[
"94080"
] |
[
"1050718"
] |
03940143
|
AME's first meetings with the Minister for Malians Abroad and the French ambassador in Bamako in 2008 showed the symbolic weight of the pressure brought to bear by the association against the readmission agreements, but they were also indicative of the interest shown by the Malian and international political class in the "global approach" to migration policies promoted by the EU. The presence of CIGEM and the OIM in Bamako made the drawing up and adoption by the Malian parliament of a legislative framework on migration in 2011 inevitable, a process that was interrupted by the war which broke out in Mali in 2012. These bodies all consulted AME during the process. Seizing the opportunity of having its voice heard by the political authorities, the association was nevertheless wholly aware of the political exploitation in which they were involved. Those meetings with the French ambassador were certainly helpful in ensuring that submissions on family reunification for expelled individuals were examined; but the ambassador also tried to bring up again unsigned joint agreements on managing migration flows and used the association's network to inform migrants that they could now make appointments with the French consulate via the internet. At a hearing in November 2011 in the French Senate, at the invitation of the socialist senator Richard Yung, the association raised questions relating to the situation of the sans-papiers in France and expellees in Mali, but it also accepted expulsion as an inherent part of European policy from then on. The paradoxes of AME's political legitimisation can be seen in the difficulty it has in holding onto a purely oppositional position and not becoming an alibi for the construction of institutional policies. Consultations with different elements of civil society are a part of that search for consensus which is a crucial ingredient in the workings of European political institutions. 
All struggles founded on self-help organisations of marginalised people probably run the risk of exploitation and hybridisation. Combining the consultation process with maintaining a critical stand is a difficult balance when politics is itself operating at various levels.
|
2017
|
[
"shs"
] |
[
"1144678"
] |
[
"181198"
] |
01997380
|
In section 5, results obtained from the approximate model are compared to the optimal values calculated by the Discrete Event Simulation model. A sensitivity analysis on the impact of different system parameters is conducted in section 6. Finally, section 7 concludes the paper. Literature review
Our study is related to two research streams: components inventory control in ATO systems, and the use of rush orders (also called emergency shipments in the literature) in inventory systems. The literature on ATO systems mainly addresses the problems of components inventory replenishment and inventory allocation. Agrawal and Cohen (2001) derived expressions to calculate optimal base stock levels for components under a constraint on FGs service described by order completion rate. The authors assume a fair shares allocation policy where the fraction of component inventory allocated to a FG is equal to the quotient of its demand and the total demand of FGs. A general literature review of ATO systems was provided by [START_REF] Song | Supply Chain Operations: Assemble-to-Order Systems[END_REF] where they classified the different models into: one-period models, discrete-time multiperiod models and continuous-time models. [START_REF] Atan | Assemble-to-Order Systems: A Review[END_REF] gave an update of the literature on ATO systems following the same classification. [START_REF] Akçay | Joint Inventory Replenishment and Component Allocation Optimization in an Assemble-to-Order System[END_REF] studied the calculation of the optimal base stock levels for an ATO system with multiple products and components where products demand is correlated. In a two-stage stochastic decision model, the authors also studied the optimal allocation problem and proposed a simple component allocation rule.
|
2019
|
[
"shs",
"info"
] |
[
"11769",
"11769",
"11769"
] |
[
"5097",
"5531",
"7047"
] |
01802516
|
It can be seen that forecast accuracy is good except for components with low requisition (these components are called "Low Runners" in the studied plant). The forecast accuracy decreases from week 1 to week 3 (before the delivery date). This result is intuitive: the farther forecasts are from the delivery date, the less accurate they are. Because the forecast accuracy is good, forecasts can be used to estimate the mean requisition: 𝛽_x = Average forecast / Forecast quotient ≅
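The estimator above is truncated in the source; a minimal sketch, assuming 𝛽_x is simply the average forecast divided by the plant's forecast quotient (the function and its arguments are hypothetical names for the quantities in the text):

```python
def estimate_mean_requisition(forecasts, forecast_quotient):
    """Estimate a component's mean requisition as
    beta_x = (average forecast) / (forecast quotient), where the forecast
    quotient is the plant's observed ratio of forecast to actual
    requisition (an assumption of this sketch)."""
    return (sum(forecasts) / len(forecasts)) / forecast_quotient

# Forecasts averaging 119 units with a quotient of 1.19 suggest a mean
# requisition of about 100 units.
beta_x = estimate_mean_requisition([110, 120, 127], 1.19)
```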
Figure 16. Forecasts (without batch effect) made one week before delivery date (W-1)
is 1.19 with a CV equal to 0.20. On the other hand, each component x is used by a set of FGs. The arrival of each unit of FG that uses component x induces the arrival of ax units of this component on average. The sum of arrivals of FGs that use component x follows a Poisson process (the sum of Poisson processes is a Poisson process). As a result, the arrival process of component x can be viewed as a compound Poisson with a batch size equal to ax (ax is the weighted average assembly coefficient).
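The compound-Poisson view described above can be checked with a small simulation; the rates, assembly coefficients and Poisson sampler are illustrative assumptions:

```python
import math
import random

def simulate_component_arrivals(fg_rates, coeffs, horizon, seed=42):
    """Simulate unit arrivals of one component over `horizon` periods.
    Each finished good i places Poisson(fg_rates[i]) orders per period,
    and each order consumes coeffs[i] units of the component, so the unit
    stream is compound Poisson (a sum of Poisson processes is itself
    Poisson, with batch sizes given by the assembly coefficients)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's inversion-by-multiplication method (fine for small lam)
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    return [sum(poisson(r) * a for r, a in zip(fg_rates, coeffs))
            for _ in range(horizon)]

# Two FGs ordering at rates 2 and 3 per period, using 1 and 2 units each:
demand = simulate_component_arrivals([2.0, 3.0], [1, 2], horizon=2000)
mean = sum(demand) / len(demand)  # close to 2*1 + 3*2 = 8
```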
|
2018
|
[
"shs",
"info"
] |
[
"11769",
"11769"
] |
[
"5097",
"5531",
"7047"
] |
03613418
|
Sanitary restrictions should be adapted to the citizens' health status. The context description purposely never mentions the name of the virus responsible for the epidemic in Wafer city, and avoids any allusion to COVID-19. Indeed, the game aims at discussing surveillance enabled by AI in a broader context, and should stay topical even after the end of the current pandemic. Besides, we also wanted to take some distance from a very sensitive context, as the COVID-19 epidemic is not over yet. Sowana
The Sowana company is specialised in Artificial Intelligence and has previously worked with the town on other topics, such as public transportation or health. Sowana has answered the call for proposals of Wafer city regarding the epidemic with three solutions: Eye'Wana, Wana'Like and Wana'Pass, detailed below. Eye'Wana
Eye'Wana is based on video-surveillance to ensure that sanitary rules are respected equally by all citizens at all times. It promises to unload police forces while avoiding human biases: everybody has the same risks of being controlled and fined when not respecting the rules. This solution is based on three building blocks: (i) a network of CCTVs and a fleet of drones collect videos and images from public places, (ii) computer vision algorithms extract faces from these pictures, and (iii) facial recognition algorithms identify the individuals from their face. Eye'Wana can detect various violations of sanitary rules (failure to wear a mask or to respect physical distancing; violation of curfew or lockdown; etc), identify the author, and notify them of the subsequent fine they have to pay.
|
2022
|
[
"shs",
"info"
] |
[
"1043319",
"1043301",
"46207"
] |
[
"13835"
] |
02873070
|
To avoid additional notations, we use sets of configurations to define predicates; the predicate should then be understood as the characteristic function of the set (that returns true if the configuration is in the set, and false otherwise). Lemma 1. Using a range of [1, Δ(v)² + 1] for colors at node v is sufficient to enable distance-2 coloring of the graph. Proof. In the worst case for the number of colors, all neighbors at distance one and two of v have different colors. Now, v has at most Δ(v) neighbors at distance one, each having Δ(v) − 1 other neighbors than v. In total, v has at most Δ(v)² − Δ(v) neighbors at distance up to two, each having a distinct color. Using a range of [1, Δ(v)² + 1] for v's color leaves at least Δ(v) + 1 available colors for node v. Proof. The function Λ(γ) decreases by any execution of rules R_Δ and R_Δ. Remark that deg(v) is considered a non-corruptible local information, so once v has executed R_Δ, this rule remains disabled afterwards. Moreover, R_Color maintains the value of the color inferior (or equal) to Δ(v)² + 1, and other rules modifying the color maintain this invariant.
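The palette bound of Lemma 1 can be illustrated with a simple greedy sketch (not the paper's self-stabilizing rules): with a palette of size Δ² + 1 a free color always remains, since at most Δ² colors are occupied within distance two. For simplicity Δ is taken here as the global maximum degree:

```python
def distance2_coloring(adj, max_deg):
    """Greedy distance-2 coloring: each node's color must differ from every
    node at distance one or two. At most max_deg**2 colors are occupied in
    that neighborhood, so the palette {1, ..., max_deg**2 + 1} always
    leaves a free color, mirroring the lemma's range [1, Δ(v)² + 1]."""
    color = {}
    for v in adj:
        two_hood = set(adj[v]) | {w for u in adj[v] for w in adj[u]}
        two_hood.discard(v)
        taken = {color[u] for u in two_hood if u in color}
        color[v] = next(c for c in range(1, max_deg ** 2 + 2) if c not in taken)
    return color

# On a 4-cycle (max degree 2) every pair of nodes is within distance two,
# so all four colors are distinct, drawn from the palette {1, ..., 5}.
cols = distance2_coloring({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]},
                          max_deg=2)
```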
|
2020
|
[
"info"
] |
[
"541705",
"541705",
"541966"
] |
[
"1596",
"9380"
] |
00166547
|
Overall, after polynomial time, every agent has met every other agent and stored their identifiers in its local memory. When every agent has all other agent identifiers in its local memory, the leader status remains correct and unchanged. The memory cost of the algorithm is O(k log(k)) per agent and the time complexity is polynomial. An alternative to this algorithm is as follows. Each agent performs a random walk in the network: at each node, the agent chooses the next edge to visit uniformly at random, i.e., with probability 1/(node degree). In [START_REF] Feige | A tight upper bound on the cover time for random walks on graphs[END_REF] it is proved that the expected time for a random walk to cover all nodes of a graph is O(n log(n)). Each time an agent visits a node, it marks its identifier in the node table if it is not present. If the agent identifier is the maximum in the table, then the agent is the leader; otherwise it keeps the follower status. The memory complexity of the algorithm is O(k log(k)) per node and the expected time complexity is O(kn log(n)). From leader election to naming
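The random-walk variant of the election described above can be sketched as follows; the graph, identifiers and fixed step budget are illustrative (the step count stands in for the cover-time bound):

```python
import random

def random_walk_election(adj, agent_ids, steps, seed=0):
    """Each agent walks randomly (next edge chosen uniformly, i.e. with
    probability 1/degree), writing its identifier into the table of every
    node it visits. An agent is leader only if its id is the maximum in
    its current node's table; once the walks have covered the graph, only
    the max-id agent keeps the leader status."""
    rng = random.Random(seed)
    tables = {v: set() for v in adj}          # identifier table per node
    pos = {a: rng.choice(list(adj)) for a in agent_ids}
    for _ in range(steps):
        for a in agent_ids:
            tables[pos[a]].add(a)
            pos[a] = rng.choice(adj[pos[a]])  # uniform over incident edges
    for a in agent_ids:                        # mark the final position too
        tables[pos[a]].add(a)
    return {a: a == max(tables[pos[a]]) for a in agent_ids}

# Three agents on a triangle; after 50 steps every node table holds all ids,
# so only the agent with the maximum identifier (9) stays leader.
status = random_walk_election({0: [1, 2], 1: [0, 2], 2: [0, 1]},
                              agent_ids=[5, 9, 7], steps=50)
```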
Now consider the reverse problem of solving the naming problem given a leader in the group of agents.
|
2007
|
[
"info"
] |
[
"82587",
"2436",
"56049",
"2544"
] |
[
"1596",
"9380"
] |
00597981
|
This view can be helpful in the first steps of the debugging but some problems can be difficult to detect visually. In this case we can decide to use an exhaustive verification technique. In [START_REF] Deantoni | RT-Simex: retro-analysis of execution traces[END_REF] we have described a verification strategy based on the TimeSquare tool. It consists of building a CCSL specification that represents the execution of the partial order. Then, the original constraints specified on the model are marked as assertions. TimeSquare takes such a CCSL specification and checks if some assertions are violated. In case of violations, the constraint and the step where the violation has occurred are provided. For each violation, an event is raised so that it is possible to launch specific feedback on assertions (for instance meaningful annotation of the model). VII. CONCLUSION
In many real-time systems, different computational units communicate to deliver a functionality.
|
2011
|
[
"info"
] |
[
"2445",
"2445",
"2445"
] |
[
"9852",
"1812"
] |
01369954
|
This behavior has also been reported in [START_REF] Han | Mobile data offloading through opportunistic communications and social participation[END_REF], [START_REF] Whitbeck | Push-and-track: Saving infrastructure bandwidth through opportunistic forwarding[END_REF]. For some groups of friends, we may not need an intelligent helper selection strategy, mainly because the connectivity graph is either too dense or too sparse. This can be addressed by considering the dynamics of the contact patterns to select the most suitable dynamic centrality metric. The cumulative distribution function of the delivery latency for successful deliveries is shown in Fig. 9. For all datasets, the probability of the delivery latency being less than 1 day is approximately 60% for λ = 10%. In [START_REF] Trestian | Taming user-generated-content in mobile networks via drop zones[END_REF], it has been observed that 55% of Flickr content is uploaded after a lag of more than 1 day. Thus, we believe that the delivery latency resulting from the opportunistic communication is practical in such content dissemination applications. For applications/services that require a lower delay, it is possible to increase the threshold of content replication as shown in Fig. 9. The common pattern is that the delivery latency reduces with increasing λ. In Dartmouth and USC, there is a 20% improvement in reducing the delivery latency to less than 1 day when λ increases to 30%, and there is a 30% improvement in SWIM.
|
2016
|
[
"info"
] |
[
"103314",
"267245",
"103314",
"103314"
] |
[
"928971",
"928972",
"898443"
] |
00724055
|
Nevertheless, multi-hop opportunistic propagation is not considered and the metrics used do not consider the dynamic aspects of contact time and duration of users. Further, VIP delegation does not address any privacy and trust related issues in content dissemination. Overview of the System Architecture
The primary objective of our system is to provide privacy-aware distributed mobile social networking which addresses the lack of trust, timeliness of delivery and loss of user control of data. We aim to take advantage of opportunistic direct communication among wirelessly connected friends (i.e. hood friendship), and the ever increasing storage and processing power of mobile devices for distributed storage. In this context, our focus is on dissemination of content for delay tolerant social networking applications/services, (i.e. UGC sharing, advertisement propagation, etc.) where users are geographically clustered into communities. In this section, we provide an overview of our distributed mobile social networking architecture. Suppose a user, namely a creator, wants to share a content with a set of users who have previously been identified as friends through a social networking service. In a typical distributed content dissemination system, the creator will try to propagate the shared content to the devices of friends. However, each and every friend may not be interested in the shared content even though it is pushed to the device [START_REF] Ugander | The anatomy of the facebook social graph[END_REF].
|
2012
|
[
"info"
] |
[
"103314",
"70767",
"70767",
"103314",
"103314"
] |
[
"928971",
"928972",
"898443"
] |
01809063
|
Yet, on the other hand, a common feature in the liveliness of a city is the population of vehicles, either public or private, roaming on the streets and highways. This, together with the maturing technology of Vehicle-to-Everything (V2X) communications, opens the possibility of utilising cars to collect data from devices and deliver them to the nearest sink at lower cost, or even for free. The use of vehicles to deliver data is affected by the intermittent connectivity caused by mobility, which is why the Vehicular Delay Tolerant Network (VDTN) concept needs to be incorporated into the process of delivering data from the source to the sink. In this paper, we present a performance evaluation of four VDTN routing protocols, namely: First Contact (FC), Epidemic (EP), Spray and Wait (SNW), and MaxProp (MP). Previous studies on similar performance evaluations can be found in [START_REF] Benamar | Are VDTN Routing Protocols Suitable for Data Collection in Smart Cities: A Performance Assessment[END_REF], [START_REF] Soares | Improvement of Messages Delivery Time on Vehicular Delay-Tolerant Networks[END_REF] and [START_REF] Spaho | Performance Comparison of Different Routing Protocols in Sparse and Dense VDTNs[END_REF], but they do not consider different communication technologies. We improve upon this work by implementing and studying some of the latest wireless access technologies with the farther communication ranges achievable today: Zigbee for the sensors and IEEE 802.11p for the vehicles. Another difference is that we additionally consider public vehicles and study their impact on the performance. We take into account the fact that well-established bus routes and schedules are common in most modern cities today.
This predetermined mobility of buses in a smart city, among other potential functions that the buses can be assigned, is one key factor that we will try to exploit in our work-in-progress towards a VDTN routing protocol for efficient data collection in smart cities, which we name Bus Assisted data deLIvery, or BALI in short.
|
2017
|
[
"info"
] |
[
"482801",
"525242",
"472282",
"17835",
"300284",
"482801",
"525242"
] |
[
"21625",
"3246",
"4310"
] |
04383397
|
In the base case scenario, which, compared to historical data, has the most realistic yet ambitious construction rates, the 2050 European generation mix is composed of around 21% solar, 22% wind and 41% nuclear power. Averages over timeslice structures hide some variability over the different timeslice runs, however. While the combination of Figures 8 and 9 once again confirms the dominating effect of the investment scenario, they also show that within a scenario, the differences in the European 2050 generation mix are almost entirely dictated by the random sampling of days, not the number of timeslices. Indeed, increasing the number of timeslices has little effect both on the mean and on the distribution width of the generation technology annual output. This is likely the key reason this analysis fails to clearly reproduce all of the effects recorded in the literature and discussed in Section ?? (increased temporal detail leads to a reduction in non-flexible capacity, an increase in flexible capacity, and an increase in total system costs). The experimental approach, designed to provide an upper bound on the impact of temporal detail relative to maximum investment rates, introduces random noise through its sampling of days, making these effects inaudible. In contrast, these effects can be picked up by existing studies which build model time steps using smarter methods, hence removing the random effect of day sampling. There is another interesting point to make regarding the differences in behaviour between wind and solar power.
|
2024
|
[
"math",
"spi",
"shs"
] |
[
"301436",
"214579",
"301436",
"214579",
"214579"
] |
[
"10990",
"11867"
] |
01108808
|
The filter's width here ranges from 20 min to 12 h. Noise attenuation from the smoothing operation performed by the DOB and FDG filters allows for a rapid increase in detection performance when the filter's width increases. The presence of nearby ramps limits the response to a ramp, and ultimately results in SNR stagnation for higher width values. Due to the special nature of its definition, characterized by the absence of a smoothing operation, the MaxMin filter does not behave as usual, and its detection performance is much lower than that of the DOB and FDG filters (see Figure 5, red curve). When its width increases, its response to noise increases instead of decreasing; this increase counterbalances the increase in the response to a ramp, resulting in an overall low and stagnating SNR. Different filters and their localization performance
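A minimal sketch of a MaxMin-style detector discussed above, assuming its response is the max-minus-min of the signal over the filter's window (the paper's exact definition may differ); it shows why, with no smoothing step, the window's extreme samples drive the response:

```python
def maxmin_filter(signal, width):
    """MaxMin-style response: max - min of the signal over a window of
    `width` samples centred on each index. With no smoothing, a single
    extreme noise sample dominates the window, so the noise response
    grows (rather than shrinks) as the width increases."""
    half = width // 2
    return [max(signal[max(0, i - half): i + half + 1])
            - min(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

# An ideal ramp of height 4: the peak response equals the ramp height
# once the window spans the whole ramp, and it is zero on the flat part.
resp = maxmin_filter([0, 0, 1, 2, 3, 4, 4, 4], width=5)
```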
Two of the three criteria proposed by Canny are dedicated to evaluating a filter's performance in the localization of an edge. This localization is complicated by the multiplicity of maxima due to noise in the filter's response to an edge.
|
2015
|
[
"spi"
] |
[
"214579",
"214579"
] |
[
"10990",
"11867"
] |
04182119
|
2: Four modes' annual revenues in € for the conventional modeling and the proposed "four modes" approach
Effect of activation signals (β) on annual profits
Revenue from the FCR market strongly depends on the assumption made on the activation. Especially, the coefficients β + and β -represent the ratio of the activated reserve over the total reserve capacity whenever there is a response to frequency deviations. Therefore, their values significantly impact the battery output contribution to the FCR market. When those ratios increase, more energy is activated and the ESS owner receives more money for upward activation or pays more for downward activation. This relationship can be seen in Fig. 9(a), in normal operation using conventional modeling, where the annual revenues from the FCR market decrease as the activation ratio increases. A similar trend for FCR revenues is also observed using the four-mode operation, though with a smaller decrease in revenues. This is due to the reliance on reserve to generate profits in the FCR market, as revenues are not achieved through energy activation, which incurs additional costs for the ESS owner. In contrast, the revenues from the DA market (in normal operation) increase with higher values of β, as more energy can be monetized in the DA trading market. More importantly, a significant increase in DA profits is observed when using the four-mode operation. This is due to the commitment of more DA energy products that act as reserves for the FCR market, resulting in increased profits in both DA and FCR markets.
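The role of the activation ratios β⁺ and β⁻ described above can be sketched with a deliberately simplified cash-flow computation; the linear form, the prices and the horizon are illustrative assumptions, not the paper's market model:

```python
def fcr_activation_cashflow(reserve_mw, beta_up, beta_down,
                            price_up, price_down, hours):
    """Simplified activation cash flow: beta_up (resp. beta_down) of the
    reserve capacity is activated upward (downward) over the period; the
    owner is paid for the upward energy and pays for the downward energy."""
    energy_up = reserve_mw * beta_up * hours      # MWh delivered upward
    energy_down = reserve_mw * beta_down * hours  # MWh absorbed downward
    return energy_up * price_up - energy_down * price_down

# 10 MW of reserve, 10% activation both ways, over 100 h:
profit = fcr_activation_cashflow(10, 0.1, 0.1, price_up=50, price_down=40,
                                 hours=100)
```

Raising either β ratio scales the activated energy linearly in this sketch, which mirrors the qualitative trend in the text: more activation means more upward payments received but also more downward costs incurred.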
|
2023
|
[
"math",
"spi",
"shs"
] |
[
"1066014",
"1043220",
"1066014",
"1043220",
"1043220",
"1066014",
"490867"
] |
[
"750087"
] |
04188866
|
Regarding the available empirical evidence, the authors considered it to be weak and inconclusive. Their own contribution added to the literature by studying a sample of OECD countries, and showing that the stringency of environmental policies (measured by EPS) indeed depends on trade openness. The underlying relationship appears to be non-linear, as it is influenced by both the levels of stringency and the nature of trade, i.e., North- or South-oriented. Finally, a different and growing body of the economic literature has theoretically and empirically investigated the linkages between business cycles, macroeconomic policy and the environment. Annicchiarico et al. (2021 [3]) provided an interesting review of this literature. Specifically, it is argued that expansionary monetary policies yield higher CO2 emissions, as suggested by Qingquan et al. (2020 [69]). Hence, if interest rates decrease, environmental legislation is expected to be strengthened, in order to limit the adverse effects on the environment. On the other hand, Dennis (2022 [START_REF] Dennis | Climate Change and Financial Policy: A Literature Review, Finance and Economics Discussion Series[END_REF]) highlighted the importance of the linkages between climate change and financial policy. In this case, environmental imbalances yield new risks that, besides urging stricter environmental policies, also fuel individual and social risk aversion (Carney, 2015 [16]). Hence, a positive correlation between the risk premium and EPS should be expected.
|
2023
|
[
"shs",
"sde"
] |
[
"1001031",
"581146",
"1001031",
"2564"
] |
[
"1094318",
"10164"
] |
04208688
|
The null hypothesis of random effects is now rejected at the 1% level (a statistic of 63.209 with a p-value equal to 0.000). The null hypothesis of no serial correlation is rejected (a statistic of 13.292 with a p-value equal to 0.002), as is the null hypothesis of homoscedasticity of the error term (a statistic of 128.35 with a p-value equal to 0.000). To conclude, all five specifications should be estimated with a fixed effects model, and serial correlation and heteroskedasticity should be taken into account with robust standard errors. Fixed effects (FE) model
Two-way fixed effects estimates are presented in the first part of Table 2. In the baseline model, all the main control variables but the Gini are significant: the GDP per capita, the Gov coal variables and the climate variable are significant at the 1% level, and corruption is significant at the 5% level. The Gini estimate is negative but not significant (see Table 2, column 1). GDP per capita has a negative effect on EPS. As expected, GDP per capita has a non-linear impact on EPS. It is worth noting that the effect of GDP per capita and the Gini also depends on the nature of the government. Indeed, the interaction variables GDP with Gov coal and Gini with Gov coal are positive and significant at the 1% level.
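The two-way fixed effects estimates rest on the within transformation; a minimal sketch for a balanced panel (the variable names are generic, not the paper's):

```python
def two_way_demean(x, entity, time):
    """Within transformation for a two-way fixed effects model on a
    balanced panel: x_it - mean_i(x) - mean_t(x) + grand mean. This
    removes additive entity (country) and time (year) effects before
    running OLS; robust standard errors are then applied separately."""
    n = len(x)
    grand = sum(x) / n

    def group_means(ids):
        means = {}
        for g in set(ids):
            vals = [x[i] for i in range(n) if ids[i] == g]
            means[g] = sum(vals) / len(vals)
        return means

    me, mt = group_means(entity), group_means(time)
    return [x[i] - me[entity[i]] - mt[time[i]] + grand for i in range(n)]

# A purely additive panel (2 countries x 2 years) demeans exactly to zero:
z = two_way_demean([11, 21, 13, 23], entity=[0, 0, 1, 1], time=[0, 1, 0, 1])
```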
|
2023
|
[
"shs",
"qfin"
] |
[
"1001031",
"581146",
"2564"
] |
[
"1094318",
"10164"
] |
03877220
|
In addition, price sensitivity is quite inelastic for electricity among the elderly and not significant for gas, which differs significantly from the youngest age category. Energy can be considered to be a good that prevents problems such as poor health, and the elderly cannot adjust their consumption after a change in energy prices (Warriner, 1981 [44]). They may therefore be more vulnerable (as long as their income does not change) to problems of fuel poverty (Legendre and Ricci, 2015 [START_REF] Legendre | Measuring fuel poverty in France: Which households are the most fuel vulnerable?[END_REF]). Two points can be made. First, the inelasticity of emissions to price shows that in times of increasing energy prices, older people will have to give up other consumption to maintain a temperature compatible with the desired or necessary level of comfort. However, we can also question this inelasticity and argue that retired people, whose incomes have fallen, have already adopted a restricted consumption behavior, leaving no room for further adjustment of the quantity of energy consumed. The phenomenon of fuel poverty already highlighted in the literature can be exacerbated by the fact that older households have less air conditioning than other types of households, and live in poorly insulated dwellings, making them more vulnerable to overheating during the increasingly frequent summer heat waves. After the age of 60, the effect of gender is marked. If the head of the household is male, the household consumes more energy, which is consistent with a body of literature showing that women care more about the environment. 
Most studies report a positive relation between being male and CO 2 emissions: the proportion of high consumers of energy is much greater among males than among females (Barla, et al., 2011 [4], Bel and Rosell, 2017 [START_REF] Bel | The impact of socioeconomic characteristics on CO2 emissions associated with urban mobility: Inequality across individuals[END_REF]).
|
2021
|
[
"shs",
"sde"
] |
[
"432899",
"432899"
] |
[
"1016128",
"1196242"
] |
01577754
|
Another common aspect of the colonial management to which these populations were subjected is the implementation of a divide-and-rule policy, which led to a reification of the Berber and Sikh identities. Colonisation exacerbated the grievances between Berbers and Arabs or Sikhs and Hindus, and the colonial authorities drew upon these grievances to impose colonial governance. Colonisation equally led to the sharpening of religious identities. In order to resist Christian missionaries, religious reformist movements emerged in Punjab. At the same time, the creation of several bodies gave the Sikh community its contemporary institutional framework. The Akali Dal, the main Sikh political party, and the Shiromani Gurudwara Parbandhak Committee (SGPC) were created in the 1920s in order to regain control of the gurudwaras1 from British influence. Paradoxically, the colonial attempt to integrate the countries under one single political authority was conducive to ethnic fragmentation. In the three cases, the independentist movements rose against this endeavour. The Indian Congress in the 1890s, the Algerian MTLD in the 1920s and FLN in the 1950s, and the Moroccan Istiqlal in the 1930s were created to react against the colonial dismembering of the nation. This colonial paradox set the stage for post-independence minority conflicts.
|
2009
|
[
"shs"
] |
[
"199917"
] |
[
"12182"
] |
00921812
|
It also opens wider access to other forms of employment in the non-farm economy, in urban areas or abroad. Knowledge is therefore a key component for the improvement of rural livelihoods in general and not only for agricultural production. Knowledge access is of particular importance for smallholders and landless peasants who could not make a livelihood out of farming. The lack of information about market opportunities and available employment is also pointed out by the authors of the case studies. Besides human capital, access to information is also crucial. The Internet and portable communication are a means to overcome the isolation of rural communities. Public infrastructures deficit
Transport
Transport is critical for the marketing of agricultural goods. Access to urban markets, airports and harbours hinges on the availability of adequate transport systems. In India, investments in rural roads contributed about 25 percent of the growth in agricultural output in the 1970s (World Bank 2007: 53). Despite the high return on such investment, this sector is, more often than not, neglected by state authorities. The report on Tonga provides an illustration of this: some rural roads have become impassable even during the dry weather.
|
2011
|
[
"shs"
] |
[
"199917"
] |
[
"12182"
] |
01213503
|
Indeed, participants with higher SES consider female drivers as more incompetent, more nervous and less cautious than participants with lower SES do. Besides, the results showed that differences between men and women seem to vary according to the participant's SES. Women's responses indeed tend to be closer to those provided by the male group when they have higher SES. This was the case for the positive elements associated with their group (e.g., prudence): higher SES women depreciated their group more than lower SES women did, their responses being thus more similar to those of men. This was also the case for the negative elements associated with their group (e.g., incompetence), where the same phenomenon seemed to occur: higher SES women were more likely to devalue their group. Concerning elements associated with men, lower SES women promoted men more than higher SES women did (and even more than men themselves did). Again in this case, higher SES women's responses are more similar to those of men, as if, for women with higher SES, socioeconomic status was favored over membership of the gender group, the first leading to a dominant and valued position that the second does not bring. By devaluing their gender group, higher SES women may emphasize their dominance linked to their socioeconomic status and thus get closer to the position of men. In a perspective of social dominance [START_REF] Sidanius | Social dominance: an intergroup theory of social hierarchy and oppression[END_REF], this reflects the depreciation conducted by the dominant group (i.e., women with higher SES) to maintain the asymmetry with the dominated group (i.e., women with lower SES). As previously discussed, this phenomenon seems to take place at a larger level, between the male and female groups, in the same perspective of maintaining the status quo. Practical consequences from a traffic perspective
The practical significance of these results obviously concerns the sex differences in crashes and driving behaviors.
|
2015
|
[
"shs"
] |
[
"98218",
"222176",
"222176",
"98218"
] |
[
"1265694",
"9033",
"18344"
] |
01866633
|
For both the baseline and the proposed approach, we train the network for 10⁴ iterations with a learning rate of 0.1, then for 4·10³ iterations with a learning rate of 0.05, another 4·10³ with a learning rate of 0.025, and finally for 2·10³ iterations with a learning rate of 0.0125. We jitter the training images by shifting them by a random number of pixels between 0 and 32 in both directions. We perform the accuracy tests on the network defined by the weight vector after the last update in this procedure, instead of cherry-picking the model using cross validation on the validation set. We found that this strategy gave better results for both the baseline and the proposed algorithm. When using our method, we set k = 1, that is, we update the discriminator once per update of the trained network. We use batches of 16 training images for the baseline, and batches of 8 training and 8 unlabelled images for the semi-supervised setting. We present numerical results in table 1 and in figure 2. The baseline attains an accuracy of 49.3% Intersection-over-Union (IoU), which exceeds the performance of 47.7% reported in the original paper [START_REF] Badrinarayanan | A deep convolutional encoderdecoder architecture for image segmentation[END_REF]. We suspect the increase comes from the differences in the training protocol, including jitter and a decreasing learning rate. Our method consistently outperforms the baseline.
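The learning-rate schedule above can be written as a small helper; a sketch, assuming the iteration counts apply cumulatively in the order given:

```python
def learning_rate(iteration):
    """Piecewise-constant schedule from the protocol above: 10^4 iterations
    at 0.1, then 4*10^3 at 0.05, 4*10^3 at 0.025 and 2*10^3 at 0.0125."""
    schedule = [(10_000, 0.1), (4_000, 0.05), (4_000, 0.025), (2_000, 0.0125)]
    for steps, lr in schedule:
        if iteration < steps:
            return lr
        iteration -= steps
    return schedule[-1][1]  # clamp past the final phase
```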
|
2017
|
[
"info"
] |
[
"406734",
"406734",
"406734"
] |
[
"1036104",
"3494",
"3233"
] |
01713323
|
Discussion
We proposed a loss function for semi-supervised learning, capable of generating useful error signals based exclusively on predictions. Contrary to the pre-training and co-training approaches, it enables end-to-end training on unlabelled data, irrespective of the task or network architecture. We have demonstrated that it allows us to capitalize on unannotated data, narrowing the performance gap between predictors trained on the fully- and partly-labelled training sets, and enabling the training of useful predictors on just one annotated image. These advantages come at the cost of the computation time and memory needed to train the discriminator network. Moreover, the regularised networks typically took more training iterations to attain their maximum performance. Finally, the presented experiments are performed on small data sets. It remains to be seen if the proposed method yields considerable improvements when hundreds, as opposed to tens, of annotated images are available.
Figure 1: The flow of data and error signals when training a structured output network f with the proposed method, presented in algorithm 1. The discriminator update is not shown in the drawing. The green line denotes the flow of labelled training data and the corresponding gradients.
|
2017
|
[
"info"
] |
[
"406734",
"406734",
"406734"
] |
[
"1036104",
"3233",
"3494"
] |
00933551
|
In the second model, only a reduction of the pipeline cross-section in a particular stretch of the pipeline is considered. In the third model, the blocked segment is modeled by considering two orifice plates with a blocked pipe stretch between them. These models have four different outputs: pressure and flow rate at the input, and pressure and flow rate at the output. The aim of this work is to determine which pair of outputs has the highest sensitivity to a change of blockage position, in order to improve the detection and location of the blockage. In this study, the energy for each output and for each position is calculated, and the magnitude of the energy variation with respect to blockage position is then obtained. Numerical simulations are presented for the different models. Fluid transportation by pipeline is cheap and efficient, but faults such as leaks or blockages can generate losses in money and repair time if they are not detected quickly. A lot of work has been done on leak detection and location, but not so much on blockage detection and location (Wang et al. [START_REF] Wang | Detection and location of a partial blockage in a pipeline using damping of fluid transients[END_REF] and Guillén et al. [START_REF] Guillén | Study of A Flow Model for Detection and Location Of Leaks and Obstructions In Pipes[END_REF]). In particular, in this work, a sensitivity analysis is done to show how detectable the position of a blockage is when the input and output flows and pressures are measured. The paper is organized as follows: section 2 presents the different models considered and section 3 explains the sensitivity analysis performed.
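The energy-based sensitivity computation described above can be sketched as follows; the energy definition (sum of squared samples) and the finite-difference measure of variation are plausible readings of the text, not the paper's exact formulas:

```python
def output_energy(y, dt=1.0):
    """Discrete energy of a sampled output signal: E = dt * sum(y^2)."""
    return dt * sum(v * v for v in y)

def position_sensitivity(energies):
    """Magnitude of the energy variation between consecutive blockage
    positions; the output pair with the largest variation is the most
    informative for locating the blockage."""
    return [abs(b - a) for a, b in zip(energies, energies[1:])]

# Hypothetical energies of one output for four blockage positions:
sens = position_sensitivity([4.0, 4.5, 6.0, 9.0])
```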
|
2014
|
[
"info"
] |
[
"388240",
"388240",
"388240",
"86008",
"86008",
"86008"
] |
[
"955143",
"2524",
"18114"
] |
03556852
|
A wide range of data can help to establish an empirical relation giving the flame speed for any pressure and temperature for a given set of initial conditions. The previous work reported laminar flame speeds for a maximum temperature and pressure of 485 K and 20 bar [1]. The present work aims to extend these limits to 585 K and almost 30 bar and to provide a general correlation for this large range of experimental conditions. The most recent kinetic mechanisms existing in the literature have been chosen and evaluated for the same conditions. Furthermore, it is interesting to study the inner structure of the flame to understand how the combustion process can be enhanced for better performance. It is known that ammonia flames are thick [2]. When the characteristic flame thickness is more than 1 mm, there is a need to precisely define the isotherm associated with the fresh gas side, as the flame speed evaluation is highly dependent on it. Method
Experimental set up
The most common methods to determine the laminar flame speed experimentally include the closed spherical constant-pressure chamber, the stagnation/counterflow method, the heat flux burner, the Bunsen burner, and the externally heated channel. However, the most accurate method of measuring flame speeds over a large range of extreme conditions is the constant-volume spherically outward propagating flame. The experiments are performed in the OPTIPRIME facility of ICARE-CNRS, Orléans, France, which has been described in [3].
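Flame-speed correlations of the kind the paper aims to provide are often written as a power law in temperature and pressure. The sketch below uses that generic form with placeholder exponents and a placeholder reference speed (assumptions, not the paper's fitted values):

```python
# Generic power-law correlation form for laminar flame speed. The
# reference speed S_L0 and exponents alpha, beta below are illustrative
# placeholders, NOT the fitted values of the study.

def flame_speed(T, P, S_L0=0.07, T0=298.0, P0=1.0, alpha=2.0, beta=-0.4):
    """S_L = S_L0 * (T/T0)**alpha * (P/P0)**beta, with T in K, P in bar,
    and S_L in m/s. Flame speed rises with temperature, falls with pressure."""
    return S_L0 * (T / T0) ** alpha * (P / P0) ** beta
```

Fitting such a correlation to measurements at 485 K / 20 bar and at the extended 585 K / 30 bar conditions would yield the actual exponents.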
|
2021
|
[
"spi"
] |
[
"474628",
"474628",
"474628",
"474628"
] |
[
"1111337",
"172584",
"5621",
"173490"
] |
03777739
|
The large variation in thickness associated with minor variations in k1 implies that Le is not the only parameter that influences the flame response and its reactivity. It was seen that the intensity of the flame decreased from the base to the tip, irrespective of the value of the curvature. Upon performing several tests to capture different flame responses to the same excitation, this observation was confirmed. It is understood that the convective-diffusion velocities played a major role in the reactivity, rather than the acoustic perturbation itself. The effect of preferential diffusion on the reactivity of curved flames has been highlighted by Mizomoto et al. [START_REF] Mizomoto | Effects of preferential diffusion on the burning intensity of curved flames[END_REF]. On computing the Damköhler number Da, it was seen that its value was close to 1, indicating that the time scale of the perturbation was not sufficient to see the curvature effects on the species reactivity when compared to the chemical scale. The convective velocity was about 25 times the diffusion velocity of ammonia into air at ambient conditions. This indicates that it is not ammonia that directly diffuses onto the sides of the flames; rather, ammonia decomposes into lighter species that diffuse faster than the convection flow. The preferential diffusion of H2 and other species towards the sides plays a major role in enhancing the local reactivity.
Conclusions
Flame-acoustic interactions were performed to study the curvature effect on the reactivity of the species.
|
2022
|
[
"spi"
] |
[
"474628",
"474628",
"474628",
"474628"
] |
[
"1111337",
"172584",
"5621",
"173490"
] |
03519350
|
Moreover, the higher-order modes may be observed to be more sensitive to the parameter . This may be explained by the fact that higher-order modes have shorter wavelengths in the 2 -direction (see Fig. 9). For a given dispersion parameter , one can observe in the zoomed part of Fig. 5 that the 0 -mode is more dispersive than the 0 -mode in the considered frequency range. This effect can be checked from the probability density functions (PDF) presented in Fig. 8. At a given frequency, the support of the 0 -mode's PDFs is much larger than that of the 0 -mode. Similar remarks regarding the fluctuation of phase velocity may be made by considering the 2 mm- and 4 mm-thick plates. By comparing these 3 cases, we can see that for the same mode at the same frequency, the fluctuations of thinner plates are more important than those of thicker plates. In the case of the 1 mm- and 2 mm-thick plates, some modes whose patterns are completely separated from each other in the case of homogeneous plates (e.g. modes 1 and 1 ) may have overlap regions. In a homogeneous plate, the phase velocity ℎ of Lamb waves is a constant whose value is calculated from the relation ℎ × . For a heterogeneous plate, the fluctuation of ℎ can be observed to depend on the thickness.
|
2022
|
[
"spi"
] |
[
"1004414",
"1004414",
"1004414",
"1004414"
] |
[
"785636",
"179687",
"4535",
"4587"
] |
03242823
|
Mention may be made of L'Année Philologique, published annually since 2017 by Brepols, after having for a long time been distributed by Les Belles-Lettres: not only is this bibliography the only one to attempt to cover the group of disciplines encompassing Classical and Late Antiquity, it is also the oldest and largest bibliography listing studies on these authors and editions of texts, and a fee is charged for access. The other resources are specialised bibliographies in the form of regularly updated databases. There are many examples 4 , but we may mention here:
• Base d'Information Bibliographique en Patristique (free access) 5 ;
• Bibliographie papyrologique en ligne (free access) 6 ; • L'Année épigraphique (paywall) 7 ; • Syri.ac: An annotated bibliography of Syriac resources online (free access) 8 ;
• Répertoire des sources philosophiques antiques (free access) 9 ;
• Droits Antiques (free access) 10 . New formats of shared bibliographical resources are springing up all the time, using software specifically designed for organising bibliographies. It is now possible, for example, to share collections and libraries in Zotero 11 . In addition, we are also seeing the appearance of digital libraries of ancient editions and manuscripts: each library has its own criteria for managing its collections of manuscripts and ancient editions, developing tools in-house that not only allow consultation but also, more often than not, prevent or restrict the downloading of data. Mention should also be made of the massive but disorganised (and very uneven in quality) digitalisation work carried out by the GoogleBooks / GoogleScholars, archive.org and HathiTrust (with restrictions on access from outside the US) projects, and the development of academic networks (such as academia.edu) for following up topics and researchers. These are new approaches to the publication and circulation of bibliographical data that, whatever we think of them, correspond to new uses or new expectations in academic circles. Thus there is a host of initiatives in this field, but we are still at an experimental stage in terms of digitalising and structuring bibliographical data. There are a number of problems: firstly the large number of proprietary resources, and secondly the fact that the good quality open resources are still not well known outside digital humanities circles.
|
2021
|
[
"shs",
"info"
] |
[
"447661"
] |
[
"11980"
] |
03551828
|
D. 32 (P), folios 1a + 2b r and 1b+ 2a v . 1 The script dates from the late sixth century C.E. The manuscript is nowadays illegible on the lower part of the first page, and only partly legible on the lower part of the second page. The text preserved covers Wis 9:8b-10:2, but due to the poor state of the leaf, Wis 9:10, 9:12-14 and 9:16 are almost lost, except for a few words. Wis 9:17 and 10:1-2 are very fragmentary, and it is impossible to check the manuscript, for the reason mentioned above. The mention of the beginning of Wisdom of Solomon 10 allows us to presume that the manuscript was not originally a lectionary but contained the whole book of Wisdom of Solomon. The fragment was acquired in 1893 by the Bodleian Library and comes from the Cairo Genizah. 2 Thus it is very probable that the CPA manuscript came originally from the Monastery of Saint Catherine on Mount Sinai, like many other CPA texts found in the Cairo Genizah.
Modern Editions
The manuscript was first published by Stenning in 1896, with a detailed philological commentary and a translation. [START_REF] Gwilliam | Biblical and Patristic Relics[END_REF] The text was then reedited, with a few emendations but without commentary or translation, by Müller-Kessler and Sokoloff.
|
2019
|
[
"shs"
] |
[
"447661"
] |
[
"11980"
] |
01542460
|
On the other hand, there are many application contexts, e.g. web, scientific data management, and personal information management, which do not require full integration to provide useful services [START_REF] Madhavan | Web-scale data integration: You can afford to pay as you go[END_REF]. These applications need to start with a data integration application in a completely automatic setting, to reduce human effort and development time, and to put more effort into improving it as needed. Let us present a motivating example from the scientific data management context. Example 1. Consider a researcher who is interested in the less-known or yet unknown functions of the protein ABCC8 related to diabetes. While biological experiments are the ultimate means for verifying predicted functions, she must first discover and suggest such functions. To do this, she has to perform manual exploratory searches over numerous online sources. For example, she should consider both well-known databases such as EntrezGene and EntrezProtein and less-known databases of other research labs as well. Having a data integration system with approximate answers can considerably save time and reduce the research cost in this domain. It is sufficient to set up such a system in a completely automatic setting and spend more effort to improve it only if necessary.
|
2012
|
[
"shs",
"info"
] |
[
"120654",
"120654",
"141072",
"141072"
] |
[
"932772",
"932773",
"172647",
"172604"
] |
02950392
|
Therefore, p is fixed to 1. Increasing the modulation frequency from 1 MHz to 1.5 MHz makes the q value decrease from 4 to 2, emphasizing the key influence of the forcing frequency on the entrainment of the spikes, which always appear at a fixed phase of the periodic forcing. Thus, the time interval between successive dropoffs remains constant, and this strongly differs from what is observed when studying LFFs in a semiconductor laser pumped with a continuous or quasi-continuous bias [START_REF] Jumpertz | Nonlinear Photonics in Mid-infrared Quantum Cascade Laser[END_REF]. The RF spectrum analysis furthermore shows that the frequencies retrieved in the experimental waveforms are integer multiples of the forcing frequency. Indeed, the RF spectra shown in Fig. 2 exhibit only three peaks below 5 MHz when the external optical feedback is masked, and several peaks up to a few dozen MHz when both external forcing and optical feedback are applied.
Conclusions
We experimentally observed the first LFF dynamics compatible with communication in the transparency window at 9 µm. We also unveiled that LFFs in QCLs become ordered when periodic forcing and external optical feedback are applied. The number of observed spikes per period depends on the frequency of the sine forcing, and these spikes always pop up at a given phase of the external modulation. This results in a discrete RF spectrum composed of integer multiples of the forcing frequency.
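The harmonic structure described above (RF peaks at integer multiples of the forcing frequency) can be checked numerically. A minimal sketch, with illustrative rather than measured values:

```python
# Check whether spectral peaks sit at integer multiples of a forcing
# frequency, within a relative tolerance. Frequencies below are
# illustrative, not the experimental ones.

def is_harmonic(freq, forcing_freq, rel_tol=0.02):
    """True if freq is close to a positive integer multiple of forcing_freq."""
    n = round(freq / forcing_freq)
    return n >= 1 and abs(freq - n * forcing_freq) <= rel_tol * forcing_freq

def all_harmonics(peaks, forcing_freq):
    """True if every peak frequency is an integer multiple of the forcing."""
    return all(is_harmonic(f, forcing_freq) for f in peaks)
```

With a 1.5 MHz forcing, a peak at 3.0 MHz passes (n = 2) while a peak at, say, 2.2 MHz does not.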
|
2018
|
[
"spi"
] |
[
"554442",
"554513",
"563936",
"478061",
"72398",
"463368",
"478061",
"72398",
"554442",
"554513",
"563936",
"117472"
] |
[
"1057340"
] |
02950389
|
In this paper, we examine the effects of COF in mid-infrared QCL structures and unveil that the incident angle of the feedback wave makes it possible to efficiently tailor their near-field beam profile. The Fabry-Perot QCLs under study are 3 mm long and 14 µm wide. They are made of 30 periods of GaInAs/AlInAs grown between two InP cladding layers and they emit around 4.6 µm [START_REF] Evans | Buried heterostructure quantum cascade lasers with high continuous-wave wall plug efficiency[END_REF]. The characteristics of one of the QCLs under study when powered with a 3% duty cycle (i.e., 600 ns long pulses repeated at 50 kHz) are presented in figure 1. The light-current-voltage characteristics are measured at room temperature. The threshold current is 400 mA, the external efficiency is 0.5% and the threshold voltage is about 8 V. The inset stresses that the voltage has a strong influence on the shape of the optical near field and leads to a beam steering effect. Figure 2 shows the cavity modes TM0 and TM1 of the same QCL, simulated with a finite-element-method software based on Maxwell's equations. These two modes are responsible for beam steering and induce beam deformation. The optical feedback setup is composed of a beam splitter allowing part of the QCL beam to be reflected on the mirror and reinjected inside the laser cavity. It also allows retrieving the optical near field on an infrared camera made of 124 × 124 pixels.
|
2017
|
[
"spi"
] |
[
"478061",
"554442",
"554513",
"563936",
"484335",
"478061",
"478061",
"478061",
"117472",
"554442",
"554513",
"563936"
] |
[
"1057340"
] |
00651238
|
The first four columns present the LC characteristics: name, number of inputs, number of outputs and transistor count. The last column shows the additional power required by the TMR implementation, expressed as a percentage of the new architecture's power consumption. As shown in Table IV, for the largest circuits the hybrid architecture saves about 30% of power consumption compared to TMR, except for the ISCAS'89 benchmark circuits. In fact, these circuits have many more inputs/outputs than other circuits of the same size. Consequently, for these circuits, the consumption of the logic part does not dominate the overall architecture power consumption. Therefore, the fact that only two LCs are running instead of three does not reduce the power consumption as expected.
B. Area
The second evaluation of the hybrid architecture is related to its silicon area compared to TMR. To perform this comparison we first have to estimate the cost of each module used by each architecture. Synthesis tools mentioned in the last sub-section use an optimized module to realize the voter of TMR, while no such optimization is provided for the comparator of the hybrid architecture.
|
2011
|
[
"spi"
] |
[
"181",
"408080",
"408080",
"408080",
"408080",
"408080",
"19646"
] |
[
"18065",
"172965",
"172470",
"740458",
"938342"
] |
04125654
|
In the enriched model, a firm will invest if the ratio of the sum of the stock price and carbon pricing to the acquisition price of capital is greater than 1.
Tobin's Q with carbon tax = (stock market price + carbon price × ecological efficiency of green capital) / replacement cost of capital
(18)
As the stock price fluctuates, it is possible to modulate the carbon tax according to the level of the stock price: when the stock price highly values the realization of green investments, the carbon tax can decrease, because the value of the stock price is sufficient to encourage companies to make the necessary investments. However, when the stock price falls for one reason or another, the carbon tax rate must be raised to maintain a high level of investment. As first pointed out by [START_REF] Keynes | The general theory of employment, interest, and money[END_REF], the decision to invest strongly depends on the marginal efficiency of capital. [START_REF] Keynes | The general theory of employment, interest, and money[END_REF] indicates that a variation in the stock market price is equivalent to a variation in the marginal efficiency of capital, a stipulation with which Tobin (1969) agrees. However, when stock prices are persistently low, it is complicated to stimulate investment with an expansionary monetary policy. In this carbon tax model, raising the tax is an effective method for restoring the level of investment: firms will be forced to make ecological investments to prevent the carbon tax from having too great an impact on profits that the economic crisis has already eroded. To the extent that carbon pricing becomes a direct determinant of green investment, it can be less volatile. The model also shows that if the ecological efficiency of green investments is assumed to increase over time, the carbon tax can decrease over time, because the modified Tobin's Q depends positively on the product of the carbon tax T and the ecological efficiency of the green investment represented by the parameter v.
Discussion
This study shows that the carbon tax should be floating, adapting to the economic context and the dynamics of the stock market.
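A minimal numerical sketch of the floating-tax rule suggested above, assuming invented prices and efficiencies: the tax is set so that the modified Tobin's Q of Eq. (18) stays at a target level above 1.

```python
# Floating carbon tax sketch: hold the modified Tobin's Q (Eq. 18) at a
# target that keeps firms investing (Q > 1). All numbers are invented.

def modified_q(stock_price, carbon_tax, eco_efficiency, replacement_cost):
    """Modified Tobin's Q with carbon pricing, as in Eq. (18)."""
    return (stock_price + carbon_tax * eco_efficiency) / replacement_cost

def floating_tax(stock_price, eco_efficiency, replacement_cost, target_q=1.1):
    """Carbon tax needed to hold the modified Q at target_q.
    When the stock price alone is high enough, no tax is needed."""
    tax = (target_q * replacement_cost - stock_price) / eco_efficiency
    return max(tax, 0.0)
```

When the stock price is high, the rule returns a zero tax; when the stock price falls, the tax rises to restore the incentive to invest, which is exactly the counter-cyclical behaviour the passage describes.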
|
2023
|
[
"shs"
] |
[
"560"
] |
[
"13001"
] |
03741321
|
It creates a flow of wealth in continuous time. In other words, if a firm experiences an increase in its production costs at time t, and thus a decline in profits, a new flow of wealth will be created at time t+1, which will at least partially restore profits. The process continues at time t+2, so that the increase in costs is eventually amortized. In the end, it can be said that the short-term loss of profit generated by the safety investment is offset by the potential gain realized over the duration of its activity. Continuing this line of reasoning, we see that the firm is able to generate a cost-reducing flow of wealth if the productivity of its production apparatus (how much wealth each unit of capital input generates on average) is greater than the cost of that apparatus, i.e. the interest rate at which the firm repays its loan (which is used to acquire the capital) plus the depreciation rate. The safety investment decreases productivity and/or increases the depreciation rate, but as long as the spread is positive, the firm can continue to make money. The ongoing process of wealth creation eventually amortizes the costs incurred. A company that invests in safety will prefer less profit in each production period (i.e. in the short term), but these profits are more secure; therefore, it will earn more profit over the long term. We propose here both a problem formulation and an approach that differ from the microeconomic approach.
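The spread condition above (productivity greater than interest rate plus depreciation rate) can be sketched with invented numbers:

```python
# Sketch of the spread condition: the per-period wealth flow is capital
# times (productivity - interest rate - depreciation rate). A safety
# investment lowers the spread but profit stays positive while it does.
# All figures are illustrative, not from the paper.

def period_profit(capital, productivity, interest_rate, depreciation_rate):
    """Per-period wealth flow: capital times the spread."""
    return capital * (productivity - interest_rate - depreciation_rate)

def cumulative_profit(capital, productivity, interest_rate,
                      depreciation_rate, periods):
    """Wealth accumulated over several production periods."""
    return sum(period_profit(capital, productivity, interest_rate,
                             depreciation_rate) for _ in range(periods))
```

For instance, with productivity 0.12, an interest rate of 0.04 and depreciation of 0.05, the spread is 0.03; a safety investment that cuts productivity to 0.11 and raises depreciation to 0.06 shrinks the spread to 0.01, yet the firm still earns money each period.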
|
2022
|
[
"shs"
] |
[
"560",
"217752"
] |
[
"13001"
] |
00659292
|
They had, on average, 6.49 children, well above the 2.63 recorded for their daughters, though this difference is also affected by the fact that most of the women in the latter group were still fertile at the time of the survey. Table 3 also reveals that when returnees were 20 years old, they faced a real oil price of USD 49.84, while the corresponding figure for stayers stands at USD 39.41. This is consistent with the idea that the decision to migrate towards an Arab country is influenced by labor market conditions at destination, which are in turn deeply shaped by the ups and downs in the evolution of the price of oil.
Econometric analysis
The vector of regressors x 1i which is expected to influence the number of children q i of woman i in our sample includes (i) the duration of the wedding, (ii) characteristics of the wife, (iii) characteristics of her mother and (iv) characteristics of the husband. Specifically, with respect to (ii) we include the age of the wife at the time of the wedding, while for (iii) we control for the level of education of the mother and for the number of children that she had.12 With respect to the husband, beyond the migration experience, we control for the completed number of years of schooling. Furthermore, we also control for the distance to the nearest health clinic, and we include regional dummies, which remove from the estimates the influence of spatial differences in the incidence of migration which correlate with preferences regarding the number of children. We opted for a parsimonious specification of x 1i to reduce, though possibly not eliminate completely, the concerns connected to the possible endogeneity of its elements. 13 We provide evidence below that this selection of regressors suffices to reach a good fit of the observed fertility choices in our sample, measured, as suggested by [START_REF] Cameron | Microeconometrics Using Stata[END_REF], by the correlation between the actual and the predicted values of the dependent variable.
Selection of the instrument
Beyond the controls included in the vector x 1i , the key variable of interest is represented by y 2i , which describes the past migration experience of the husband in an Arab country. Following [START_REF] Wahba | Out of Sight, Out of Mind: Migration, Entrepreneurship and Social Capital[END_REF], we rely on the real price of oil to obtain an exogenous dimension of variability in y 2i .
|
2012
|
[
"shs"
] |
[
"945",
"945"
] |
[
"171478",
"743983"
] |
02929048
|
A comprehensive study of the pluming instability visible without confinement in [START_REF] Roche | Marangoni flow of soluble amphiphiles[END_REF][START_REF] Le Roux | Soluble surfactant spreading : How the amphiphilicity sets the Marangoni hydrodynamics[END_REF] will be the topic of a future work, but we observed that its wavelength is close to the depth of the bath. Accordingly, to hinder the development of this instability, we confine the flow laterally to a width w_hall of 12 mm or 6 mm, while the water height h is always greater than 30 mm. This also prevents any interaction between the possible vertical vortices [START_REF] Dussaud | Dynamics of spontaneous spreading with evaporation on a deep fluid layer[END_REF] and the bottom. Some experiments were performed in shallow water (h < 10 mm) for comparison, and no noticeable difference in the flow morphology was observed. The confinement is effective if the length of the fast Marangoni flow (the spreading length L_Mar) respects the geometrical constraint w_hall/2 < L_Mar < L. This imposes strong conditions on the molar flow rate, which has a strong effect on the spreading length, L_Mar ∝ q_m^3 [START_REF] Le Roux | Soluble surfactant spreading : How the amphiphilicity sets the Marangoni hydrodynamics[END_REF]. The molar flow rate, and hence the flow speed (v ∝ q_m^-1) [START_REF] Le Roux | Soluble surfactant spreading : How the amphiphilicity sets the Marangoni hydrodynamics[END_REF], are therefore limited to a reduced range by the geometrical conditions. Additionally, for the phenomenon to be well controlled, the surface and the boundary layers must be stable. This requires maintaining low capillary (Ca = ηv/γ) and Reynolds (Re = ρvL/η) numbers, where η, v, ρ, L and γ are respectively the viscosity, the velocity, the density, the characteristic length, and the surface tension.
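A quick numerical check of the two dimensionless numbers defined above, with illustrative values for a water bath (the numbers are assumptions, not the paper's):

```python
# Evaluate the capillary and Reynolds numbers Ca = eta*v/gamma and
# Re = rho*v*L/eta for illustrative water-bath values (SI units).

def capillary_number(viscosity, velocity, surface_tension):
    """Ca = eta * v / gamma."""
    return viscosity * velocity / surface_tension

def reynolds_number(density, velocity, length, viscosity):
    """Re = rho * v * L / eta."""
    return density * velocity * length / viscosity

# Illustrative values: water (eta = 1 mPa.s, rho = 1000 kg/m^3,
# gamma = 72 mN/m), v = 0.1 m/s, L = 10 mm.
Ca = capillary_number(1.0e-3, 0.1, 7.2e-2)   # small, surface stays flat
Re = reynolds_number(1000.0, 0.1, 1.0e-2, 1.0e-3)
```

With these values Ca is of order 10^-3, consistent with a surface that remains undeformed by viscous stresses.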
But neither the velocity nor the length is controlled directly: both result from the diffusion of the surfactants from the surface to the bulk, and from the molar flow rate of surfactants.
Velocity mapping on the surface
Images of the surface are acquired with a USB camera Mako U-130B mounted with a 25 mm lens from Edmund Optics, enabling a resolution of 5.5 pixel/mm.
|
2020
|
[
"phys"
] |
[
"57111"
] |
[
"573"
] |
01489745
|
However, when the temperature is increased to 50°C (above the solubility boundary), the crystals begin to melt and the continuous phase between the bubbles shifts from a gel to a liquid state, leading to the onset of foam destabilization. By cooling the foam back to 20°C, the destabilization process is halted and the foam becomes stable again due to the re-crystallization process both in the continuous phase and around the bubbles. By again increasing the temperature to 50°C, foam destabilization mechanisms are reactivated and the foam completely collapses in a few minutes. The same results have been obtained for other oil foam systems [START_REF] Binks | Whipped oil stabilised by surfactant crystals[END_REF][START_REF] Brun | Stabilization of air bubbles in oil by surfactant crystals: A route to produce air-in-oil foams and air-in-oil-in-water emulsions[END_REF]. Moreover, an approach used for aqueous photo-thermo-responsive foams has been extended to develop non-aqueous photo-responsive foams [START_REF] Fameau | Multi-stimuli responsive foams combining particles and selfassembling fatty acids[END_REF]. The concept is based on the use of internal heat sources incorporated into the foam matrix to generate the heat. Carbon black particles have been used since they can absorb UV light and dissipate the absorbed energy as heat. The advantage of using light as a stimulus instead of heating the foam externally is that light avoids physical contact with the sample and can be directed at a precise location of interest with high resolution. The carbon black particles are entrapped inside the continuous oily phase. Without UV, the oil foams are ultrastable, but under UV illumination the foam destabilization process begins and bubbles disappear [START_REF] Fameau | Smart Nonaqueous Foams from Lipid-Based Oleogel[END_REF].
|
2017
|
[
"phys"
] |
[
"57111"
] |
[
"573"
] |
01419901
|
These values are in good agreement with the values reported for the maximum charge measured on BN surfaces. [START_REF] Siria | Giant Osmotic Energy Conversion Measured in a Single Transmembrane Boron Nitride Nanotube[END_REF] In conclusion, we have comparatively investigated, within an ab initio DFT framework, a monolayer of BN and graphene embedded in implicit water and in contact with one hydroxyl anion. We predict for the first time a contrasted chemical reactivity for graphene and BN:
a chemisorption of OH- on BN, while this mechanism is not highlighted on graphene. This striking difference between the two pristine materials is linked to their electronic structure and the lack of screening for the graphene surface as compared to insulating BN. Furthermore, on the basis of atomistic thermodynamics including all the vibrational contributions to energy and entropy, we have derived a free energy of adsorption of -0.46 eV on BN, which corresponds to a pKa of 6, in quantitative agreement with experimental measurements. [START_REF] Siria | Giant Osmotic Energy Conversion Measured in a Single Transmembrane Boron Nitride Nanotube[END_REF] Hence this charging scenario is viable on BN, and our study lends strong support to the recently revealed non-inertness of the BN material in contact with an aqueous saline phase. On the other hand, the case of graphene requires further investigations. Indeed, planar graphitic surfaces show a minute surface charge (as deduced from surface conductance measurements) 9
while experiments on carbon nanotubes still exhibit a surface charge, 2 although much weaker compared to BN tubes. [START_REF] Siria | Giant Osmotic Energy Conversion Measured in a Single Transmembrane Boron Nitride Nanotube[END_REF] This suggests the possible adsorption of other charged species on the carbon surfaces, not considered in the present study. Future work will involve exploring at finite T the charging mechanism using explicit water solvent, and including counterions using ab initio molecular dynamics (AIMD), in order to evaluate a more realistic pKa value.
|
2016
|
[
"phys"
] |
[
"1371",
"1371",
"4",
"4",
"1371"
] |
[
"749049",
"847969"
] |
00259335
|
The former analysis may carry an etymologising bias, reflecting diachronic hypotheses rather than synchronic, phonetic reality. We choose the analysis as /dʑ/+/u/, which reflects the intuition that the frontness of the earlier rhyme has been entirely transferred onto the initial. Palatalisation also has consequences for other phonemes of Yongning Na. The extreme fronting of the syllables /tɕʰi/, /tɕi/, /dʑi/ brings them into the phonetic vicinity of the syllables /tsʰɯ/, /tsɯ/, /dzɯ/. The rhymes of both sets of syllables can be described as the voiced continuation of the initial: for instance, /tsɯ/ is realised as [tsz̩ ], and /tɕi/ can be approximately transcribed in IPA as [tɕʑ̍ ], rather than [tɕi]. (The diacritic indicating syllabic status is placed over the [ʑ̍ ], not underneath it, for the sake of clarity.) It can be imagined that the continuous evolution towards fronting, combined with an extreme degree of syllable-level coarticulation, may threaten the distinction between these two sets (/tɕʰi/, /tɕi/, /dʑi/ on the one hand, /tsʰɯ/, /tsɯ/, /dzɯ/ on the other). The high functional load of the difference between the two sets works against their confusion, however. So does the ever-growing influence of Mandarin Chinese, where both sets exist: alveolo-palatal as in 七 / 1 tɕʰi/ 'seven' and 鸡 / 1 tɕi/ 'chicken', with less palatalisation than in Yongning Na; dental as in 此 / 3 tsʰɯ/ DEICT. and 子 / 3 tsɯ/ 'child', less fronted than in Yongning Na.
|
2008
|
[
"shs"
] |
[
"986"
] |
[
"922128"
] |
00468169
|
Note that /u/ and /ju/ are in opposition in Japanese (e.g. /uku/ "to float" vs. /juku/ "to go"; /'umi/ "sea" vs. /'jumi/ (female first name)). In terms of PAM, the French pair /u-y/ could thus be considered a case of "two-category assimilation", in which case discrimination is expected to be excellent. The sequences /u/ and /ju/ are also found in English, but the opposition could be considered partially neutralised because of "j-dropping", even more so in American English (see [START_REF] Wells | Longman Pronunciation Dictionary[END_REF], for example). There exist some minimal pair words in contexts that did not undergo the phenomenon (e.g. "cue" /kju/ vs. "coo" /ku/), but the two sequences are neutralised in other contexts (e.g. "tune" /tun/, "toon" /tun/). It is not easy to decide whether j-dropping was caused by the similarity between /u/ and /ju/, or whether j-dropping favoured the similarity between these two sequences. In any case, it is likely that AESL cannot easily assimilate French /u-y/ to English /u-ju/, unlike JSL (note also that English /u/ is diphthongised, with its F2 lowering during the vowel, unlike Japanese /u/). This case could thus be considered an example of "single-category assimilation" (or "category-goodness difference") rather than "two-category assimilation" in terms of PAM, making it difficult for native speakers of American English to distinguish this pair. One of the reasons for the relative ease with which JSL learn to distinguish French /u/ and /y/ perceptually may be the fact that both /u/ and /y/ have the advantage of being focal vowels (with a high concentration of energy). Besides, the focal nature of these vowel phonemes may and should be included in their symbolic representation to better characterise them [START_REF] Vaissière | La phonétique[END_REF].
|
2009
|
[
"shs"
] |
[
"986"
] |
[
"922128"
] |
02966011
|
Executive summary
The Europe 2020 strategy puts the emphasis on five main targets, among which are raising employment, boosting innovation through R&D, and fighting against poverty and social exclusion. These goals are presented as interrelated and mutually reinforcing. In the framework of the Quinne project (Quality of jobs and Innovation generated Employment outcomes), this working paper brings empirical evidence on the links between innovation, employment and job quality outcomes, as well as on the potentially differentiated effects of innovation on different social groups. Innovation is a complex phenomenon that is often cited as a solution to create jobs and improve their quality, but it can also be a source of unemployment (especially for low-qualified workers) and of job quality deterioration. Empirical studies focusing on the impact of innovation on employment and job quality are scarce and limited, especially at the firm level. Based on a comparative study of France, Germany and Spain, three European countries with different innovation and employment profiles [START_REF] Erhel | Innovation and job quality regimes: a joint typology for the EU[END_REF], this paper evaluates the impact of innovation on employment and job quality at the firm level. It uses an innovative difference-in-differences methodology that deals with selection bias and unobserved heterogeneity. It is based on European and national databases including a rich set of innovation variables, in order to distinguish between different types of innovation in a comparative perspective (product innovation, product innovation new to the market, process innovation and organizational innovation, in accordance with the Oslo manual typology). Contrary to most studies on the impact of innovation at the firm level, it analyzes not only the effect of innovation on total employment but also on a variety of employment variables including job quality variables (types of contracts, wages, working hours etc.)
and explores the possible differentiated effects of innovation on employment and job quality for different social groups (low/high educated workers, women/men…).
|
2018
|
[
"shs"
] |
[
"480325",
"186209",
"186209",
"480325",
"186209",
"480325",
"15080",
"303600",
"303600",
"300499",
"300499",
"300499"
] |
[
"737830",
"940746",
"20938"
] |
02886346
|
The feeder condition was excluded from this analysis as these ants were expected to move in a straight line and exit the board, so that the actual peek number of a given ant may correlate with the location where the ant peeks. The three other groups (BeN, ByN, U), on the other hand, are expected to search on the board, so any effect of the peek number is unlikely to be attributed to a specific location on the board. Interestingly, peek duration, which was recorded up to 60 s, was strongly influenced by the number of peeks previously displayed by the ant (GLM, peek number: F = 17.09, P < 0.001; Fig. 2b) and not by the actual test condition (GLM, condition: F = 0.17, P = 0.841; Fig. 2b). The more peeks an ant had previously displayed, the longer its current peek lasted. This shows that the ant's peeking behaviour is also modulated by past information. Whether it is the time elapsed or the number of peeks previously displayed cannot be disentangled here.
DISCUSSION
Ants dragging a heavy food item backward occasionally trigger a so-called 'peeking behaviour' or 'peek': ants drop their food and turn around to look forward. Aligning their body in a familiar direction enables them to recognise the learnt visual panorama and store this direction to follow it during their subsequent backward path [START_REF] Schwarz | How Ants Use Vision When Homing Backward[END_REF]. It is clear that ants gain directional information from learnt terrestrial cues when peeking forward. However, whether or not they use terrestrial cues while dragging their food item backward is less clear.
|
2020
|
[
"sdv",
"scco"
] |
[
"519304",
"650"
] |
[
"794049",
"179512"
] |
02886337
|
From this we calculated the probability of an ant scanning (N(ants scan>0)/N(all ants)), and the median number of scans, for each trial and route segment. Before the trap: (i) N=31 proba=0.45 Median=0 (ii) N=25 proba=0.40 Median=0 (iii) N=25 proba=0.80 Median=2. Stars indicate a significant increase in scan number (see text for details). After the trap: (i) N=31 proba=0.35 Median=0 (ii) N=25 proba=0.40 Median=0 (iii) N=25 proba=0.32 Median=0. C. For the same three focal trials, the location of scans is shown relative to the Feeder (F, (0,0)), Nest
Figure 3. Overview of the appetitive and aversive learning experiences. A. Illustration of the concept. Across a sequence of journeys for a typical ant, we show the regions of operation for aversive (red) and appetitive (green) learning. The aversive region is first formed by memory-trace learning on the trials where the ant encounters the trap. Over time a detour develops, and the new route is maintained by appetitive processes.
|
2020
|
[
"sdv",
"scco"
] |
[
"650",
"519304",
"1005144",
"216273"
] |
[
"179512",
"794049"
] |
01375692
|
In vivo labeling of Aβ-positive lesions
Diffusion of R3VQ after stereotaxic brain injection
To test the ability of R3VQ to diffuse in the brain and to label amyloid plaques in vivo, six micrograms of R3VQ were stereotaxically injected into the cortex of 12-month-old PS2APP mice (n=2). The mice were sacrificed 2 hours after injection. IHC revealed that R3VQ diffused in a large spherical region (approximately 3 mm in diameter) and amyloid plaques were detected in this volume (Fig. S2A). The results demonstrate a good ability of R3VQ to diffuse through living tissue and to label Aβ lesions quite rapidly (< 2 hours). To summarize, the VHH R3VQ was able to diffuse in the brain and specifically target amyloid plaques in vivo. BBB transit and diffusion of R3VQ after intravenous injection
To investigate whether R3VQ-S-AF488 crossed the BBB, a 50-mg/kg dose was injected iv into two 12-month-old PS2APP mice. In parallel, two age-matched PS2APP mice were injected with PBS (negative control) or with AlexaFluor 488-conjugated anti-Aβ mAb 4G8 (4G8-S-AF488; 50 mg/kg) as controls for BBB passage of standard immunoglobulins. Four hours after injection, the mice were sacrificed and paraffin brain sections were prepared. In R3VQ-S-AF488-injected mice, immunostaining using an anti-His mAb showed extensive plaque labeling throughout the brain. This demonstrates that the VHH conjugate was able to cross the BBB, diffuse widely in the living brain and label its target.
|
2016
|
[
"sdv",
"scco"
] |
[
"246044",
"431040",
"93273",
"51097",
"40402",
"266450",
"246044",
"93273",
"40402",
"246044",
"431040",
"51097",
"266450",
"467809",
"467809",
"246044",
"93273",
"40402",
"51097",
"246044",
"431040"
] |
[
"779180",
"735500",
"180107",
"18085"
] |
00616670
|
Levels of Participation
In Fig. 1 we positioned our various stakeholders with regard to their participation level and their participation approach. This graph is given as an indication only, as it was not created on the basis of quantitative indicators but rather on the author's perceptions, e.g. the mass media is typically a top-down informing process, elected members that have a blog or use Twitter inform on a more bottom-up basis, the FAGE supports involvement through colloquiums, and so on. A quick look at this figure shows that the main supported participation processes are informing and consulting, with involving barely supported. We are thus quite far from our "ideal" participation where all stakeholders could be empowered. Stakeholders and Processes in Participation
Table 2 is an effort to characterize stakeholders in terms of the typology defined by [START_REF] Colombo | Workflow Modeling for Participatory Policy Design: Lessons form Three European Regions[END_REF]. We have already defined what commissions, groups and circles were in §3.1 and others such as media and citizens are quite self-explanatory. Let us briefly explain some additional stakeholders listed in the table:
• The Presidency is made up of four members elected by the Constituent Assembly and is mainly in charge of applying rules and monitoring the operations of the Assembly (and of signing acts). • The Bureau is formed of one delegate per group of the Constituent Assembly and is mainly responsible for planning, organizing, budgeting and external relations. • The General Secretary supports the work of the Constituent Assembly; it comprises legal and administrative workers who are hired by the Assembly. This survey was conducted in an exploratory mode, as the Constituent Assembly had just been elected when we began and nothing was in place.
|
2010
|
[
"shs"
] |
[
"99844"
] |
[
"862756"
] |
00420112
|
It does not use rule- or frame-based representation. In order to visualize knowledge, we chose to use the knowledge maps approach, and we cover the concept, competency and process types of knowledge representation. However, MIMIK does not only provide for the representation, organization and storage of knowledge; it also provides instruments that support knowledge sharing, such as the prototype described in section 4. In their review of the conceptual foundations for knowledge management, in the section dedicated to knowledge transfer, [START_REF] Alavi | Review: Knowledge management and knowledge management systems: Conceptual foundations and research issues[END_REF] classify knowledge transfer channels as informal or formal, personal or impersonal. They provide a few illustrative examples: coffee break meetings are typically informal, personnel transfers within departments during a training period are formal and personal, knowledge repositories are formal and impersonal, and so on. As a basis for their analytic framework, [START_REF] Alavi | Review: Knowledge management and knowledge management systems: Conceptual foundations and research issues[END_REF] rely on the four processes of knowledge creation defined by [START_REF] Nonaka | The Knowledge Creating Company: How Japanese Companies Create the Dynamics of Evolution[END_REF]. [START_REF] Wasko | Why should I share? Examining Social Capital and Knowledge Contribution in Electronic Networks of Practice[END_REF] explain that the availability of electronic communication technologies is no guarantee that knowledge sharing will actually take place and examine why people voluntarily contribute knowledge. They make a distinction between communities of practice, i.e. groups whose members are engaged in a shared practice and know each other and work together, and networks of practice, i.e. loosely knit groups of individuals who are engaged in a shared practice but who do not necessarily know each other.
[START_REF] Wasko | Why should I share? Examining Social Capital and Knowledge Contribution in Electronic Networks of Practice[END_REF] studied knowledge sharing within networks of practice and identified two main potential problems:
-Knowledge seekers have no control over the respondents and over the quality of the responses.
|
2008
|
[
"shs"
] |
[
"99844"
] |
[
"862756"
] |
02796376
|
Given the inspiration taken from Brazil's PAA Programme, it is worth noting here that the inclusion of informal groups of farmers in Brazil contributed to the gradual inclusion of these farmers in the formal sector. The commitment of public funds is an important demonstration of the priority given to the initiative and, broadly, to the frame of reference behind this initiative. In Malawi, an intersectoral department has been created with the mandate of defining and implementing a home-grown school feeding strategy. PAA Africa is the only initiative which clearly counts on international funds but is also included in the public budget. The country has shown interest at both the local and national levels in advancing the concept. In Mozambique, several meetings of the technical group have confirmed the need to raise the subject of the institutional markets to a higher political level in the ministries, attesting to its currently low priority. The idea of creating a consultative political group including ministries' directors has been identified by the technical group as a means of giving greater importance to the programmes and of opening the possibility of producing institutional changes considered relevant to the sustainability of the programme and the effective inclusion of smallholder farmers. Agricultural and food security strategies depend on strong institutionalization processes and hinge on the fiscal capacity to pursue a trajectory that requires financial sustainability [START_REF] Pierri | How Brazil's Agrarian Dynamics Shape Development Cooperation in Africa[END_REF]. Likewise, social protection programmes are perceived as costly and as demanding a complex delivery structure. Mozambique is highly dependent on external aid, and resources raised through international cooperation are usually reserved for financing the execution of specific development projects, limited in scale and spatial coverage.
|
2015
|
[
"shs"
] |
[
"533448",
"107303",
"420902",
"107303",
"533448",
"420902",
"93029"
] |
[
"1007803",
"734676"
] |
00540724
|
But it would now attract one-stop shoppers (those for which v_A ≤ t ≤ v_AL = v_S), from which L could earn a total margin r'_AL = w_AL − v'_AL = w_AL − v_S = w_AL − w_S + r_S. Since any candidate equilibrium requires r_S ≥ 0, the deviation would be profitable when w_AL > w_S. Second, consider the boundary between the two regimes, in which one-stop shoppers are indifferent between visiting L or S (v_AL = v_S). Note that there must exist some active consumers, since either retailer can profitably attract consumers by charging a small positive margin; therefore, we must have v_AL = v_S > 0. Suppose that all active consumers are multi-stop shoppers (in which case L only sells A while S sells B_S to all consumers), which requires v_AL = v_S ≤ τ. Applying the same logic as in the beginning of Appendix B, we can without loss of generality focus on the case v_AL = v_S = τ. It is then profitable for L to transform some multi-stop shoppers into one-stop shoppers, by reducing its margin on B_L to r'_L = w_L − ε > 0 and increasing r_A by ε, so as to keep v_AL constant: doing so does not affect the total number of active consumers, but transforms those whose shopping cost lies between τ' = v_S − v'_L = τ − ε and τ into one-stop shoppers. While L obtains the same margin on them (since r'_AL = r_AL), it now obtains a higher margin r'_A > r_A on the remaining multi-stop shoppers. Therefore, some consumers must visit a single store, and by assumption must be indifferent between visiting either store (v_AL = v_S). Suppose now some one-stop shoppers visit S. Since S can avoid making losses, we must then have r_S ≥ 0.
|
2010
|
[
"shs"
] |
[
"444480",
"94586",
"93477"
] |
[
"1202612"
] |
00544494
|
While this paper emphasizes the adverse impact of vertical integration on information leaks and foreclosure, the same analysis could have different implications in other industry situations. For instance, in markets where the risk of information leaks already exists even in the absence of vertical integration, a vertical merger would again exacerbate this risk for the independent rivals, but would also induce the integrated firm to better protect its own subsidiary; the overall impact of vertical integration on industry innovation, consumers, and welfare would then be more ambiguous. Also, if the upstream market is quasi-monopolized, then vertical integration and the associated foreclosure effect may well distort downstream competition in a way that reduces the merging parties' profit. This concern was for instance mentioned in 1999 by General Motors (GM) as a motivation for spinning off its auto parts subsidiary Delphi, so as to enable it to contract with other automakers, which were reluctant to rely on Delphi as long as it was a unit of GM. 41 A similar concern may underlie AT&T's 1995 voluntary divestiture of its manufacturing arm, AT&T Technology (now Lucent), as the 1996 Telecommunications Act was due to allow the RBOCs to compete with AT&T on the long-distance market. 42 Finally, while we focus on situations where information leaks intensify competition and dissipate profits, Milliou and Petrakis (2010) consider an alternative situation in which information flows increase industry profit: namely, imitation expands demand more than it intensifies competition. In this context, the integrated firm may well choose to communicate information from its own subsidiary to the downstream rival, and vertical integration may benefit consumers as well as firms. The profits of the investing firms are then respectively Π_A1 = Π(ρ_1, ρ_2) and Π_2 = Π_θ(ρ_2, ρ_1). The equilibrium investments are thus ρ_1 = ρ_θ^+ > ρ^* > ρ_2 = ρ_θ^-, and U_A-D_1's expected profit is Π_θ^+ > Π^*. U_A-D_1 therefore invests in reverse engineering whenever F < Π_θ^+ − Π^*. C Guarantees
In this Appendix, we prove Proposition 10, assuming that firm D_1 (resp. D_2) obtains a small surplus γ (in case of innovation) when buying from its favored supplier U_A (resp. U_B). Suppliers' reliability is irrelevant when both downstream firms' innovation efforts are successful.
|
2010
|
[
"shs"
] |
[
"2579",
"444480",
"444480",
"90563",
"93477"
] |
[
"1202612"
] |
01573501
|
The option chosen by the worker may (under certain conditions) not suit the hierarchy. If one can formally choose to take four weeks of annual leave all of a sudden, the employer may at some point fear having difficulties meeting customer demands. The worker's maintaining of that permitted choice of a long vacation may, under certain conditions, be interpreted as an autonomous action. Moreover, a prescription can be perfectly executed while going against the aim of the control regulation. Applying a rule to the letter may be a sign and an expression of a challenge to control. This type of autonomy is as real as the creation of one's own rules. So the meeting between autonomy and control is inherently uncertain in its form and in its results. The creation and use of rules are subject to the power relationship between autonomy and control. We see that autonomy in these cases is not always the ability to produce one's own rules; sometimes it is executing according to the rules, including those dedicated to control, in order to assert power. In other words, autonomy is not necessarily a characteristic of the individual, or a functional necessity whose substance could be defined beforehand, but only a projected will to oppose the claims of control.
|
2017
|
[
"shs"
] |
[
"560"
] |
[
"6435"
] |
01078218
|
We conjecture this to be the case for all (m, n) ≠ (0, 1). Conjecture 3.3. For the family P of planar graphs, we have
ω_{a(m,n)}(P) = 3(2m + n)² + (2m + n) + 1
for all (m, n) ≠ (0, 1). Computational Complexity
It is known that deciding whether, given an undirected simple graph G, colors may be assigned to the edges of G to make it a (0, 2)-clique is NP-hard. A similar result holds for orienting edges to form a (0, 1)-clique [START_REF] Bensmail | The complexity of deciding whether a graph admits an orientation with fixed weak diameter[END_REF]. Here we address a related problem concerning signed graphs, which correspond, up to an equivalence relation, to the family of (0, 2)-colored mixed graphs [START_REF] Naserasr | Homomorphisms of signed graphs[END_REF]. As we are able to formulate and address the problem using tools developed herein, we forgo a complete background and encourage the reader to consult [START_REF] Naserasr | Homomorphisms of signed graphs[END_REF], where the background of this class of graphs is described in full. Let G be a 2-edge-colored graph, i.e., a (0, 2)-colored mixed graph. An unbalanced 4-cycle of G is a 4-cycle of U(G) having an odd number of edges of the same color in G. We call G a signed clique if every pair of vertices is either adjacent or belongs to an unbalanced 4-cycle. In this section we show that it is NP-complete to decide whether, given an undirected graph, we can assign colors to the edges to obtain a signed clique.
|
2017
|
[
"info"
] |
[
"211142",
"544673",
"3102",
"15325",
"3102",
"161271"
] |
[
"910355"
] |
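The structural test named in the signed-graph row above, whether a 4-cycle of a 2-edge-colored graph is unbalanced (an odd number of edges of the same color), is easy to express. The representation below (a 4-cycle given as its list of edge colors, with hypothetical labels "red"/"blue") is a simplification for illustration, not the paper's reduction:

```python
def is_unbalanced_4cycle(edge_colors):
    """Return True if a 4-cycle of a 2-edge-colored graph is unbalanced.

    With 4 edges and 2 colors, one color appears an odd number of times
    exactly when the other does too, so checking one color suffices.
    """
    assert len(edge_colors) == 4, "expects the edge colors of a 4-cycle"
    return edge_colors.count("red") % 2 == 1

print(is_unbalanced_4cycle(["red", "blue", "blue", "blue"]))  # True
print(is_unbalanced_4cycle(["red", "red", "blue", "blue"]))   # False
```

In a signed clique, every non-adjacent pair of vertices must lie on some 4-cycle passing this test.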
00881861
|
We herein consider the opposite question, namely does an AP+k graph necessarily have to be (in some sense) Hamiltonian? We answer this question in the negative throughout. Namely, we show that AP+k graphs can have arbitrarily small longest paths (compared to their orders). For this purpose, we consider the notion of path cover, where a path cover of some graph G is a collection of vertex-disjoint paths covering all vertices of G. It is worth mentioning that the notion of path cover and Theorem 1 were already combined in e.g. [START_REF] Johansson | An El-Zahár type condition ensuring path-factors[END_REF], where a path version of Theorem 1 is investigated. This paper is organized as follows. First, we introduce some terminology and preliminary results in Section 2. We then show, in Section 3, that, given some non-traceable AP+k graph, we can construct AP+k graphs with arbitrarily large path cover numbers. In Section 4, we prove that such non-traceable AP+k graphs can be constructed inductively, starting from a small non-traceable AP+1 graph. Combining these results, we are able to construct AP+k graphs with arbitrarily large path cover numbers, and hence longest paths arbitrarily smaller than their orders. Preliminaries
Let G be a graph.
|
2013
|
[
"info"
] |
[
"3102"
] |
[
"910355"
] |
00288339
|
As far as we know, ADAGE is the only adaptable deployment framework to address a large variety of complex applications. However, all these deployment systems are designed for one-shot deployments: once the application has been started, there is no support for additional deployment or for smart process removal. The goal of this paper is to introduce a preliminary attempt at addressing the problem of joint co-deployment and dynamic re-deployment of complex distributed applications on a grid. Section 2 describes a motivating scenario. Section 3 explains the various aspects of our CORDAGE tool, starting from the high-level description down to the low-level operations on the grid. Section 4 provides some details about our prototype and its performance. A motivating example: co-deploying JUXMEM and GFARM
As a motivating case-study, we describe the deployment of JUXMEM [START_REF] Antoniu | JuxMem: An adaptive supportive platform for data sharing on the grid[END_REF]. We provide here a very brief description of this software, emphasizing its deployment requirements. JUXMEM is a data-sharing service for the grid that enables transparent data sharing through the use of a unique global identifier. JUXMEM is inspired by both distributed shared-memory systems (DSM) regarding the transparent access to memory, and by peer-to-peer systems (P2P) regarding the support of dynamic reconfiguration.
|
2008
|
[
"info"
] |
[
"2526",
"2526",
"2526"
] |
[
"746326",
"1264"
] |
01023550
|
Thus the corresponding values of the uncertainties are given by Eq. (7). Table 3 shows the intervals for a probability P = 99% for the Monte Carlo method and the LOK theory. This academic example was used to validate the implementation of the LOK theory in the case of multiple large basic LOKs. The 99% intervals presented in Tab. 1 are nearly identical, which might be expected given the simplicity of the structure. However, it is interesting to notice that the intervals given by the LOK theory are included in the ones resulting from the Monte Carlo simulations. This yields higher precision at a much lower computation cost. An important question is then whether the same accuracy can be obtained when considering a larger assembly, representative of a real industrial structure. The next study aims to provide an answer in this matter. Complex 3D assembly
In order to assess the accuracy of the extension of the LOK theory presented previously, it is necessary to study a case representative of a real complex structure.
|
2014
|
[
"phys"
] |
[
"247321",
"1306",
"247321",
"110103",
"1306"
] |
[
"958386",
"15159",
"3863"
] |
01023549
|
These solutions, called "snapshots", constitute the basis vectors on which the original problem will be projected. A statistical approximation of the solution can then be obtained by solving the reduced problem for many values of the uncertain parameters. Otherwise the technique is referred to as a priori, where the solution is built through the resolution of a set of separated problems, dependent on the uncertain parameters and the spatial variables. All these techniques are used to analyze uncertainty propagation in the case of system parameters that can be treated as random variables following known probability laws. Nevertheless, uncertainties are sometimes due to imprecise information, which is why non-stochastic approaches were developed. The first intuitive non-stochastic method describes uncertainties by their range of variation and is known as the interval theory. Even though this description is less precise than a stochastic approach, it is sufficient in many cases where the engineer's interest lies in identifying the bounds of the interval in which the output of interest varies. However, the major drawback of the interval theory is that it does not take into account the dependency between occurrences of each random variable. The interval propagation can be very pessimistic, which leads to an overestimation of the bounds. In recent years, efforts have been made to develop a method taking into account different sources of uncertainties.
|
2014
|
[
"phys"
] |
[
"247321",
"1306",
"1306",
"247321",
"110103"
] |
[
"958386",
"15159",
"3863"
] |
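The drawback of interval theory mentioned in the row above, the loss of dependency between occurrences of the same variable, can be seen in a minimal interval-arithmetic sketch (illustrative code, not the LOK implementation):

```python
class Interval:
    """Naive interval arithmetic: every operand is treated as independent."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Worst-case bounds: the dependency between operands is lost,
        # which is exactly the source of pessimism described in the text.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
print(x - x)  # [-1.0, 1.0], although x - x is exactly 0
```

The expression `x - x` is identically zero, yet interval propagation reports a width-2 interval, overestimating the bounds just as the abstract notes.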
00967902
|
A Counter-Rotating System (CRS) is composed of a Front Rotor (FR) and a Rear Rotor (RR) rotating in the opposite direction, as shown in Fig. 1. Compared with a traditional rotor-stator system, the Rear Rotor is used not only to recover the static head but also to supply energy to the fluid. Therefore, to achieve the same performance, the use of a CRS may allow a reduction of the rotational speed and may generate a more homogeneous flow downstream of the stage. On the other hand, the mixing area between the two rotors induces complicated interacting flow structures. The understanding of this highly unsteady flow in the mixing area is an open problem. Moreover, the design method for such machines is still not well established, due to a lack of systematic studies on the influence of free parameters, such as the distribution of load (L), the axial distance (S), the ratio of the rotation rates (θ), and so on. In the Dynfluid Laboratory, a series of experiments focused on axial counter-rotating fans has been performed [START_REF] Nouri | Experimental study of the instationary flow between two ducted counter-rotating rotors[END_REF]. Based on this research, three CRS (JW1, JW2 and JW3) have been designed to attain the same design point while varying the distribution of load. The main concern in this paper is the influence of the distribution of load (L) on the overall performances of a CRS. The conception of the three CRS and their design parameters are presented in Section 2.
|
2014
|
[
"phys"
] |
[
"134975",
"134975",
"134975"
] |
[
"3893",
"762203"
] |
03004243
|
It will be important in the future to be able to take into account the true nature of the gas between the metal strands and the variation of its electrical conductivity as a function of electric field, temperature and pressure. No experimental measurements are available on DR16 cables to compare the simulated current values issued from the macroscopic model to experimental ones. Experimental current measurements were performed on a DR8 cable (same as DR16 but with a 3.85 mm diameter conductor; the PI insulation is 50 µm thick and the PTFE insulation is 200 µm thick) for temperatures ranging from 30°C to 90°C with mean electric fields from 1 kV/mm to 20 kV/mm. Fig. 8 shows a comparison between experimental current densities and simulated ones on the DR8 model cable at 50°C. At this temperature and field range, a good correlation is observed between simulation and measurements on the cable geometry as well. However, it will be interesting to perform current measurements on the DR8 cable for fields higher than 20 kV/mm, as the threshold in field is clearly above this value for both materials. A macroscopic model is proposed to predict the electric field distribution in plane-parallel bi-layer Kapton/PTFE samples and in a real aeronautic cable geometry with multilayer insulation. Depending on the applied voltage on the conductor, or on the temperature value, the maximal electric field can be localized either in the PI layer or in the PTFE layer. This behavior is directly linked to the interfacial charge, which can be positive or negative depending on the experimental conditions (voltage and temperature).
|
2020
|
[
"spi"
] |
[
"404574",
"404574"
] |
[
"741343",
"180342"
] |
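The field split between the two insulation layers described above can be roughly illustrated with the textbook DC steady state for two series dielectrics, where current continuity fixes the partition. This is a hedged sketch only: the conductivity and thickness values are arbitrary assumptions rather than the measured Kapton/PTFE parameters, and it ignores the interfacial-charge dynamics the paper's macroscopic model captures:

```python
def bilayer_dc_fields(V, d1, d2, sigma1, sigma2):
    """Steady-state DC field in each layer of a series bi-layer insulation.

    Current continuity requires sigma1 * E1 == sigma2 * E2, and the fields
    must satisfy E1 * d1 + E2 * d2 == V. The less conductive layer takes
    the higher field.
    """
    E1 = V / (d1 + d2 * sigma1 / sigma2)  # field in layer 1 (V/m)
    E2 = E1 * sigma1 / sigma2             # field in layer 2 (V/m)
    return E1, E2

# Arbitrary example (SI units): 50 um and 200 um layers, 1 kV applied,
# layer 1 ten times more conductive than layer 2.
E1, E2 = bilayer_dc_fields(V=1000.0, d1=50e-6, d2=200e-6,
                           sigma1=1e-15, sigma2=1e-16)
# sigma1 > sigma2, so the field concentrates in layer 2.
```

Swapping which layer is more conductive (e.g. as temperature shifts the conductivities) moves the maximal field to the other layer, consistent with the behavior the abstract reports.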
03943235
|
Even the charge injection detected at 25 kV/mm for the treated sample seems stronger than the one at 25 kV/mm in an untreated one. This is due to the presence of heterocharges in the vicinity of the anode, increasing the electric field locally and hence increasing the charge injection. The same phenomenon holds at the cathode, as negative charge injection can be detected. Current measurements have been performed on reference LDPE and on samples soaked for 4 hours, under fields of 10 and 15 kV/mm (Fig. 4), during 30 min. For untreated samples, the current decreases with increasing time, and there is no stabilization. For treated samples, the current is higher than for the untreated ones, the difference being more than one decade in the case of 15 kV/mm. Moreover, the shape of the current is different compared to an untreated sample, as the current seems to stabilize at long times. For 10 kV/mm, the current even seems to increase with time. Such a feature has already been observed in polyethylene (LDPE and XLPE), but only when the electric field is high and charge packets can be measured [START_REF] Teyssedre | Charge distribution and electroluminescence in crosslinked polyethylene under dc field[END_REF][8]. C. Effect of the soaking time on space charges measurements
The concentration of cumyl alcohol present in the LDPE sample is directly linked to the soaking time.
|
2015
|
[
"spi"
] |
[
"404574",
"404574"
] |
[
"741343",
"180342"
] |
01373036
|
The toroidal core is placed in the x-y plane and three copper windings (one primary and two secondary) wrap the core. The planar copper on the top and bottom sides is laid out as PCB traces. The z-direction copper winding connections can be realized by via-holes or copper pins. There is only one air gap cut in the core, to keep the core as one piece, which makes the structure stronger during the lamination process and reduces the magnetic flux leakage. If two or more air gaps are distributed on the core, the air gap cannot be precisely controlled to the desired value due to the high pressure in the lamination process. Besides, it is hard to keep the core in the same x-y plane if it is cut into several pieces. As for the air gap position, it is located outside the windings to avoid extra AC winding loss, and between the two secondary windings to keep the two secondary windings symmetric. The core material is selected as P61 from ACME. It has a relatively low power loss coefficient compared with other products [START_REF] Huang | Design consideration of MHz active clamp flyback converter with GaN devices for low power adapter application[END_REF]. B.
|
2016
|
[
"spi"
] |
[
"458107",
"458107",
"458107",
"411815",
"411815",
"411815",
"419535",
"411815",
"300457"
] |
[
"989646",
"969566",
"761",
"20985"
] |
04364447
|
Taking advantage of the 1996 "mad cow" sanitary crisis, the Carrefour marketing campaign on its high-quality beef supply chain was mostly motivated by the desire to restore consumer confidence. Carrefour believed that mitigating consumer uncertainty would allow for an increase in consumers' willingness to pay. Specifically, Carrefour emphasized through marketing and advertising two main quality attributes for its fresh beef product: (i) its organoleptic quality (in particular the tenderness and color of the meat) and (ii) its safety. The second item was not directly advertised (this is prohibited by regulation on consumer information), but it was indirectly conveyed by emphasizing the origin of the animal (born and raised in France). The credibility of this information heavily relied on the reputation of the retailer label and its related expenses in marketing. At the same time, most of the critical points of the quality specifications were monitored and certified by an independent third-party organization. 19 This branding strategy translates into a private standard aimed at technically defining the quality of the final product, the specification and fulfillment of several constraints all along the production process, as well as an information system ensuring the traceability of the information and of individual lots of intermediate products (see annex 1). Table 1 summarizes the relative importance of several stages of the production process for the desired quality. The quality of Cassegrain products rests mostly on the intrinsic quality of the raw materials. The choice of a relatively restricted pedo-climatic zone, homogeneous and favourable for growing vegetables, is the starting point for the quality of the products.
|
2009
|
[
"shs"
] |
[
"1002307",
"527123",
"570639",
"1002307"
] |
[
"1231590",
"1264991",
"738145"
] |
04365457
|
We develop a hypothesis linking the governance of the supply chains to the institutional environment and conduct a preliminary test which supports the hypothesis. From a more normative point of view, the study of mechanisms designed to safeguard quality in the supply chains can also be useful for antitrust evaluation, when some vertical restraints seem necessary. This raises the question of compatibility between specific contractual restraints and antitrust law (see [START_REF] Joskow | Transaction Cost Economics, Antitrust Rules and Remedies[END_REF] for an application of Transaction Cost Economics to this topic). One can suggest that, far from reducing competition, some restraints result from the search for an efficient alignment between contractual hazards and governance in a competitive context. ANNEX
DATA AND QUALITATIVE INFORMATION COLLECTED IN THE RESEARCH
Quantitative data and qualitative information have been collected at different levels. A first level gives information about the general environment (both competitive and institutional environments). This general environment defines the constraints and resources surrounding individual or collective strategies and actors (part A). A second level provides information about specific quality strategies and their organizational support through case study research. Each case study gives the main economic and institutional components of a strategy/structure interplay at a chain level (from input suppliers to retailers) (part B). A-GENERAL ENVIRONMENT AND QUALITY STRATEGIES : A SURVEY 1-Industrial structure and institutional environment
The objective is to describe the diversity of competitive and institutional situations at hand in the three selected sectors: processed meat, cheese, and fruit and vegetables.
|
2002
|
[
"shs"
] |
[
"1002307",
"570639",
"527123",
"1002307"
] |
[
"1231590",
"1264991",
"738145"
] |
00962383
|
A total of 8,050 observations with missing values are also dropped. Finally, we delete 15,897 observations associated with firms audited by non-Big 5 or non-Big 4 audit firms. We end up with a final sample for the ISP assignment analysis of 29,726 firm-year observations. For the regression analysis and ISP coefficient comparison, we delete financial institutions (SIC codes 6000-6999) and observations with missing values for variables in the audit fee model. The final regression sample is composed of 23,887 firm-year observations. Table 3 provides descriptive statistics on the full sample. Insert Table 3 here For the audit fee regression sample, the distribution of observations among the audit firms and years remains qualitatively unchanged compared to the full sample used for ISP calculation. Descriptive statistics
Table 4 provides a description of the audit fee and size variables used for the calculation of the market share, portfolio share and weighted market share allocations, and Table 5 presents descriptive statistics for the variables used in the audit fee regression. Insert Tables 4 & 5 here
The two tables show that our sample covers a wide range of firms with mean (median) assets of $10,254.76 ($856.59) million US dollars and mean (median) audit fees paid to auditors in the order of $1.99 ($0.79) million US dollars. Moreover, Table 5 shows that on average 31.6% of the sample firm-years experience losses and that 42.1% of the firm-years have foreign activities. Results
Internal association: Results of ISP assignments and analysis of correlations
We applied the five assignment methods with seven different measurement variables in order to classify the audit firms as "industry specialists" for each of the 70 industries and each of the 11 years included in our sample.
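One of the assignment methods compared here, classification by within-industry market share, can be sketched as follows; the 30% cut-off, the choice of audit fees as the measurement variable, and the data are illustrative assumptions rather than the paper's exact specification.

```python
from collections import defaultdict

def industry_specialists(engagements, threshold=0.30):
    """Flag an audit firm as 'industry specialist' when its share of total
    audit fees within an industry exceeds `threshold`.
    `engagements` is a list of (industry, auditor, audit_fee) tuples.
    (The cut-off and the fee-based variable are illustrative choices; the
    paper compares several assignment methods and measurement variables.)"""
    fees = defaultdict(lambda: defaultdict(float))
    for industry, auditor, fee in engagements:
        fees[industry][auditor] += fee
    specialists = {}
    for industry, by_auditor in fees.items():
        total = sum(by_auditor.values())
        specialists[industry] = sorted(
            a for a, f in by_auditor.items() if f / total > threshold)
    return specialists

if __name__ == "__main__":
    data = [("mining", "A", 60.0), ("mining", "B", 25.0), ("mining", "C", 15.0),
            ("retail", "A", 20.0), ("retail", "B", 50.0), ("retail", "C", 30.0)]
    print(industry_specialists(data))
```

Because the share is computed per industry, a firm can be flagged as specialist in several industries at once, which is why the function returns a list of specialists per industry.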
|
2013
|
[
"shs"
] |
[
"301020",
"301020",
"109317"
] |
[
"924336"
] |
02372119
|
Adding BMI² and waist circumference to the model significantly increased the R² from 46 to 89 % and decreased the SEE from 3.18 to 1.47. Moreover, the addition of an ultrasonic measurement such as midthigh area significantly increased the R² from 0.46 to 0.89 (p<0.01). The concordance correlation calculated with equation 6 is ρc = 0.941. We observed that the thickness of fat at the umbilical level does not make a significant contribution to equation 5. When comparing the remaining athletes, BF% determined by equation 6 showed, using a paired t-test, a non-significant difference compared with DXA. Moreover the introduction
Mean bias for BF% = -0.004 ± 1.8 (P= 0.98). Cross-validation study
The characteristics of the 62 male athletes are given in Table 4. Variables
Discussion
Body composition analysis is currently used in clinics and in sports medicine. Reference methods such as DXA can provide accurate results. However, this method is costly and largely inaccessible to coaches in routine practice.
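The concordance correlation reported above (ρc = 0.941) is commonly computed as Lin's concordance correlation coefficient between predicted BF% and the DXA reference. A minimal sketch, with made-up toy data rather than the athletes' measurements:

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement
    series: 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

if __name__ == "__main__":
    # Toy data: predictions close to the reference give a CCC near 1.
    ref = [10.0, 12.5, 15.0, 18.0, 20.5]
    pred = [10.2, 12.3, 15.4, 17.8, 20.9]
    print(round(concordance_ccc(ref, pred), 3))
```

Unlike Pearson's r, the CCC penalises systematic bias between the two methods through the (mean(x) − mean(y))² term, which is why it is preferred for method-agreement studies such as this one.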
|
2019
|
[
"shs"
] |
[
"552963"
] |
[
"1058161"
] |
02372104
|
More recently, the Beunen-Malina-Freitas method was validated for the prediction of adult stature in girls aged 12-15 years. This method, which does not require radiation, unlike skeletal and dental maturity measurements, is best used in European populations [START_REF] Beunen | Prediction of adult height in girls: The Beunen-Malina-Freitas method[END_REF]. All insist on the fact that a longitudinal, not a transverse, study is necessary to better describe adolescent growth and that biological maturation is an essential factor. The prediction of adult stature of young basketball players is an essential step in the recruitment of future tall players. For this reason, the main objective of this study is to accurately predict the adult stature of basketball players from the age of 13 and a half using a non-invasive method. Methods
The prediction of adult stature of young male basketball players was established from the results of a longitudinal study within the framework of a National Research Agency program. In this program, we developed new growth curves using age and maturation from longitudinal data collected from 125 sedentary boys aged from 12 to 17.5 years. The longitudinal study needed to build these growth curves took place over a period of 5 consecutive years. In order to ensure the sustainability of the study, data collection was carried out during the school years on sedentary boys. Longitudinal data were collected twice every school year.
|
2019
|
[
"shs"
] |
[
"552963"
] |
[
"1058161"
] |
01133863
|
A consumer's state is constantly influenced by the states of the other consumers within a given neighbourhood, such that the cellular automaton is constantly in motion. Thus a cellular automaton can perfectly represent both the idea of an absence of equilibrium found in Nelson and Consoli (2010 [START_REF] Pavitt | Sectoral patterns of technical change: towards a taxonomy and a theory[END_REF]) and that of individual embeddedness in influence networks (Rogers, 1995 [52]; Witt, 2001 [56]). A theory of the evolution of consumer preferences that can be used in a cellular automaton is to be found in studies of the influence of social interactions on individuals' opinions. Following the work of French (1956 [START_REF] John | A formal theory of social power[END_REF]), a number of authors, including Friedkin (1986 [16]) and Hegselmann and Krause (2002 [START_REF] Hegselmann | Opinion dynamics driven by various ways of averaging[END_REF] and 2005 [START_REF] Hill | On goods and services[END_REF]), have proposed representing an individual's opinion by the average of his opinion and that of his neighbourhood. This method has the advantage of being very flexible, since averages can be calculated in a number of different ways, just as the weighting coefficients used can be fixed or dynamic. In general terms, assuming that an individual i has j = n influential neighbours, and if x it is the opinion of the individual i at time t and a ij the importance attached by i to the opinion of his neighbour j, we have:
x_it = Σ_{j=1}^{n} a_ij · x_{j,t-1}    (2)
with a_ij such that Σ_{j=1}^{n} a_ij = 1. It can be demonstrated that equation (2), which represents the evolution of individual preferences, leads to consensus from the point at which each pair of agents positively weights the opinion of a third agent (Hegselmann and Krause, 2002 [29]). We propose to adopt this equation to represent the dynamic of consumers' opinions in the model to be constructed in the second part of this article. Building of the agent-based model
In this second part, our objective is to construct an agent-based model of ecoinnovation in services that is consistent with the hypotheses discussed above (consumer behaviour and definition of the search space). The model will be constructed in two stages 8 .
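The weighted-average opinion update of equation (2) can be sketched as follows; the weight matrix and initial opinions are illustrative, chosen so that every agent positively weights a common third agent, the condition under which Hegselmann and Krause show convergence to consensus.

```python
def update_opinions(x, a):
    """One synchronous step of equation (2): each agent i replaces its
    opinion with the a_ij-weighted average of the previous opinions."""
    n = len(x)
    return [sum(a[i][j] * x[j] for j in range(n)) for i in range(n)]

def run(x, a, steps):
    """Iterate the update for a fixed number of steps."""
    for _ in range(steps):
        x = update_opinions(x, a)
    return x

if __name__ == "__main__":
    # Three agents; each row sums to 1, and every agent puts positive
    # weight on agent 0, so the dynamic contracts towards consensus.
    a = [[0.6, 0.2, 0.2],
         [0.3, 0.5, 0.2],
         [0.3, 0.2, 0.5]]
    x = [0.0, 0.5, 1.0]
    print(run(x, a, 200))  # all three opinions end up (almost) equal
```

Fixed weights are used here for simplicity; as the text notes, the coefficients a_ij can just as well be made dynamic without changing the structure of the update.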
|
2013
|
[
"shs"
] |
[
"1188",
"1188",
"1188"
] |
[
"4854"
] |
02151713
|
Silicon carbide allows devices to work at high temperature and high voltage, and is a good candidate for the design of a new current limiter for high voltage and high current. CURRENT LIMITATION: COMPARISON OF Si AND 6H-SiC CRD DEVICES
Current Regulative Diode: Structure Description. Current Regulative Diodes (CRD) [START_REF]Current limiter circuit[END_REF] are currently used to limit current in low power applications. The current limitation is based on the field effect. A cross section of the simulated device is presented in Fig. 1. When a positive voltage is applied to the cathode, the extension of the space charge in the preformed channel region (6) causes the current saturation. CRD: Simulations and Comparison Between Si / SiC. Simulation comparisons of Si and 6H-SiC CRD-like components were performed with the ISE-tcad software to estimate the potential of this kind of device to realize a high voltage and high current CCL. Figure 2 compares the electrical characteristics of the simulated silicon device and the equivalent silicon carbide one. Conclusion:
Considering the self-heating of the silicon CRD device, thermal limitation restricts it to low voltage capabilities.
|
2002
|
[
"spi"
] |
[
"539",
"219748",
"300746",
"300746",
"539",
"219748",
"539",
"219748",
"99129"
] |
[
"859352",
"15768",
"839241"
] |
02151712
|
The Schottky gate contact was lost and not connected for this measurement (Figure 8). Good ohmic contacts were formed, leading to a rise of current but inducing a degradation of the saturation linearity. A reaction of nickel with SiC modifying the channel geometry is probably at the origin of this. Using W instead of Ni should avoid this problem. Conclusion
Full compatibility of the fabrication of SiC-VJFETs and lateral SiC-MESFETs has been demonstrated. A high power density has been reached (30 kW/cm²). State-of-the-art ohmic contact values were obtained. The next batch fabrication challenge will consist of optimizing both the VJFET and the MESFET to realize a self-controlled current limiter by a cascode-like association of MESFETs and VJFET. Metallization optimization will remain the main point to improve in the next generation of devices. Therefore, two mask levels for metal deposition and patterning should be envisaged, with different metal deposition and annealing steps.
|
2002
|
[
"spi"
] |
[
"539",
"219748",
"300746",
"107568",
"539",
"219748",
"539",
"219748",
"99129"
] |
[
"859352",
"15768",
"839241"
] |
02458668
|
Later, a concept of "cracking density" is introduced and the Latin Hypercube Sampling (LHS) method is used this time to propagate the previously identified uncertainties over the thermo-mechanical model of a 2D column. A cracking density probability is then evaluated over the 2D column model. NUMERICAL MODEL
The roller-compacted concrete (RCC) is a concrete which, as its name implies, can be placed and compacted in successive layers/lifts. It is widely applied in dam and road engineering. RCC has a low cement content compared to conventional concrete (CC). It therefore produces less hydration heat, which is the reason why it allows greater construction rates and consequently lower costs. Still, temperature gradients must be controlled within the RCC structure in order to avoid cracking. RCC mechanical properties are, as for CC, dependent on the evolution of the chemical reaction that occurs between cement (and additives) and water. Therefore, a thermo-chemo-mechanical model is needed in order to simulate the RCC behaviour. Several authors, such as [START_REF] Cervera | Thermo-chemomechanical model for concrete.
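The Latin Hypercube Sampling step mentioned above can be sketched as follows; this is the textbook LHS construction on the unit hypercube, not the authors' implementation, and the sampled coordinates would then be mapped from [0, 1) onto the physical ranges of the uncertain thermo-mechanical parameters.

```python
import random

def latin_hypercube(n_samples, n_dims, rng=random):
    """Latin Hypercube Sampling on the unit hypercube: for every dimension,
    each of the n_samples equal-width strata is sampled exactly once, and
    strata are shuffled independently across dimensions."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        perm = list(range(n_samples))     # one stratum index per sample
        rng.shuffle(perm)                 # decorrelate the dimensions
        for i in range(n_samples):
            # Uniform draw inside stratum perm[i] of dimension d.
            samples[i][d] = (perm[i] + rng.random()) / n_samples
    return samples

if __name__ == "__main__":
    for point in latin_hypercube(5, 2):
        print([round(c, 3) for c in point])
```

Compared with plain Monte Carlo, this stratification guarantees that every marginal range of each uncertain parameter is explored, which is why LHS is a common choice for propagating uncertainties through expensive finite-element models.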
|
2015
|
[
"spi"
] |
[
"300854",
"1332",
"1332",
"300854"
] |
[
"19295",
"181937"
] |
04010055
|
This also raises the question of innovation orientation and social acceptability, which goes beyond the issue of knowledge coproduction [START_REF] Baba | L'acceptabilité sociale: une notion en consolidation[END_REF]. In other words, involving various actors in the coconstruction of knowledge and the emergence of new public services/devices is not strictly similar to thinking about the social acceptability of some innovations. On the one hand, citizens who participate in "co" mechanisms do not necessarily represent the whole of society in its diversity and complexity. On the other hand, the integration of stakeholders can induce biases; individuals or groups of individuals may act for their own interest or out of ideology [START_REF] Lyall | Beyond the limits to governance: new rules of engagement for the tentative governance of the life sciences[END_REF], raising the question of the intervention of public actors as a last resort. A particularly stimulating dimension of approaches in terms of RRI and tentative governance is the focus on the mechanisms at work: how to experiment, to question, to adapt, and to transform the methods and procedures of governance of innovation policies. This perspective (process analysis) deepens and extends the concept of public OI. Ultimately, it is not only a matter of designing coconstruction mechanisms, such as coinitiation and codesign, which are defined ex ante for the duration of the entire political action, but also of demonstrating reflexivity, anticipation, and adaptability throughout the entire process. The issues of experimentation and learning are thus at the heart of this mode of governance. The following figure shows the extent to which the concept of tentative governance can flesh out that of public OI for innovation policies.
With regard to innovation policies, we have used two case studies to show that (i) public actors deploy OI mechanisms through devices such as innovation prizes and their implementation opens up the possibility of real stakeholder inclusiveness; and (ii) stakeholder intervention through coproduction tools, however, brings out varied configurations depending on the American or European approach, appearing to be more notable for challenge.gov.
|
2022
|
[
"shs"
] |
[
"1001031",
"171535"
] |
[
"8958",
"11650"
] |
03524203
|
However, recently there has been a significant increase in hybrid inducement prizes combining technological and societal issues, as is the case, for example, of the "Early warning for epidemics" and "Affordable high tech for humanitarian aid" prizes. The target crowd is, to a large extent, made up of companies, individuals, and groups, and, for a small share of the prizes, international associations and organisations. Blockchains for Social Good is an exception in that a small proportion of participants are individuals (individual developers, activists, hackers). A few (but not many) prizes targeted specific groups of respondents (low-carbon hospital, i-Capital). The only configuration in which we were able to find identified individuals was in recognition prizes. For the Prize for Women Innovators, only women with an innovative entrepreneurial project can apply. A majority of prizes allow candidates to apply in groups (via a cooperation of actors): in this case, a Lead Contestant or Lead Participant must be chosen from the group. He/she will be the only one to communicate with the prize managers. This possibility is used relatively often and several interviewed managers emphasised the existence of applications run by consortiums, some of which were made up exclusively of SMBs. Blockchains for Social Good takes this even further, refusing applications from large companies. The rewards are all monetary and range from 100,000 euros to 10 million euros.
|
2022
|
[
"shs"
] |
[
"1001031"
] |
[
"8958",
"11650"
] |
04244225
|
Non-human components, including patterns of urbanisation, real-estate market business models and the politics of urban planning, are imbued with high levels of agency that jointly determine how the BRI intersects with local trajectories of urbanisation. Urban sites also serve as the fields where contextual and evolutionary processes of assemblage reframe or even redefine the BRI and its scope in specific contexts, in relation not only to doctrines of economic rentability but also to spatial imaginaries and narratives of urban futures. Assemblage thinking reminds us that urbanisation is a continuous process. It also draws attention to the interactions between the constituent components of urban sites at multiple scales. As conceptualised by [START_REF] Anderson | Assemblage and geography[END_REF], urban assemblages are underpinned by unequal power dynamics that determine access to various types of resources. We argue that new territories in Southeast Asia are opening up to urbanisation processes co-shaped by growing Chinese investments in various forms-development programmes, real-estate projects, infrastructure projects, industrial projects, factories in and outside of SEZs, etc.-which interact with national and local strategies for urban and industrial transition. These territories are located in the peripheries of secondary cities and small national capitals; Chinese public and private actors tend to target the fringes of major urban centres and economic hubs with untapped economic and logistic potential, seeking to transform them into strategic areas for Chinese economic, political or diplomatic ambitions abroad. Often these cities have already been involved in regional integration programmes or internationally sponsored infrastructure projects. There is a need to assess more finely how the BRI connects with, and benefits from, earlier regionalisation programs that have triggered urbanisation in Southeast Asia. 
The most dynamic of this kind has been the Greater Mekong Subregion (GMS) funded by the Asian Development Bank since 1992.
|
2023
|
[
"shs"
] |
[
"104169",
"302452",
"1005064",
"1005044",
"93027",
"300383",
"1164706"
] |
[
"173871",
"1295053",
"14960",
"18645",
"1295054"
] |
00993041
|
Results of the three other industries are qualitatively similar, as shown in Panels B, C and D, suggesting that within a given assignment criterion, the use of different ISP measures leads to inconsistent ISP allocations. It is also worthwhile to note that audit fee-based ISP measures appear to differ significantly from ISPs measured by other variables that are used as proxies for audit firm revenues. To further illustrate the contrasting results of ISP designation, we present the number of clients audited by each industry specialist auditor in Appendix 3. Based on the descriptive evidence provided by these different classification methods, we can conclude that ISP assignment is very sensitive to the chosen ISP indicators. The analysis of the correlations between our 35 ISP variables is conducted across and within classification approaches. Table 7 provides the correlation tables for each of the seven measurement variables, across the five assignment approaches. Insert Table 7 here
The correlations between the seven measurement variables within a single assignment approach are presented in Appendix 2. The correlations in Appendix 2 show that within each assignment approach, the ISPs are all positively correlated (with correlations ranging from 0.13 to 0.95). Audit fee and total fee-based measures are, not surprisingly, strongly correlated with each other, and client size measures are also correlated with each other. However, although the correlations between client size-based and audit fee-based measures are rather large, the average is 70%, which means that the use of client size instead of audit fees is not neutral.
|
2013
|
[
"shs"
] |
[
"301020",
"301020",
"301020"
] |
[
"924336"
] |
00691057
|
Table 2 presents the composition of the sample by industry. It shows that the largest industry is Metal and Coal Mining, which has 211 observations (26% of the total sample), followed by the industry of Oil and Gas Extraction, which has 151 observations (18% of the total sample). [Insert Table 2 here]
There are six main audit firms in the Canadian audit market. Those six audit firms are the BIG4 audit firms, namely Deloitte & Touche (DTT), Ernst & Young (EY), KPMG, and PricewaterhouseCoopers (PWC), and two other international audit networks, Grant Thornton (GT) and BDO Dunwoody (BDO). Besides these six audit firms, other small audit firms also provide services in the Canadian market. Table 3 shows the distribution of sample companies' auditors. Table 3 Panel A separates the sample into BIG4 versus NonBIG4 auditors and shows that the majority of sample companies are audited by BIG4 auditors (83%). Table 3 Panel B provides information on the number of sample companies that are audited by each audit firm. Because each small audit firm takes only a very small proportion of market share and is not comparable to the six major players, we aggregate the small audit firms into one category (Other). It shows that KPMG has the largest market share, followed by PWC, DTT and EY. [Insert Table 3 here] 4 Detailed audit fee data (with a split between legal audit fees and non-audit fees) are not available in databases for Canadian companies.
|
2012
|
[
"shs"
] |
[
"301020",
"301020",
"109317"
] |
[
"924336"
] |
00973112
|
The financial advantage of future exporters is less pronounced in terms of the liquidity ratio and of the leverage ratio. This result is consistent with the idea that liquidity and leverage ratios, even though they inform on the financial health of a firm, may be less consistent indexes of the ability of a firm to access external finance than multidimensional indexes such as Scores A and B. We performed the following robustness tests. First, we run the same regressions excluding switching firms from our sample. This implies eliminating all firms that start exporting during the period under consideration but do not continue to serve export markets thereafter. Hence, we reduce the number of entries into foreign markets to "successful entries" only. Results are presented in Table 9 of Appendix A: the estimated coefficients are lower but, in almost all cases, robust to the change of sample. Second, we implemented the specification which led GGK to reject the self-selection hypothesis for their UK dataset. 22 Results are presented in Table 12 of Appendix B. We end up with results very consistent with the previous literature.
|
2008
|
[
"shs"
] |
[
"15896",
"15896",
"250936",
"250936"
] |
[
"6219",
"742133",
"954787"
] |
01134483
|
The remainder of the chapter is organised as follows. Section 2 will start in a very simple way by considering the correlation between a number of the different proposed measures of subjective well-being. For example: Are individuals who are satisfied with their lives also happy, and do they report higher eudaimonia scores? Section 3 then considers how the different measures of well-being are correlated with individual characteristics (age, education, sex, employment, income, etc.). This is particularly important in the context of public policy. Policy can affect well-being via some of these explanatory variables. And if these variables are similarly correlated with all well-being measures, then our policy choices will be less dependent on the specific well-being measure retained. In the current context, I am not able to make a statement about the relative importance of the correlated and uncorrelated components of measures of well-being. 9 An intriguing contribution somewhat along these lines is [START_REF] Tafarodi | What Makes for a Good Life? A Four-Nation Study[END_REF], who ask individuals to list the criteria by which, when they are aged 85, they could determine whether they had lived worthy lives (although the word "worthy" here might be thought to prime respondents).
|
2015
|
[
"shs"
] |
[
"139754",
"301309"
] |
[
"742286"
] |
03230851
|
These are quasi-experimental data from about 200 interviews with low-income households. [START_REF] Nowok | Linking residential relocation desires and behaviour with life domain satisfaction[END_REF] do make the distinction between geographical mobility in general and changes in housing tenure, using BHPS data. They do not analyse both at the same time though, and control for housing characteristics, such as space shortage, that may be thought to mediate part of the effect of housing transitions. [START_REF] Wolbring | Home Sweet Home! Does Moving Have (Lasting) Effects on Housing Satisfaction? [END_REF] only looks at renter-renter transitions. The full-adaptation conclusion refers to regressions without any covariates. When he introduces controls, the adaptation to renter-renter moves becomes only partial, which is what we will also conclude for this type of housing-market transition below. Including leads and lags in separate regressions implies that the omitted category for the lags is all of the periods prior to the event. If some of these periods were miserable due to lead effects, then we will overestimate the lags.
|
2021
|
[
"shs"
] |
[
"301309",
"578027",
"165020",
"505871"
] |
[
"742286"
] |
01074619
|
Which urban logistics fields seem the most adapted to public-private partnerships? From the example, we observe that PPPs can be a valid alternative to classical funding strategies. Moreover, other forms of public-private collaboration seem interesting for different types of urban logistics solutions, whereas for commercial applications or private actions they are seldom recommended. Furthermore, in the case of public utility projects (mainly related to infrastructural or policy actions), collective utility thinking dominates the other strategies. In this section we present several fields of urban logistics where public-private collaboration for funding seems a good alternative to consolidate their deployment and operability. We distinguish three categories of fields:
• Urban logistics facilities.
• Urban logistics systems based on ITS and ICT.
• Resource sharing-based logistics schemes. Urban logistics facilities
Urban consolidation centres
Urban consolidation centres (UCC) have been considered as the main example of urban logistics [START_REF] Allen | Good Practice Guide on Urban Freight[END_REF]. However, most of the planned facilities are nowadays not operational [START_REF] Gonzalez-Feliu | FREILOT. Uban freight Energy efficiency Pilot.
|
2014
|
[
"shs"
] |
[
"738",
"304304",
"738"
] |
[
"460"
] |
03508575
|
In his article "Notes on Musical Composition in Primitive Culture" (1954), Nettl drew on all this work to dismiss "the romantic myth of the anonymous musical creation of the people as an entity, [affirming] the universalism of compositional techniques, whether music is written or not, and [underlining] the place of inspiration from the divine in many societies" (Zemp 1997: 192 about Nettl 1954: 81). Following Nettl, Alan Merriam criticized the idea that "folklore is the expression of an entire people and that the group of reference as an entity is the creator of each folklore item" (1964: 165). For Merriam, this theory was interesting, as long as one distinguished between creation, which he considered as an individual act, and style, whose character belonged to the collective. In Chapter 9 of The Anthropology of Music, he clearly laid the foundations for a study of musical creation, free of the stigmatizing stereotypes of so-called "primitive" societies. "Composition seems clearly to be the product of the individual or a group of individuals and not to differ radically between literate and non-literate peoples save in the question of writing. All composition is conscious in the broadest sense of the word when viewed from an analytic standpoint. Composers may be casual individuals, specialists, or groups of people and their composition must be acceptable to society at large. Techniques of composition include at least the following: the reworking of old materials, the incorporation of borrowed or old material, improvisation, communal re-creation, creation arising out of particularly intense emotional experience, transposition and composition from individual idiosyncrasy. Composition of texts is quite as important as the composition of the sound structure.
Composition involves learning, is subject to public acceptance and rejection, and is therefore a part of the broad learning process which contributes, in turn, to the processes of stability and change" (Merriam 1964: 184).
|
2021
|
[
"shs"
] |
[
"453499"
] |
[
"741432"
] |
04382048
|
Born into a well-to-do family in Bamako (his father was a senior civil servant), he developed a passion for music as a teenager. He became not only a rapper, but also a producer and owner of a DAW studio. His family tried to curb his enthusiasm for music, pushing him instead to pursue higher education. He enrolled at Cadi Ayyad University in Marrakech, where he obtained a Master's degree in public law, specialising in political science and international relations. He completed his training with a second Master's degree in information and human resources management. After graduating, he returned to Bamako at the request of his family. During an 18-month period in which he was unemployed, he developed the ZikMali project. Mohamed's very ambitious initial goal was to 'change the music sector and permit artists to live from their art'. However, he was then quick to concede that the project would, more modestly, 'allow artists to have an additional income' thanks to dividends from the consumption of their recorded music on a download and streaming application. Mr. Diarra began by creating what he called a 'music blog' called MaliMoov, a free download and streaming platform, not unlike several that already existed in Mali (e.g. Rhhm and Bamada-city).
|
2023
|
[
"shs"
] |
[
"453499"
] |
[
"741432"
] |
03601676
|
The modernization of conventional forces is part of a purely dissuasive approach, given the hypothetical nature of threats from the great powers, including China, concerning Russian territory. Even if it can no longer claim universal industrial and military skills, Russia has made significant efforts in research and development, with a view to forcing companies to modernize and to regain competitiveness in military products, both on the battlefield and in export markets. It has undoubtedly not lost its ambitions, but times are hard for it, with the delicate learning of markets, globalization and the progressive distancing of its historical allies. However, if it no longer has the global influence of yesteryear, it remains a great military and nuclear power with which the world must still reckon. Russia has undergone a serious power crisis and is having difficulty asserting itself internationally and strategically. However, it has important assets, now that the restoration of public services and justice has begun. Its gas and oil production provides significant support to the economy as a whole, and it has the economic means to sanction, if necessary, European countries that are heavily dependent on these energy resources. With Vladimir Putin, the desire to restore Russia's military, political and strategic power is an increasingly strong objective. The question is whether he will personally have the time to do so, since he will have to give up his mandate as President. Unless the constitution is changed in due course.
|
2008
|
[
"shs"
] |
[
"41628"
] |
[
"170260"
] |
01315182
|
ILP formulations are implemented using the IBM ILOG CPLEX Optimization Studio and executed on a PC with an Intel Xeon E5649 processor under Linux. The ILP solver is executed with a time limit of 30 minutes. The default values of the simulation parameters are summarized in Table 3. We simulate the Gaussian dispersion model with the parameters depicted in Table 4. Moreover, we define the nodes neighboring Γ based on a given transmission range. We assume that the cost of nodes is independent of the position of the node, i. Application to the London Borough of Camden
As a proof of concept, we first execute our models on the London Borough of Camden. We use streetlights as potential positions of sensors in order to alleviate the energy constraints. The streetlights data set was provided by the Camden DataStore [START_REF]Camden Data: Camden lighting point[END_REF]. Camden is spread over an area of around 8km × 6km and contains 19 pollution sources.
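The Gaussian dispersion computation involved can be sketched with the textbook Gaussian plume equation (with ground reflection); the parameter values below are illustrative assumptions, not the values of the paper's Table 4.

```python
import math

def gaussian_plume(Q, u, sigma_y, sigma_z, y, z, H):
    """Textbook Gaussian plume concentration (with ground reflection) at
    crosswind distance y and height z, for a source of strength Q at
    effective height H under wind speed u. In a full model sigma_y and
    sigma_z would depend on downwind distance and atmospheric stability."""
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

if __name__ == "__main__":
    # Illustrative values only (g/s, m/s, m).
    on_axis = gaussian_plume(Q=10.0, u=3.0, sigma_y=20.0, sigma_z=10.0,
                             y=0.0, z=0.0, H=15.0)
    off_axis = gaussian_plume(Q=10.0, u=3.0, sigma_y=20.0, sigma_z=10.0,
                              y=50.0, z=0.0, H=15.0)
    print(on_axis > off_axis)  # concentration decays away from the centreline
```

Evaluating such a model over a grid of pollution sources is what yields the ground-truth concentration field against which candidate sensor placements can be assessed for coverage.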
|
2016
|
[
"info"
] |
[
"184333",
"184333",
"219748",
"184333",
"221981"
] |
[
"8625",
"748334",
"1193"
] |
01171765
|
Based on this pollution dispersion modeling and the related work on ILP formulations of WSN coverage and connectivity, we first propose an optimization model of the minimum cost WSN deployment for air pollution monitoring. In this model, pollution coverage is formulated by analogy to the Set Covering Problem and connectivity modeling is based on the flow concept. Then, we propose a second, more effective model in which coverage and connectivity are jointly modeled using only the flow concept. To the best of our knowledge, this is the first work that targets wireless sensor network deployment for air pollution monitoring based on real pollutant dispersion modeling. The rest of this paper is organized as follows. In Section II, we present air pollution modeling. Then, we review some related work on ILP formulations of WSN coverage and connectivity in Section III. Section IV details our two proposed optimization models while Section V shows the simulation parameters and the obtained results. Finally, we conclude and propose some perspectives in Section VI. II.
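The analogy to the Set Covering Problem can be illustrated with the classic greedy approximation; the paper itself solves exact ILP formulations, and the instance below (points of interest and candidate sensor positions) is made up.

```python
def greedy_set_cover(points, candidate_covers):
    """Greedy Set Cover: repeatedly pick the candidate sensor position that
    covers the most still-uncovered points of interest.
    `candidate_covers` maps a position id to the set of points it covers."""
    uncovered = set(points)
    chosen = []
    while uncovered:
        best = max(candidate_covers,
                   key=lambda p: len(candidate_covers[p] & uncovered))
        if not candidate_covers[best] & uncovered:
            raise ValueError("some points cannot be covered")
        chosen.append(best)
        uncovered -= candidate_covers[best]
    return chosen

if __name__ == "__main__":
    # Illustrative instance: 5 points of interest, 4 candidate positions.
    points = {1, 2, 3, 4, 5}
    covers = {"s1": {1, 2, 3}, "s2": {2, 4}, "s3": {3, 4}, "s4": {4, 5}}
    print(greedy_set_cover(points, covers))  # picks s1, then s4
```

The greedy heuristic gives a logarithmic approximation guarantee; an exact ILP formulation, as used in the paper, would instead minimise the total deployment cost subject to one covering constraint per point.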
|
2015
|
[
"info"
] |
[
"184333",
"184333",
"184333",
"184333",
"221981"
] |
[
"8625",
"748334",
"1193"
] |
00360978
|
This technique is a method that attempts to decrease the number of conflicts between a grammar and a lexicon in order to better "synchronize" them. Impact of Jokers on the Parsing Rate
Unlike programming languages, natural languages are highly flexible, and so must be the grammars describing them. For instance, in French, one can easily use an adjective as a noun and conversely. This fact, combined with the ability of jokers to prevent conflicts during feature unification, can lead to successful parses of sentences even with a wrongly categorized joker. Thus, an irrelevant joker can still induce a set of signatures. Therefore, it is important to observe the parsing rate a joker has yielded before studying the signatures it has induced. Results
In addition to the validation of our technique, our experiments strengthen the relevance of the detection technique with new results not present in the original paper by Sagot and de La Clergerie (2006). Precision of the Error Mining Technique
The curve of Fig. 5.1 shows a clear correlation between the best parsing rates achieved after having introduced jokers and the suspicion rates of the forms exchanged. Thus, it confirms the validity of the detection technique that provided the suspicious forms. The values used for this curve are averages obtained after grouping the suspicious forms by range of suspicion rate. Without this grouping, the curve would present many variations, making it difficult to read.
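The smoothing used for the curve — averaging parsing rates over ranges of suspicion rate — can be sketched as follows (names and bin width are illustrative):

```python
def average_by_suspicion_range(forms, bin_width=0.1):
    """Group (suspicion_rate, parsing_rate) pairs by range of suspicion
    rate and average the parsing rates inside each bin, which smooths
    the otherwise noisy curve."""
    bins = {}
    for susp, parse in forms:
        key = int(susp / bin_width)          # index of the bin
        bins.setdefault(key, []).append(parse)
    # Map each bin back to its lower bound and average its parsing rates.
    return {round(k * bin_width, 10): sum(v) / len(v)
            for k, v in sorted(bins.items())}
```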
|
2007
|
[
"info"
] |
[
"59924",
"2409"
] |
[
"858079"
] |
00734449
|
As the VMC model does not incorporate the notion of a path, ants deposit pheromone on each VM-PM pair within a pheromone matrix. To choose a particular VM as the next one to pack into a PM, a probabilistic decision rule is used. This rule is based on the current pheromone concentration on the VM-PM pair and on heuristic information which guides the ants towards choosing VMs leading to better overall PM utilization. Hence, the higher the amount of pheromone and heuristic information associated with a VM-PM pair, the higher the probability that it will be chosen. At the end of each cycle, all solutions are compared and the one requiring the least number of PMs is saved as the new global best solution. Afterwards, the pheromone matrix is updated to simulate pheromone evaporation and to reinforce VM-PM pairs which belong to the so-far best solution. In the following sections we describe the modifications (marked bold) made to the original algorithm to handle the dynamic VMC problem. They involve the objective function, the heuristic information, the pheromone evaporation rule and finally the algorithm pseudo-code. C. Formal Problem Definition
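A minimal sketch of the standard ACO probabilistic decision rule described above, with the usual pheromone/heuristic exponents α and β (the paper's exact parameterization may differ):

```python
import random

def choose_vm(vm_candidates, pheromone, heuristic, pm,
              alpha=1.0, beta=2.0, rng=random):
    """Pick the next VM to pack into PM `pm` with probability
    proportional to pheromone^alpha * heuristic^beta (roulette wheel).
    pheromone, heuristic: dicts keyed by (vm, pm) pairs."""
    weights = [pheromone[(vm, pm)] ** alpha * heuristic[(vm, pm)] ** beta
               for vm in vm_candidates]
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for vm, w in zip(vm_candidates, weights):
        acc += w
        if r <= acc:
            return vm
    return vm_candidates[-1]  # guard against floating-point rounding
```

Pairs with both high pheromone and high heuristic value dominate the roulette wheel, which is exactly the bias towards better PM utilization described in the text.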
The objective function (OF) we attempt to maximize is defined by Eq. 1.
|
2012
|
[
"info"
] |
[
"106587",
"106587",
"106587"
] |
[
"879794",
"1557"
] |
00676295
|
Moreover, fault tolerance is provided at all levels of the hierarchy. Consequently, the system is able to self-heal and continue its operation despite system component failures. Finally, VM monitoring and live migration are integrated into the framework, thus allowing Snooze to detect and react to overload and underload situations as well as facilitating the development of VM reconfiguration algorithms (e.g. dynamic consolidation). Last but not least, when energy savings are enabled, idle servers are automatically transitioned into a lower power state (e.g. suspend) and woken up on demand. In the future, we plan to make the system even more autonomic by removing the distinction between GMs and LCs. Consequently, the decision whether a node should play the role of a GM or an LC in the hierarchy will be taken by the framework instead of the system administrator at configuration time. Another important contribution of this PhD thesis is a novel nature-inspired VM consolidation algorithm based on Ant Colony Optimization (ACO). The proposed algorithm was implemented and experimentally validated in a centralized simulation environment. The first results have demonstrated that the ACO-based approach provides greater energy gains than traditional algorithms based on the evaluated First-Fit Decreasing (FFD) heuristic and achieves nearly optimal results. In the future we plan to integrate the proposed algorithm into Snooze.
|
2012
|
[
"info"
] |
[
"106587",
"106587"
] |
[
"879794",
"1557"
] |
00263670
|
Ray tracing of implicit surfaces is one of the applications that use interval arithmetic to increase the quality of the produced image. However, these applications are computationally demanding. One solution is to use a graphics processing unit (GPU) in order to take advantage of its computational power. We describe in this paper a GPU implementation of interval operators based on the Boost library. We tested these operators on a ray tracing algorithm and observed execution speed improvements of several orders of magnitude over the CPU version with the same image quality. Introduction
Graphics processing units (GPUs) are attracting a lot of interest in the research community, as these units are cost-effective and offer more computing resources than are available on general-purpose processors [START_REF]GPUGems 2 : Programming Techniques for High-Performance Graphics and General-Purpose Computation[END_REF]. They can provide speed-ups of up to 400 depending on the application. However, to reach this performance the application has to map well onto this architecture by being heavily data-parallel with high arithmetic density. Even if an application fits these requirements, there exists a class of applications that may not map well onto the GPU due to floating-point restrictions. The numerous floating-point units of today's GPUs are single precision, not fully IEEE-754 compliant, and do not offer the entire set of rounding modes.
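For illustration, the basic interval operators can be sketched as follows. Note that a correct implementation (as in the Boost interval library) requires outward directed rounding of each bound, which is precisely the feature missing on the GPUs discussed here; this plain-float sketch omits it:

```python
class Interval:
    """Closed interval [lo, hi]. A rigorous implementation must round
    lo downward and hi upward (directed rounding), which this sketch
    omits for simplicity."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product interval is bounded by the extreme endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __contains__(self, x):
        return self.lo <= x <= self.hi
```

In interval ray tracing, such operators bound the implicit function over a ray segment, so a segment whose interval excludes zero can be safely discarded.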
|
2008
|
[
"info"
] |
[
"3191",
"55751",
"3191"
] |
[
"177452",
"4651"
] |
01147733
|
We consider the tool as an element giving a specific ability to its owner. In our case, possessing the screwdriver allows an actor to act upon the screws. In this section, we confront two different solutions, among others, to solve this problem. In our first solution (see Figure 6), the actions to take or put back the tool are integrated as, respectively, the first and last steps of the activity. These two actions are fully synchronized with the main part of the activity. Our second solution uses two Petri nets: one is used to express that an actor can take or put back the tool, and the other is the activity itself. In this second case (see Figure 7), the actor can take or put back the tool at any time, and any actor holding a screwdriver can continue the activity. This solution decouples the actions on the tool from the actions on the screws. The two solutions offer different levels of guidance and realism. The first solution restricts the actions of the actor more, focusing on a specific sequencing.
|
2015
|
[
"info"
] |
[
"210616",
"117606",
"117606",
"210616",
"117606",
"210616"
] |
[
"765044",
"1272",
"15912"
] |
01199738
|
These data provide information such as the conditions for an actor to be able to execute an action. We propose here four types of attributes: Abilities, Rights, Resources and Weightings. However, it is possible to define other attributes to fit the actor model, the specificities of the domain or the simulation itself. Abilities refer to technical or physical abilities defined in the actor's definition. In our solution, an actor that does not fulfil these conditions cannot execute the action. Rights refer to social positions or conditions defined in the actor's definition. If an actor is technically able to execute an action (because he or she fulfils the ability conditions), he or she can execute it even if he or she is not allowed to. This is equivalent to him or her going beyond his or her rights. Resources define an object, tool or element that must be in the possession of the actor for him or her to execute the specified action. As with abilities, resources are technical constraints.
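The distinction between hard constraints (Abilities, Resources) and the soft Rights constraint can be sketched as follows (the attribute encoding as sets is illustrative):

```python
def can_execute(actor, action):
    """Check an action's preconditions against an actor's attributes.
    Abilities and Resources are hard constraints: without them the action
    is impossible. Rights is a soft constraint: an able actor may still
    act without the right, i.e. go beyond his or her rights."""
    able = action["abilities"] <= actor["abilities"]
    has_resources = action["resources"] <= actor["resources"]
    allowed = action["rights"] <= actor["rights"]
    return {"executable": able and has_resources, "allowed": allowed}
```

Separating "executable" from "allowed" lets the simulation both enforce technical impossibility and observe rule-breaking behaviour.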
|
2015
|
[
"info"
] |
[
"210616",
"117606",
"210616",
"117606",
"210616",
"117606"
] |
[
"765044",
"1272",
"15912"
] |
02931554
|
Figure 4 illustrates the use of lists over time, from their introduction up to the current release at the time of writing (v5.6). We have focused on the number of calls to the two operators for list insertion, list_add and list_add_tail, as to make a list it is necessary to add elements to it. Over time, many of the new uses of these list operators have been in code that adds new functionalities, but there are also instances where uses of other ad hoc forms of lists have been converted to use the standard API. 3 As can be seen, there has been a steady increase in the use of the standard Linux list API, and this is likely to continue in the future. Therefore, a technique for helping maintainers to understand lists may prove more and more useful. Documentation of list_head structures
As illustrated by the hiddev structure considered in Section 2.1, fields of type list_head are frequently not documented by appropriate comments, making it necessary to collect information from the code to understand what kinds of data structures they represent. To gain a more precise estimate of how often documentation is missing, we built a code search tool that finds all list_head structure fields and their associated comments. The search tool has been implemented with Coccinelle [START_REF] Lawall | Coccinelle: 10 years of automated evolution in the Linux kernel[END_REF][START_REF] Padioleau | Documenting and automating collateral evolutions in Linux device drivers[END_REF] (more details about Coccinelle will be given in Section 4). It searches for comments on a field before, after, and in the middle of the field declaration, as well as summary "kerneldoc" comments before the entire structure declaration. By applying this tool to the whole Linux kernel, we found a total of 8837 list_head fields in Linux v5.6, among which 3237 have at least one comment, amounting to 36%.
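The semantics of the two insertion operators can be illustrated by a minimal Python model of the kernel's circular doubly-linked list with a sentinel head: list_add inserts right after the head (stack-like), while list_add_tail inserts right before it (queue-like).

```python
class ListHead:
    """Node of a circular doubly-linked list with a sentinel head,
    mirroring the semantics of the kernel's struct list_head."""
    def __init__(self, value=None):
        self.value = value
        self.prev = self.next = self  # an empty list points to itself

def _insert(new, prev, nxt):
    nxt.prev = new
    new.next = nxt
    new.prev = prev
    prev.next = new

def list_add(new, head):        # insert right after head (stack-like)
    _insert(new, head, head.next)

def list_add_tail(new, head):   # insert right before head (queue-like)
    _insert(new, head.prev, head)

def to_list(head):
    """Walk the list from head.next back around to the sentinel."""
    out, node = [], head.next
    while node is not head:
        out.append(node.value)
        node = node.next
    return out
```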
|
2020
|
[
"info"
] |
[
"528894",
"541725"
] |
[
"22131",
"1076574"
] |
01384263
|
Another possibility is that breaking events, whose contribution is thought to be non-polarized, play a major role in the production of this asymmetry. This would explain why the UDA is stronger in HH than VV (the relative contribution of breaking being stronger in the former case). C. Cross-polarized data
Similarly to the co-polarized data, we now study the azimuthal variation of the cross-polarized data. These data present an azimuthal modulation whose shape is akin to that of the VV-polarized data but which is remarkably less pronounced in terms of asymmetry magnitudes. Figure 11 illustrates an example of the azimuthal variations of the cross-polarized NRCS for run day 9 with various grazing angles. Note that since cross-polarized measurements are usually very close to the noise floor, these conclusions on azimuthal variations have been confirmed on the higher grazing angle HV data, where the SNR is good enough to assign confidence to the results. Note that the azimuth axis has been rotated so that 0° is upwind. We now investigate the azimuthal dependence of the cross-polarized NRCS (VH or HV) in the light of a recent simplified formulation [START_REF] Guérin | A simplified formulation for the crosspolarized backscattering coefficient under the second-order small slope approximation[END_REF] of the second-order small-slope approximation (SSA2) [START_REF] Voronovich | Small-slope approximation for electromagnetic wave scattering at a rough interface of two dielectric half-spaces[END_REF]. In this work, a numerically efficient and almost equivalent version of SSA2 was developed for the cross-polarized backscatter. It was shown that, starting about 30° away from nadir, the cross-polarized component, σ0_HV, is well approximated by a quantity proportional to the wave number spectrum taken at the Bragg frequency, Γ_B, times the radar-filtered directional mean square slope crosswise to the radar line-of-sight, mss_y.
|
2016
|
[
"spi"
] |
[
"252060",
"252060",
"174023",
"191652"
] |
[
"4588"
] |
02615286
|
SETHI is a pod-based system operating onboard a Falcon 20 Dassault aircraft owned by AvDEF. The data under analysis in this study were collected within the framework of the POLLUPROOF campaign, during which quad-pol SAR data were acquired at X- and L-band, with a range (across-track) resolution of 0.5 m (bandwidth from 9.6 to 9.9 GHz) and 1.0 m (bandwidth from 1.25 to 1.4 GHz), respectively. Images are processed with an azimuth (along-track) resolution equal to the range resolution at both X- and L-band, which implies an integration time of 1.1 s at X-band and 4.1 s at L-band. The instrument noise floor has been estimated using the method proposed in [START_REF] Hajnsek | Inversion of surface parameters from polarimetric SAR[END_REF] and the results are shown in Fig. 1. The estimated Noise Equivalent Sigma Zero (NESZ) is very low, ranging from around -37 to -50 dB at X-band and from around -51 to -53 dB at L-band, providing a good signal-to-noise ratio (SNR) over the imaged areas for efficient analysis. Figure 1
RESULTS
To obtain a complete azimuth scan for the purpose of studying the directional dependency, a circular flight was performed at a nearly constant incidence angle of 45° at X- and L-band. This scan corresponds to the wind and wave conditions indicated in the first row of Table 1. Figure 2 illustrates the variations of the resulting co-polarized and cross-polarized NRCS for this scan with respect to the azimuth angle relative to the wind direction. We recover the commonly observed pattern of a sinusoidal variation with respect to the wind direction: a maximum in the upwind direction, a secondary maximum in the downwind direction and two minima in the crosswind directions. This pattern can be accurately modeled by a truncated Fourier series expansion.
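Fitting such a truncated Fourier series, a0 + a1·cos φ + a2·cos 2φ (which reproduces the upwind maximum, downwind secondary maximum and crosswind minima), can be sketched for uniformly spaced azimuth samples using discrete orthogonality; the paper's actual fitting procedure may differ:

```python
import math

def fit_truncated_fourier(phis, values):
    """Fit sigma0(phi) ≈ a0 + a1*cos(phi) + a2*cos(2*phi) on uniformly
    spaced azimuth samples phis (radians), exploiting the discrete
    orthogonality of the cosine harmonics."""
    n = len(phis)
    a0 = sum(values) / n
    a1 = 2.0 / n * sum(v * math.cos(p) for p, v in zip(phis, values))
    a2 = 2.0 / n * sum(v * math.cos(2 * p) for p, v in zip(phis, values))
    return a0, a1, a2
```

Here a1 captures the upwind/downwind asymmetry (UDA) and a2 the upwind-crosswind modulation.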
|
2017
|
[
"sdu"
] |
[
"252060",
"252060",
"191652"
] |
[
"4588"
] |
04269903
|
As in the previous section, a positive value of the sensitivity coefficient indicates that an increase in the input parameter leads to an increase in the output, whereas a negative value indicates the inverse. Usually, the reaction kinetic parameters have a greater influence on the total production rate than on the temperature. The largest effect of Ea on the total production rate is observed: the peak sensitivities S_peak on the front and back sides are -9.52 and -9.67, respectively. These values appear at the beginning of the decomposition. This is because the total production rate is very small at the beginning, so small variations can lead to a large S, similar to the effect of k_virgin and C_p,virgin on the MLR. As the simulation proceeds, S decreases, but it remains relatively significant at the end: on the front and back sides, S_peak are -7.09 and -9.50, respectively. In addition, the MLR is also sensitive to A and F; the S_peak values are all greater than 1. Temperature is most sensitive to Ea, with S_peak on the front and back of -0.23 and -0.21, respectively.
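Normalized sensitivity coefficients of this kind, S = (ΔY/Y)/(ΔX/X), are commonly computed by central finite differences; a sketch under that assumption (the paper's exact definition of S may differ):

```python
def normalized_sensitivity(model, x, rel_step=0.01):
    """Normalized sensitivity S = (dY/Y)/(dX/X) by central differences.
    S > 0: increasing the input increases the output; S < 0: the inverse.
    model: callable mapping the input parameter to the output of interest."""
    dx = rel_step * x
    y0 = model(x)
    y_plus, y_minus = model(x + dx), model(x - dx)
    return ((y_plus - y_minus) / y0) / (2 * dx / x)
```

For a power law Y = X^n this returns S ≈ n, which is why S is a convenient dimensionless measure for comparing parameters of very different magnitudes (such as Ea versus Cp).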
|
2023
|
[
"spi"
] |
[
"962"
] |
[
"1303073"
] |
00349419
|
Other quasi-redundant parts of this control structure are the networks (Fig. 6). As in the case of the controllers, one network can compensate for the other after system reconfiguration. Usually, two networks are primarily designed to reduce the amount of transmitted data. However, in case of a network failure, all data can be retransmitted through the second one. The described approach for compensating a subsystem failure by using the shared redundancy requires a logical reconfiguration of the NCCS. Thus, in case of failure the hardware configuration is untouched, but the communication paths must be changed in order to transmit the data to a non-failed component or through the non-failed network. Fig. 6 NCC structure of mini-helicopter with two networks
SIMULATIONS AND RESULTS
All presented networked control architectures (Fig. 4, 5, 6) were modelled using Petri nets. This tool was chosen for its ability to model different types of complex systems and the dependencies within them. To provide the reliability analysis, the Monte Carlo simulation (further only MCS) method was used. Multiple simulations of the modelled architecture [START_REF] Pimentel | Dependability of Distributed Control System Fault Tolerant Units[END_REF] are performed to obtain the reliability curves for all the NCCS described above.
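The Monte Carlo reliability estimation can be sketched as follows, assuming exponentially distributed component lifetimes and a structure function describing when the system is up (both assumptions are illustrative, not the paper's Petri net model):

```python
import random

def mc_reliability(structure, rates, t, n_runs=20000, seed=42):
    """Estimate system reliability R(t) by Monte Carlo simulation.
    structure: function mapping {component: alive?} -> system up?
    rates: dict of exponential failure rates per component."""
    rng = random.Random(seed)
    up = 0
    for _ in range(n_runs):
        lifetimes = {c: rng.expovariate(lam) for c, lam in rates.items()}
        state = {c: lifetimes[c] > t for c in rates}
        up += structure(state)
    return up / n_runs

# Two redundant networks: the system is up if at least one network works.
two_networks = lambda s: s["net1"] or s["net2"]
```

Sweeping t yields the reliability curve; the structure function is where the logical reconfiguration (shared redundancy) enters the model.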
|
2008
|
[
"spi"
] |
[
"388180",
"42393",
"388180",
"42393",
"42393"
] |
[
"11685"
] |
00529658
|
The EIE-Surveyor project had participants of almost every European Country and all project partners were asked to sample students, faculty and employers in the EIE area. III. RESULTS
A total of 3,275 completed questionnaires were returned. Of these, 81.1% were from students, 3.4% from employers, and 5.7% from academics. Responses were received from 26 different European countries, although many returned numbers too small to allow country-based analyses. 9 countries returned over 200 responses. 2,691 student questionnaires were returned, of which 2,641 stated their gender. Of these, 14.5% were female and 85.5% male. 74.3% of the students were studying First Cycle degrees, 24.8% Masters level and 0.9% were at the Doctoral/PhD level. As might be expected, 99.0% were from individuals in the 'conventional' (under 30) age range for students.
|
2010
|
[
"spi"
] |
[
"124942",
"388574"
] |
[
"11685"
] |
01831691
|
The cool-down time of the camera is comparable to that of an equivalent DDCA without an imagery function: it is equal to 6 minutes with a THALES Cryogenics RM3. Thus, the camera remains fully compatible with operational constraints. Designing a simple optical system (based on a pinhole array and a lenslet array only) and integrating it in an adapted cold shield leads to a very low additional optical and mechanical mass to be cooled. The whole system has then been integrated in an operational DDCA commercialized by SOFRADIR: it is illustrated in Figure 7. CHARACTERIZATION OF THE ULTRA-THIN CRYOGENIC INFRARED CAMERA
We performed MTF and NETD measurements. Details on the measurement method and on the results can be found in previous work [START_REF] De La Barrière | Development of an infrared ultra-compact multichannel camera integrated in a SOFRADIR's Detector Dewar Cooler Assembly[END_REF]. The main conclusions are that the four channels have similar MTF curves, which means that they are identical from an optical point of view. The quality of the optical system is almost diffraction-limited, as expected from the theoretical study. The NETD of the whole camera (detector and optics) is equal to 17 mK for 50% well-fill and a scene at 293 K, which is close to the NETD of the detector alone. The background current due to the instrument is very low, around 3 pA only, because the whole optical system is cooled.
|
2018
|
[
"spi"
] |
[
"531209",
"531209",
"531209",
"531212",
"531212",
"40214",
"326244"
] |
[
"867389",
"893096",
"930115"
] |
00264046
|
The database is made of more than 20,000,000 publications covering a wide range of scientific content platforms 2 . Due to the numerous accesses to the database, data collection was pretty slow and we had to narrow our set of concepts to 448 terms 3 . Since co-occurrence extraction was very demanding in terms of server availability, we also decided to send a query for a co-occurrence of two terms only when the two queries on the single terms gave a non-zero result in the "authors key-words" field (each concept has been mentioned at least once as an article key-word for the year considered). Consequently our database is made of all query results for single terms in full text from 1975 to 2005, and every query result on full-text co-occurrences for pairs of concepts that both appeared at least once as author key-words in the year considered. $$P^{\alpha}_{p}(i,j,[Y_1...Y_2]) = \left(\frac{\sum_{t=Y_1}^{Y_2} n^t_{ij}}{\sum_{t=Y_1}^{Y_2} n^t_{i}}\right)^{\alpha} \left(\frac{\sum_{t=Y_1}^{Y_2} n^t_{ij}}{\sum_{t=Y_1}^{Y_2} n^t_{j}}\right)^{\frac{1}{\alpha}}$$
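The asymmetric paradigmatic proximity above can be computed directly from the occurrence and co-occurrence counts summed over the period of interest; a minimal sketch:

```python
def proximity(n_ij, n_i, n_j, alpha=1.0):
    """Asymmetric paradigmatic proximity between concepts i and j:
    P = (n_ij / n_i)**alpha * (n_ij / n_j)**(1/alpha),
    where n_ij is the co-occurrence count and n_i, n_j the single-term
    counts, each summed over the chosen period [Y1..Y2]."""
    if n_i == 0 or n_j == 0:
        return 0.0
    return (n_ij / n_i) ** alpha * (n_ij / n_j) ** (1 / alpha)
```

For alpha ≠ 1 the measure is asymmetric in i and j, which is what allows distinguishing a specific term from a more generic one.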
We will now provide some examples of visualizations built upon our paradigmatic proximity measure at different scales: the micro, meso and macro levels.
multi-level science mapping
A classical objective in the bibliometric literature is to draw knowledge maps (1; 13). Clustering methods like Kohonen map algorithms have been used to provide smarter navigation tools in article databases thanks to the conceptual mapping of a wide research area (12; 18). Many approaches also propose to use both term occurrences and references to help produce knowledge maps [START_REF] Van Den Besselaar | Mapping research topics using word-reference co-occurrences: A method and an exploratory case study[END_REF]. Here our approach is restricted to the mere occurrence and co-occurrence statistics, but we apply our asymmetric paradigmatic proximity in order to detect a more detailed structure than classical flat maps from our set of terms, as we are now able to distinguish between different levels of specificity/generality. We propose to represent our initial set of concepts at three distinct levels of aggregation.
|
2008
|
[
"shs",
"info"
] |
[
"1173",
"37763",
"1173"
] |
[
"10464",
"8720"
] |
02813834
|
This offers possibilities of multi-level mapping with multiple viewpoints on the phylogeny according to the required degree of specificity. We also introduced a measure of field structuration, the pseudo-inclusion index, based on this new asymmetric proximity, and we showed that the pseudo-inclusion index appears to be very informative when assessing the evolution of a field of research. 4: Extract of the full phylogeny of domains related to networks studies in biology and medical research. We kept fields made of more than four terms, set a threshold on the empirical quality (0.04) and removed the shortest branches for editorial purposes. Some branches have been gathered compared to the GraphViz display on the basis of their theme. Colors map the pseudo-inclusion index of the fields. Fields are labeled with their most generic term, except for the beginning of a branch or for the most recent period, where all terms are displayed. The labels of inter-period arrows indicate which terms have been lost or gained between two periods. The number on the first line of a field label is the field id; the number on the last line is the number of articles mentioning all terms of the field in the reference database. Zoom in to see details. Note that this index is increasing along most of the branches (warm colors) although its growth rate is decreasing with time.
|
2010
|
[
"shs",
"sdv"
] |
[
"441569",
"1033530",
"99539",
"128571"
] |
[
"8720",
"10464"
] |
01076298
|
Currently Saffron supports combining topics with other topics (up to three topics), an author with a topic, and a topic with an expert. Shifting through Breadcrumbs
As the user shifts and narrows in the information space Saffron constructs a navigation path in the form of the breadcrumbs located at the top of each page (see Figure 3). Shifting between resources merely substitutes the current resource in the breadcrumbs while narrowing adds more resources (the additional constraints) to the breadcrumbs. Each individual breadcrumb item can be removed with an X button. Alternatively, the user can click on any of the breadcrumb items in order to navigate to the page for the corresponding resource. This mechanism is another way of supporting the Shift strategy although a very limited one as it only allows the user to move to previously visited parts of the information space and not new unexplored areas. Summary
In contrast to other systems described earlier in the Related Work, in Saffron's user interface the Shift and Narrow facilities are intertwined and coexist at any given point of the exploration. The user can shift between resources and at any point decide to narrow down the focus. While in the narrowed view, the user can step back, narrow further, but also directly shift to any resource of the current global context. INITIAL USER STUDY
We conducted an initial user study to collect feedback on the Shift and Narrow mechanisms as implemented in the Saffron interface.
|
2014
|
[
"shs",
"info"
] |
[
"373032",
"373032",
"178918"
] |
[
"960939",
"960940",
"960941"
] |
01076477
|
Targeted drug delivery has become a target in itself justifying a warlike mobilization, reinforced by a rhetorical arsenal of 'therapeutic missile', 'smart bombs', and the like. As the military metaphors inspire a convergence of efforts on a common target designated as the enemy, they are self-vindicating. In using them, nanotechnologists, physicians, and patients struggling against cancer may feel that they participate in a general mobilisation for an intrinsically good cause (Nerlich, 2012). The magic bullet and the two cultures of pharmacology
The climate of crisis in pharmaceutical innovation may have intensified the use of warfare metaphors, but the metaphors have been around throughout the twentieth century. Their success testifies to the triumph of the chemical approach to medicine, initiated by Paracelsus in the sixteenth century against the older Galenic tradition 7 . Paracelsus grounded his therapeutic approach on a theory of secret correspondences or sympathies between each of the seven metals known in his time and specific parts of the human body [START_REF] Debus | The French Paracelsians. The Chemical Challenge to the Medical and Scientific Tradition in Early Modern France[END_REF][START_REF] Debus | Paracelsus and the Medical Revolution of the Renaissance; A 500th Anniversary Celebration[END_REF]. He assumed that disease was caused by external foreign agents acting as poisons on a specific area of the body. Diseases were viewed as localized physical things that the physician sought to eradicate from the body with the help of an appropriate chemical substance. By contrast, the Galenic tradition emphasized the role of fluids and viewed disease as the result of a disturbance in the balance of fluids due to an excess of one of the four basic humours.
|
2014
|
[
"shs"
] |
[
"123745"
] |
[
"857053"
] |
02573574
|
This metaphor connecting the "second Renaissance" [START_REF] Aberkane | Unbottling genes[END_REF] with the old Book of Nature trope points to a stark contrast of anthropological attitudes. While the first Industrial Revolution initiated an economy of resources where consumption was the main objective, biomimicry supports a knowledge-based bio-economy, where learning becomes a priority. However, the medieval metaphor of the Book of Nature was embedded in a specific metaphysics: Nature and the Bible were the two books of God's revelation. Nature had a divine origin. It was a coherent whole created by an intelligent designer. This theological context provided the background for Galileo's oft-quoted passage from the Assayer where he stated that the book of nature was written in the "language of mathematics". 4 Does it mean that biomimicry is bound to theological commitments? Although some fringe websites hint at connecting creationism with biomimicry principles, Benyus's own metaphysical views are closer to a holistic materialism. Humans are an integral part of the system of life; they are just "one vote in a parliament of 30 million (perhaps even 100 million), a species among species" (Benyus, 1997: 8). They consequently should act from within nature and try to fit in with nature's patterns [START_REF] Dicks | The philosophy of biomimicry[END_REF].
|
2019
|
[
"shs"
] |
[
"84078"
] |
[
"857053"
] |
00962455
|
Unsurprisingly, the cost of a bailout, relative to GDP, is higher in countries where bank balance sheets are large relative to GDP: the cost exceeds 100 percent of GDP in Cyprus, Spain, Ireland, the Netherlands, Portugal, Sweden and the United Kingdom. In these countries, even a 5 percent instantaneous probability of a banking crisis has a dramatic impact on the tax gap. The latter rises by 13.3 percentage points between the first two columns in Cyprus, 7.7 pp in Spain, 5.6 pp in Ireland, 4.8 pp in the Netherlands, 5.8 pp in Portugal, 6.0 pp in Sweden and 5.8 pp in the UK. With a 10 percent instantaneous probability of a crisis, all these countries display tax gaps exceeding 6 percent, and four of them are double-digit. It should be noted also that even in countries with a more limited cost in the case of a crisis, the impact of accounting for a crisis event is significant. For instance, the German tax gap rises by 2.1 pp when accounting for a 5 percent probability of a crisis. The Finnish tax gap rises by 1.0 pp although the cost of a bailout would be "only" 20.8% of GDP in this country. Contingent liabilities and fiscal sustainability: an alternative micro-based approach
Obviously, the figures obtained in the previous section are very large due to the extreme assumptions concerning the amount of the bail-out in the case of a systemic banking crisis (indeed, we implicitly assume that the value of the RWA collapses to zero). Another approach is to rely on [START_REF] Acharya | Measuring Systemic Risk[END_REF] and [START_REF] Acharya | Measuring and Managing Global Systemic Risk[END_REF], who propose a measure of how much capital a bank would need in the case of a financial crisis. Their methodology proceeds in two steps. First, they estimate the long-run marginal expected shortfall in percent (LRMES) in the case of a 40% decline in market returns, which they consider the threshold of a financial crisis.
|
2012
|
[
"shs"
] |
[
"301309",
"39083",
"300604"
] |
[
"1356738",
"933048"
] |
04449557
|
And, unlike private firms that used human power, many state manufacturing firms in the first wave resorted to animal power in producing energy, whereas railways and steam navigation in the second wave resorted to steam power. Evaluating whether this technical shift increased output per worker or not requires having systematic data on output and costs from the production side for both state and private firms (e.g. a manufacturing census), which we currently do not have. But based on the variation in survival across state firms, even after the dissolution of state monopolies in 1838, it is plausible that state firms might have varied in their relative productivity vis-à-vis the private sector. 8 Despite these common features, there is a counter-argument that the program does not qualify as industrialization. First, Egypt's program did not trigger a (permanent) shift of the labor force from the primary to the secondary sector, and the primary sector continued to employ a high share (65 percent) of Egypt's population. 9 Second, unlike the market-driven first IR, Egypt's firms were owned and run by the state and not by the private sector. Third, and perhaps most importantly, Egypt did not achieve "Modern Economic Growth" (MEG) during the nineteenth century. 10 With this counter-argument in mind, is it precise to characterize Egypt's program as industrialization? We argue that the answer is yes. First, the share of the population that was employed in the primary sector in Continental Europe during the first IR was not much lower than in Egypt,11 suggesting that there was a limited employment shift to the secondary sector during this period.12 Second, like the Egyptian program, the twentieth-century industrialization programs of China, the Soviet Union, and Eastern Europe were all state-run.
|
2018
|
[
"shs"
] |
[
"1002422",
"1002422"
] |
[
"1356605"
] |
04444147
|
The first question triggered a long debate among historians. M. Fahmy (1954, 73-6) argues that state industrialization led to the abolition of restrictions on labor that existed under the guild system, and thus to the complete liberation of the labor market. Baer (1964, 127-49) criticizes this argument and refers to the strong qualitative evidence that shows that the guild system survived until the early twentieth century, and that the state manufactories did not give the final blow to the medieval guilds, although it might have indeed hurt specific guilds. In support of Baer's thesis, Owen (2002, 76) points out that even textile workers who were allegedly hurt the most did survive in the aftermath of the closure of many of the manufactories. In the same direction, Marsot (1984, 181-3) states that the guild system remained largely unaffected except in the occupations that were in direct competition with the manufactories, although she also notices that the factory system opened possibilities for children to work and to be promoted "by training rather than by heredity" in contrast to the guild rules. Nonetheless, a slightly different viewpoint is provided by Ghazaleh (1999, 116-9), who criticizes Baer's thesis on the grounds that although the guilds did indeed survive until the early twentieth century they were largely weakened by the manufacturing program and gradually came under state control. In response to Baer's claim that the artisanal guilds, which were affected the most by the program, did not constitute more than one third of the total size of guild membership, Ghazaleh points to the difficulty of estimating the total number of guild members let alone their breakdown. The census samples can provide useful insights into answering this question. 
First, by having the occupational title recorded for every male, along with the name of the work establishment if the person is employed by the government, one is able to estimate the percentage of labor force that was employed in the state manufactories (henceforth, modern sector) in 1848 under Muhammad Ali and twenty years later in 1868 under a second wave of state industrialization that focused on transportation. Second, equally importantly, one is able to estimate the size and breakdown of the traditional guilds outside the manufactories (henceforth, traditional sector).
|
2013
|
[
"shs"
] |
[
"1002422",
"506116"
] |
[
"1356605"
] |
01247599
|
Typicality, graphical representation, and packaging were influential factors in consumers' pleasantness and sensory perceptions. These factors also changed the perceived taste intensity of the product. Graphic representation of sensory components influenced product evaluations and acceptability, which suggests new avenues for segmentation and positioning in the food industry. In practice, graphic representations may be particularly useful in conveying atypical taste combinations, which increases the acceptability of potential line extensions and encourages development of concepts directly linked to flavoring. Limitations and future research
This research has several limitations that could open the door to further research. First, the studies did not account for several variables that could influence beliefs and evaluations of products, such as the brand name or typeface used on the label, as well as mediating variables such as perceived authenticity. The typography could be a particularly interesting variable to manipulate, in that certain fonts might be more appropriate for innovations, depending on the product category. The use of certain typographies likely creates greater congruence with an innovative connotation, as well as increases the significance of the social representation of the product for the consumer. Second, Mueller, Lockshin, and Louviere (2009) recommend caution when using direct measures with visual packaging attributes, because their comparison to indirect discrete choice experiments finds a greater impact in discrete choice experiments, as well as significant preference heterogeneity. Further research could therefore replicate the studies reported herein using this measurement method.
|
2016
|
[
"shs"
] |
[
"456031",
"74911",
"355894"
] |
[
"1643",
"183602"
] |
00773586
|
Precedence constraint means that each subtask can start its execution only when all of its predecessors have finished theirs. If there is an edge from subtask τ i,u to τ i,v , then τ i,u is a predecessor of τ i,v , and τ i,v has to wait for τ i,u to finish its execution before it can start its own. Each subtask in the graph may have multiple predecessors, and multiple successors as well, but each graph has a single source and a single sink vertex. In this work, we study the global scheduling of n synchronous periodic parallel real-time graphs with implicit deadlines on a system of m identical processors. A task set is denoted Γ = {τ 1 , τ 2 , ..., τ n }, where each graph has a period equal to its deadline. Schedulability is studied over the hyperperiod of the task set. A. Notation
Definition 1: The critical path (Qamhieh, "A Parallelizing Algorithm for Real-Time Tasks of Directed Acyclic Graphs Model") of a graph τ i is the longest path in the graph through its subtasks when respecting their dependencies: CP i = Σ_{j ∈ critical subtasks} c i,j
The critical path of the graph τ 1 from Figure 1 is
(τ 1,1 , τ 1,2 , τ 1,6 ) and CP 1 = 8. Definition 2: The worst-case execution time C i of a graph τ i is the total execution time of all the subtasks in the graph when executed sequentially.
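Definition 1 can be computed with a standard longest-path pass over the DAG in topological order. The sketch below is a minimal illustration, not the paper's implementation; the subtask names, WCETs, and edges are assumptions chosen so that the critical path mirrors the excerpt's example (τ 1,1 , τ 1,2 , τ 1,6 ) with CP 1 = 8.

```python
from collections import defaultdict, deque

def critical_path_length(wcet, edges):
    """Longest path (sum of subtask WCETs) through a precedence DAG.

    wcet  : dict mapping subtask id -> worst-case execution time c_{i,j}
    edges : list of (u, v) pairs, meaning u must finish before v starts
    """
    succ = defaultdict(list)
    indeg = {v: 0 for v in wcet}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1

    # dist[v] = length of the longest path ending at v, filled in
    # topological order starting from the source vertices.
    dist = {v: wcet[v] for v in wcet}
    queue = deque(v for v in wcet if indeg[v] == 0)
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            dist[v] = max(dist[v], dist[u] + wcet[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(dist.values())

# Hypothetical graph echoing the excerpt's example: the longest chain
# t11 -> t12 -> t16 has WCETs 2 + 3 + 3 = 8.
wcet = {"t11": 2, "t12": 3, "t13": 1, "t16": 3}
edges = [("t11", "t12"), ("t11", "t13"), ("t12", "t16"), ("t13", "t16")]
print(critical_path_length(wcet, edges))  # -> 8
```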
|
2012
|
[
"info"
] |
[
"3210",
"3210",
"2445"
] |
[
"920263",
"1337",
"18827"
] |
01090627
|
Knowing that the threads of segment Si,j have equal WCETs denoted by ei,j, which is equivalent to the shortest sequential execution length of the segment, we can identify how many entire threads are added to the master thread. Let fi,j denote the number of threads from segment Si,j to be added to the master thread:
fi,j = fi * ei,j * (mi,j - 1) / ei,j = fi * (mi,j - 1) (3)
According to this, each segment Si,j adds ⌊fi,j⌋ entire threads and a fraction of a thread of length (fi,j - ⌊fi,j⌋) * ei,j to the master thread. As a result, the slack Sli of task τ ′ i is filled completely and a fully-stretched master thread of τ ′ i with a utilization Ui^master equal to 1 is generated. We conclude that each segment Si,j adds in total (1 + fi,j) threads to the master thread (including the critical thread), while the remaining threads of the segment execute in parallel with the master thread. Hence, each segment Si,j has an intermediate deadline Di,j calculated as follows:
Di,j = (1 + fi,j) * ei,j (4)
From the definition of the MTS model, segments of a task τ ′ i execute sequentially and when one segment completes its execution, its successor starts its own. Hence, at any time t ≥ 0, there is only one active segment from each task τ ′ i . According to this, we can define an intermediate offset Oi,j for each segment Si,j ∈ τ ′ i based on the intermediate deadlines of the segments, where:
∀Si,j : j > 1 → Oi,j = Σ_{k=1}^{j-1} Di,k
and Oi,1 = 0 (since τi has no offset). After applying the stretching algorithm, a segment Si,j of τ ′ i comprises:
• a thread τi,j^master, which is part of the master thread τi^master of τ ′ i , with a WCET of Di,j and a deadline of Di,j;
• (mi,j - ⌊fi,j⌋ - 2) parallel constrained-deadline threads, each with a WCET of ei,j and a deadline of Di,j.
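Equations (3) and (4) and the offset recurrence can be sketched as a short per-segment loop. This is an illustrative sketch under the excerpt's notation, not the paper's code: the stretching factor fi and the (ei,j, mi,j) values below are placeholder inputs, and the function simply walks the segments in order, accumulating offsets from the intermediate deadlines.

```python
def stretch_segments(f_i, segments):
    """Per-segment quantities of a stretched task, following Eqs. (3)-(4).

    f_i      : stretching factor of the task (assumed given)
    segments : list of (e_ij, m_ij) pairs -- thread WCET and thread
               count of each segment S_{i,j}, in execution order
    Returns a list of (f_ij, D_ij, O_ij) triples.
    """
    result = []
    offset = 0.0  # O_{i,1} = 0: the task itself has no offset
    for e_ij, m_ij in segments:
        f_ij = f_i * (m_ij - 1)    # Eq. (3): threads added to the master thread
        d_ij = (1 + f_ij) * e_ij   # Eq. (4): intermediate deadline of S_{i,j}
        result.append((f_ij, d_ij, offset))
        offset += d_ij             # O_{i,j} = sum of D_{i,k} for k < j
    return result

# Placeholder example: two segments with WCETs 2 and 4 and
# thread counts 3 and 2, stretched with factor f_i = 0.5.
for f_ij, d_ij, o_ij in stretch_segments(0.5, [(2, 3), (4, 2)]):
    print(f_ij, d_ij, o_ij)
```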
|
2014
|
[
"info"
] |
[
"3210",
"26324",
"3210"
] |
[
"920263",
"18827",
"1337"
] |
01660175
|
Figure 3: Code example for the hierarchical construction of graphical and audio rendering components as well as mouse interactions. The example features the looped playback of an audio file with a running cursor that the user can move to arbitrarily control the playback.
Figure 4: Overview of the hierarchical relations between the main classes of the user interface API: provided shapes and their possible alignment along a shared Timeline through the Track.
Figure 5: Code example for the construction of two Track objects and their association to a Timeline. The example emphasizes how the Track objects are inserted into the HTML document through their relation to a given DOM element.
Figure 6: A pseudo-code example for the DOM structure created by a Track when rendered into the DOM, and how the visibleWidth and offset attributes of the TimelineTimeContext are used by a Track to maintain its DOM structure.
Figure 7: A pseudo-code example for the DOM structure of a Layer when inserted and rendered into a Track, and how the start, duration and offset attributes of the LayerTimeContext bound to the Layer are used to maintain its DOM structure.
Figure 8: Propagation of an event emitted by an EventSource to the current state of a Timeline. Different states extending the BaseState abstract class can modify programmatically different aspects of the visualization.
|
2016
|
[
"info"
] |
[
"259427",
"503036",
"503036",
"410627"
] |
[
"176787",
"6764"
] |