Are there bilingual advantages on nonlinguistic interference tasks? Implications for the plasticity of executive control processes
Abstract
It has been proposed that the unique need for early bilinguals to manage multiple languages while their executive control mechanisms are developing might result in long-term cognitive advantages on inhibitory control processes that generalize beyond the language domain. We review the empirical data from the literature on nonlinguistic interference tasks to assess the validity of this proposed bilingual inhibitory control advantage. Our review of these findings reveals that the bilingual advantage on conflict resolution, which by hypothesis is mediated by inhibitory control, is sporadic at best, and in some cases conspicuously absent. A robust finding from this review is that bilinguals typically outperform monolinguals on both compatible and incompatible trials, often by similar magnitudes.
Keywords: Bilingual advantage · Inhibitory control · Non-linguistic interference task · Cognitive control · Executive processing
A hallmark of the central nervous system is its tremendous capacity for change as a product of experience. This change is the signature of learning, a concept that will be defined broadly here as the measurable behavioral advantages that emerge as a function of training over time. An enduring question in the field of psychology has been the extent to which frequent exposure to or expertise in a particular task produces more general cognitive advantages. That is, when is learning strictly task specific, and when does it transfer to other, qualitatively similar tasks?
Some of the earliest models of learning (e.g., computational theories of learning) confined the skills required to successfully complete any one of a number of qualitatively distinct tasks to localized regions or pathways in the brain. Although one problem with these localizationist models (see Poggio & Bizzi, 2004, for a review) is that they severely restricted the generalizability of learning, considerable experimental research has continued to show, especially in the domains of perceptual learning and skill acquisition, evidence for task-specific learning (Fahle, 2004, 2005; Fiorentini & Berardi, 1980; Maehara & Goryo, 2003; Speelman & Kirsner, 1997). Saffell and Matthews (2003), for example, demonstrated that participants who trained extensively on a direction discrimination task failed to transfer this training to a speed discrimination task, and vice versa. Similarly, Ball, Berch, Helmers, Jobe, Leveck, Marsiske, et al. (2002) demonstrated that individuals who are trained in visual search show little transfer from search performance to memory or reasoning tests.
Although there are many examples of nontransfer between two arguably similar tasks, there are increasingly more studies that purport to demonstrate a link between engagement in certain activities and general cognitive advantages. One area of study that continues to demonstrate associations between lifelong activities and general, effective cognitive functioning relates to the cognitive enrichment hypothesis (Hebb, 1947, 1949). According to this hypothesis, a wide variety of specific lifestyle factors have pervasive beneficial effects on cognitive functioning well into old age (Fratiglioni, Paillard-Borg, & Winblad, 2004). High levels of physical activity throughout the life span, for example, are associated with protection against cognitive decline (Yaffe, Barnes, Nevitt, Lui, & Covinsky, 2001). Similarly, elevated participation in mentally stimulating activities (Wilson, Bennett, Bienias, Mendes de Leon, Morris, & Evans, 2005), social interaction (Bassuk, Glass, & Berkman, 1999), intellectually demanding employment (Potter, Helms, & Plassman, 2008), and video game play (e.g., Gopher, Weil, & Bareket, 1994; C. S. Green & Bavelier, 2003; see C. S. Green & Bavelier, 2008, for a review) all seem to be associated with a general improvement in cognitive outcomes (see Hertzog, Kramer, Wilson, & Lindenberger, 2009, for a review of some of these factors).
In a similar vein of thought, Bialystok, Craik, and Freedman (2007) have shown that fluency in two languages protects against symptoms of dementia into old age. In a cohort of 184 patients selected from a memory clinic in Toronto, comprising an approximately even distribution of bilinguals and monolinguals who were equated on various other social and cognitive factors, the onset of dementia occurred 4.1 years later in bilinguals. The impetus for this investigation was the exciting report by Bialystok, Craik, Klein, and Viswanathan (2004) of better conflict resolution by bilinguals than by monolinguals, particularly among older participants, in a nonlinguistic interference paradigm (the Simon task). The dramatic implication of this result is that the requirement imposed on bilinguals to manage two languages confers long-term cognitive benefits that extend beyond the sphere of language. The empirical findings leading to this conclusion, however, have been unreliable in children (Bialystok, Martin, & Viswanathan, 2005; Martin-Rhee & Bialystok, 2008) and young adults (Bialystok, 2006; Bialystok, Craik, Grady, Chau, Ishii, Gunji, & Pantev, 2005; Costa, Hernández, Costa-Faidella, & Sebastián-Gallés, 2009) and have been understudied in older age groups. The purpose of the present review is to examine the extent to which a bilingual advantage is present on tasks that require the ignoring of irrelevant, nonlinguistic information.
Regulation of the language system
Much of the work on bilingual advantages in conflict resolution that has developed in the twenty-first century has been stimulated by D. W. Green’s (1998) inhibitory control theory. D. W. Green proposed that an inhibitory control mechanism mediates the suppression of the task-irrelevant language in bilinguals. According to this model, there is parallel activation, across languages, of the lexical items associated with a particular concept. The assumption, then, is that a particular experience or thought activates semantically linked units in both languages. In order to retrieve the desired word, one of these lexical candidates (often called “lemmas”) needs to be inhibited. The model hypothesizes a supervisory attentional system (SAS) that responds reactively (via inhibition) in a manner directly proportional to the degree of parallel activation elicited by a particular experience. That is, if an irrelevant language is strongly activated, the amount of inhibition generated by the SAS will increase proportionally in order to suppress the irrelevant information. The SAS therefore allows for the successful retrieval of the relevant semantic unit for speech or language by using inhibition to resolve the conflict between two simultaneously activated semantic units.
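To make the proportionality assumption concrete, here is a minimal sketch of the SAS’s reactive inhibition. The function names, parameter values, and the simple winner-take-all selection rule are our illustrative assumptions, not part of Green’s (1998) formal model.

```python
# A minimal sketch of the reactive inhibition assumption: the SAS suppresses
# the task-irrelevant lemma in direct proportion to how strongly that lemma
# has been activated in parallel. All names and values are illustrative.

def sas_inhibition(irrelevant_activation: float, gain: float = 1.0) -> float:
    """Inhibition applied by the SAS, proportional to parallel activation."""
    return gain * irrelevant_activation

def select_lemma(relevant_activation: float, irrelevant_activation: float) -> str:
    """Resolve competition between two parallel-activated lemmas."""
    suppressed = irrelevant_activation - sas_inhibition(irrelevant_activation)
    return "relevant" if relevant_activation > suppressed else "irrelevant"

# Even when the irrelevant lemma is the more strongly activated of the two,
# it attracts proportionally strong inhibition, so retrieval still succeeds.
print(select_lemma(relevant_activation=0.6, irrelevant_activation=0.9))  # relevant
```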
The assumption that competition may arise between two semantic units owing to parallel activation has been validated to some extent by empirical research. Converging data show that bilinguals are slower on picture-naming tasks (Gollan, Montoya, Fennema-Notestine, & Morris, 2005), produce fewer words in verbal fluency tasks (Rosselli, Ardila, Araujo, Weekes, Caracciolo, Padilla, & Ostrosky-Solis, 2000), perform worse on lexical decision tasks (Ransdell & Fischler, 1987), and experience much more difficulty with lexical access, despite sometimes similar receptive vocabulary scores (Gollan & Acenas, 2004; Yan & Nicoladis, 2009; see Bialystok, 2009a, for a review). Importantly, what might unite all of these findings is the idea that a second, task-irrelevant language interferes with the production of a relevant linguistic response.
Moreover, there is an asymmetrical cost of switching between a dominant language (L1) and a nondominant language (L2) that is consistent with D. W. Green’s (1998) reactive inhibition assumption. Meuter and Allport (1999), for example, showed that bilinguals are slower to name a digit in L1 when the preceding digit was named in L2 than they are to name a digit in L2 when the preceding digit was named in L1. This asymmetry has been taken to indicate that more inhibition is required to suppress the dominant language and that this inhibition is more persistent than the inhibition applied to L2. It has also since been shown, despite some earlier controversy (e.g., Finkbeiner, Almeida, Janssen, & Caramazza, 2006), that this inhibition operates above and beyond any inhibition that may have arisen from the cue repetition characteristics of a cued language-switching paradigm or from any response set that may have emerged for a specific pattern of stimuli (Philipp & Koch, 2009). Thus, there is compelling evidence that the production of one language in lieu of the other engages inhibitory processes and that, in line with D. W. Green’s assumption, the inhibition required to suppress L1 is stronger than that required to suppress L2.
A second assumption of this model, stemming from the first, is that the mechanism that resolves conflict between two simultaneously activated linguistic representations is not necessarily language specific. That is, there may be a common brain mechanism that mediates many instances of cognitive conflict. This is a possibility if one hypothesizes an executive control system, possibly located in the frontal lobes (Goldman-Rakic, 1996), that has widespread inhibitory processing capacities throughout the central nervous system (e.g., Miyake, Friedman, Emerson, Witzki, Howerter, & Wagner, 2000).
Are the inhibitory processes involved in language specific to language tasks?
Early evidence revealed that young bilinguals tend to outperform monolinguals on tasks requiring the suppression of irrelevant information that had at one time been relevant. This has been shown in the dimensional change card sort (Bialystok, 1999) and in tasks for which there is a large amount of to-be-ignored irrelevant input, as in detecting grammatical errors while ignoring irrelevant and anomalous semantic content (Bialystok, 1988). These bilingual advantages might be expected if the same inhibitory control mechanism were used for all tasks involving conflict resolution. In this case, the routine need of bilinguals to suppress irrelevant lemmas would fine-tune this central inhibitory control mechanism. It is conceivable, however, that these particular tasks engage a language-specific inhibitory mechanism that allows for improved accuracy. In the context of detecting grammatical errors, it is relatively self-evident that this type of processing might engage language-specific mechanisms. In the dimensional change card sort, although this is perhaps less obvious, the presentation of geometric shapes (i.e., a square or a circle) colored either red or blue might activate well-developed inhibitory control mechanisms for language. The improved ability of bilinguals to switch from sorting on one dimension to another may have more to do with coding the physical properties of the stimuli linguistically, thereby exploiting the well-developed inhibitory processes of a language control system, than with a more efficient SAS owing to bilingualism. More recently, Bialystok, Craik, and Luk (2008) found that bilinguals showed a smaller Stroop effect than did monolinguals, an advantage that was probably due to their superior ability to eliminate the influence of the irrelevant word. Demonstrating that bilingualism confers a general inhibitory control advantage, however, requires tasks that are not so obviously language driven as the Stroop task.
One such paradigm, which has only recently been used to explore bilingual-versus-monolingual differences, is the task-switching paradigm (Garbin, Sanjuan, Forn, Bustamante, Rodriguez-Pujadas, Belloch et al., 2010; Prior & MacWhinney, 2010). So long as neither the switching nor the tasks are linguistically mediated, this paradigm would seem to have face validity for this purpose. However, when testing the idea that developmentally early and frequent switching between two languages causes a generalized improvement in inhibitory control, we believe that neither language (content) nor switching (mental operation) should be involved when assessing whether the advantage is “general.” We will address these studies in a more detailed theoretical discussion later on (see the Task Switching, Language Switching, and Neurocognitive Mechanisms section), but—primarily for this reason, and also because there are, as yet, very few studies of nonlinguistic task switching—such studies are not included in our empirical review.
Fig. 1 An illustration of the Simon, spatial Stroop, and flanker interference tasks, respectively. Stimulus–response (S–R: Simon and spatial Stroop tasks) and stimulus–stimulus (S–S: flanker task, although arguably also representing an instance of S–R compatibility; see Egner, 2007) compatibility conditions are segregated by the midline. For the Simon and spatial Stroop tasks, when the task-irrelevant location of the task-relevant stimulus dimension (a to-be-discriminated color or arrow, respectively) corresponds with the location of the response, there is S–R compatibility. When there is noncorrespondence, there is S–R incompatibility. For the flanker task, when the task-irrelevant arrows are congruent with the direction of the central target arrow, there is S–S compatibility. When the task-irrelevant arrows are incongruent with the direction of the central target arrow, there is S–S incompatibility
In the Simon task (see Fig. 1, top), participants make a nonspatial discrimination (e.g., of color) of a target whose task-irrelevant location varies from trial to trial. The difference in RTs between trials on which the response and target onset positions are compatible (congruent trials) and trials on which they are incompatible (incongruent trials) is known as the “Simon effect.” While this type of task, and variations thereof, could conceivably engage language-specific mechanisms to some extent, the Simon effect, which has been found in nonlinguistic species (Courtière, Hardouin, Burle, Vidal, & Hasbroucq, 2007; Urcuioli, Vu, & Proctor, 2005), is generally considered to be nonlinguistic.
A second task, closely resembling the Simon task but sometimes considered to be more difficult, is known as the “spatial Stroop task” or, occasionally, the “Simon arrow task” (e.g., Bialystok, 2006). In this task (see Fig. 1, middle), the target attribute, rather than being purely nonspatial (e.g., color), is a leftward- or rightward-pointing arrow whose direction must be discriminated. The task-relevant attribute (the arrow’s direction) is thus itself spatial, and it will be either congruent or incongruent with the task-irrelevant location of the arrow.
A third approach to testing this hypothesis has been to use the flanker task (Eriksen & Eriksen, 1974). This task (see Fig. 1, bottom), which has been embedded in the Attentional Network Test (ANT; Fan, McCandliss, Sommer, Raz, & Posner, 2002), has been used to examine inhibitory control processes in bilinguals (e.g., Carlson & Meltzoff, 2008; Costa et al., 2009; Costa, Hernández, & Sebastián-Gallés, 2008). In the flanker component of the ANT, a central target arrow points either left or right. The target arrow may be flanked by two arrows in close spatial proximity on each side. These flanking arrows point in either the same direction as the target arrow (congruent trials) or the opposite direction (incongruent trials). The difference in RTs between congruent and incongruent trials, which we will refer to as the “flanker effect,” is, much like the Simon effect, taken to index the ability to suppress irrelevant information (but see Kornblum, Hasbroucq, & Osman, 1990, for theoretical dissociations between tasks, or Egner, 2008, for empirical dissociations). The flanker and Simon effects will be collectively referred to as “interference effects.”
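As a concrete illustration of how these interference effects are computed from trial-level data, consider the following minimal sketch; the field names and RT values are hypothetical.

```python
# Computing an interference (Simon or flanker) effect: mean RT on
# incongruent trials minus mean RT on congruent trials. Toy data only.
from statistics import mean

trials = [
    {"condition": "congruent", "rt_ms": 512},
    {"condition": "congruent", "rt_ms": 498},
    {"condition": "incongruent", "rt_ms": 560},
    {"condition": "incongruent", "rt_ms": 575},
]

def interference_effect(trials) -> float:
    """Incongruent mean RT minus congruent mean RT, in milliseconds."""
    congruent = mean(t["rt_ms"] for t in trials if t["condition"] == "congruent")
    incongruent = mean(t["rt_ms"] for t in trials if t["condition"] == "incongruent")
    return incongruent - congruent

print(interference_effect(trials))  # 62.5 ms in this toy example
```

A smaller value on this measure for one group is what is meant throughout this review by an “advantage on the interference effect”; group differences in the congruent and incongruent means themselves are what we will call global RT effects.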
The hypothesis at issue, which will be referred to as the bilingual inhibitory control advantage (BICA) hypothesis, runs as follows: Frequent use of the inhibitory processes involved in language selection will result in more efficient inhibitory processes in bilinguals, which will confer general advantages on nonlinguistic interference tasks (i.e., those requiring conflict resolution). These advantages will be reflected in reduced interference effects in bilinguals as compared to monolinguals. In other words, bilinguals should show an advantage over monolinguals specifically on trials with response conflict.
A critical review of the literature that has used these tasks to answer this question is the principal aim of this article.
The studies1
Table 1 Key information about all of the experiments published to date that address the bilingual executive control advantage hypothesis. Bold font denotes the tasks that are illustrated in the empirical review
Interference effects: overview
Fig. 2 a The interference effect for monolinguals versus the interference effect for bilinguals, for each condition in all experiments. All values above the diagonal show an advantage for bilinguals; conversely, all values below the diagonal show an advantage for monolinguals. (The data from Costa et al. (2008) are collapsed across all networks measured by the ANT.) b Differences between bilinguals and monolinguals on the interference effect as a function of age. A positive value is indicative of an advantage for bilinguals on the interference effect (i.e., bilinguals encounter less conflict). Data from the studies presented in Table 1 are included in both panels
It is apparent from Fig. 2a that few experiments have reported dramatically large interference effects and bilingual advantages, with the remaining data showing much smaller interference effects and, overall, little or no bilingual advantage. It is apparent from Fig. 2b, first, that the magnitudes of the interference effects for monolinguals and bilinguals are very similar in young adults and children. The absence of a bilingual advantage in these age groups is simply inconsistent with the proposal that bilingualism has a general positive effect on inhibitory control processes (i.e., BICA). Second, the differences in interference effects between language groups become markedly more pronounced in middle-aged and old-aged participants. Importantly, for these age groups, the bilingual advantage appears to be robust. Although there is evidence to suggest that the magnitude of the Simon effect increases as a function of age, the standard Simon effect in older adults seems to peak at around 70 ms (Kubo-Kawai & Kawai, 2010; Van der Lubbe & Verleger, 2002). Thus, it is very puzzling that the Simon effects for the monolingual groups, in particular, were sometimes in the 1,000–1,800 ms range (as can be seen in Fig. 2a; Bialystok et al., 2004; Bialystok, Martin, & Viswanathan, 2005).
Overall (global) RT effects: overview
Fig. 3 a Overall bilingual advantage on response times (RTs), collapsed across congruency, across all studies. b Bilingual advantage on RTs across all studies, for congruent trials only. c Bilingual advantage on RTs across all studies, for incongruent trials only. Note that very few of these studies were identical in methodology (see the text for details)
Recall that the inhibitory control model predicts superior performance for bilinguals on incongruent trials specifically. It is here that broadly defined inhibitory processes, or something akin to an SAS, might be more efficient in suppressing task-irrelevant input; it is hard to imagine how more efficient inhibitory processes would confer a benefit on congruent trials. That the bilingual advantage on the interference effect emerges almost invariably in the presence of a global advantage casts doubt on the role of a centrally based inhibitory process developed to resolve all instances of possible conflict (see Costa et al., 2009, for similar theoretical assertions). Yet that bilinguals outperform monolinguals in overall performance, so long as the task entails some level of conflict, strongly suggests that there is a cognitive advantage related to second-language learning (Bialystok, 2006, 2009a; Bialystok & Craik, 2010; Costa et al., 2009). As we will see later, and to reinforce an earlier point, current theoretical approaches appear to be at a loss to explain this robust phenomenon. We will return to this issue and offer an explicit proposal to explain the bilingual global advantage, after a more detailed examination of the empirical results.
Interference and global effects across the lifespan
Performance of elderly and middle-aged monolinguals and bilinguals on interference tasks
Fig. 4 Left panel: Magnitude of the bilingual advantage on the interference effect for elderly adults. Right panel: Magnitude of the global RT advantage from the same studies
Fig. 5 Left panel: Magnitude of the bilingual advantage on the interference effect for middle-aged adults. Right panel: Magnitude of the global RT advantage from the same studies
For five reasons, some potentially relevant aspects of the experimental approaches of the original studies (Bialystok et al., 2004) will be outlined in detail. First, these studies generated a tremendous amount of interest in the possibility that bilinguals have more efficient inhibitory control processes than monolinguals. As such, the conditions that are fruitful for observing these results ought to be better known, so as to encourage follow-up research. Second, as already noted (in the Interference effects: overview section), at least one aspect of the results from these studies is anomalous, in that the magnitudes of the interference effects are extraordinarily large. Third, interference effect differences between language groups are typically reported only in middle- and older-aged groups. Fourth, these empirical data have not been replicated, or have been only partially replicated under a very restricted set of conditions (e.g., Bialystok et al., 2008), and it is thus important to identify why there are empirical differences. Finally, the interesting explanation of these nonreplications, most of which have been conducted with younger participants, is that the bilingual advantage on inhibitory control becomes more apparent as inhibitory control processes decline with increasing age. A less interesting explanation is that one or more methodological features were present in the original studies but not in subsequent investigations. These two possibilities are not necessarily mutually exclusive.
Using the Simon task, Bialystok et al. (2004) published the first study to evaluate general (nonlinguistic) inhibitory control processes in bilinguals as compared to monolinguals. In a series of three experiments, it was reported that bilinguals showed a smaller Simon effect than monolinguals, and this was interpreted as providing strong support for the BICA hypothesis by showing that older bilinguals had superior inhibitory control processes, perhaps reflecting a greater immunity to the ubiquitous cognitive decline with normal aging that is seen in this important executive control function.
In Experiment 1, Bialystok et al. (2004) administered a standard Simon task to two language groups (monolingual and bilingual) comprising 20 participants, each of which was decomposed into subgroups on the basis of age (middle-aged participants, ranging from 30 to 54 years, and elderly participants, ranging from 60 to 88 years). For each participant in the monolingual group, there was a gender-matched participant of the same age in the bilingual group. All bilinguals had begun to learn a second language at the age of 6 years and were, for the most part, considered to be equally proficient in both languages (as indexed by a language background questionnaire). The monolinguals were all native Canadian residents, while the bilinguals were all native residents of Southern India. This confound raises some concern about potential cultural differences, or the possibility of one or more uncontrolled demographic factors that may have influenced the outcome of this study (i.e., Bialystok, 2001; Morton & Harper, 2007; see the section Hidden Factors: The Controversy Surrounding the Implementation of Appropriate Demographic Controls, below). Nevertheless, the participants were considered to be of a similar educational background (given that they had all obtained bachelor’s degrees); all participants were selected from middle-class socioeconomic environments; and both groups performed similarly on Raven’s Standard Progressive Matrices, an index of general reasoning abilities and intelligence.
The first experimental design (Exp. 1) for the Simon task consisted of only 28 experimental trials, for which there was an even distribution of congruent and incongruent trials. This is, as the authors admit, an uncharacteristically small number of trials for a Simon task. Participants had to discriminate leftward- or rightward-presented squares on the basis of color (red or blue). The results revealed unusually large Simon effects. Middle-aged and old-aged monolinguals showed 535- and 1,713-ms Simon effects, respectively, whereas middle-aged and old-aged bilinguals showed 40- and 748-ms Simon effects, respectively. Clearly, middle-aged and old-aged bilinguals showed smaller Simon effects than monolinguals. Nevertheless, in general, bilinguals performed all aspects of the task (congruent and incongruent trials) more rapidly than monolinguals (e.g., middle-aged monolingual congruent RT = 770 ms, whereas middle-aged bilingual congruent RT = 497 ms; old-aged monolingual congruent RT = 1,437 ms, whereas old-aged bilingual congruent RT = 911 ms). The latter findings are not easily explained by BICA.
Four conditions were tested in separate blocks in Experiment 2. The blocks consisted of 24 trials, and there were two blocks in each condition, for a total of 192 trials. One condition was a control condition in which participants had to discriminate centrally presented targets on the basis of color. A second condition, similar to that in Experiment 1, consisted of a color discrimination task with peripherally presented targets (allowing for a measure of the Simon effect). Two other conditions were identical in all respects to the previously defined conditions, with the following exception: Instead of a two-stimulus/two-response discrimination task, four stimuli were mapped onto two responses, in an effort to increase the load of the stimulus–response mapping rules that the participants would have to hold in working memory. This manipulation assessed the possibility that bilinguals, instead of possessing a superior ability to ignore irrelevant input, simply enjoy better working memory ability: Keeping two colors rather than four colors in mind would theoretically impose less of a load on the working memory system, so one possibility was that the Simon effect would be even more pronounced in monolinguals when working memory demands were elevated. The conditions were introduced in a preset order and then reversed (allowing for 48 trials in each condition).
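To make the design arithmetic explicit, the sketch below reconstructs the block structure as described above; the condition labels are our shorthand, not Bialystok et al.’s (2004) terminology.

```python
# Experiment 2's block structure: 4 conditions x 2 blocks x 24 trials
# = 192 trials, i.e., 48 trials per condition once the preset block
# order has been run and then reversed. Labels are illustrative.
conditions = {
    "central/2-color": "control: central targets, one-to-one color-response mapping",
    "lateral/2-color": "Simon condition: peripheral targets",
    "central/4-color": "control with two-to-one color-to-response mapping",
    "lateral/4-color": "Simon condition with elevated working memory load",
}
TRIALS_PER_BLOCK = 24
BLOCKS_PER_CONDITION = 2  # preset order, then that order reversed

total_trials = len(conditions) * BLOCKS_PER_CONDITION * TRIALS_PER_BLOCK
trials_per_condition = BLOCKS_PER_CONDITION * TRIALS_PER_BLOCK
print(total_trials, trials_per_condition)  # 192 48
```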
Unlike in Experiment 1, four practice trials were provided before the two-choice discrimination conditions, and eight practice trials were provided before the four-choice discrimination conditions, to demonstrate the unique configuration of the stimuli. In these practice blocks, if an error was made, the trial was recycled until all trials were completed without error. The experiment included 94 participants [64 middle-aged adults (ranging from 30 to 58 years of age) and 30 older adults (ranging from 60 to 80 years of age)]. The groups were age- and gender-matched but differed socioculturally: The bilingual groups were composed of Cantonese–English residents of Hong Kong, Tamil–English residents of India, and French–English residents of Canada, whereas all monolingual (English-speaking) participants resided in Canada. Despite these eclectic cultural backgrounds, the mean scores on the Cattell Culture Fair Intelligence Test, a nonverbal test of general intelligence, were similar. Both language groups also scored similarly on measures of working memory span.
As in Experiment 1, bilinguals were advantaged on the Simon effect as compared to monolinguals. Perhaps because of the extra practice, the RT differences in the interference effects between the two groups were substantially smaller than in Experiment 1. In the condition that most closely resembled Experiment 1, the magnitude of the difference between middle-aged bilinguals and monolinguals was 116 ms, and the respective difference in the elderly group was 371 ms. The bilingual advantage in the elderly group was significantly greater than that in the younger group. However, subsequent experimentation (Exp. 3 in Bialystok et al., 2004) on middle-aged bilinguals and monolinguals from the same communities, while replicating the bilingual advantage early in practice, also demonstrated that this advantage diminished to nonsignificance as a function of practice. Furthermore, Experiment 2 revealed that the costs of increased working memory load were greater for monolinguals than for bilinguals on the univalent (central stimulus) color discrimination task. When there was a one-to-one mapping of colors to hands, monolinguals and bilinguals performed equivalently on RT. When, however, there was a two-to-one mapping of colors to hands, bilinguals outperformed monolinguals on RTs (by 460 ms), indicating that the global bilingual advantage might not be restricted to conflict resolution tasks, so long as the working memory load is elevated. Nevertheless, these seminal data, particularly from Experiment 2, are consistent with a role for superior inhibitory control processes in bilinguals relative to monolinguals (i.e., BICA). Subsequent investigation, however, has rendered this interpretation considerably more equivocal.
Two studies, in addition to Bialystok et al. (2004), have examined interference effect differences between bilinguals and monolinguals in these age groups (Bialystok et al., 2008; Emmorey et al., 2009) while using a large number of experimental trials and more extensive practice. The advantage of more trials is that this mitigates any between-group factors that relate to initial strategy recruitment or to learning how to perform the task successfully. Of course, bilingual advantages in such processes would be interesting, but they would not support the BICA hypothesis.
Emmorey et al. (2009) administered a flanker task in which the target arrow was positioned in the center or to the left or right of center.2 Middle-aged participants (mean age = 47.76 years) were instructed to indicate the direction in which the target arrow was pointing. The irrelevant arrows pointed in either the same direction as or the opposite direction from the target arrow. The 48 trials per block consisted of an even distribution of trials in which the flankers were either congruent or incongruent with the target arrow. Two blocks of trials were administered to three language groups [a bilingual group, a monolingual group, and a “bimodal” group (i.e., a group fluent in both English and American Sign Language)], and 12 practice trials with feedback were provided to each participant before a block of experimental trials. Education level was taken as an index of socioeconomic status (SES), and participants in all groups were statistically equivalent on this measure. Both the bilingual and bimodal groups had a lifetime of experience in both languages, although their ages of language acquisition varied, with most bilinguals having developed their second language early in childhood. Verbal reasoning abilities, age, and proficiency ratings (between the bilingual and monolingual groups) were balanced across groups.
Bilingual participants responded more rapidly on congruent and incongruent trials than did the other two groups, consistent with a bilingual executive processing advantage (BEPA). The difference in the flanker effects between the monolingual and bilingual groups was not significant (although the RTs of these two groups on congruent and incongruent trials were presented only in figure form, an extraction of these data revealed a negligible, ~4-ms advantage for monolinguals on the interference effect).
Bialystok et al. (2008) conducted an investigation using the spatial Stroop task (see Fig. 1, middle). There were 48 participants, half bilingual and half monolingual, in the elderly group (mean age = 68 years; the data from the young participants are discussed in the Performance of monolingual and bilingual young adults on interference tasks section). While the language groups were equated on measures of working memory ability, a monolingual advantage was found on several verbal tasks. The bilingual group comprised participants with heterogeneous linguistic backgrounds, with a wide range of second languages. In the elderly group, 20 bilinguals were immigrants, all except 4 of whom had arrived in Canada before the age of 12. Years of formal education were also compared within age groups; there were no statistical differences within these groups. Bialystok et al. administered 192 total trials (96 congruent and 96 incongruent) in the spatial Stroop task, separated by two blocks of 96 trials in two other conditions.3 They demonstrated that bilinguals outperformed monolinguals on the interference effect but not on overall RTs (see Fig. 4, right panel). Closer inspection of the composite scores for the interference effect, however, reveals a puzzling pattern of results. Bilinguals performed, on average, 10 ms faster on incongruent trials than monolinguals (bilinguals = 741 ms, monolinguals = 751 ms). However, the mean difference between monolinguals and bilinguals on congruent trials was a 50-ms monolingual advantage (monolinguals = 691 ms, bilinguals = 741 ms). Collectively, these data show no global advantage. Nevertheless, the surprising tendency for monolinguals to respond about 50 ms faster on congruent trials and 10 ms slower on incongruent trials, as compared to bilinguals, results in a statistically significant advantage for bilinguals on the interference effect (a 60-ms advantage). Yet, seemingly against the predictions of BICA, this advantage cannot be attributed to an ability of bilinguals to outperform monolinguals on incongruent trials. Rather, it appears to be attributable to the exceptional finding that bilinguals responded, on average, 50 ms slower than monolinguals on congruent trials.4
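A worked version of this arithmetic, using the rounded mean RTs reported above, makes the point plain (an illustrative calculation only):

```python
# The elderly-group pattern in Bialystok et al. (2008): the significant
# 60-ms bilingual advantage on the interference effect arises on the
# congruent side, not from faster bilingual responding on incongruent trials.
mono = {"congruent": 691, "incongruent": 751}
bili = {"congruent": 741, "incongruent": 741}

mono_ie = mono["incongruent"] - mono["congruent"]  # 60-ms interference effect
bili_ie = bili["incongruent"] - bili["congruent"]  # 0-ms interference effect
bilingual_advantage = mono_ie - bili_ie            # 60 ms

print(mono_ie, bili_ie, bilingual_advantage)  # 60 0 60
```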
Performance of monolingual and bilingual young adults on interference tasks
Fig. 6 a Left panel: Magnitude of the bilingual advantage on the interference effect for young adults (mean age = 20–30 years). (The data from Costa et al. (2008) are from a single study in which the bilingual advantage was plotted on overall RTs and on the interference effect for all networks (orienting, alerting, and executive) in the ANT, as well as on the no-cue condition, to illustrate the results on all ANT measures. The Bialystok, Craik, et al. (2005) study compared two bilingual groups (one Cantonese and one French) against a monolingual control group; consequently, both the Cantonese and French groups are represented in this figure.) b Left panel: Magnitude of the bilingual advantage on the interference effect for young adults (mean age = 20–30 years) on unconventional implementations of the spatial Stroop and Simon tasks and the ANT. The studies from which the data were derived appear between the two panels, and the letter identifier corresponds to the study and task information in Table 1. Although varieties of the ANT that employ neutral trials (e.g., Costa et al., 2008) or no neutral trials but a 1:1 ratio of congruent to incongruent trials (Costa et al., 2009) do not exactly constitute unconventional implementations, the relative proportions of incongruent trials afford an opportunity to observe any effect that conflict trials might have on modulating either interference or the global RT effect. All bilinguals and monolinguals from Bialystok (2006) are treated collectively, irrespective of video game history. Right panel: Magnitude of the global RT advantage (based on congruent RTs; see the text) from the same studies. Positive values, in both cases, indicate an advantage for bilinguals. In two cases (denoted by unfilled circles), a reverse interference effect was obtained (i.e., faster RTs on incongruent than on congruent trials in both language groups). In these cases, interpretation is difficult, but note that the same convention of subtracting the monolingual interference effect from the bilingual interference effect was used to obtain the bilingual advantage (i.e., positive values)
Noteworthy omissions from this figure are the data from Bialystok (2006) and selected data from Costa et al. (2009). These instances entailed unconventional uses of interference tasks, and they are therefore discussed and plotted separately (see Fig. 6b). Bialystok (2006) administered even distributions of congruent and incongruent trials on both the standard Simon and spatial Stroop tasks while directly manipulating the frequency of intertrial response switches (i.e., how often a stimulus change occurred that required a response different from that on the preceding trial). That is, for each task, there was a fixed order of trial presentation, with blocks containing either many intertrial response switches (28 of 40 trials) or fewer intertrial response switches (15 of 40 trials). The principal result was that bilinguals outperformed monolinguals only on global RTs, and only on the spatial Stroop task when there were many intertrial response switches (presumably, then, when task difficulty was highest).6 Thus, again, in no condition was there a significant bilingual advantage on the interference effect.
Costa et al. (2009) parametrically manipulated the proportion of congruent trials in blocks of the ANT without neutral trials. There were four conditions, each comprising three blocks of trials: an 8%-, a 92%-, a 75%-, and a 50%-congruent condition. When the extreme probability manipulations were used (i.e., 8% and 92%), a bilingual advantage was observed on neither the interference effect nor global RTs.7 We doubt the usefulness of the data from these extreme blocks, because it is difficult if not impossible to ascertain whether participants are ignoring the flankers or are attending to and strategically taking advantage of flankers whose direction predicts the correct response with 92% accuracy (when 8% of the trials are congruent, participants could achieve 92% accuracy by responding in the direction opposite to the flanking arrows; conversely, when 92% of the trials are congruent, the same level of accuracy could be achieved by responding in the direction of the flanking arrows). When the probability of congruent trials was 75%, a bilingual advantage on the interference effect appeared in the first block of trials but disappeared for the remaining two blocks, whereas a statistically significant global advantage was apparent.8 When the probability was 50%, there was no advantage for bilinguals on the interference effect, whereas a global advantage was apparent that was numerically greater than in the 75% condition.9
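The strategic concern with these extreme blocks reduces to simple arithmetic, sketched below; the function is ours, for illustration only.

```python
# Accuracy available to a participant who ignores the target entirely and
# responds only on the basis of the flankers. Illustrative calculation.
def flanker_strategy_accuracy(p_congruent: float, follow_flankers: bool) -> float:
    """Accuracy of responding from the flankers alone, never the target."""
    return p_congruent if follow_flankers else 1.0 - p_congruent

# In an 8%-congruent block, opposing the flankers yields 92% accuracy;
# in a 92%-congruent block, following them does the same.
print(flanker_strategy_accuracy(0.08, follow_flankers=False))  # 0.92
print(flanker_strategy_accuracy(0.92, follow_flankers=True))   # 0.92
```

In either extreme block, then, high accuracy is no evidence that the flankers were being suppressed at all, which is precisely why the data from these blocks are of doubtful value for measuring interference.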
In young adults, conventional and unconventional implementations of interference tasks alike revealed little evidence to suggest that bilinguals show superior inhibitory control relative to monolinguals. On the other hand, there was a remarkably robust advantage for bilinguals on global RTs. Of the seven studies that examined this effect via standard interference tasks (Fig. 6a), four revealed an overall RT advantage for bilinguals (on the flanker task, Costa et al., 2009, Exp. 2, and Costa et al., 2008; on the Simon task, Bialystok, Craik, et al., 2005; on the spatial Stroop task, Bialystok & DePape, 2009), with the remaining investigations showing numerical advantages for bilinguals on overall RT (Bialystok, Martin, & Viswanathan, 2005; Bialystok et al., 2008; Luk et al., 2010). Critically, in this age group, when a central arrow was presented alone (i.e., in cases where there was no apparent competition between automatically elicited task-irrelevant and task-relevant responses), bilinguals and monolinguals performed similarly (Bialystok et al., 2008; Bialystok, Martin, & Viswanathan, 2005), suggesting that either task difficulty or the introduction of response competition (Bialystok, 2009a; Bialystok & Craik, 2010), two ideas that are not necessarily mutually exclusive, leads to an overall RT advantage.
Performance of bilingual and monolingual children on interference tasks
Fig. 7 Left panel: Magnitude of the bilingual advantage on the interference effect for young children (mean age < 10 years). Right panel: Magnitude of the global RT advantage from the same studies
No studies investigating the Simon effect in young children have revealed significant differences in the magnitude of the effect between monolingual and bilingual groups (see Fig. 7, left panel; Bialystok, Martin, & Viswanathan, 2005; Martin-Rhee & Bialystok, 2008; Morton & Harper, 2007). Additionally, all experiments in this group comprised a maximum of 40 experimental trials, an exceptionally small number for research on the Simon effect, a choice that was explicitly or implicitly justified as ensuring that the task would sustain the children’s attention. In addition, each study controlled for its own, ultimately different, set of potentially confounding variables.
Whereas the bilingual advantage on the interference effect was conspicuously absent in this age group, the global advantage materialized strikingly often. Across two of the three investigations, comprising six experiments, the global RT advantage for bilinguals was observed in five of the six experiments (see Fig. 7, right panel; on the standard Simon task, Bialystok, Martin, & Viswanathan, 2005, Exps. 1 and 2, and Martin-Rhee & Bialystok, 2008, Exps. 1 and 2; on the spatial Stroop task, Martin-Rhee & Bialystok, 2008, Exp. 3). Importantly, this global RT advantage was usually not seen in the absence of response competition, so long as task demands were minimal (e.g., Martin-Rhee & Bialystok, 2008, for responses to centrally presented arrows).
Claiming that SES had been inadequately controlled in most previous studies, Morton and Harper (2007) directly controlled for it, producing the exceptional finding of neither an overall RT nor an interference effect advantage for bilinguals. Although their interpretation is somewhat controversial (see the next section for more information), Morton and Harper (2007) suggested that instantiating better controls over SES might have eliminated bilingual advantages.
One final note is required regarding the Carlson and Meltzoff (2008) findings. In this study, whose authors assiduously controlled for SES, a battery of tests (including the ANT) was administered to bilingual and monolingual children. This battery of nine tests included language-based executive tasks (e.g., Simon Says) and delayed-gratification tasks, which were collectively analyzed along with the ANT to produce Composite Executive Function scores. These scores on linguistic and nonlinguistic tasks were aggregated and used as an index of executive functioning ability. On the measure of accuracy in the ANT, there was no statistical difference between the language groups; in this literature, however, there is seldom a statistical difference between language groups on accuracy (e.g., Costa et al., 2009; Costa et al., 2008; Emmorey et al., 2009). Because the RT data were not analyzed, it is not possible to determine whether bilinguals enjoyed superior performance.
Hidden factors: the controversy surrounding the implementation of appropriate demographic controls
Having thus described the empirical differences between bilingual and monolingual language groups on nonlinguistic interference tasks, before the theoretical issues can be tackled in more detail, one major assumption of the present article must be fully disclosed. This review has operated under the assumption that demographic factors have been sufficiently controlled in the research programs guiding bilingual research on inhibitory control. It is well recognized that there is a multitude of factors, aside from early exposure to a bilingual environment, that might play a crucial role in shaping the information-processing (or, if you prefer, neurocognitive) systems responsible for behavior. When these factors are not well controlled, a primary concern is that some of them might contribute or lead directly to what would appear to be bilingual processing advantages, and indeed, concerns of this sort have permeated the bilingualism literature.
The constellation of social, economic and political circumstances of life have a large bearing on how children will develop both linguistically and cognitively. If bilingual children differ from each other in these dimensions, as they surely do, then they will also differ in the way that their bilingualism has interacted with the highly variable dimensions of their linguistic and cognitive development. Therefore, any averaging of relevant developmental indices across the conditions for becoming bilingual will be confounded with an array of hidden factors that crucially influence development. (Bialystok, 2001, p. 7)
Thus, at any given time, there will be considerable uncertainty as to the degree to which certain understudied or unknown factors are associated with the measures that are taken to gauge certain components of information processing. This uncertainty, however, can be allayed by determining which other factors are associated with the measures of interest in the investigation, and then by either balancing the two language groups on these factors or regressing out the variance due to them.
The most widespread, and arguably unanswered, criticism of the literature on bilingualism and executive function is an apparent failure to control sufficiently for SES. It is relatively clear that SES covaries with executive ability, with higher SES tending to be associated with better performance on measures of cognitive functioning (Mezzacappa, 2004). As such, it has been suggested that SES, rather than bilingualism, may account for the bilingual advantage (Mindt, Arentoft, Germano, D’Aquila, Scheiner, Pizzirusso, et al., 2008). These views are not unfounded, despite some objection to them (e.g., Bialystok, 2009b). Rarely is SES controlled for directly in this literature. Occasionally, the highest achieved level of formal education (e.g., Bialystok et al., 2004; Bialystok et al., 2008) or sampling from middle-class neighborhoods (e.g., Bialystok, 2009b; Emmorey et al., 2009) is taken as an index of homogeneity in SES, but these measures are all relatively indirect.
Furthermore, Morton and Harper (2007, 2009), recognizing that SES had been poorly controlled in this literature, attempted to replicate previous findings on the Simon effect in bilingual children. When this factor was controlled for, monolinguals showed a significant advantage on global RTs and a nonsignificant (approximately 70-ms) advantage on the Simon effect (see Fig. 7). Despite this result, it is somewhat difficult to fully endorse the implications that Morton and Harper (2007) drew from these findings. The most important reason is that they tested children, and it is relatively clear now that the bilingual advantage on interference effects is appreciably more elusive in young children (see the Performance of bilingual and monolingual children on interference tasks section). On the other hand, the near-ubiquitous bilingual advantage on global performance on tasks with interference was significantly reversed (to a monolingual advantage) in Morton and Harper’s (2007) study, which is difficult to reconcile with the idea that early switching between languages makes for a more efficient executive processing system in bilinguals.
A different challenge to these results was offered by Bialystok (2009b). She noted that the 6.5-year-old children in Morton and Harper’s (2007) study were approximately one and a half years older than the children tested by Bialystok, Martin, and Viswanathan (2005), who found a global bilingual advantage in their sample. This age difference, she argued, might have been sufficient to overcome the initial difficulties experienced by monolinguals. Rebutting this challenge, however, is the finding from Martin-Rhee and Bialystok (2008) of a bilingual advantage on global RTs with 8-year-old children. Bialystok’s (2009b) criticism of Morton and Harper (2007), then, would entail a very narrow window of development, with the global bilingual advantage being present at 5 years of age, reversing at 6.5 years of age, and reappearing at 8 years of age. Most importantly, however, and to stress the validity of the point made by Morton and Harper (2007) and Mindt et al. (2008), current investigations must ensure that SES is controlled for to a greater extent than has been typical in this literature.
One further concern in this literature is the extent to which other environmental factors, perhaps confounded with SES, affect global RT and interference effect differences between language groups. It is now known, for example, that high computer use (Bialystok, Craik, et al., 2005), video game play (Bialystok, 2006), and expertise in music (Bialystok & DePape, 2009) produce global RT advantages similar to those shown by bilinguals. At this point, there is scant evidence that these groups also experience a reduced interference effect as compared to controls (see also Bailey, West, & Anderson, 2010, who found no evidence that video game use reduces the Stroop effect).
The onus is now on current investigative work to ensure that these factors are not influencing experimental outcomes. A step in this direction has been taken by Costa et al. (2009). Here, video game play was balanced between language groups. Moreover, and to expand on this trend, the field would profit appreciably from the use of a comprehensive survey that assessed a host of life experiences that might be associated with executive control. In this way, it would be possible to rule out other environmental factors that might covary with bilingualism, and therefore possibly confound experimental outcomes.
Although this issue remains largely unsettled, the foregoing empirical results and the conditions under which they have been obtained will be considered in the remaining sections as if bilingualism, and not a combination of possibly uncontrolled demographic factors, is responsible. As we have shown, and will summarize in the next sections, the bilingual advantage on inhibitory control is a somewhat sporadic phenomenon, in contrast to the more robust global RT advantage.
To whatever extent the reader is concerned with the possibility that these bilingual advantages are caused by inadequately controlled demographic factors (the main one being SES), rather than by bilingualism per se, these advantages require some explanation, and we believe that the mechanisms discussed later are plausible under either causal attribution. Regardless, the extent to which bilingualism is the complete, partial, or apparent cause of these data is an area that warrants further investigative work, and we urge future investigators of the BICA/BEPA hypotheses to be assiduous in their efforts to match monolinguals and bilinguals on plausibly pertinent demographic factors.
When does a bilingual advantage materialize on the interference effect?
First and foremost, all studies taken collectively—unique design characteristics notwithstanding—reveal that interference effect advantages for bilinguals are relatively elusive in young adults and children, yet can be surprisingly large in middle-aged and elderly adults, despite not being consistently observed in these groups. This pattern raises serious concerns about the applicability of the inhibitory control model (e.g., D. W. Green, 1998) to nonlinguistic domains of inhibitory control, and it obviously undermines the BICA hypothesis. Only under a restricted set of experimental conditions, in which there are sometimes unusual frequencies of intertrial compatibility switches allowing for less exposure to conflict trials (consider, e.g., Costa et al., 2009; Costa et al., 2008), do young-adult bilinguals exhibit a short-term advantage over monolinguals on interference effects. These findings help dispel any notion of an enduring bilingual advantage on the interference effect.
It may also be helpful to draw on Linck, Hoshino, and Kroll (2008),10 whose results, for the reasons outlined in note 10, could not be visualized in the empirical section of this review. These authors demonstrated that young adult bilinguals, in general, exhibited reduced Simon effects relative to a monolingual control group. At first glance, these findings would appear to support BICA. The most striking feature of their data, however, was that the least experienced of the bilingual groups (classroom learners who had never practiced their second language abroad) showed the smallest Simon effect (most importantly, 25.1 ms, relative to 43.7 ms in monolinguals) when working memory span (on which bilinguals outperformed monolinguals) was controlled for. This inexperienced group also outperformed bilinguals (L1 = English, L2 = Spanish) who had taken intermediate-level university language courses but who had had the “advantage” of practicing their second language in an L2 environment (Spain) for 3 months (Simon effect = 43.2 ms, relative to 43.7 ms in monolinguals). This result is wholly unanticipated by BICA. Furthermore, an examination of proficient Japanese and Spanish bilinguals in a subsequent experiment revealed no evidence that increased L2 proficiency (as measured by performance on a picture-naming task) had any effect on the Simon effect.11
Fig. 8 Magnitude of the bilingual advantage on the interference effect for middle-aged adults (mean age = 40–60 years) and old-aged adults as a function of the number of experimental trials. Positive values indicate an advantage for bilinguals on the interference effect; clearly, this advantage wanes as a function of the number of experimental trials
None of the studies that have examined the Simon effect in young adults has contained as few experimental trials as the studies that have investigated bilingual advantages in middle-aged and old-aged adults (Bialystok et al., 2008). Bialystok et al. (2008) explained the absence of an interference effect advantage in their study of young adults by suggesting that young adults are at the zenith of their cognitive abilities and that the present measures were not sensitive to this effect. A conceivable alternative, however, is that a bilingual advantage on inhibitory control is present in young adults, but that it disappears so quickly with practice that it is not observed in a typical study. In other words, the rate at which the bilingual advantage on interference effects disappears with practice may vary with age. Indeed, some theories of cognitive aging specifically assume that as we age it becomes more difficult to reconfigure processing (Hasher & Zacks, 1984). Recall that young children, despite being fluent in two languages, show no bilingual advantages on the interference effect even with few experimental trials. Theoretically, then, children might be able to reconfigure the relevant inhibitory control centers more rapidly than older adults. The result would be that the short-lived bilingual advantage on interference effects becomes more persistent with age, a progression that could be delayed by a well-oiled executive system.
An outstanding but important research question, however, is whether the interference effects of the two language groups reach statistical equivalence after a large number of experimental trials, much as the findings from Bialystok et al. (2004, Exp. 3) suggest they may. If statistical differences between groups were reliable after many trials, much-needed credence could be given to the possibility of an enduring, general cognitive advantage on inhibitory control processes in old age owing to bilingualism (i.e., Bialystok et al., 2007). Otherwise, we are confronted with the possibility that bilinguals merely approach the task differently and that, with a minimal amount of experience on the task, monolinguals acquire this approach.
The data reviewed above, particularly the absence of a ubiquitous bilingual advantage in children and young adults, point to a rejection of the original form of the BICA hypothesis. With regard to a weakened form of the hypothesis, in which the bilingual advantage only becomes apparent in middle and old age, the evidence is at best inconclusive: That the advantage decreases so rapidly with practice (see Fig. 8), usually to nonsignificance, opens the door to a strategy difference rather than a structural advantage in the neural networks responsible for inhibitory control.
When does the bilingual advantage materialize on global RTs?
Having established that the bilingual advantage on the interference effect is rare rather than ubiquitous, and that even when observed it disappears with practice, it now seems appropriate to focus on the robustly observed advantage on global RTs and to begin developing a theoretical framework that might explain this phenomenon. The bilingual advantage on global RTs appears to materialize on any nonlinguistic interference task in children, middle-aged adults, and the elderly. The effect on global RTs is robust in these age groups, and it becomes more pronounced when task difficulty is elevated (Bialystok et al., 2004; Martin-Rhee & Bialystok, 2008). In young adults, the global RT advantage is detected ubiquitously on spatial Stroop and flanker interference tasks [especially when the frequency of intertrial switches is high (Bialystok, 2006; Costa et al., 2009)], though seemingly not in Simon tasks (Bialystok, 2006). The latter finding must be prefaced by the caveat that, to date, only one Simon task study comparing monolingual and bilingual young adults has been reported in the literature. The conflict resolution literature on the bilingual advantage has already shown that conflict within a trial is not required in order to obtain a bilingual advantage, given that bilinguals outperform monolinguals on overall RT (Bialystok et al., 2004); nevertheless, there is evidence that the magnitude of the global RT advantage may be more readily detected when there is a higher frequency of intertrial compatibility switches (Costa et al., 2009). The import of this observation for theory development cannot be overstated, principally because of recent advances in the area of “conflict monitoring,” which have identified a potentially domain-general neurocognitive system to which studies of this sort are most pertinent.
Why does the bilingual advantage materialize on global RT? In search of a theoretical framework
In recent years, conflict monitoring has been a hot topic of research. At its roots, it has examined the extent to which intertrial compatibility switches affect performance. Importantly, a considerable amount of research has demonstrated that a complex network subsuming several higher-order cognitive domains might be driving these so-called “sequencing effects.” If an advantage for bilinguals were found in sequencing effects, the implication could be that the bilingual advantage, rather than being restricted to general inhibitory control processes (an idea that is contradicted by the data reviewed above), extends more generally to many cognitive domains. The complex network that explains sequencing effects has been referred to as the “conflict-monitoring system” (Botvinick, Braver, Barch, Carter, & Cohen, 2001; Botvinick, Nystrom, Fissell, Carter, & Cohen, 1999). Because the conflict-monitoring proposal is a promising theoretical construct to account for global RT differences (Bialystok, 2006; Costa et al., 2009; Costa et al., 2008), it and its prospective relation to bilingualism will be addressed next.
Classic conflict monitoring
The classic conflict-monitoring theory, proposed by Botvinick and colleagues, suggests that a particular area in the frontal lobe, the anterior cingulate cortex (ACC), detects conflict, allowing for online shifts of attentional control that are regulated by the dorsolateral prefrontal cortex, which causes trial-by-trial modulations of cognitive control over the suppression of task-irrelevant input. More specifically, when task-relevant and task-irrelevant input automatically elicit competing responses, the conflict-monitoring system detects this discrepancy, and the level of cognitive control is consequently elevated to reduce the influence of the task-irrelevant dimension on response selection. The neuroscientific understanding of the conflict-monitoring system affords an opportunity to extend cognitive theoretical constructs for behavioral phenomena to specific brain regions or centers.
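The regulatory loop at the heart of this theory can be caricatured in a few lines of code. The sketch below is our own simplification, not the implemented connectionist model of Botvinick and colleagues: conflict is read out as the coactivation of competing responses, and the control level carried into the next trial rises after detected conflict and decays in its absence (all parameter values are arbitrary).

# Toy version of the conflict-monitoring feedback loop.
def conflict(act_relevant, act_irrelevant):
    # Energy-like readout: high only when two incompatible responses
    # are simultaneously active.
    return act_relevant * act_irrelevant

def update_control(control, detected_conflict, rate=0.5):
    # Detected conflict up-regulates control for the next trial;
    # without conflict, control decays back toward zero.
    return (1 - rate) * control + rate * detected_conflict

control = 0.1
for is_congruent in [True, False, False, True]:
    # On incongruent trials the irrelevant dimension activates the
    # competing response; the current level of control suppresses it.
    act_irrelevant = 0.0 if is_congruent else max(0.0, 1.0 - control)
    detected = conflict(1.0, act_irrelevant)
    control = update_control(control, detected)
    print(is_congruent, round(detected, 2), round(control, 2))

Note that the second of two consecutive incongruent trials registers less conflict than the first, because control is already elevated; this sequence-level signature is taken up next.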
The conflict-monitoring proposal has evolved as a result of earlier findings showing first-order sequencing effects in traditional interference tasks (Gratton, Coles, & Donchin, 1992). Thus, at the crux of the conflict-monitoring account is the empirically validated proposal that congruent and incongruent trial response times are affected differentially depending on whether the preceding trial is congruent or incongruent (e.g., Chen, Li, He, & Chen, 2009; Gratton et al., 1992; Stadler & Hogan, 1996; Stürmer, Leuthold, Soetens, Schröter, & Sommer, 2002). The conflict-monitoring system operates as follows: on incongruent trials, two competing responses are activated—one for the task-irrelevant input and one for the task-relevant input. In this instance, the conflict-monitoring system detects the discrepant activated responses and, consequently, increases the level of cognitive control in order to ensure that the task-appropriate response is selected. Following an incongruent trial, the increased level of cognitive control needed to suppress extraneous information remains activated, resulting in significantly reduced interference effects in the subsequent trial as cognitive control is extended to suppress the task-irrelevant attribute, irrespective of whether the task-irrelevant attribute is congruent or incongruent with the task-relevant attribute (e.g., Stürmer et al., 2002; Wühr & Ansorge, 2005). Conversely, following a congruent trial, increased cognitive control is not recruited by the conflict-monitoring system, and thereafter the level of cognitive control in place is low, allowing for the task-irrelevant attributes to exert a greater influence over response selection (but see Wühr & Ansorge, 2005, for an indication that the ACC [or an “ancillary monitoring mechanism”] may also play a role in the Simon task on congruent trials, and Bialystok, Craik, et al., 2005, for MEG evidence partially consistent with this). Consequently, when the current trial is incongruent and the preceding trial was congruent, the magnitude of the interference effect is magnified as compared to when the preceding trial was incongruent (additional studies illustrating this robust empirical phenomenon can be found in Akçay & Hazeltine, 2008; Funes, Lupiáñez, & Humphreys, 2010; Iani, Rubichi, Gherri, & Nicoletti, 2009; Kerns, Cohen, MacDonald, Cho, Stenger, & Carter, 2004; Ullsperger, Bylsma, & Botvinick, 2005). The pattern of sequence effects resulting in cognitive up-regulation is typically referred to as “conflict adaptation.”
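In analysis terms, conflict adaptation is simply the interaction between previous-trial and current-trial congruence. A minimal sketch of the standard computation follows (Python with pandas; the column names are our assumptions); it returns the interference effect following congruent trials minus the interference effect following incongruent trials, which is positive when conflict adaptation is present.

import pandas as pd

# One row per correct trial, in presentation order within each subject,
# with columns 'subject', 'congruent' (bool), and 'rt' (ms).
def conflict_adaptation(df):
    df = df.copy()
    df['prev_congruent'] = df.groupby('subject')['congruent'].shift(1)
    df = df.dropna(subset=['prev_congruent'])
    cell = df.groupby(['prev_congruent', 'congruent'])['rt'].mean()
    # cI - cC: interference following a congruent trial.
    after_congruent = cell[(True, False)] - cell[(True, True)]
    # iI - iC: interference following an incongruent trial.
    after_incongruent = cell[(False, False)] - cell[(False, True)]
    return after_congruent - after_incongruent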
The conflict-monitoring proposal has similarly been called upon to explain how bilinguals attenuate the influence of one of two conflicting lemmas (e.g., Hernandez, Dapretto, Mazziotta, & Bookheimer, 2001) and the superior ability of bilinguals to switch between nonlinguistic tasks or “mental sets” (e.g., monolinguals show greater switch costs when switching between shape naming and color naming in a block of trials; Prior & MacWhinney, 2010). Costa et al. (2009) also drew a link to conflict monitoring, and Bialystok (2006) wrestled with a conceptually similar but somewhat distal idea of online monitoring. If a far-reaching, highly integrated system, like the conflict-monitoring system, were highly developed in bilinguals owing to a perpetual need to manage multiple languages, the theoretical implication would be that bilinguals would excel at most (primarily nonlinguistic) tasks that impose elevated demands on cognitive systems. This system would ultimately have the ability to account for global RT advantages in particular if, similar to what was proposed by the inhibitory control model for language, the conflict-monitoring account applied generally to instances in which online shifts of attentional control were required.
A link between conflict monitoring and bilingualism
Assuming that the conflict-monitoring system is adapted to detect any instance in which a conflict materialized, one could reasonably follow the same logical road map as D. W. Green (1998) to explain why bilinguals might possess a more advanced monitoring system. Thus, when two conflicting lemmas are activated simultaneously, the conflict-monitoring system will recognize the presence of two simultaneously active competing responses, adjust the level of cognitive control to aid in the resolution of competing representations, and signal relevant pathways to allow for task-appropriate response selection.
As we have shown earlier, there is little, or only sporadic, evidence to suggest a bilingual inhibitory control advantage on nonlinguistic interference tasks. However, if the advantage were owing to a general conflict-monitoring system in which one objective was to modulate processing in order to ensure an elevated level of cognitive control, such that response selection was universally improved in tasks for which a higher level of cognitive control was required, a global RT advantage would be expected. Similarly, if the conflict-monitoring system were involved in simultaneous language management, the frequent requirement for cognitive control in bilinguals would likely lead to an improvement in this area and in any other brain structure that contributed to cognitive control.
A possible extension of this proposal is suggested by the claim that the ACC encodes information about effort. With this in mind, it is interesting to consider the hypothesis that conflict might serve as an index of the demand for mental effort. Consistent with this, it has been noted that the ACC becomes active in just those task settings that are experienced as cognitively difficult. (Botvinick et al., 2004, p. 545)
Presumably then, the detection of a demanding task would result in the up-regulation of cognitive control to ensure optimal performance. The somewhat controversial hypothesis that can be derived from this line of thinking is that the constant strain of language management on the conflict-monitoring system might strengthen the extent to which bilinguals can focus processing on task-relevant stimuli (via cognitive control). When the task is difficult (competing responses comprise one instance of this, but certainly not the only instance), bilinguals may then be able to exercise superior cognitive control over responding to the relevant attributes of the task. Converging evidence in favor of this proposal would be provided if a global advantage were detected on a variety of nonlinguistic and noninterference tasks in which the principal manipulations were to increase cognitive demands in the absence of explicit (flanker) or implicit (Simon) response conflict. Just such a finding was reported by Bialystok et al. (2004) when they found a global advantage with centrally presented stimuli (entailing no Simon-generated response conflict) when they increased the cognitive load by increasing the number of stimulus–response mappings.
Task switching, language switching, and neurocognitive mechanisms
Abutalebi and Green (2008) have linked the vast literature on language-switching tasks in bilinguals to the literature on (putatively nonlinguistic) task switching. Yet, until recently (e.g., Garbin et al., 2010; Prior & MacWhinney, 2010), nonlinguistic task switching had not been explored in monolingual and bilingual language groups. There is compelling evidence to suggest that the ACC or components thereof (e.g., Wang, Xue, Chen, Xue, & Donga, 2007; see Abutalebi & Green, 2007, 2008, for reviews) are involved in language-switching tasks, and it has been assumed that this particular structure might also be involved in task switching (e.g., Abutalebi & Green, 2007). It must, however, be noted that this notion of the ACC being involved during task switching is a rather dramatic departure from the original boundaries imposed on the conflict-monitoring system. As described in the section above on classic conflict monitoring, it was originally thought that within-trial conflict (on incongruent, and not congruent, trials) activated the ACC, causing cognitive up-regulation (i.e., increased cognitive control) by way of the dorsolateral prefrontal cortex (see Botvinick et al., 2001; Botvinick et al., 1999; and Egner, 2007, 2008, for reviews). However, recall that a legion of researchers have attempted to extend the role of the ACC beyond intratrial conflict (Botvinick et al., 2004; Brown & Braver, 2005; Rushworth et al., 2007; see Woodward, Metzak, Meier, & Holroyd, 2008, and Hyafil, Summerfield, & Koechlin, 2009, for evidence of a distinct role for the ACC in task switching). One possibility, then, is that during task or language switching, there might be some amount of proactive interference (Philipp, Kalinich, Koch, & Schubotz, 2008) that would, because of the conflict between the current and previous representations, recruit the ACC. A simplification of this idea might be that the ACC becomes active at any time when conflict resolution is required.
Recent neuroimaging data by Garbin et al. (2010) on task switching in monolinguals and bilinguals reveal an interesting result with respect to bilinguals on task switching and conflict monitoring. They presented bivalent stimuli [i.e., colored (red or blue) shapes (squares or circles)], along with a word cue signaling participants to make a discrimination response on the basis of the color or shape of the stimulus. The experiment comprised equal numbers of nonswitch (color–color or shape–shape) and switch trials (color–shape or shape–color). The behavioral data indicated that bilinguals showed no switch costs (switch trials relative to nonswitch trials), whereas monolinguals showed a significant switch cost. (Note that in their purely behavioral study, Prior & MacWhinney, 2010, reported a similar pattern: Switch costs were larger for monolinguals than for bilinguals; the departure from Garbin et al.’s pattern was that their bilinguals did show a significant switch cost.)
The neuroimaging data revealed a somewhat unusual dissociation between monolinguals and bilinguals. In monolingual speakers, the ACC, the right inferior frontal gyrus (IFG), and the left posterior parietal lobe showed increased levels of activation on switch relative to nonswitch trials. That the ACC was involved during this task for monolinguals provides some evidence that there might be some amount of conflict that is detected by the conflict-monitoring system or, at the very least, that elements of the conflict-monitoring system are involved in task switching (Hyafil et al., 2009). In bilinguals, however, this switch-modulated activation was confined to the left IFG [which has been related to language control (Abutalebi & Green, 2007)] and left putamen, and was not observed in the ACC. On the one hand, this suggests that the ACC, the mainstay of the conflict-monitoring system, plays little or no role in mediating cognitive set in bilinguals during task switching. On the other hand, multiple language use seems to result in the selective activation of the left putamen and left IFG in bilinguals, which somehow attenuates (Prior & MacWhinney, 2010) or eliminates (Garbin et al., 2010) task switch costs. A general cognitive implication of this neuroimaging finding might be that these differences are mediated by differences in strategy. In other words, when dealing with the requirement to switch rules or minimize interference from irrelevant information, bilinguals recruit different modules than do monolinguals. If true, this general possibility deviates dramatically from the typical assumption in this literature that both groups are using the same modules, but that a module in bilinguals has been made more efficient by the large amounts of linguistically mediated exercise.
With respect to neuroscientific implications, although the study of neurocognitive mechanisms in monolinguals and bilinguals in task-switching paradigms is a relatively novel enterprise in language research, these incipient data point to nontransfer of ACC-related processes from language switching to task switching. They point, instead, to the possibility that multilanguage use configures the IFG to respond to more general task demands, while the ACC appears to have a more restricted use in bilinguals.
Given that the focus of this review is bilingual performance on nonlinguistic interference tasks, specifically, it is important to describe how these neuroscientific findings relate to this topic. First, behaviorally, bilinguals do not outperform monolinguals on nonswitch trials in a task-switching paradigm, whereas they do outperform monolinguals on all trial types in nonlinguistic interference tasks. This suggests that task repetition in task switching and intertrial compatibility repetitions in interference tasks engage different processes. Second, neurocognitive parallels have not been established between the demand to switch tasks and the demand (in nonlinguistic interference tasks) to ignore or suppress irrelevant inputs. Third, how well task switching can be likened neuroscientifically to intertrial compatibility switches is unknown. More specifically, it is not clear whether the bilingual global advantage on interference tasks is due to reduced switch costs from incongruent to congruent trials or from congruent to congruent trials in which the response on trial n (e.g., > > > > >) is opposite that on the preceding (n – 1) trial (e.g., < < < < <); nor is it clear, neuroscientifically and behaviorally, whether there is a difference between bilinguals and monolinguals when trial n – 1 repeats on trial n (see the First-order sequencing effects section). Thus, there is an open question as to how closely intertrial switching of congruence or response in nonlinguistic interference tasks relates to literal task switching—a literal switching of tasks during the experiment—in bilinguals. Although it appears that bilinguals perform differently than monolinguals on task switching and that this difference has a neurocognitive correlate, it is not clear whether the IFG also plays a substantive role in the largely equivalent bilingual advantages on congruent and incongruent trials in interference tasks after sufficient practice.
Nonlinguistic interference tasks, conflict monitoring, and neurocognitive mechanisms
There are only limited data on nonlinguistic interference tasks within this burgeoning area, but continued research will surely be instrumental in developing a comprehensive, cogent theoretical framework. We are aware of only two brain-imaging investigations that have directly explored differences in neurocognitive architecture between monolinguals and bilinguals on the Simon (Bialystok, Craik, et al., 2005) and flanker (Luk et al., 2010) tasks.
Bialystok, Craik, et al. (2005; the behavioral data of which were covered in the Performance of monolingual and bilingual young adults on interference tasks section) administered a Simon task to French bilinguals, Cantonese bilinguals, and English-speaking monolinguals and used magnetoencephalography (MEG) imaging to tease apart any differences in the task-related modulation of brain activity between language groups. Although all language groups recruited similar brain regions for the task on congruent and incongruent trials, faster responses in the bilingual groups were related to increased involvement of the ACC, superior frontal, and inferior frontal regions situated predominantly in the left hemisphere, whereas faster responding in monolinguals was associated with increased activation of the middle frontal area of the left hemisphere. Comparison of the performance data with the neuroimaging findings from the different language groups in this study reveals an interesting and important dissociation. In this study, French bilinguals and monolinguals did not differ on overall RTs (they appeared to perform congruent and incongruent trials with equivalent proficiency). In contrast, the Cantonese bilinguals outperformed French bilinguals and monolinguals on both trial types. Yet, in both bilingual groups the same, above-mentioned bilingual-centric brain regions were associated with faster responding.
These results are telling for several reasons. First, because both the Cantonese and French bilinguals engaged similar brain regions when performing the task successfully, yet only the Cantonese bilinguals outperformed the monolinguals, the results underscore the idea that something other than exposure to two languages may be driving the performance advantage (see the section Hidden factors: the controversy surrounding the implementation of appropriate demographic controls). Second, while there were clear neurocognitive similarities in the regions on which the French and Cantonese bilinguals relied for faster RTs relative to monolinguals, the involvement of these brain regions, per se, was not necessarily responsible for improved performance. Reinforcing this idea, Bialystok, Craik, et al. (2005) noted that the French and Cantonese bilinguals demonstrated different brain–behavior correlations, which is at least somewhat suggestive that how these regions are used, and not necessarily the regions themselves, conduces to general behavioral advantages.
Luk et al. (2010) collected fMRI data from mono- and bilingual participants while they performed a flanker task comprising five randomly intermixed trial types. On congruent and incongruent trials, a singleton target chevron would appear to the left or right of center of four horizontally flanking chevrons either matching (congruent) or mismatching (incongruent) the direction of the target chevron. On neutral trials, a red target chevron was centered and flanked on each side by two diamonds. On no-go trials (which required withholding of a response to the target chevron), the target chevron appeared to the left or right of center of four horizontally aligned Xs. On baseline trials, a single target chevron appeared.12 Analyses of the behavioral data revealed no significant RT differences between language groups (there was an ~20-ms numerical advantage for bilinguals on congruent and incongruent trials, but this was not significant). Analyses comparing the brain–behavior relationship for congruent and incongruent trials against neutral trials revealed a striking pattern of results. Superior performance on congruent trials involved similar brain regions in both language groups: Increased activation levels in the bilateral middle occipital gyrus, left fusiform gyrus, left lingual gyrus, bilateral cerebellum, and right caudate and IFG were associated with superior performance on congruent trials. Divergence between language groups, however, was observed on incongruent trials. In bilinguals, superior incongruent performance was associated with increased activation in the bilateral cerebellum, bilateral superior temporal gyri, left supramarginal gyrus, bilateral postcentral gyri, and bilateral precuneus, whereas in monolinguals superior incongruent performance was associated with the same network that was identified with superior performance on congruent trials. Finally, activation of the left ACC, bilateral IFG, and right caudate nucleus was also associated with superior performance on incongruent trials, but analyses did not indicate that the involvement of these areas was unique to bilinguals. Again, however, it must be noted that bilinguals appeared to have contrasting activation patterns relative to monolinguals.
The most impressive aspect of the results from Luk et al. (2010) for the present purposes is that, relative to monolinguals, bilinguals appeared to activate different regions to respond to incongruent trials, whereas both language groups appeared to engage similar brain regions during congruent trials. This pattern seems to have reified already strong convictions in this literature that the bilingual advantage (in general) is driven by well-tuned inhibitory control processes (BICA). This variety of interpretation is commonplace today, although the locus of the advantage is not always attributed directly to inhibitory processes. For example, Luk et al. (2010) explained that “these results support the proposition that bilingualism influences cognitive control of inhibition at the attention level, but not motor control of prepotent responses” (p. 356) and that “differential engagement of this more extensive set of regions during incongruent trials in the two groups suggests that bilinguals can recruit this control network for interference suppression more effectively than monolinguals, consistent with their tendency to show less interference in terms of RT”13 (p. 356). This conclusion is, in our opinion, impetuous and narrowly focused to the extent that it emphasizes inhibitory control. The issue with this interpretation is the same one that has beset the previously delineated interpretation of behavioral data on nonlinguistic interference tasks: The BICA model accounts well for a bilingual advantage on incongruent trials; it is challenged, however, by a literature showing little to no bilingual advantage on the interference effect (i.e., bilingual advantages that are largely similar on congruent and incongruent trials, as evidenced by the earlier empirical review of this literature). Thus, while it appears that functionally distinct brain regions are involved on incongruent and congruent trials for bilinguals, in contrast to monolinguals, whether these pathways are necessary, in and of themselves, for the bilingual advantage on incongruent trials, specifically, is less obvious. The necessity of this region, and especially of its being somehow linked to superior inhibitory control processes (i.e., BICA), is severely undermined by our review of the behavioral data, which, on balance, show a largely symmetrical bilingual advantage for congruent and incongruent trials (BEPA).14 We do not deny a role for inhibitory control processes in the brain, nor do we deny that they play an important part in language management; however, there is simply little to no direct evidence (neuroimaging, behavioral, or otherwise) that they play any special role in nonlinguistic interference tasks.
A less intrepid and far more parsimonious interpretation centers on the idea that there is a well-developed mechanism in bilinguals or, more likely, a network of mechanisms in the bilingual brain that mediates between congruent and incongruent trials (much as it might manage language selection), in a way that is different from the way in which the monolingual brain operates. This type of theoretical perspective has been developing in the literature, although somewhat vaguely (Bialystok, 2009a; Bialystok & Craik, 2010), but the bare bones of it are evinced nicely by Luk et al. (2010): “Unlike the bilinguals, monolinguals did not respond to facilitation and suppression of interference using different brain networks, leading to fewer neural resources being recruited when performing the flanker task” (p. 356). Indeed, Bialystok and colleagues have long been aware of the need to explain superior bilingual performance on congruent trials (at least as early as Bialystok, 2006). This is an interpretation that is much more consistent with BEPA, and it is akin to the one that we favor. Although there may be several ways in which this could be achieved neuroscientifically, we will take the liberty of hashing out this strong hypothesis with less of a focus on inhibitory pathways or inhibition-based processing sites.
Imagine that the system for performing nonlinguistic interference tasks in bilinguals interprets and selects pathways for inputs based on whether they contain conflict. In deference to the results from Luk et al. (2010) and the conflict-monitoring literature, perhaps something like the ACC (but there might also be a role for the IFG and other regions that have been identified in Luk et al.’s important investigation) causes inputs to be rerouted depending on the presence or absence of conflict; the detection of conflict activates a (domain-general) dedicated conflict resolution center. Increased activation in this center triggers a routing of the input to a domain-general pathway, well-adapted because of bilingualism, for conflict resolution. The absence of conflict precipitates activity in a brain region that has been configured specifically to deal with nonconflicting inputs. The division of labor between functionally distinct processing streams and the consequent freeing up of processing resources—not superior inhibitory control or the efficiency with which an inhibitory pathway can be recruited relative to noninhibitory pathways in bilinguals15—would then be responsible for the ubiquitous global RT advantage. In monolinguals, congruent and incongruent trials appear to be resolved in similar neurocognitive systems. Ancillary pathways are involved in both language groups for no-go trials, but not for monolinguals on incongruent trials. The suggestion is that these ancillary pathways for monolinguals lack more domain-general processing. This occurs because these pathways have not been adapted to such cognitive demands as dual language use.16 The advantage for bilinguals in the nonlinguistic interference task might have materialized because the bilingual brain possesses a system that can distribute inputs to separate processing centers, depending on the presence of within-trial conflict, and that has adapted a network of pathways to respond to more general instances of conflict because of experience with multiple languages.
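To make explicit what this division-of-labor hypothesis does and does not claim, a toy rendering follows. It is entirely our own illustration: the “streams” and the routing rule are hypothetical constructs standing in for the brain regions discussed above, not identified neural pathways.

# Hypothetical routing scheme; all names and structures are illustrative.
def bilingual_route(target_response, irrelevant_response):
    # An ACC-like monitor detects whether the irrelevant dimension
    # activates a response that conflicts with the target response.
    if target_response != irrelevant_response:
        return 'conflict_stream'      # domain-general conflict resolution
    return 'no_conflict_stream'       # tuned to nonconflicting input

def monolingual_route(target_response, irrelevant_response):
    # A single shared pathway must reconfigure across congruent and
    # incongruent trials; there is no division of labor.
    return 'shared_stream'

On this sketch, the global RT advantage falls out of the division of labor itself: neither bilingual stream has to reconfigure between trial types, and neither is required to implement superior inhibition.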
If this were so, flanker interference tasks requiring spatial processing of target and irrelevant distractor stimuli—as when irrelevant distractor stimuli must be processed to complete the task successfully—would show greater congruency (congruent trial RT – neutral trial RT) and smaller incongruency (incongruent trial RT – neutral trial RT) effects for bilinguals as compared to monolinguals, despite an absence of language group differences on the interference effect. Precisely this pattern was reported by Hernández, Costa, Fuentes, Vivas, and Sebastián-Gallés (2010) using the number Stroop task17 (Luk et al., 2010, also showed some behavioral evidence of this phenomenon). These bilingual advantages on facilitation and interference relative to a neutral condition, however, would not be observed if spatial processing were restricted in advance (Laberge, 1983), as when targets and distractors occupy fixed regions in a display (Costa et al., 2008). Nevertheless, bilinguals may outperform monolinguals on congruent and incongruent trials because of a language-mediated division of labor between congruent and incongruent information-processing streams (but see Bialystok, Craik, et al., 2005, pp. 46–47, and note 12). Alternatively, if this finding proves to be restricted to instances in which target location is variable, overall RT advantages may still be accounted for by the speed at which (in)congruence-relevant brain regions can be selected by the monitoring system (Luk et al., 2010), irrespective of inhibition.
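For concreteness, the effects named above can be computed as follows (a sketch; the condition labels and column names are our assumptions). Under these definitions the congruency effect is negative whenever congruent trials are faster than neutral trials, so a “greater” congruency effect means one that is larger in absolute magnitude.

import pandas as pd

# One row per correct trial; 'condition' is 'congruent', 'incongruent',
# or 'neutral'; 'group' is 'monolingual' or 'bilingual'.
def congruency_effects(df):
    m = df.groupby(['group', 'condition'])['rt'].mean().unstack('condition')
    m['congruency'] = m['congruent'] - m['neutral']        # facilitation
    m['incongruency'] = m['incongruent'] - m['neutral']    # interference cost
    # The classic interference effect is their difference:
    m['interference'] = m['incongruent'] - m['congruent']
    return m[['congruency', 'incongruency', 'interference']]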
On this view, the origin of the bilingual advantage would not be so related to the specific processes engaged in resolving lexical competition between languages (inhibitory control or language specific lexically built-in mechanisms) but rather to the previous step of setting the language in which communication will proceed. (Costa et al., 2009, p. 144)
Thus, early on in a nonlinguistic interference task, unique bilingually related brain regions jointly involved in language selection and the processing of distracting inputs might be primed by the language setting under which the task is administered.
Empirical testing of conflict monitoring and theoretical implications
First-order sequencing effects
Although we favor the abovementioned theoretical construct of the bilingual advantage, its link to the conflict-monitoring system—or any system for that matter—has yet to be shown unequivocally. As such, additional research between language groups on sequencing effects seems warranted, given the potential that either components of or the entire conflict-monitoring system might have in domain-general responding (cf. Costa et al., 2009; Costa et al., 2008). Consider the conflict-monitoring system specifically. It is not entirely clear whether bilinguals show superior efficiency (as measured by an enhanced rate of information processing, by signaling relevant processing sites, by routing inputs to functionally specific pathways, or by some combination) in the conflict-monitoring system. This is due, in large part, to conceptualizations of monitoring that seem to extend beyond intertrial compatibility switching into the domain of intertrial response switching (e.g., Bialystok, 2006). At other times, the first-order sequencing effects lack an appropriate baseline, due to a failure to remove sequences in which trial n is a complete repetition of trial n – 1 (e.g., Costa et al., 2009; Costa et al., 2008) and because two-alternative forced choice tasks are typically considered insufficient for getting at the core of conflict monitoring (see the next section). If, however, an experimental design obtaining a purer measure of sequencing effects in the context of the conflict-monitoring theory were to be implemented, and if it were to show this advantage on sequencing effects, there would be some preliminary evidence that might begin to account for a body of work demonstrating a bilingual advantage on a wealth of cognitive assessment tools (i.e., BEPA; see Adesope, Lavin, Thompson, & Ungerleider, 2010; Carlson & Meltzoff, 2008).
Toward a sounder measure of conflict-monitoring differences between monolinguals and bilinguals
Behaviorally, too, the relationship between bilingualism and conflict monitoring in nonlinguistic interference tasks is poorly understood. This is primarily due to the development of experimental designs that have been insensitive to the principles of conflict monitoring and to other theoretical constructs, with lesser known neurocognitive correlates, that compete with it. One theory that has opposed conflict monitoring as a candidate explanation for sequential modulations of the Simon effect is the event-file theory (Hommel, 1998). A treatment of how the event-file theory (or feature integration theory) relates to sequential modulation can be found in Hommel, Proctor, and Vu (2004; or in Hommel, 2004, for a more general treatment of this theory). For now, a relatively coarse description will suffice. According to the event-file theory, only a limited number of event files (or transient memory traces) can be held simultaneously, and partial overlap between event files results in a time-consuming update to the previously constructed event file, because one component of a multicomponent event file has been activated. Consider, for instance, a case in which a green stimulus in trial n automatically activated a response to a location right of fixation, and a red stimulus in trial n – 1 had likewise automatically activated a response to a location right of fixation. In this situation, the unconditional coding of task-irrelevant location information in the previous trial would be activated again on trial n. But, according to event-file theory, the mismatch on the color (and hence response) dimension would necessarily result in a modification to the event file from n – 1. Thus, the repeated location code from the previous event file would need to be “unbound,” because it is a necessary component of the new event file. Alternatively, if there is no partial overlap (i.e., a complete repetition of all features, or a complete alternation, as when a corresponding trial is followed by a corresponding trial), processes completely unrelated to conflict monitoring and the feature integration account might affect RTs. It is thought that comparing the sequence congruent to congruent with incongruent to congruent, for example, might artificially inflate the switch cost, because the sequence congruent to congruent comprises exclusively complete repetitions and alternations, whereas the sequence incongruent to congruent comprises partial matches (Hommel, 2004). Mechanisms related to priming (Christie & Klein, 2001) could just as easily account for switch cost differences when complete alternations and matches in first-order sequences are compared to first-order sequences in which one dimension from trial n – 1 matches on trial n whereas the other mismatches.
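The bookkeeping behind this example can be made explicit with a short sketch (feature names follow the example above; this is our own illustration of the taxonomy, not Hommel's formal model). What matters for event-file theory is whether the transition from trial n – 1 to trial n repeats all features, none, or only some:

# Classify a trial transition by feature overlap, per event-file theory.
# Here color determines the response and location is task-irrelevant.
def transition_type(prev, curr):
    response_repeats = prev['color'] == curr['color']
    location_repeats = prev['location'] == curr['location']
    if response_repeats and location_repeats:
        return 'complete repetition'
    if not response_repeats and not location_repeats:
        return 'complete alternation'
    return 'partial repetition'   # requires unbinding the old event file

# The example from the text: location repeats while the response changes.
print(transition_type({'color': 'red', 'location': 'right'},
                      {'color': 'green', 'location': 'right'}))
# -> 'partial repetition'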
Having described the feature integrationist account to some extent, it should now be apparent that the conflict-monitoring and feature integration accounts make similar predictions on most (if not all) two-alternative forced choice tasks, and, problematically, that other processes are likely involved in these tasks that are liable to produce sequential modulations on interference tasks. One concern is that some such process might facilitate responses on trials on which there is a complete repetition or alternation of the S–R code (e.g., Wühr & Ansorge, 2005). A second concern is that it is virtually impossible to dissociate feature integration from conflict monitoring in two-alternative forced choice tasks. The reason for this is relatively straightforward. When there are only two response values per stimulus dimension, partial alternations and partial repetitions are perfectly confounded with transitions from congruent to incongruent trials, or vice versa (Egner, 2007; Funes et al., 2010). Thus, a sequential modulation can occur either because of the difficulty associated with “unbinding” an event file or because of an evaluatory mechanism regulating cognitive control on the basis of cognitive demand from one trial to the next.
The way to circumvent the issue of co-occurring mechanisms for either response priming or feature integration, both of which may modulate sequential effects, is to increase the number of stimulus–response relationships, which would unconfound feature integration and conflict monitoring. This would allow for first-order sequence analysis on a purely abstract level (congruence) if the rarer sequences on which there was a partial repetition (i.e., if a response repeats but the stimulus position changes) or a complete repetition (i.e., a repetition of both the stimulus location and the response) were excluded. The remaining trials, therefore, include those on which there is a difference only at the level of the processing relationship between task-irrelevant and task-relevant information (i.e., congruent to incongruent, incongruent to incongruent, etc.). A vast library of experimental designs illustrates precisely the types of steps that can be taken to eliminate the influence of co-occurring phenomena, and there is little reason why similar approaches could not be adopted to examine differences between language groups (Akçay & Hazeltine, 2007, 2008; Funes et al., 2010; Mayr, Awh, & Laurey, 2003; Stürmer et al., 2002; Ullsperger et al., 2005; Wühr & Ansorge, 2005; see Egner, 2007, 2008, for information on how to design these types of tasks). Thus, to the extent that conflict adaptation can occur in the absence of any dimensional overlap or repetition effects, this type of methodological and analytical approach would provide one of the purest measures of conflict adaptation. If an advantage were observed for bilinguals on this purer measure of conflict adaptation—the effect that the conflict-monitoring system is ostensibly reacting to—the implication would be, at the very least, that this system behaves differently in bilinguals. Reduced first-order sequencing effects in bilinguals would most likely be attributable to a more efficient conflict-monitoring system and not necessarily to functionally distinct processing streams for incongruent and congruent trials. Of course, one of the drawbacks of introducing this type of approach is that an increase in the size of the response–stimulus set might correspond to an increase in cognitive load (which might or might not involve the conflict-monitoring system to some degree), which could theoretically be handled better by bilinguals than by monolinguals. One solution to this problem, however, might be to introduce stimuli that reflexively activate responses [i.e., arrows (Ristic & Kingstone, 2006; Ristic, Friesen, & Kingstone, 2002) in, e.g., the spatial Stroop task (Bialystok, 2006)].
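Concretely, in a design with four stimulus positions and four responses, the exclusion step reduces to the following filter (a sketch with assumed field names): only transitions on which both the response and the stimulus position change are retained, so that the surviving sequences differ only at the abstract level of congruence.

# Retain only full alternations: drop complete repetitions (response and
# position both repeat) and partial repetitions (exactly one repeats).
def is_pure_transition(prev, curr):
    return (prev['response'] != curr['response']
            and prev['position'] != curr['position'])

def filter_for_pure_adaptation(trials):
    # Keep trial n only when the n-1 -> n transition is a full
    # alternation; the surviving trials support a conflict-adaptation
    # analysis uncontaminated by feature integration or priming.
    return [curr for prev, curr in zip(trials, trials[1:])
            if is_pure_transition(prev, curr)]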
In practice, bilingual research could dissociate these competing theories for sequential modulation and arrive at a purer test of conflict adaptation between language groups. Assuming a performance advantage in bilinguals (as compared to monolinguals) on conflict adaptation (excepting trial sequences consistent with event-file theory), these findings would provide, to reiterate, relatively clear-cut evidence that processing efficiency is improved in this particular system, which, because of its close connection to many other structures in the brain, might imply widespread behavioral and cognitive advantages (a confirmation of the BEPA). Furthermore, although there may be relatively few a priori reasons to assume that bilinguals ought to outperform monolinguals on something akin to repetition priming (Pashler & Baylis, 1991) or perhaps feature integration, these components too could be studied on a trial-by-trial basis to examine numerical differences between language groups. To date, analyses of congruent–congruent and incongruent–incongruent sequences in bilingualism research have not distinguished between complete matches (e.g., trial n – 1 = > > > > > and trial n = > > > > >) and complete mismatches (e.g., trial n – 1 = > > > > > and trial n = < < < < <) for congruence (or incongruence). Thus, to date, feature integration and conflict-monitoring accounts are perfectly confounded in the bilingualism literature.
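The missing disaggregation is trivial to implement once the raw stimuli are retained on each trial; a sketch, using the flanker strings above:

# Tag same-congruence sequences as complete matches or mismatches.
def repetition_tag(prev_stimulus, curr_stimulus):
    if prev_stimulus == curr_stimulus:
        return 'complete match'      # e.g., '> > > > >' then '> > > > >'
    return 'complete mismatch'       # e.g., '> > > > >' then '< < < < <'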
Broader implications of conflict-monitoring advantages for domain-general bilingual advantages
Although behavioral and neurocognitive explorations of bilingual advantages must strive to determine the true relationship between domain-general processing systems and bilingualism, we pause momentarily to discuss the implications of an advanced conflict-monitoring system owing to dual-language management. There are, to be sure, a wide variety of hypotheses surrounding a domain-general role for the conflict-monitoring system and its components. Beyond the several possible roles that the literature has attributed to the conflict-monitoring system, a further consequence of the bilingual advantage on global RTs (or conflict monitoring) might be that one or more components of the conflict-monitoring system, having been relied on frequently for managing multiple languages, confer advantages on other neurocognitive systems for which these same components play a major part. One system, for example, that relies on a component of the conflict-monitoring system is the locus coeruleus norepinephrine (LCNE) system, which receives projections from prefrontal regions and the ACC (Aston-Jones & Cohen, 2005). The LCNE is a biphasic, multifaceted system. Functionally, it has been hypothesized to regulate task-related decision processes, to facilitate the execution of appropriate behavior, to facilitate attentional filtering, and, similarly, to increase attention to task-relevant processes—to name a few of its proposed contributions. To the extent that the efficiency of something like the ACC, which is tightly integrated with other processing systems (including the LCNE), has directly benefited from dual language use, we might also expect behavioral advantages on tasks engaging different systems between which there is structural overlap.
The implications of interconnected neurocognitive systems are manifold. Among them is the possibility that if one or several modules in a general conflict-monitoring system were well developed because of L2 management, and if similar modules were involved in other brain circuits, advantages could extend to a variety of other cognitive domains. Moreover, it is not necessary, and perhaps it is unlikely, that only one or a smattering of regions in the brain would be affected by the acquisition of a second language. This would ultimately lead to exceptionally complex behavioral and neuropsychological interactions. As a result, while examining one feature of the system would provide invaluable insight as to whether such a system responds differently, or possibly more efficiently, in one group as compared to another, on the basis of one or even several features alone it would be enormously challenging to account for a pattern of results that has been produced by a tightly integrated system (e.g., Costa et al., 2009, alluded to something to this effect when they expressed the view that inhibitory control processes and conflict monitoring could interact in complex ways18). Surely, as we have seen from recent neuropsychological data comparing bilinguals and monolinguals on these tasks, there are awe-inspiring differences between the activation patterns and regions involved in the bilingual brain relative to the monolingual brain.
Closing remarks and conclusions
From all of the evidence we have considered, it is at this juncture enormously challenging, and perhaps premature, to conclude that bilinguals have profited from a lifetime of multilanguage management by developing a more adept, general, and multifaceted inhibitory control system that is less subject to degeneration from aging. Certainly, there is very little evidence to support the BICA hypothesis; it has nonetheless continued to be endorsed, albeit with some recent softening of the idea through a greater emphasis on the cognitive control of inhibition (e.g., Bialystok & Craik, 2010; Luk et al., 2010). When bilingual advantages on the interference effect appear in young-adult populations, they appear only briefly, early on, and dissipate very rapidly. That is, when detected (to be sure, such advantages have been detected only twice in flanker interference tasks: Costa et al., 2009; Costa et al., 2008), they emerge following the first 24 trials, peak after three or four blocks (primarily because monolinguals become somewhat monotonically slower on congruent trials), and then abruptly vanish. To date, only one study has found an interference effect advantage that is even remotely close to what an inhibitory control model might predict (Bialystok et al., 2004); this study involved older bilinguals and, in showing the advantage on the interference effect (Exp. 2), also showed its instability (Exp. 3). In this case, although there was an overall RT advantage for bilinguals, they also appeared to benefit most on incongruent trials. Critically, recall that the interference effect advantage disappeared with practice and was largest after only a few experimental trials without practice, pointing to the possibility that a reconfiguration of cognitive processes, rather than any enduring bilingual advantage on inhibitory control, might better characterize this finding.
Thus, although there is scant evidence in favor of the BICA hypothesis, there is clearer evidence to suggest that bilinguals enjoy a general processing advantage that can be detected early developmentally and that persists throughout life. This is clear from the robust advantage of bilinguals on global RTs in difficult tasks and nonlinguistic interference tasks (see Fig. 2b), which begins in childhood and lasts into old age. The relative ubiquity of the bilingual advantage in global RTs provides strong support for the BEPA hypothesis. This hypothesis places the locus of control not on inhibitory processes per se, but on a central executive system that has some capacity to regulate processing across a wide variety of task demands. A model of this sort might be able to better accommodate the ephemeral advantage on those tasks that induce interference but that are apparently nonlinguistic or less linguistic in nature (e.g., the Simon or flanker tasks) as a function of unusual conflict adaptation effects early on, differences in task learning, and beyond. It is here that something akin to a more global conflict-monitoring system (Costa et al., 2009; Costa et al., 2008) operates, not entirely as a function of whether conflict or congruence had been perceived on previous trials, but as a general executive system that improves in efficiency owing to the need to monitor linguistic representations competing for selection. The components of such a system, being intricately related to a number of modules that may have also developed through cross-language use, contribute to the regulation of cognitive control by delegating processing between quasi-independent pathways or brain regions. This type of BEPA-oriented theoretical framework might lead to bilingual advantages across a broad range of tasks in which the need for executive control is most pressing and in which processing can be neatly divided between separate processing streams. On this last point, however, while it is clear from the neuroscience that a lifetime of dual-language use results in neurocognitive differences between bilinguals and monolinguals, how these differences translate into behavioral differences—and even whether these differences reflect bilingual advantages—is poorly understood, and moving forward, much remains to be learned about these processes.
Other studies have examined inhibitory control processes in bilinguals using the Stroop task (e.g., Bialystok et al., 2008), spatial negative-priming paradigms (Treccani, Argyri, Sorace, & Della Sala, 2009), and inhibition-of-return (IOR) paradigms (Colzato et al., 2008). The Stroop task has not been examined in this review because of its close relationship to language. The relationship of an IOR paradigm to active inhibitory control processes, on the other hand, is a much more ambiguous case, given that opinions are highly divergent on the causes (Hunt & Kingstone, 2003; Klein, 2000; Souto & Kerzel, 2009) and effects (Abrams & Dobkin, 1994; Taylor & Klein, 2000) of IOR. Thus, it is difficult to discern what greater IOR for bilinguals as compared to monolinguals (Colzato et al., 2008; but see Hernández et al., 2010, for a nonreplication) might mean. On a historical note, Colzato et al., observing greater IOR in bilinguals at long cue–target intervals, concluded that this language group possesses a superior ability to maintain action goals, whereas a greater spatial negative-priming effect in bilinguals (Treccani et al., 2009) has been taken as evidence in favor of BICA.
This task was analyzed as a flanker interference task, but notably, placing a target arrow either to the left or right of center gives this task an element of the spatial Stroop task.
One interpolated condition, “respond in the direction of a central arrow,” was designed to establish response speed in the absence of any conflict. In the other interpolated condition, the stimulus–response mapping was incompatible: “respond in the direction opposite the center arrow.” This condition was designed to examine participants’ abilities to override a habitual response. The Simon task blocks were interleaved with single blocks of these interpolated conditions (a response in the direction of, or opposite to, a centrally presented pointing arrow). There were two possible orders, counterbalanced across participants. One order was as follows: a control condition (in which the participant had to respond in the direction indicated by a centrally presented arrow), a reverse condition (in which the participant had to respond opposite to the direction indicated by a centrally presented arrow), the Simon task, a reverse condition, the Simon task, and a control condition. The other order was a control condition, the Simon task, a reverse condition, the Simon task, a reverse condition, and a control condition. Instructions and an unknown number of example trials were provided before each block of trials. The results for the Simon task revealed that the monolinguals showed a 60-ms Simon effect while the bilinguals showed no Simon effect; consequently, there was a significant 60-ms bilingual advantage in the Simon effect. There were no differences between language groups on the univalent control conditions (i.e., responding in the direction of, or opposite to, a centrally presented arrow). See note 4 for a theoretical discussion of how this methodology might have affected the results.
Although this empirical section has been largely atheoretical, relating the results only to the inhibitory control model that has been proposed to account for a putative bilingual advantage on the interference effect, we are compelled to note that a sequence of tasks preceding an interference task and the sequence of events within an interference task (as described in note 3) are known to have observable behavioral consequences, and we suggest that these types of effects may have led to these unusual findings. Giving participants prior practice suppressing the natural tendency to respond in the direction of a target might be expected to decrease, if not eliminate, the Simon effect (Proctor & Lu, 1999), and it has been shown that performing a spatially incompatible S–R mapping task (i.e., right response to a stimulus appearing to the left of fixation, and vice versa) can result in a significant reversal of the Simon effect when a noncorresponding trial precedes a noncorresponding trial, as compared to a noncorresponding trial preceding a corresponding trial (Iani et al., 2009).
Although there was a significant 14-ms bilingual advantage on the interference effect when all three of Costa et al.’s (2008) experimental blocks were analyzed collectively, subsequent analyses revealed that the bilingual advantage on the interference effect decreased to nonsignificance in the final (third) block of trials. Extraction of the numerical data from Fig. 4 of Costa et al. (2008) revealed an ~16-ms bilingual advantage on the interference effect for the first two blocks of trials, which decreased to ~3 ms on the last block.
Several points are worth making with respect to this study.
a. Their analytical approach did not distinguish between whether the task-irrelevant dimension was congruent or incongruent with the task-relevant dimension on the preceding trial. This is of utmost importance for testing the conflict-monitoring hypothesis (see the Classic conflict monitoring section). Relatedly, although it could be inferred from the conditions (response change, position change, both change, both the same) that joint response and position switches and repetitions resulted in either congruent-to-congruent or incongruent-to-incongruent trials (i.e., nonswitch trials), whereas changing only one stimulus feature resulted in a congruent-to-incongruent switch or vice versa, one cannot determine the extent to which the fixed order of trials led to intertrial compatibility switches (i.e., Did the response switches most often co-occur with a position switch? Was there an even ratio of joint response-and-position switches to response-only switches?). In the absence of this information, it is therefore impossible to determine whether the global advantage materialized because of a high frequency of intertrial compatibility switches (a more plausible alternative, given the tenets of the online-monitoring theory) or because of intertrial response switching.
b. The order of trials was fixed. To be sure, a fixed sequence of trials weighted heavily toward response-only switches would promote congruent-to-incongruent (and vice versa) intertrial switches. A second fixed sequence (i.e., one weighted toward joint response and position switches) would promote few intertrial switches (though some proportion of these [e.g., incongruent-to-incongruent trials] would recruit the conflict-monitoring system to a greater extent). A third fixed sequence, in which response-only and joint response-and-position switches occurred in equal ratio throughout a block, would lead to an even distribution of all compatibility switches and nonswitches (a condition that would most closely resemble one in which these switches were randomly distributed). The latter condition represents an instance in which bilinguals have traditionally outperformed monolinguals on overall RTs (Bialystok et al., 2008; Bialystok & DePape, 2009). Thus, assuming an evenly distributed number of intertrial compatibility switches (for which response switches and position switches would be evenly distributed as well), overall RT advantages typically emerge for young-adult bilinguals on the spatial Stroop task. This suggests either that the high-switch condition contained many intertrial compatibility switches, that the low-switch condition contained too few, or, more likely, that both of these speculations might be true.
c. The switch cost analysis collapsed across the low-switch condition (for which there was no global advantage) and the high-switch condition on the spatial Stroop task.
d. Because this was a two-alternative forced choice task, there are clearly several mechanisms that could have acted in concert with conflict adaptation, and the analytical approach did little to adjudicate between these mechanisms (see the Toward a sounder measure of conflict-monitoring differences between monolinguals and bilinguals section). Until conflict monitoring is dissociated from other competing theories experimentally, these results will invariably represent the unique contributions of at least two mechanisms that might interact in complex ways.
Several researchers in the conflict-monitoring literature (see the Classic conflict monitoring section and beyond for a theoretical discussion) have parametrically adjusted the probability of incongruence either by introducing a nonspatial (featural) attribute that predicted the likelihood of a congruent trial (Fernandez-Duque & Knight, 2008; Lehle & Hübner, 2008) or by modifying the frequency of congruent trials in a block of trials (Mayr & Awh, 2009). One interesting finding is that in a six-choice Stroop task, blockwise congruence manipulations seem to be dissociable from conflict adaptation (Fernandez-Duque & Knight, 2008; Mayr & Awh, 2009). For example, Mayr and Awh showed that when the frequency of congruent trials was set to 30%, 50%, or 70% in any one condition, over the course of the first two blocks of 88 trials, there were significant conflict adaptation effects. In the subsequent eight blocks, however, the conflict adaptation effect shrank to nonsignificance, but the proportional congruence effect remained (that is, almost monotonically greater interference effects as a function of congruence [30%–70%] remained). One possible explanation (but certainly not the only one) is that an attentional setting was created to store the most common episodic instance, which could then be retrieved again when a trial matched this setting. Critically, it is difficult to determine exactly what might have occurred here when extreme probability manipulations were employed.
For completeness, it must be noted that, numerically, the global advantage for bilinguals appears to have been nullified in the third block of trials.
Unlike in the 75%-congruent condition, the global RT advantage appeared more stable, persisting in all three blocks of trials in the 50%-congruent condition.
These data have not been covered in the empirical review for several key reasons: (1) In Experiment 1, Simon effects were obtained for bilinguals who were in the process of acquiring a second language in intermediate university classes. Although these bilingual Simon effects were compared against a monolingual control group, these bilingual groups do not satisfy the criteria of high proficiency and an early age of acquisition. (2) Although the bilinguals in their Experiment 2 were considered highly proficient, their age of acquisition of L2 was higher than the standard for this field of research (at ~10 years). (3) Moreover, a principal aim of this second experiment was to compare Simon effects between groups of highly proficient bilinguals in different language contexts, and not between bilinguals and monolinguals. Relationships between proficiency and greater inhibitory control were determined by performing a regression analysis, with L2 picture-naming accuracy as a predictor variable for the Simon effect. (4) The component scores for the Simon effect (i.e., congruent and incongruent trial RTs) were not reported. These scores would obviously be required in order to determine whether overall (global) RT differences were observable in these data sets when comparing bilinguals to monolinguals. (5) Accuracy was not reported for the Simon task, thereby making the interpretation of RTs somewhat ambiguous.
A noteworthy trend in the data from Linck et al. (2008) was that, in some cases, the language context [L1 (tested in Spain or Japan) or L2 (tested in the United States)] in which the Simon effect was elicited could lead to differences in Simon effects within groups of highly proficient bilinguals. Administering the task in an L2 context appears to increase Simon effects.
One aspect of this literature that makes it particularly challenging for drawing wide-sweeping generalizations is the oftentimes unexplained divergence of methodologies between studies exploring language groups in nonlinguistic interference tasks. In Luk et al.’s (2010) study, for instance, an imperative stimulus could appear at the midpoint, left of the midpoint, or right of the midpoint of the five-item stimulus line. This uncertainty about where a target might appear on the line in any given trial appeared to increase processing of all stimuli at all potential target locations. In other words, participants allowed the irrelevant distractors to influence their performance to a greater extent than they would have otherwise. Evidence for this is the surprising finding that performance on congruent trials was better than performance on neutral trials (unlike in Costa et al., 2008, and Costa et al., 2009, where neutral RT = congruent RT, presumably because the target stimulus invariably appeared at fixation). It is unclear how this “congruence advantage” affected the neuroimaging data, but again, it was likely spurred by more rapid processing for spatial locations that had a high processing potential (Laberge, 1983; LaBerge & Brown, 1986). Finally, their neutral trials do not seem to have been an appropriate baseline. In neutral trials, the target chevron always appeared between four neutral stimuli, whereas the target chevron on congruent/incongruent trials appeared randomly to the left or right of center.
Three points are worth noting here. (1) Luk et al. (2010) attributed interference suppression to inhibitory control. (2) This type of explanation would actually seem to be more consistent with their tendency to show a global RT advantage, if a theoretical perspective is taken like the one described below. (3) As we have shown, we do not believe that there is much of a tendency for bilinguals to show less interference in terms of RTs, as measured by the Simon, flanker, or spatial Stroop effect. Finally, although these phrases sound like tacit endorsements of BICA, primarily because they focus almost exclusively on the effects on incongruent trials and inhibitory control, they also seem to support BEPA by suggesting that it is the control of inhibition, and not superior inhibition in and of itself, that leads to the difference on incongruent trials. If this is an endorsement of BEPA, the control of inhibition is misleading because, surely, whatever is controlling inhibition appears, one way or another, to proportionally affect trials on which inhibition does not appear to be necessary. Thus, the cognitive control of inhibition, specifically, seems to be an unnecessary extension. Superior cognitive control, in the way that congruent and incongruent inputs are routed through different pathways, would seem to be sufficient. If the implication is that the cognitive control of inhibitory pathways frees up processing resources in the quasi-independent system for congruent trials (thereby making congruence advantages an indirect benefit), we are obliged to acknowledge this as a possibility, and in many ways, this is a view that we favor and develop to some extent below.
Moreover, the fact that there is no apparent behavioral advantage whatsoever for at least some bilingual groups in these previously described studies begs the question: How can we say, with any confidence, that these differences in neural circuitry underlie any bilingual advantage?
If there were any added value of inhibitory control to the system described here, one might expect superior performance in bilinguals relative to monolinguals on congruent and incongruent trials, but critically, one would expect that the advantage would be greater on incongruent relative to congruent trials. It is not. These advantages, to reiterate, are often similar. This view is only made possible by believing that those brain regions apt to deal exclusively with incongruent trials in the nonlinguistic interference task have developed because of some role that they play managing two languages. This does not imply superior inhibitory control; it implies a multifaceted system for conflict resolution in bilinguals that does not appear to possess any superior inhibitory power relative to the monolingual system (because the interference effect for monolinguals equals the interference effect for bilinguals). Simply, it is a system that can be called upon (by some cognitive control system) to perform extralinguistic conflict tasks in bilinguals. The global advantage is made possible by a division of labor between two largely independent systems presided over by some general cognitive control system—perhaps the conflict-monitoring system.
But one could see that frequent exposure to environments emphasizing music, video game play, computer use, or other tasks in which an elevated demand for conflict resolution was required might configure this pathway to behave in a more domain-general fashion.
In the number Stroop task, observers must count the number of stimuli appearing along an imaginary, horizontally positioned line. All stimuli on the line represent the same numeral (e.g., 3 3 3 or 1 1). The stimulus numeral is the irrelevant dimension. RTs are slower when the number of stimuli is incongruent with the stimulus numeral (incongruent – congruent). On neutral trials, identical letter stimuli instead of number stimuli are oriented horizontally.
Of course, while we allow that multilanguage use likely leads to a separate, possibly inhibitory, system to process incongruent inputs, we disagree that the advantage arises because of superior inhibitory control on incongruent trials or, for that matter, that these advantages reflect superior inhibitory control at all.
Author Note
The writing of this paper was made possible by a Natural Sciences and Engineering Research Council (NSERC) Discovery grant to Raymond Klein and an NSERC postgraduate fellowship to Matthew D. Hilchey. Special thanks to Albert Costa, Jared Linck, and an anonymous reviewer for their constructive feedback in the review process. Correspondence regarding this article can be addressed to any of the authors: matt.hilchey@dal.ca, ray.klein@dal.ca. | http://link.springer.com/article/10.3758%2Fs13423-011-0116-7?LI=true | CC-MAIN-2017-09 | en | refinedweb |
Creating Excel Files with Python and Django
Downloading Excel from Django
Now that you can create Excel-compatible files with Python, how do you allow them to be downloaded with Django? It's pretty straightforward, and shows us how to generate and provide just about any kind of binary file. If you can create it, your users can download it.
The secret is in the 'content-type' HTTP header. When your browser requests a file from a server, the transaction looks like this:
GET /wp-content/uploads/2007/10/cropped-dsc_0020.jpg HTTP/1.1
Host: weblog.lonelylion.com
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.11) Gecko/20071204 Ubuntu/7.10 (gutsy) Firefox/2.0.0.11
Accept: image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer:

HTTP/1.x 200 OK
Date: Sat, 02 Feb 2008 17:53:58 GMT
Server: Apache/1.3.37 (Unix) mod_throttle/3.1.2 DAV/1.0.3 mod_fastcgi/2.4.2 mod_gzip/1.3.26.1a PHP/4.4.8 mod_ssl/2.8.22 OpenSSL/0.9.7e
Last-Modified: Thu, 01 Nov 2007 03:22:12 GMT
Etag: "798f213-5c88-47294664"
Accept-Ranges: bytes
Content-Length: 23688
Keep-Alive: timeout=2, max=97
Connection: Keep-Alive
Content-Type: image/jpeg
The above example is a jpeg image, which is why the Content-Type is image/jpeg. If you change that header to 'application/ms-excel', the file will be opened with Excel. Changing header types in Django is very easy; here is an example of a Django view that sends the browser a csv file and tells the browser to open it in Excel:
import csv
from StringIO import StringIO
from django.http import HttpResponse

def show_excel(request):
    # use a StringIO buffer rather than opening a file
    output = StringIO()
    w = csv.writer(output)
    for i in range(10):
        w.writerow(range(10))
    # rewind the virtual file
    output.seek(0)
    return HttpResponse(output.read(), mimetype='application/ms-excel')
I made a few modifications to your original csv example; it's using a file-like StringIO object rather than actually opening a file, and it's wrapping the response in the Django HttpResponse object, which is the standard return type of a Django view. You're also setting the Content-Type header to 'application/ms-excel' by passing HttpResponse the mimetype option.
The above pattern lets you return just about any kind of binary data through the web browser. You can use libraries that generate PDFs, images, sound files, video, whatever...as long as you know the appropriate Content-Type header and can generate the binary data.
Now, apply your technique to outputting an actual Excel file generated by pyExcelerator:
from pyExcelerator import *
from django.http import HttpResponse

def show_excel(request):
    wb = Workbook()
    ws0 = wb.add_sheet('0')
    for x in range(10):
        for y in range(10):
            # writing to a specific x,y
            ws0.write(x, y, "this is cell %s, %s" % (x, y))
    wb.save('output.xls')
    return HttpResponse(open('output.xls', 'r').read(), mimetype='application/ms-excel')
Again, it's pretty similar to the previous example of creating an Excel file with pyExcelerator. There's one bit that isn't ideal, though—creating a temporary file to write the data to, then opening it and reading from it. This could cause problems on a high traffic site. The file could become corrupted as one user hits it while another is reading from it. You could attach time stamps to the file name to help with this, but really it would be ideal if you could pass the Workbook save method a file-like object to save to. It isn't currently an option in pyExcelerator, but I'm sure a patch could be offered.
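As a stopgap until such a patch lands, you could write each response to a per-request temporary file so concurrent users never share a path. Here's a sketch of that idea (illustrative only, using Python's tempfile module):

import os, tempfile
from pyExcelerator import *
from django.http import HttpResponse

def show_excel(request):
    wb = Workbook()
    ws0 = wb.add_sheet('0')
    ws0.write(0, 0, "hello")
    # a unique temp file per request avoids clashes between users
    fd, path = tempfile.mkstemp(suffix='.xls')
    os.close(fd)
    try:
        wb.save(path)
        data = open(path, 'rb').read()
    finally:
        os.remove(path)
    return HttpResponse(data, mimetype='application/ms-excel')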
More often than not, I just send the browser a csv file with the Content-Type set to 'application/ms-excel'. The use of StringIO makes for a clean implementation, and most end users don't mind the bit of oddness because the amount of time it takes to build a downloadable spreadsheet is typically less than five minutes. It keeps users happy, and lets them play with their data very quickly. The more they play with it, the better able they'll be to request HTML views of it that they'll actually use. Everyone wins.
About the Author
Chris McAvoy is a developer for PSC Group LLC in Chicago, Illinois. He specializes in Python and Ruby web applications. He also keeps a blog at.
| http://www.developer.com/tech/article.php/10923_3727616_2/Creating-Excel-Files-with-Python-and-Django.htm | CC-MAIN-2017-09 | en | refinedweb |
Technical Market Report
The good news is:
• The Dow Jones Industrial Average (DJIA) closed at a new all time high for 3 consecutive days last week.
Short Term
The market is overbought.
Bollinger bands are typically drawn 2 standard deviations either side of a 20 day simple moving average. An index is considered overbought when it penetrates the upper band and oversold when it penetrates the lower band. The charts that follow show several indices with Bollinger bands, all of the indices penetrated the upper band last week.
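In code, the bands are just a rolling mean plus and minus two rolling standard deviations. The following Python sketch is illustrative (the closes list of closing prices is an assumption):

def bollinger(closes, n=20, k=2.0):
    # middle band: n-day simple moving average of closing prices
    window = closes[-n:]
    mid = sum(window) / float(n)
    # k standard deviations either side of the average
    sd = (sum((c - mid) ** 2 for c in window) / float(n)) ** 0.5
    return mid - k * sd, mid, mid + k * sd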
Wilshire 5000 with Bollinger bands.
NASDAQ composite (OTC) with Bollinger bands.
The S&P mid cap index, which has been one of the worst performing indices in the past few months, poked through its upper Bollinger band last week.
In a strong market indices can stay near the upper Bollinger band for quite a while, however, more often than not the band represents short term resistance.
Intermediate term
Small capitalization issues lead both up and down.
The chart below shows several indices including the large cap DJIA and S&P 500 (SPX), the small cap Russell 2000 (R2K) and the tech heavy OTC on semi log scales (Y axis only). The large cap indices (DJIA, SPX) hit their lows in mid June while the small cap indices (OTC, R2K) hit their lows in mid July. The legend shows the range of values in percentage terms from the starting day, maximum drawdown (MDD) and annualized return (CAR). The chart covers the period from mid June to include the lows for the large cap indices; 82 trading days.
All of the indices have had nearly identical returns over the period.
The next chart is similar to the one above covering 82 trading days, but it begins at the August 2003 low. The OTC and R2K rose at nearly double the rate of the DJIA and SPX.
The next chart is similar to the one above but it begins at the August 2004 low. Again the OTC and R2K rose about twice as fast as the DJIA or SPX.
You can argue that the secondaries do not lead, but just have higher betas.
However you define it, the first chart presents an unusual picture because the secondaries have not been producing significantly higher betas (or leading) in the rally of the past few months.
Back in April there was deterioration in the breadth indicators that led me to think a decline was imminent. It was frustrating waiting for that decline to finally commence in mid May. Many of the breadth indicators are demonstrating patterns similar to last April.
The chart below covers the past year showing the OTC in pink and an indicator that is momentum of a ratio of NASDAQ advancing issues to advancing issues + declining issues. There are grey vertical dashed lines drawn on the first trading day of each month and I have marked April on the chart.
The current pattern of the indicator rising to barely reach the neutral line as the index made a new cycle high is similar to pattern it traced out last April. The OTC made its high for the year in April, but some of the other indices including the R2K made their highs around mid May.
Seasonality
Next week is the 2nd week of October during the 2nd year of the Presidential Cycle.
The tables below show daily returns for the week prior to the 2nd Friday of October during the 2nd year of the Presidential Cycle. OTC data covers the period from 1966 - 2002 and SPX data from 1953 - 2002 during the 2nd year of the Presidential Cycle. The market traded 6 days a week prior to 1953 so data prior to 1953 has been omitted. There are summaries for both the 2nd year of the Presidential Cycle and all years combined beginning with 1963 for the OTC and 1953 for the SPX.
The coming week has had a very strong positive bias. The OTC has been up an unusually strong 80% of the time during the 2nd year of the Presidential Cycle. The SPX has been up an impressive 69% of the time. For both indices, the gains over all years combined, although positive, have not been as impressive.
2nd Friday Summary for OTC
Report for the week before the 2nd Friday during Oct
The number following the year is the position in the presidential cycle.
Daily returns from Monday to 2nd Friday.
Conclusion
The market is overbought and has not been following the seasonal pattern for the past few weeks.
I expect the major indices to be lower on Friday October 13 than they were on Friday October 6.
This report is free to anyone who wants it, so please tell your friends. They can sign up at:. If it is not for you, reply with REMOVE in the subject line.
My negative forecast last week based on deteriorating indicators and negative seasonality was a miss.
Gordon Harms produces a Power point (PPT) for our local timing group. You can get a copy of PPT at:.
| http://www.safehaven.com/article/6039/technical-market-report | CC-MAIN-2017-09 | en | refinedweb |
Preface
Chapter 1: Apache Web Services and Axis2
Chapter 2: Looking inside Axis2
Chapter 3: Axis 2 XML Model (AXIOM)
Chapter 4: Execution Chain
Chapter 5: Deployment Model
Chapter 6: Information Model
Chapter 7: Writing an Axis2 Service
Chapter 8: Writing an Axis2 Module
Chapter 9: The Client API
Chapter 10: Session Management
Chapter 11: Developing JAX-WS Web Services
Chapter 12: Axis2 Clustering
Chapter 13: Enterprise Integration Patterns
Chapter 14: Axis2 Advanced Features and Usage
Chapter 15: Building a Secure Reliable Web Service
Index
- Working with AXIOM
- Creating Axiom
- Creating Axiom from an input stream
- Creating Axiom using a string
- Creating Axiom programmatically
- Adding child nodes and attributes
- Working with OM namespaces
- Working with attribute
- Traversing the Axiom tree
- Serialization
- Advanced operations with Axiom
- 11: Developing JAX-WS Web Services
- Writing a simple JAX-WS web service
- JAX-WS annotations
- JSR 181 (Web Service Metadata) annotations
- javax.jws.WebService
- javax.jws.WebMethod
- javax.jws.OneWay
- javax.jws.WebParam
- name
- targetNamespace
- mode
- header
- partName
- javax.jws.WebResult
- javax.jws.soap.SOAPBinding
- JSR 224 (JAX-WS) annotations
- javax.xml.ws.BindingType
- javax.xml.ws.RequestWrapper and javax.xml.ws.ResponseWrapper
- javax.xml.ws.ServiceMode
- javax.xml.ws.WebEndpoint
- javax.xml.ws.WebFault
- javax.xml.ws.WebServiceClient
- javax.xml.ws.WebServiceProvider
- javax.xml.ws.WebServiceRef
- JSR 222 (JAXB) annotations
- 12: Axis2 Clustering
- Setting up a simple Axis2 cluster
- Writing a highly available clusterable web service
- Stateless Axis2 Web Services
- Setting up a failover cluster
- Increasing horizontal scalability
- Setting up and configuring Axis2 clusters in production
- Clustering agent
- Clustering agent parameters
In Detail
Web services are gaining popularity and have become one of the major techniques for application integration. Given the flexibility and advantages of using web services, you may want to enable web service support in your applications. This book is your gateway to learning all you need to know about the Apache Axis2 web service framework and its hands-on implementation.
Apache Axis2 Web Services, 2nd Edition is your comprehensive guide to implementing this incredibly powerful framework in practice. It gives you precisely what you need to know to develop a detailed practical understanding of this popular, modular and reliable web service framework.
This book starts with a short and relevant introduction about the Axis2 1.5 framework and then plunges you straight into its architectural model.
Learn to use and develop your own modules. Write a services.xml file so efficiently that you'll be creating more complex applications (rather than just POJOs) in no time.
Learn how straightforward it really is to turn a Java class into a web service in Axis2. Experiment with different types of sessions in Axis2. Learn different patterns of Enterprise deployment. Ensure reliability in your web service - a major concern in most enterprise applications - with minimum impact on performance.
This book will journey you through all this and more, giving you exactly what you need to learn Axis2 1.5 in the easiest way possible and create secure, reliable, and easy-to-use web services efficiently and systematically.
Approach
This is a step by step practical guide for developing web services using Apache Axis2. There are a lot of real-life examples, which makes this book an excellent practical guide.
Who this book is for
This book is for Java developers who are interested in building web services using Apache Axis2. Familiarity with web standards like SOAP, WSDL and XML parsing is assumed. | http://www.packtpub.com/apache-axis2-web-services/book?tag=rk/apaxis2-abr2/0211 | CC-MAIN-2014-15 | en | refinedweb |
How a file is set up determines whether locks on the file are treated as mandatory or advisory.
Of the two basic locking calls, fcntl(2) is more portable, more powerful, and less easy to use than lockf(3C). fcntl(2) is specified in POSIX 1003.1 standard. lockf(3C) is provided to be compatible with older applications.
For mandatory locks, the file must be a regular file with the set-group-ID bit on and the group execute permission off. If either condition fails, all record locks are advisory.
Set a mandatory lock as follows.
#include <sys/types.h>
#include <sys/stat.h>

int mode;
struct stat buf;
...
if (stat(filename, &buf) < 0) {
    perror("program");
    exit (2);
}
/* get currently set mode */
mode = buf.st_mode;
/* remove group execute permission from mode */
mode &= ~(S_IEXEC>>3);
/* set 'set group id bit' in mode */
mode |= S_ISGID;
if (chmod(filename, mode) < 0) {
    perror("program");
    exit(2);
}
...
The operating system ignores record locks when the system is executing a file. Any files with record locks should not have execute permissions set.
The chmod(1) command can also be used to set a file to permit mandatory locking.
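For example (the file name here is illustrative):

$ chmod +l mandatory-file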
This command sets the 020n0 permission bits in the file mode, which indicate mandatory locking on the file. If n is even, the bit is interpreted as enabling mandatory locking. If n is odd, the bit is interpreted as "set group ID on execution."
The ls(1) command shows this setting when you ask for the long listing format with the -l option, and displays the following information:
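For example (the owner, group, size, and date shown here are illustrative):

$ ls -l mandatory-file
-rw-r-lr--   1 user   group      1048 Jan  1 12:00 mandatory-file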
The letter “l” in the permissions indicates that the set-group-ID bit is on. Since the set-group-ID bit is on, mandatory locking is enabled. Normal semantics of set group ID are also enabled.
Keep in mind the following aspects of locking:
Mandatory locking works only for local files. Mandatory locking is not supported when accessing files through NFS.
Mandatory locking protects only the segments of a file that are locked. The remainder of the file can be accessed according to normal file permissions.
If multiple reads or writes are needed for an atomic transaction, the process should explicitly lock all such segments before any I/O begins. Advisory locks are sufficient for all programs that perform in this way.
Arbitrary programs should not have unrestricted access permission to files on which record locks are used.
Advisory locking is more efficient because a record lock check does not have to be performed for every I/O request.
#include <fcntl.h>
...
struct flock lck;
...
lck.l_type = F_WRLCK;   /* setting a write lock */
lck.l_whence = 0;       /* offset l_start from beginning of file */
lck.l_start = (off_t)0;
lck.l_len = (off_t)0;   /* until the end of the file */

if (fcntl(fd, F_SETLK, &lck) < 0) {
    if (errno == EAGAIN || errno == EACCES) {
        (void) fprintf(stderr, "File busy try again later!\n");
        return;
    }
    perror("fcntl");
    exit (2);
}
...
Using fcntl(2), you can set the type and start of the lock request by setting structure variables.
You cannot lock mapped files with flock(3UCB). However, you can use the multithread-oriented synchronization mechanisms with mapped files. These synchronization mechanisms can be used in POSIX styles as well as in Solaris styles.
When locking a record, do not set the starting point and length of the lock segment to zero. The locking procedure is otherwise identical to file locking.
Contention for data is why you use record locking. Therefore, you should have a failure response for when you cannot obtain all the required locks:
Wait a certain amount of time, then try again
Abort the procedure, warn the user
Let the process sleep until signaled that the lock has been freed
Do some combination of the previous
This example shows a record being locked by using fcntl(2).
{
    struct flock lck;
    ...
    lck.l_type = F_WRLCK;   /* setting a write lock */
    lck.l_whence = 0;       /* offset l_start from beginning of file */
    lck.l_start = here;
    lck.l_len = sizeof(struct record);

    /* lock "this" with write lock */
    lck.l_start = this;
    if (fcntl(fd, F_SETLKW, &lck) < 0) {
        /* "this" lock failed. */
        return (-1);
    }
    ...
}
The next example shows the lockf(3C) interface.
#include <unistd.h>
{
    ...
    /* lock "this" */
    (void) lseek(fd, this, SEEK_SET);
    if (lockf(fd, F_LOCK, sizeof(struct record)) < 0) {
        /* Lock on "this" failed. Clear lock on "here". */
        (void) lseek(fd, here, 0);
        (void) lockf(fd, F_ULOCK, sizeof(struct record));
        return (-1);
    }
}
You remove locks in the same way the locks were set. Only the lock type is different (F_ULOCK). An unlock cannot be blocked by another process and affects only locks placed by the calling process. The unlock affects only the segment of the file specified in the preceding locking call.
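A minimal sketch of the unlock step, using the same variables as the lockf(3C) example above (this sketch is illustrative):

(void) lseek(fd, this, SEEK_SET);
if (lockf(fd, F_ULOCK, sizeof(struct record)) < 0) {
    perror("program");   /* removing the lock failed */
}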
When a process forks, the child receives a copy of the file descriptors that the parent opened. Locks are not inherited by the child because the locks are owned by a specific process. The parent and child share a common file pointer for each file. Both processes can try to set locks on the same location in the same file. This problem occurs with both lockf(3C) and fcntl(2). If a program holding a record lock forks, the child process should close the file. After closing the file, the child process should reopen the file to set a new, separate file pointer. | http://docs.oracle.com/cd/E19253-01/817-4415/fileio-9/index.html | CC-MAIN-2017-09 | en | refinedweb |
std::basic_ostream::swap
From cppreference.com
Calls basic_ios::swap(rhs) to swap all data members of the base class, except for rdbuf(), between *this and
rhs. This swap function is protected: it is called by the swap functions of the swappable output stream classes std::basic_ofstream and std::basic_ostringstream, which know how to correctly swap the associated streambuffers.
Parameters
rhs - a basic_ostream of the same type to exchange the state with
Example
Run this code
#include <sstream>
#include <iostream>
#include <utility>

int main()
{
    std::ostringstream s1("hello");
    std::ostringstream s2("bye");
    s1.swap(s2);       // OK, ostringstream has a public swap()
    std::swap(s1, s2); // OK, calls s1.swap(s2)
//  std::cout.swap(s2); // ERROR: swap is a protected member
    std::cout << s1.str() << '\n';
}
Output:
hello | http://en.cppreference.com/mwiki/index.php?title=cpp/io/basic_ostream/swap&oldid=49683 | CC-MAIN-2014-15 | en | refinedweb |
For a code example illustrating the design pattern for implementing Finalize and Dispose, see Implementing a Dispose Method.
Occasionally a domain-specific name is more appropriate than Dispose. For example, a file encapsulation might want to use the method name Close. In this case, implement Dispose privately and create a public Close method that calls Dispose. The following code example illustrates this pattern. You can replace Close with a method name appropriate to your domain. This example requires the System namespace.
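A minimal sketch of that pattern (the class name and body here are illustrative, not the original sample):

using System;

public class FileWrapper : IDisposable
{
    // Public, domain-specific name that simply delegates to Dispose.
    public void Close()
    {
        ((IDisposable)this).Dispose();
    }

    // Dispose is kept off the public surface via explicit
    // interface implementation.
    void IDisposable.Dispose()
    {
        // release the encapsulated file resources here
        GC.SuppressFinalize(this);
    }
}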
The following rules outline the usage guidelines for the Finalize method:
Implement Finalize only on objects that require finalization. There are performance costs associated with Finalize methods.
If you require a Finalize method, it should free only the external resources that the object itself has held onto. The Finalize method should not reference any other objects.
Do not directly call a Finalize method on an object other than the object's base class. This is not a valid operation in the C# programming language.
Call the base class's Finalize method from an object's Finalize method. | http://msdn.microsoft.com/en-us/library/b1yfkh5e(VS.85).aspx | CC-MAIN-2014-15 | en | refinedweb |
csRefArray< T, Allocator > Class Template Reference
An array of smart pointers.
#include <csutil/refarr.h>
Inheritance diagram for csRefArray< T, Allocator >:
Detailed Description
template<class T, class Allocator = CS::Memory::AllocatorMalloc>
An array of smart pointers.
class csRefArray< T, Allocator >
- Warning:
- Get(), GetExtend() and operator[] are unsafe for element manipulations, as they will return references to pointers and not proper csRef<> objects - assigning a pointer will circumvent reference counting and cause unexpected problems. Use Put() to manipulate elements of the array.
Definition at line 92 of file refarr.h.
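A brief usage sketch (the element interface iBase and the factory call are assumptions, not part of this reference):

#include <csutil/refarr.h>

csRefArray<iBase> items;

csRef<iBase> obj = MakeObject();   // hypothetical factory returning a csRef
items.Push(obj);                   // the array keeps its own reference

// Assign elements with Put() so reference counts stay correct;
// operator[] returns a reference to the raw pointer and would bypass them.
items.Put(0, obj);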
Member Function Documentation
template<class T, class Allocator = CS::Memory::AllocatorMalloc>
Pop an element from tail end of array.
Reimplemented from csArray< T *, csRefArrayElementHandler< T * >, Allocator >.
Definition at line 105 of file refarr.h.
The documentation for this class was generated from the following file: csutil/refarr.h
| http://www.crystalspace3d.org/docs/online/api-1.0/classcsRefArray.html | CC-MAIN-2014-15 | en | refinedweb |
About
(Note: for technical information regarding the OpenTK library, refer to the Project section).
This is a short recount of the project history from the perspective of Stefanos 'thefiddler' A., main developer of OpenTK.
I conceived OpenTK around December 2005, while perusing the Tao Framework as an avenue to learn C#. Without much knowledge about C# or .Net, I was soon annoyed with some of Tao's (perceived) shortcomings, namely the namespace redundancy (Tao.OpenGl.Gl.gl*, the 'gl' part is repeated three times!) and extension loading mechanism.
After a brief period, I started exploring the idea of creating my own OpenGL bindings. It took a while, but by February 2006 I had the first working prototype. No extension loading and many problems, but at least it worked! I was also becoming more fluent in C# (even though I would be really ashamed to show the code of these programs to anyone, now :))
I continued rewriting and improving the generator, and by the summer of 2006, I had working extension loading and OpenGL enumerant support (a first for any library, to the best of my knowledge). At that point, the developers of the Tao Framework were looking into ways to change the inflexible extension loading mechanism used in Tao 1.3, so I offered to adapt my code. By October 2006, the new code could support the Tao examples and it was checked into the Tao repository. At the same time, the OpenTK project was registered at Sourceforge: it would contain the alternative (non Tao-compatible) bindings, as well as code for Context creation and other convenient functions.
Throughout the next months, OpenTK was ported to Linux by the author of the AgateLib, while the OpenGL bindings continued to improve. The Tao 2.0.0 release at the beginning of 2007 would be the biggest challenge - but the bindings fortunately worked. During the next few months I was devoted to fixing the compatibility problems with older Tao projects, and the latest Tao.OpenGl beta is now compatible with almost all legacy Tao projects.
Starting from May 2007 till now, OpenTK has grown exponentially. New, extremely useful features were introduced to the OpenGL bindings (function overloads, separate namespaces per extension category), and the codebase matured. At the same time, preparations were made for OpenGL 3.0 - with a bit of luck, OpenTK and Tao (which now share code) will support the new standard as soon as it is released.
As of 2009, the Tao framework is no longer being developed. Its OpenGL and OpenAL bindings have been incorporated into OpenTK to simplify porting of Tao applications.
OpenTK 1.0 was released in October 2010.
Complete OpenGL 4.1 support has been available since December 2010.
Complete OpenGL 4.4, OpenGL ES 3.0 and a fully-featured SDL2 backend were added in September 2014. | http://www.opentk.com/about | CC-MAIN-2014-15 | en | refinedweb |
Review Sheet for Midterm 1
This review sheet is not complete. All material covered in class or in reading assignments through the end of class on Monday, Oct 1, will be covered on the test.
1. Write a line of code that creates an object that can be used to write text to the file "numbers.dat".
PrintStream outStr = new PrintStream(new File("numbers.dat"));
2. What is a checked exception? How do we advertise that our code might throw an exception (if we don't intend to handle it)?
A checked exception is one that we must handle in some way, or advertise that our code might throw, in order for our program to compile. We advertise that a method might throw an exception by adding "throws Exception" to the header of the method.
3. Write a Java program that reads 100 integers and prints the min, max and average to the screen.
import java.util.*;
public class Numbers
{
public static void main(String[] args)
{
// set up for input
Scanner stdin = new Scanner(System.in);
System.out.print("Please enter 100 ints: ");
int min = stdin.nextInt(); // set min to first num read
int max = min;
int sum = min;
// read the remaining 99 values
for(int i = 0; i < 99; i++)
{
int num = stdin.nextInt();
if(num < min) min = num;
if(num > max) max = num;
sum += num;
}
System.out.println("Max: " + max);
System.out.println("Min: " + min);
System.out.println("Avg: " + sum/100.0);
}
}
4. Write a program that takes a sentence from the command line (ie, 0 or more command line arguments) and prints the reversed sentence to the screen.
This is NOT a complete program.
public static void main(String[] args)
{
for(int i = args.length-1; i >=0; i--)
System.out.print(args[i] + " ");
}
5. Write a program that reads an input file "test.txt" and prints each floating-point token to the screen.
We did a similar example in class.
6. Write a class that represents a Point in 2-dimensional space. The class should store the point's x and y coordinates. Include methods that:
compute and return the point's distance from the origin (0,0)
getter methods
setter methods
override the toString(), clone() and equals() methods from the Object class
constructor that takes the point's x and y coordinates
constructor that creates a point that represents the origin
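A possible solution sketch (one reasonable implementation; details such as field types may vary):

public class Point implements Cloneable
{
    private double x;
    private double y;

    // constructor that creates the origin
    public Point() { this(0, 0); }

    // constructor that takes the point's x and y coordinates
    public Point(double x, double y) { this.x = x; this.y = y; }

    // getters and setters
    public double getX() { return x; }
    public double getY() { return y; }
    public void setX(double x) { this.x = x; }
    public void setY(double y) { this.y = y; }

    // distance from the origin (0,0)
    public double distanceFromOrigin() { return Math.sqrt(x * x + y * y); }

    public String toString() { return "(" + x + ", " + y + ")"; }

    public Object clone() { return new Point(x, y); }

    public boolean equals(Object other)
    {
        if (!(other instanceof Point)) return false;
        Point p = (Point) other;
        return x == p.x && y == p.y;
    }
}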
8. Write a Java program that reads the name of a file from the user, and prints the number of lines in the file that contain the word "happy".
import java.util.*;
import java.io.*;
public class Happy
{
public static void main(String[] args) throws IOException
{
Scanner scan = new Scanner(System.in);
System.out.print("What is the file name? ");
String file = scan.next();
// create scanner associated with file
Scanner fileRead = new Scanner(new File(file));
int num = 0; // number of lines containing "happy"
while(fileRead.hasNextLine())
{
String line = fileRead.nextLine();
if(line.indexOf("happy") >= 0) num++;
}
System.out.println(num + " lines contain the word happy");
}
}
9. Declare and initialize a constant that represents the median price of a house in Austin.
public static final int HOUSE_PRICE = 200000;
10. Give an example of an explicit cast, and explain when it is necessary to use an explicit cast.
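One possible answer:

double d = 9.99;
int i = (int) d; // explicit cast: narrowing double to int

An explicit cast is necessary when converting a value to a "narrower" type (e.g., double to int), because information may be lost and the compiler requires you to acknowledge it.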
11. Explain the difference between comparing 2 strings with == and the equals() method. Does it make sense to compare two floating-point values with ==?
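One possible answer, with a short example:

String a = new String("hi");
String b = new String("hi");
System.out.println(a == b);      // false: == compares object references
System.out.println(a.equals(b)); // true: equals() compares the characters

Comparing two floating-point values with == rarely makes sense, because rounding error can make mathematically equal results differ slightly; compare Math.abs(x - y) against a small tolerance instead.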
12. For your class Point in #6, write a subclass called XAxisPoint that represents points with y-coordinate equal to 0.
13.)
14. Write Java code that prints the characters in a String
String words = ...; // assume words has been initialized
to the screen, one character per line.
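A possible solution:

for (int i = 0; i < words.length(); i++)
    System.out.println(words.charAt(i));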
15. Review the online notes we've covered in class and re-work the examples.
16. Give an example illustrating the use of the canRead() method in the File class. | http://www.cs.utexas.edu/~eberlein/cs313e/midterm1ReviewSols.html | CC-MAIN-2014-15 | en | refinedweb |
ripple
ripple is a rich Ruby toolkit for Riak, Basho’s distributed database. It consists of three gems:
riak-client (Riak namespace) contains a basic wrapper around typical operations, including bucket manipulation, object CRUD, link-walking, and map-reduce.
ripple (Ripple namespace) contains an ActiveModel-compatible modeling layer that is inspired by ActiveRecord, DataMapper, and MongoMapper.
riak-sessions contains session stores for Rack and Rails 3 applications.
Dependencies
riak-client requires i18n and either json or yajl-ruby. For higher performance on HTTP requests, install the 'curb’ or 'excon’ gems. The cache store implementation requires ActiveSupport 3 or later.
ripple requires Ruby 1.8.7 or later and versions 3 or above of ActiveModel and ActiveSupport (and their dependencies, including i18n).
riak-sessions requires Rack (any version > 1.0), and Rails 3.0 if you want the Rails-specific session store.
Development dependencies are handled with bundler. Install bundler (
gem install bundler) and run this command in each sub-project to get started:
<notextile><pre>$ bundle install</pre></notextile>
Run the RSpec suite using
bundle exec:
<notextile><pre>$ bundle exec rake spec</pre></notextile>
Basic Example
<notextile><pre>require 'riak'

# Create a client interface
client = Riak::Client.new

# Create a client interface that uses Excon
client = Riak::Client.new(:http_backend => :Excon)

# Retrieve a bucket
bucket = client.bucket("doc")  # a Riak::Bucket

# Get an object from the bucket
object = bucket.get("index.html")  # a Riak::RObject

# Change the object's data and save
object.data = "<html><body>Hello, world!</body></html>"
object.store

# Create and store a new object
new_one = Riak::RObject.new(bucket, "application.js")
new_one.data = "alert('Hello, World!')"
new_one.store</pre></notextile>
Map-Reduce Example
<notextile><pre>
# Assuming you've already instantiated a client, get the album titles for The Beatles
results = Riak::MapReduce.new(client).
            add("artists", "Beatles").
            link(:bucket => "albums").
            map("function(v){ return [JSON.parse(v.values[0].data).title]; }", :keep => true).run

p results # => ["Please Please Me", "With The Beatles", "A Hard Day's Night",
          #     "Beatles For Sale", "Help!", "Rubber Soul",
          #     "Revolver", "Sgt. Pepper's Lonely Hearts Club Band", "Magical Mystery Tour",
          #     "The Beatles", "Yellow Submarine", "Abbey Road", "Let It Be"]</pre></notextile>
Riak Search Examples
For more information about Riak Search, see the Basho wiki.
<notextile><pre>
require 'riak/search'  # optional riak_search additions

# Create a client, specifying the Solr-compatible endpoint
client = Riak::Client.new :solr => "/solr"

# Search the default index for documents
result = client.search("title:Yesterday")  # Returns a vivified JSON object
                                           # containing 'responseHeaders' and 'response' keys
result['response']['numFound']  # total number of results
result['response']['start']     # offset into the total result set
result['response']['docs']      # the list of indexed documents

# Search the 'users' index for documents
client.search("users", "name:Sean")

# Add a document to an index
client.index("users", {:id => "sean@basho.com", :name => "Sean Cribbs"})  # adds to the 'users' index
client.index({:id => "index.html", :content => "Hello, world!"})          # adds to the default index
client.index({:id => 1, :name => "one"}, {:id => 2, :name => "two"})      # adds multiple docs

# Remove document(s) from an index
client.remove({:id => 1})              # removes the document with ID 1
client.remove({:query => "archived"})  # removes all documents matching query
client.remove({:id => 1}, {:id => 5})  # removes multiple docs
client.remove("users", {:id => "sean@basho.com"})  # removes from the 'users' index

# Seed MapReduce with search results
Riak::MapReduce.new(client).
  search("users", "email:basho").
  map("Riak.mapValuesJson", :keep => true).
  run

# Detect whether a bucket has auto-indexing
client['users'].is_indexed?

# Enable auto-indexing on a bucket
client['users'].enable_index!

# Disable auto-indexing on a bucket
client['users'].disable_index!
</pre></notextile>
Document Model Examples
<notextile><pre>
require 'ripple'

# Documents are stored as JSON objects in Riak but have rich
# semantics, including validations and associations.
class Email
  include Ripple::Document
  property :from, String, :presence => true
  property :to,   String, :presence => true
  property :sent, Time,   :default => proc { Time.now }
  property :body, String
end

reply = Email.new
reply.from = "justin@bashoooo.com"
reply.to = "sean@geeemail.com"
reply.body = "Riak is a good fit for scalable Ruby apps."
reply.save # POST /riak/emails (Riak-assigned key)

# Documents can contain embedded documents, and link to other standalone documents
# via associations using the many and one class methods.
class Person
  include Ripple::Document
  property :name, String
  many :addresses
  many :friends, :class_name => "Person"
  one :account
end

# Account and Address are embeddable documents
class Account
  include Ripple::EmbeddedDocument
  property :paid_until, Time
  embedded_in :person # Adds "person" method to get parent document
end

class Address
  include Ripple::EmbeddedDocument
  property :street, String
  property :city,   String
  property :state,  String
  property :zip,    String
end

person = Person.find("adamhunter")
person.friends << Person.find("seancribbs") # Links to people/seancribbs with tag "friend"
person.addresses << Address.new(:street => "100 Main Street") # Adds an embedded address
person.account.paid_until = 3.months.from_now
</pre></notextile>
Configuration Example
When using Ripple with Rails 3,
config/ripple.yml should contain your Riak connection information:
<notextile><pre>
development:
  port: 8098
  host: localhost
production:
port: 8098
host: production.load-balancer.example.com
</pre></notextile>
Require 'ripple/railtie' at the top of your config/application.rb file to turn this on.
How to Contribute
- Fork the project on Github. If you have already forked, use
git pull --rebase to reapply your changes on top of the mainline. Example:
- Create a topic branch. If you’ve already created a topic branch, rebase it on top of changes from the mainline “master” branch. Examples:
- New branch:
- Existing branch:
- Clean up trailing whitespace (delete-trailing-whitespace in Emacs, or "Remove Trailing Spaces in Document" in the "Text" bundle in Textmate).
- Commit, do not mess with Rakefile or VERSION. If related to an existing issue in the tracker, include “Closes #X” in the commit message (where X is the issue number).
- Send me a pull request.
The “Poor Man’s Fibers” implementation (lib/riak/util/fiber1.8.rb) is Copyright ©2008 Aman Gupta. | https://bitbucket.org/basho/riak-ruby-client | CC-MAIN-2014-15 | en | refinedweb |
QGeoBoundingCircle
Since: 1.1
#include <QtLocationSubset/QGeoBoundingCircle>
The QGeoBoundingCircle class defines a circular geographic area.
The circle is defined in terms of a QGeoCoordinate which specifies the center of the circle and a qreal which specifies the radius of the circle in metres.
The circle is considered invalid if the center coordinate is invalid or if the radius is less than zero.
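A brief usage sketch (the namespace, coordinates, and variable names below are assumptions for illustration):

#include <QtLocationSubset/QGeoBoundingCircle>
#include <QtLocationSubset/QGeoCoordinate>

using namespace QtMobilitySubset;

// A circle of radius 5 km around an arbitrary center point
QGeoCoordinate center(43.46, -80.52);
QGeoBoundingCircle area(center, 5000.0);   // radius in metres

bool valid  = area.isValid();                               // true
bool inside = area.contains(QGeoCoordinate(43.47, -80.52)); // membership test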
Overview
Inheritance
Public Types Index
Only has inherited public types
Public Functions Index
Public Types
(Only has inherited public types)
Describes the type of a bounding area.
BoxType A box shaped bounding area.
CircleType A circular bounding area.
Public Functions
Constructs a new, invalid bounding circle.
Constructs a new bounding circle centered at center and with a radius of radius metres.
Constructs a new bounding circle from the contents of other.
Destructor.
QGeoCoordinate
Returns the center coordinate of this bounding circle.
virtual bool
Returns whether the coordinate coordinate is contained within this bounding circle.
virtual bool
Returns whether this bounding circle has a geometrical area of zero.
Returns true if this bounding circle is invalid.
virtual bool
Returns whether this bounding circle is valid.
A valid bounding circle has a valid center coordinate and a radius greater than or equal to zero.
bool
Returns whether this bounding circle is not equal to other.
QGeoBoundingCircle &
Assigns other to this bounding circle and returns a reference to this bounding circle.
bool
Returns whether this bounding circle is equal to other.
qreal
Returns the radius in meters of this bounding circle.
void
Sets the center coordinate of this bounding circle to center.
void
Sets the radius in meters of this bounding circle to radius.
void
Translates this bounding circle by degreesLatitude northwards and degreesLongitude eastwards.
Negative values of degreesLatitude and degreesLongitude correspond to southward and westward translation respectively.
QGeoBoundingCircle
Returns a copy of this bounding circle translated by degreesLatitude northwards and degreesLongitude eastwards.
Negative values of degreesLatitude and degreesLongitude correspond to southward and westward translation respectively.
virtual QGeoBoundingArea::AreaType
Returns QGeoBoundingArea::CircleType to identify this as a QGeoBoundingCircle instance.
This function is provided to help find the specific type of a QGeoBoundingArea instance. | http://developer.blackberry.com/native/reference/cascades/qtmobilitysubset__qgeoboundingcircle.html | CC-MAIN-2014-15 | en | refinedweb |
Hello!
I have generated a MED field using MEDCoupling and would like to visualize it in VISU. I do it in this way:
First, Generate the field using ParaMEDMEM::MEDCouplingFieldDouble pointer;
Second, create an instance of ParaMEDMEM::MEDCouplingFieldDoubleServant using this field;
Third, pass out the field using SALOME_MED::MEDCouplingFieldDoubleCorbaInterface_ptr;
But for visualizing it in VISU, its method ImportMedField(SALOME_MED::Field) requires a SALOME_MED::Field as a parameter.
Is there any possible way to make this conversion??
Any answer or even hints will be highly appreciated!
Best regards,
Yuefeng Qiu
Hello,
No it is not possible with visu but it is possible with PARAVIS. Using Para MEDCorbaPlugin in Sources Menu (below Mandelbrot) of PARAVIS.
Just copy-paste the IOR ref given by:
from MEDCouplingCorba import *
fc=MEDCouplingFieldDoubleServant._this(myField)
import CORBA
orb=CORBA.ORB_init()
print orb.object_to_string(fc)
Anthony
Thanks, Anthony!
I would like to invoke PARAVIS from my C++ code, pass the IOR of the MEDCoupling field to it and then visualize. Now I understand how to obtain the IOR in MEDCoupling side.
Could you tell me how to write code to invoke PARAVIS, then how to call this plugin, then which method in this plugin to receive this IOR?
Yuefeng
Hello Yuefeng,
Before launching your application set the OMNIORB_CONFIG to those of your SALOME/PARAVIS session you want.
$OMNIORB_CONFIG should point to a file that describes the workstation and the port (typically 2810).
Then invoke paravis like the next python script using "ExecuteScript" of PARAVIS salome component.
#####
ref_f=MEDCouplingFieldDoubleServant._this(f)
import CORBA
orb = CORBA.ORB_init()
ior = orb.object_to_string(ref_f)

import PARAVIS_Gen_idl
import salome
salome.salome_init()
paravis = salome.lcc.FindOrLoadComponent("FactoryServer", "PARAVIS")

script = """src1 = ParaMEDCorbaPluginSource()
src1.IORCorba = '%s'
asc = GetAnimationScene()
rw = GetRenderView()
dr = Show()
dr.Visibility = 1
Render()"""
content = script % (ior)
paravis.ExecuteScript(content)
#############
The C++ code here : MED_SRC/src/MEDCalculator/MEDCalculatorDBField.cxx performs the job of python code above.
Best Regards,
Check that the "paravis" variable is not null with is_nil. There's a great chance it will be a null ref !
You have to activate PARAVIS component before invoking your external process !
You have to set OMNIORB_CONFIG correctly to the salome session in which PARAVIS has been activated !
You are right, Anthony. After loading the PARAVIS first and then call it, I can now visualize my MEDCouplingField in PARAVIS.
A further question: Can I use PARAVIS Engine to finish this job? I think it's convenient if I can use an uniform way to call another component in SALOME. The code above is not so straightforward for me.
Sorry I don't understand very well your question. What job do you mean ? data processing ?
It's clear that the standard way to communicate meshes and fields between components' sequential engines (servers) is by the use of the CORBA interfaces MEDCoupling*CorbaInterfaces, like you did successfully in your C++ code. Your code requires more CORBA knowledge because the invocation of the remote service (ExecuteScript here) is done dynamically (it is needed for the MED component), but you can perform the same thing using standard CORBA invocation because you are not expected to manage dependencies with PARAVIS as MED does.
Best regards
Hello Anthony,
Sorry for making the question unclear. I am a novice in CORBA programming so sometimes I feel a little bit lost.
I know in VISU we can visualize a SALOME_MED::Field in this way:
Engines::EngineComponent_var comp = app->lcc()->FindOrLoad_Component("FactoryServer", "VISU" );
VISU::VISU_Gen_var aVISUGen = VISU::VISU_Gen::_narrow(comp);
ASSERT( !CORBA::is_nil( aVISUGen ) );
aVISUGen->ImportMedField(aMEDField);
My question is: can I also call the PARAVIS and the plugin in this way?
Engines::EngineComponent_var comp = app->lcc()->FindOrLoad_Component("FactoryServer", "VISU" );
Yes absolutely ! Replace VISU by PARAVIS in your code. The code becomes :
Engines::EngineComponent_var comp = app->lcc()->FindOrLoad_Component("FactoryServer", "PARAVIS" );
PARAVIS::PARAVIS_Gen_var aPVGen = PARAVIS::PARAVIS_Gen::_narrow(comp);
ASSERT( !CORBA::is_nil( aPVGen ) );
aPVGen->ExecuteScript(...)
It is exactly equivalent to the code with "req->invoke()" you wrote. Except that here you have a static dependency on PARAVIS_Gen.idl.
Thanks Anthony!
That's exactly what I like!
| http://www.salome-platform.org/forum/forum_12/842743364 | CC-MAIN-2014-15 | en | refinedweb |
Architecture
The first and most intuitive way to implement a multi-threaded
server is to follow the thread-per-connection
approach. This is the traditional pre-Java-1.4 solution, caused by
the lack of non-blocking I/O support in older Java versions. The
thread-per-connection approach uses an exclusive worker thread for
each connection. Within the handling loop, a worker thread waits
for new incoming data, processes the request, returns the response
data, and calls the blocking socket's
read method
again.
public class Server {
   private ExecutorService executors = Executors.newFixedThreadPool(10);
   private boolean isRunning = true;

   public static void main(String... args) throws ... {
      new Server().launch(Integer.parseInt(args[0]));
   }

   public void launch(int port) throws ... {
      ServerSocket sso = new ServerSocket(port);
      while (isRunning) {
         Socket s = sso.accept();
         executors.execute(new Worker(s));
      }
   }

   private class Worker implements Runnable {
      private LineNumberReader in = null;
      ...

      Worker(Socket s) throws ... {
         in = new LineNumberReader(new InputStreamReader(...));
         out = ...
      }

      public void run() {
         while (isRunning) {
            try {
               // blocking read of a request (line)
               String request = in.readLine();

               // processing the request
               ...
               String response = ...

               // return the response
               out.write(response);
               out.flush();
            } catch (Exception e) {
               ...
            }
         }
         in.close();
         ...
      }
   }
}
There is always a one-to-one relationship between simultaneous
client connections and the number of concurrent worker threads.
Because each connection has an associated thread waiting on the
server side, very good response times can be achieved. However,
higher loads require a higher number of running, concurrent
threads, which limits scalability. In particular, long-living
connections like persistent HTTP connections lead to a lot of
concurrent worker threads, which tend to waste their time waiting
for new client requests. In addition, hundreds or even thousands of
concurrent threads can waste a great deal of stack space. Note, for
example, that the default
Java thread stack size for Solaris/Sparc is 512 KB.
If the server has to handle a high number of simultaneous
clients and tolerate slow, unresponsive clients, an alternative
threading architecture is needed. The thread-on-event
approach implements such requirements in a very efficient way. The
worker threads are independent from the connections and will only
be used to handle specific events. For instance, if a
data received event occurs, a worker thread will be used to
process the application-specific encoding and service tasks (or at
least to start them). Once this job is complete, the worker will be
returned to the thread pool. This approach requires performing the
socket I/O operations in a non-blocking manner. The socket's
read or
write method calls have to be
non-blocking. Additionally, an event system is required; it signals
if new data is available, which in turn initiates the socket
read call. This removes the one-to-one relationship
between waiting reads and taken threads. The design of such an
event-driven I/O system is described by the Reactor
pattern.
The Reactor Pattern
The Reactor pattern, illustrated in
Figure 1, separates the detection of events like readiness for
read or readiness for accepting and the processing of
these events. If a readiness event occurs, an event handler will be
notified to perform the appropriate processing within dedicated
worker threads.
Figure 1. A NIO-based Reactor pattern implementation
To participate in the event architecture, the connection's
Channel has to be registered on a
Selector. This will be done by calling the
register method. Although this method is part of the
SocketChannel, the channel will be registered on the
Selector, not the other way around.
...
SocketChannel channel = serverChannel.accept();
channel.configureBlocking(false);

// register the connection
SelectionKey sk = channel.register(selector, SelectionKey.OP_READ);
...
To detect new events, the Selector provides the capability to ask the registered channels for their readiness events. By calling the select method, the Selector collects the readiness events of the registered channels. This method call blocks until at least one event has occurred; it then returns the number of connections that have become ready for I/O operations since the last select call. The selected connections can be retrieved by calling the Selector's selectedKeys method. This method returns a set of SelectionKey objects, each of which holds the I/O event status and a reference to the connection's Channel.
A Selector is held by the Dispatcher. This is a single-threaded active class that surrounds the Selector. The Dispatcher is responsible for retrieving the events and dispatching the handling of the consumed events to the EventHandler. Within the dispatch loop, the Dispatcher calls the Selector's select method to wait for new events. If at least one event has occurred, the method call returns, and the associated channel for each event can be acquired by calling the selectedKeys method.
...
while (isRunning) {
    // blocking call, to wait for new readiness events
    int eventCount = selector.select();

    // get the events
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();

        // readable event?
        if (key.isValid() && key.isReadable()) {
            eventHandler.onReadableEvent(key.channel());
        }

        // writable event?
        if (key.isValid() && key.isWritable()) {
            key.interestOps(SelectionKey.OP_READ); // reset to read only
            eventHandler.onWriteableEvent(key.channel());
        }
        ...
    }
    ...
}
Based on an event such as readiness for read or readiness for write, the EventHandler is called by the Dispatcher to process the event. The EventHandler decodes the request data, processes the required service activities, and encodes the response data. Because worker threads no longer waste time waiting for new requests on open connections, the scalability and throughput of this approach is conceptually limited only by system resources such as CPU or memory. That said, response times won't be quite as good as with the thread-per-connection approach, because of the required thread switches and synchronization. The challenge of the event-driven approach is therefore to minimize synchronization and optimize thread management, so that this overhead becomes negligible.
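As a rough illustration of that handoff, a dispatcher can push event processing onto a shared pool so that its own thread returns to select() immediately. This is only a sketch; the workerPool and eventHandler names echo the examples in this article rather than a specific framework API, and the pool size is arbitrary.

private final ExecutorService workerPool = Executors.newFixedThreadPool(50);

void handOff(final SelectionKey key) {
    // hand the event to a pooled worker so the dispatcher thread
    // can go straight back to select()
    workerPool.execute(new Runnable() {
        public void run() {
            // application-specific decode/process/encode work happens here,
            // off the dispatcher thread
            eventHandler.onReadableEvent(key.channel());
        }
    });
}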
Component Architecture
Most highly scalable Java servers are built on top of the Reactor pattern. In such servers, the classes of the Reactor pattern are supplemented by additional classes for connection management, buffer management, and load balancing. The entry class of such a server is the Acceptor. This arrangement is shown in Figure 2.
Figure 2. Major components of a connection-oriented server
Acceptor
Every new client connection to the server is accepted by the single Acceptor, which is bound to the server port. The Acceptor is a single-threaded active class. Because it is only responsible for handling the very short-running client connection request, it is often sufficient to implement the Acceptor using the blocking I/O model. The Acceptor gets the handle of a new connection by calling the ServerSocketChannel's blocking accept method. The new connection is then registered with a Dispatcher. After this, the connection participates in event handling.
Because the scalability of a single Dispatcher is limited, a small pool of Dispatchers is often used. One reason for this limitation is the operating-system-specific implementation of the Selector. Most popular operating systems map a SocketChannel to a file handle in a one-to-one relationship. Depending on the concrete system, the maximum number of file handles per Selector is limited in different ways.
class Acceptor implements Runnable {
    private ServerSocketChannel serverChannel = null;
    ...

    void init() {
        serverChannel = ServerSocketChannel.open();
        serverChannel.configureBlocking(true);
        serverChannel.socket().bind(new InetSocketAddress(serverPort));
    }

    public void run() {
        while (isRunning) {
            try {
                SocketChannel channel = serverChannel.accept();
                Connection con = new Connection(channel, appHandler);
                dispatcherPool.nextDispatcher().register(con);
            } catch (...) {
                ...
            }
        }
    }
}
In the example code, a Connection object holds the SocketChannel and an application-level event handler. These classes are described below.
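The dispatcher pool used by the Acceptor above could be as simple as a round-robin list. The following sketch is an assumption about what nextDispatcher() might look like; it is not taken from a specific framework.

class DispatcherPool {
    private final List<Dispatcher> dispatchers = new ArrayList<Dispatcher>();
    private int next = 0;

    void addDispatcher(Dispatcher d) {
        dispatchers.add(d);
    }

    synchronized Dispatcher nextDispatcher() {
        // spread new connections evenly across the available dispatchers
        Dispatcher d = dispatchers.get(next);
        next = (next + 1) % dispatchers.size();
        return d;
    }
}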
Dispatcher
By calling the Dispatcher's register method, the SocketChannel is registered on the underlying Selector. Here is where the trouble comes in. The Selector manages the registered channels internally by using key sets. This means that by registering a channel, an associated SelectionKey is created and added to the Selector's registered key set. At the same time, the concurrent dispatcher thread could call the Selector's select method, which also accesses the key set. Because the key sets are not thread-safe, an unsynchronized registration in the context of the Acceptor thread can lead to deadlocks and race conditions. This can be solved by implementing the selector guard object idiom, which allows suspending the dispatcher thread temporarily. See "How to Build a Scalable Multiplexed Server with NIO" (PDF) for an explanation of this approach.
class Dispatcher implements Runnable {
    private Object guard = new Object();
    ...

    void register(Connection con) {
        // acquire the guard lock and wake up the dispatcher thread
        // to register the connection's channel
        synchronized (guard) {
            selector.wakeup();
            con.getChannel().register(selector, SelectionKey.OP_READ, con);
        }
        // notify the application EventHandler about the new connection
        ...
    }

    void announceWriteNeed(Connection con) {
        SelectionKey key = con.getChannel().keyFor(selector);
        synchronized (guard) {
            selector.wakeup();
            key.interestOps(SelectionKey.OP_READ | SelectionKey.OP_WRITE);
        }
    }

    public void run() {
        while (isRunning) {
            synchronized (guard) {
                // suspend the dispatcher thread if guard is locked
            }
            int eventCount = selector.select();

            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();

                // read event?
                if (key.isValid() && key.isReadable()) {
                    Connection con = (Connection) key.attachment();
                    dispatcherEventHandler.onReadableEvent(con);
                }

                // write event?
                ...
            }
        }
    }
}
After a connection has been registered, the Selector listens for readiness events of this connection. If an event occurs, the appropriate callback method of the Dispatcher's event handler is called, passing the associated connection.
Dispatcher-Level EventHandler
The first activity performed while processing a readiness for read event is to call the channel's read method. In contrast to the streaming interface, the Channel interface requires that a read buffer be passed in. Often direct-allocated ByteBuffers are used. Direct buffers reside in native memory, bypassing the Java heap space. By using direct buffers, socket I/O operations can be performed without the need to create internal intermediate buffers.
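For illustration, allocating and reading into a direct buffer might look like the following sketch (the 8 KB capacity is an arbitrary choice):

// allocate a direct buffer in native memory and read into it
ByteBuffer readBuffer = ByteBuffer.allocateDirect(8192);
int bytesRead = channel.read(readBuffer); // non-blocking; may return 0
readBuffer.flip();                        // switch from filling to draining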
Normally the read call completes very quickly. Depending on the operating system, the socket read operation often only copies the received data from kernel memory space into the read buffer, which resides in user-controlled memory space. The received data is appended to the connection's thread-safe read queue for further processing. Based on the result of the I/O operation, application-specific tasks have to be processed. Such tasks are handled by the assigned application-level event handler, which is typically called on a worker thread.
class DispatcherEventHandler {
    ...

    void onReadableEvent(final Connection con) {
        // get the received data
        ByteBuffer readBuffer = allocateMemory();
        con.getChannel().read(readBuffer);
        ByteBuffer data = extractReadAndRecycleRenaming(readBuffer);

        // append it to the read queue
        con.getReadQueue().add(data);
        ...

        // perform further operations (decode, process, encode)
        // on a worker thread
        if (con.getReadQueue().getSize() > 0) {
            workerPool.execute(new Runnable() {
                public void run() {
                    synchronized (con) {
                        con.getAppHandler().onData(con);
                    }
                }
            });
        }
    }

    void onWriteableEvent(Connection con) {
        ByteBuffer[] data = con.getWriteQueue().drain();
        con.getChannel().write(data); // write the data
        ...

        if (con.getWriteQueue().isEmpty()) {
            if (con.isClosed()) {
                dispatcher.deregister(con);
            }
        } else {
            // there is remaining data to write
            dispatcher.announceWriteNeed(con);
        }
    }
}
Within the application-specific tasks, data is encoded, services are performed, and data is written. When data is written, it is appended to the connection's write queue, and the Dispatcher's announceWriteNeed method is called. This method causes the Selector to listen for readiness for write events. If such an event occurs, the Dispatcher-level event handler's onWriteableEvent method is performed: it takes the data from the connection's write queue and performs the required write I/O operation. Trying to write data directly, bypassing this event approach, can end in deadlocks and race conditions.
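A connection-level write method therefore only enqueues data and notifies the dispatcher. The body below is a sketch built from the classes shown above, not framework code:

// the only safe way to send data in this design
public void write(ByteBuffer data) {
    writeQueue.add(data);               // append to the connection's write queue
    dispatcher.announceWriteNeed(this); // have the Selector raise OP_WRITE
}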
Application-Level EventHandler
In contrast to the Dispatcher's event handler, the application-specific event handler listens for higher-level, connection-oriented events such as connection established, data received, or connection disconnected. The concrete event handler design is one of the major differences between NIO server frameworks like SEDA, MINA, or emberIO. Such frameworks often implement a multi-staged architecture in which chains of event handlers can be used. This allows adding handlers such as an SSLHandler or DelayedWriteHandler, which intercept the request/response processing. The following example shows an application-level handler based on the xSocket framework. The xSocket framework supports different handler interfaces that define callback methods to be implemented by application-specific code.
class POP3ProtocolHandler implements IConnectHandler, IDataHandler, ... {
    private static final String DELIMITER = ...
    private Mailbox mailbox = ...

    public static void main(String... args) throws ... {
        new MultithreadedServer(110, new POP3ProtocolHandler()).run();
    }

    public boolean onConnect(INonBlockingConnection con) throws ... {
        if (gatekeeper.isSuspiciousAddress(con.getRemoteAddress())) {
            con.setWriteTransferRate(5); // reduce transfer rate: 5 bytes/sec
        }
        con.write("+OK My POP3-Server" + DELIMITER);
        return true;
    }

    public boolean onData(INonBlockingConnection con) throws ... {
        String request = con.readStringByDelimiter(DELIMITER);
        if (request.startsWith("QUIT")) {
            mailbox.close();
            con.write("+OK POP3 server signing off" + DELIMITER);
            con.close();
        } else if (request.startsWith("USER")) {
            this.user = request.substring(4).trim();
            con.write("+OK enter password" + DELIMITER);
        } else if (request.startsWith("PASS")) {
            String pwd = request.substring(4).trim();
            boolean isAuthenticated = authenticator.check(user, pwd);
            if (isAuthenticated) {
                mailbox = Mailbox.openAndLock(user);
                con.write("+OK mailbox locked and ready" + DELIMITER);
            } else {
                ...
            }
        } else if (...) {
            ...
        }
        return true;
    }
}
To ease access to the underlying read and write queues, the Connection object provides several convenience read and write methods for stream- and channel-oriented operations.
When a connection is closed, the underlying implementation initiates a writeable-event round trip to flush the write queue. The connection is terminated after the remaining data has been written. Besides such a controlled termination, connections can be disconnected for other reasons. For instance, a hardware malfunction could terminate a TCP-based connection. Such a situation can only be detected by performing read or write operations on the socket, or by idle timeouts. Most NIO frameworks provide a built-in function to handle such uncontrolled terminations.
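With xSocket, for example, such terminations can surface through timeout callbacks. The handler below is only a sketch: the IIdleTimeoutHandler interface name and signature are assumed from xSocket's handler style shown above and should be checked against the framework documentation.

class TimeoutAwareHandler implements IIdleTimeoutHandler {
    public boolean onIdleTimeout(INonBlockingConnection con) throws IOException {
        con.close(); // free server-side resources held by the dead connection
        return true; // true = event handled, no further handlers called
    }
}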
Conclusion
An event-driven, non-blocking architecture is a fundamental layer for implementing highly efficient, scalable, and reliable servers. The challenge is to minimize the thread synchronization overhead and to optimize connection and buffer management. This is the hardest part to program.
But there is no need to reinvent the wheel. Server frameworks
like xSocket, emberIO, SEDA, or MINA abstract the low-level event
handling and thread management to ease the creation of highly
scalable servers. Most of these server frameworks also support
features like SSL or UDP, which haven't been discussed in this
article.
Resources
- "Scalable IO in
Java" (PDF) describes event-driven processing by using Java NIO
- "Tricks and Tips with NIO, Part 2: Why
SelectionKey.attach()Is
Evil" describes how a memory leak occurs by a unwary use of the
SelectionKey's
attachmethod.
- "Pico
Threads: Lightweight Threads in Java" shows the problems with
large-scale threaded programming and event-based techniques.
- A Reactor
pattern description by Douglas C. Schmidt (PDF)
- Unix
Network Programming: The Sockets Networking API gives a good
overview about network programming in general, and gives a good
impression what happens behind the Java I/O operations on the
operating-system level.
- xSocket is a
LGPL NIO-based library to build network applications. Most example
code of this article has been written based on xSocket.
std::upper_bound
From cppreference.com
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

int main()
{
    std::vector<int> data = { 1, 1, 2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 6 };

    auto lower = std::lower_bound(data.begin(), data.end(), 4);
    auto upper = std::upper_bound(data.begin(), data.end(), 4);

    std::copy(lower, upper, std::ostream_iterator<int>(std::cout, " "));
}
Output:
4 4 4
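Equivalently, a single call to std::equal_range (from the same <algorithm> header) yields both iterators at once. A minimal sketch, continuing with the vector from the example above:

// equal_range returns the pair {lower_bound, upper_bound} for the value
auto range = std::equal_range(data.begin(), data.end(), 4);
std::copy(range.first, range.second,
          std::ostream_iterator<int>(std::cout, " ")); // prints: 4 4 4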
Complexity

Logarithmic in the distance between first and last.
HTML and XHTML
XHTML is HTML written as XML.
What Is XHTML?
- XHTML stands for EXtensible HyperText Markup Language
- XHTML is almost identical to HTML 4.01
- XHTML is a stricter and cleaner version of HTML
- XHTML is HTML defined as an XML application
- XHTML is supported by all major browsers.
Why XHTML?
Many pages on the internet contain "bad" HTML.
The following HTML code will work fine if you view it in a browser (even if it does NOT follow the HTML rules):
<head>
<title>This is bad HTML</title>
<body>
<h1>Bad HTML
<p>This is a paragraph
</body>
XML is a markup language where documents must be marked up correctly and "well-formed".
If you want to study XML, please read our XML tutorial.
Today's market consists of different browser technologies. Some browsers run on computers, and some browsers run on mobile phones or other small devices. Smaller devices often lack the resources or power to interpret a "bad" markup language.
Therefore - by combining the strengths of HTML and XML, XHTML was developed. XHTML is HTML redesigned as XML.
The Most Important Differences from HTML:
Document Structure
- XHTML DOCTYPE is mandatory
- The XML namespace attribute in <html> is mandatory
- <html>, <head>, <title>, and <body> are mandatory
<!DOCTYPE ....> Is Mandatory.
The example below shows an XHTML document with a minimum of required tags:
"">
<html xmlns="">
<head>
<title>Title of document</title>
</head>
<body>
......
</body>
</html>
XHTML Elements Must Be Properly Nested
In HTML, some elements can be improperly nested within each other, like this:

<b><i>This text is bold and italic</b></i>
In XHTML, all elements must be properly nested within each other, like this:

<b><i>This text is bold and italic</i></b>
XHTML Elements Must Always Be Closed
This is wrong:
<p>This is another paragraph
This is correct:
<p>This is another paragraph</p>
Empty Elements Must Also Be Closed
This is wrong:
A horizontal rule: <hr>
An image: <img src="happy.gif" alt="Happy face">
This is correct:
A horizontal rule: <hr />
An image: <img src="happy.gif" alt="Happy face" />
XHTML Elements Must Be In Lower Case
This is wrong:
<P>This is a paragraph</P>
</BODY>
This is correct:
<p>This is a paragraph</p>
</body>
Attribute Names Must Be In Lower Case
This is wrong:

<table WIDTH="100%">
This is correct:

<table width="100%">
Attribute Values Must Be Quoted
This is wrong:

<table width=100%>
This is correct:

<table width="100%">
Attribute Minimization Is Forbidden
This is wrong:
<input readonly>
<input disabled>
<option selected>
This is correct:
<input readonly="readonly">
<input disabled="disabled">
<option selected="selected">
Validate XHTML With The W3C Validator

You can check whether an XHTML document is well-formed and valid using the W3C markup validator at http://validator.w3.org/.
Extending Django Settings for the Real World
Originally Authored by Adam Nelson
A basic Django installation keeps its global variables in a file called settings.py. This is perfect for simple deployment because it allows the developer to overwrite Django variables like INSTALLED_APPS or SESSION_ENGINE very easily. You simply update the variable like so:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
From within the shell, you can see the result:
./manage.py shell

>>> from django.conf import settings
>>> settings.SESSION_ENGINE
'django.contrib.sessions.backends.cache'
Many people have two environments in which they work, and therefore a typical settings.py file will have something like this at the end:
try:
    from local_settings import *
except ImportError:
    pass
This overwrites variables from a file called local_settings.py, overriding any existing variables in the settings.py file. Try it: add the import code above to the end of your settings.py file, then create a new file called local_settings.py in the same directory as settings.py and add this to it:
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'
Now, if you enter the shell as you did above and request settings.SESSION_ENGINE, you'll get 'django.contrib.sessions.backends.cached_db'. This is very handy because, in a typical situation, you can have one settings.py file that works for all your environments and then a local_settings.py file for each environment that overrides variable values.
Problems with the Standard Settings files
Unfortunately, in this scenario, variables from the settings.py file cannot be referenced in the local_settings.py file, so you couldn't do something like this:
INSTALLED_APPS += ('debug_toolbar',)
In this situation, you'll get a NameError because INSTALLED_APPS is undefined, rather than the expected ('django.contrib.auth', 'debug_toolbar',).
A Modest Proposal
What we do at Yipit is to put all of our variables in a settings directory:
settings/
    __init__.py    (where the variables for all environments live)
    active.py      (optional - defines the environment we're in - not under version control)
    development.py (shared by all the development environments)
    production.py  (live site)
This allows us to create an __init__.py file for all the variables that are the same across all environments. The __init__.py file requires no imports (except whatever you may need from Python itself, or other libraries). Then, each file imports from __init__.py in the way you might imagine:
production.py:
from settings import *

# Alter or add production-specific variables
development.py:
from settings import *

# Alter or add development-specific variables
active.py:
from settings.development import *

# This file denotes which environment we're in.
# This active.py file creates a development environment.
Note: If you're not that familiar with Python, 'from settings' accesses settings/__init__.py.
In more complex scenarios, you may also want to inherit settings from files other than settings/__init__.py, and this system fully supports that option. For example, you may have a settings/staging.py file that pulls from settings/__init__.py, and then settings/development.py could pull from staging, as sketched below. It's really up to you.
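A staging layer under that scheme might look like the following sketch; the file name and the overridden variables are purely illustrative.

# settings/staging.py -- hypothetical intermediate layer
from settings import *  # shared defaults from settings/__init__.py

DEBUG = False
TEMPLATE_DEBUG = DEBUG

# development.py could then start with:
#   from settings.staging import *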
This approach has some shortcomings, notably that you can’t dynamically change variables - but that’s really not the point of settings. Now, you can change variables on a per-environment basis like this (in, say, development.py):
INSTALLED_APPS += ('debug_toolbar',)
Which will set INSTALLED_APPS as (‘django.contrib.auth’,’debug_toolbar’,). Here is our manage.py file:
#!/usr/bin/env python
import sys
import traceback
from os.path import abspath, dirname, join
from django.core.management import execute_manager

SETTINGS_ACTIVE_CONTENTS = "\033[1;32mfrom settings.local import *\033[1;33m"

try:
    from settings import active as settings
except ImportError, e:
    print '\033[1;33m'
    print "Apparently you don't have the file settings/active.py yet."
    print "Create it containing '%s'\033[0m" % SETTINGS_ACTIVE_CONTENTS
    print
    print "=" * 20
    print "original traceback:"
    print "=" * 20
    print traceback.print_exc(e)
    sys.exit(1)

sys.path.insert(0, abspath(join(dirname(__file__), "../")))
sys.path.insert(0, join(settings.PROJECT_ROOT, "apps"))

if __name__ == "__main__":
    execute_manager(settings)
Final Thoughts
If we make further changes to our settings configuration, we’ll do a follow-up post. Some modifications we are considering:
Using Chef to hold many of the systemwide configuration parameters (usernames, machine addresses, etc…) in order to move that information away from the application layer and onto the environment layer.
Creating an additional settings file that imports active.py for calculated settings. For example, if a read replica database has not been declared, but the application expects one, have the default database act as the read replica.
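Such a calculated-settings module might be sketched as follows; the module name and the 'read_replica' alias are assumptions for illustration, not code from our repository.

# settings/calculated.py -- hypothetical post-processing of the active settings
from settings.active import *

# If the environment did not declare a read replica, fall back to the
# default database so application code can always reference one.
if 'read_replica' not in DATABASES:
    DATABASES['read_replica'] = DATABASES['default']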
If you have a different way of handling settings, we would love to hear from you in the comments below.
Adam Nelson is CTO at Yipit.
This chapter describes issues associated with Oracle Access Management. It includes the following topics:
Section 5.1, "General Issues and Workarounds"
Section 5.2, "Configuration Issues and Workarounds"
Section 5.3, "Oracle Access Management Console Issues"
Section 5.4, "Documentation Errata"
This section describes general issues and workarounds organized around specific services. To streamline your experience, only services with a general issue are included. If you do not find a service-related topic (Security Token Service, for example), there are no general issues at this time.
The following topics are included:
Section 5.1.1, "General Issues and Workarounds: Access Manager"
Section 5.1.2, "General Issues and Workarounds: Security Token Service"
Section 5.1.3, "General Issues and Workarounds: Identity Federation"
Section 5.1.4, "General Issues and Workarounds: Mobile and Social"
This topic describes general issues and workarounds for Oracle Access Management Access Manager (Access Manager). It includes the following topics:
Section 5.1.1.1, "Benign Exceptions Observed."
Section 5.1.1.2, "Can't Use WLST Commands For Federated SSO Password Policy."
Section 5.1.1.3, "Exception Logged on Accessing Resource."
Section 5.1.1.4, "Can't Get Static Method UserSession.getSessionAttributes()."
Section 5.1.1.5, "Consecutive Logins in Multiple Tabs Doesn't Work for WebGate."
Section 5.1.1.6, "Unsupported Items in WebSphere Trust Association Interceptor."
Section 5.1.1.7, "Logged Error During OAM Server Configuration Test."
Section 5.1.1.8, "Simple Policy Not Migrated After Complete Migration."
Section 5.1.1.9, "Available Services Page Won't Open In Localized Internet Explorer 9."
Section 5.1.1.10, "RSA Plugin Removed From System."
Section 5.1.1.11, "Create Provider Manually When Extending OIM Domain."
Section 5.1.1.12, "Unable to Access "/" Context Root if Protected by OSSO Agent for 11g OHS."
Section 5.1.1.13, "Starting Access Manager When Protected by Oracle Entitlements Server Throws Exception."
Section 5.1.1.14, "Access Tester Does Not Work with Non-ASCII Agent Names."
Section 5.1.1.15, "Authentication Fails: WNA Challenge, Active Directory, Users with Non-ASCII Characters."
Section 5.1.1.16, "Simple Mode is Not Supported for JDK 1.6 and AIX."
Section 5.1.1.17, "User Might Need to Supply Credentials Twice with DCC-Enabled Webgate."
The following benign exception might be seen on the Administration and Managed servers. It can be ignored.
java.lang.NoClassDefFoundError: oracle/security/am/engines/rreg/common/RegistrationRequest oracle.security.am.admin.config.mgmt.beanimpl.AMBootstrap. isBootstrapCandidate (AMBootstrap.java:191) at oracle.security.am.admin.config.mgmt.beanimpl.AMBootstrap. invokeBootstrapMethods(AMBootstrap.java:146) at oracle.security.am.admin.config.mgmt.beanimpl.AMBootstrap. doServerBootstrap(AMBootstrap.java:106) at oracle.security.am.admin.config.mgmt.beanimpl.AMBootstrap load(AMBootstrap.java:247)
The following benign exception is seen in the AdminServer-diagnostic.log file. It does not impact the Administration Console functionality and can be ignored.
oracle.mds.exception.ReadOnlyStoreException: MDS-01273: The operation on the resource /oracle/oam/ui/adfm/DataBindings.cpx failed because source metadata store mapped to the namespace / DEFAULT is read only. at oracle.mds.core.MDSSession.checkAndSetWriteStoreInUse(MDSSession.java:2495) at oracle.mds.core.MDSSession.checkAndSetWriteStoreInUse(MDSSession.java:2548) at oracle.mds.core.MDSSession.getMutableMO(MDSSession.java:3493) at oracle.mds.core.MDSSession.getMutableMO(MDSSession.java:1660) at oracle.mds.core.MDSSession.getMutableMO(MDSSession.java:1546) at oracle.adfdt.model.mds.MDSApplicationService.findApplication (MDSApplicationService.java:57) at oracle.adfdt.model.mds.MDSModelDesignTimeContext.initServices (MDSModelDesignTimeContext.java:232) at oracle.adfdt.model.mds.MDSModelDesignTimeContext.<init> (MDSModelDesignTimeContext.java:82) at oracle.adfdt.mds.MDSDesignTimeContext.<init> (MDSDesignTimeContext.java:66) at oracle.adf.view.rich.dt.DtAtRtContext.<init> (DtAtRtContext.java:22) at oracle.adf.view.rich.dt.Page.<init>(Page.java:535) at oracle.adf.view.rich.dt.Page.getInstance(Page.java:80) at oracle.adf.view.page.editor.customize.ComposerPageResolver.getPageObject (ComposerPageResolver.java:200) at oracle.adfinternal.view.page.editor.contextual.event.ContextualResolver. getPageDefinition(ContextualResolver.java:1229) at oracle.adfinternal.view.page.editor.contextual.event.ContextualResolver. <init>(ContextualResolver.java:129)
WLST commands cannot be used for adding, editing or deleting the federated SSO password policy profile until the following modifications have been made to the oam-config.xml file manually.
Back up the existing oam-config.xml file.
Find Setting Name="UserProfileInstance" in the file and add the following entry as a child of the "UserProfileInstance" setting.
<Setting Name=""NEW_PROFILE" Type="htf:map"> <Setting Name="PasswordPolicyAttributes" Type="htf:map"> <Setting Name="FORCED_PASSWORD_CHANGE" Type="xsd:boolean">true</Setting> <Setting Name="USER_ACCOUNT_DISABLED" Type="xsd:boolean">true</Setting> <Setting Name="PASSWORD_EXPIRED" Type="xsd:boolean">true</Setting> <Setting Name="TENANT_DISABLED" Type="xsd:boolean">true</Setting> <Setting Name="USER_ACCOUNT_LOCKED" Type="xsd:boolean">true</Setting> </Setting> </Setting>
For edit and delete, the changes should be made on the existing profile entry in oam-config.xml.
Increment the oam-config.xml "Version" setting and persist the changes.
A CertPathValidatorException is seen in the Access Manager diagnostic log when accessing a Resource. For example:
[2013-03-12T21:39:09.281-07:00] [oam_server1] [ERROR] [OAMSSA-12117] [oracle.oam.engine.authn] [tid: WebContainer : 3] [ecid: disabled,0] [APP: oam_server_11.1.2.0.0] Cannot validate the user certificate.[[ java.security.cert.CertPathValidatorException: The certificate issued by O=My Company Ltd, L=Newbury, ST=Berkshire, C=GB is not trusted; internal cause is: java.security.cert.CertPathValidatorException: Certificate chaining error at com.ibm.security.cert.BasicChecker.<init>(BasicChecker.java:111) at
The static getSessionAttributes() method does not retrieve all Session attributes for a user - only those which have been set using the ASDK.
FORM Cache Mode should be used to support multi-tab browser behavior. By default, it is set to COOKIE Mode.
The following items are unsupported in the Access Manager WebSphere Trust Association Interceptor (TAI) when compared to the Access Manager WebLogic Server Id Asserter.
Access Manager WAS TAI does not support SAML assertions based on the OAM_IDENTITY_ASSERTION header.
OAM WAS TAI does not support the Identity Context. Identity Context is supported based on the OAM_IDENTITY_ASSERTION header by Access Manager WebLogic Server Identity Asserter.
After running
idmConfigTool.sh -configOAM, two WebGate profiles are created: Webgate_IDM and Webgate_IDM_11g; both are 11g. When validating each Access Manager server configuration using the
oamtest tool, the Administration Console displays the connection status correctly but a long error/exception for each Webgate is logged. This error log is expected and can be ignored.
When performing a fresh incremental migration or a delta incremental migration after a complete migration, Simple Policy are not migrated. This issue is due to a Maximum Session Time lapse. Either restart the Administration Server or change the value of Maximum Session Time to more than 120 minutes.
When accessing the OAM Administration Console localized for
cn or
jp using Internet Explorer 9, double-clicking the Available Services text will not open the related page. Clicking the folder icon as opposed to the text will work. Or use Internet Explorer 8 or Firefox to workaround. If it works when using Internet Explorer 7, you can force OAM to run in Explorer 7 compatibility mode. See the PDF called Run ADF Faces applications with IE 9 in IE 8 compatibility mode at Oracle Technology Network.
The RSA plugin has been removed as a system plugin. The functionality can still be accessed by installing and using a custom RSA plugin.
If extending the Oracle Identity Manager domain by adding Oracle Access Management Access Manager, the 'OIMAuthenticationProvider' will be deleted. When integrating OIM and OAM using idmConfigTool -configOIM, providers are automatically reordered as required. If not using idmConfigTool -configOIM, the provider needs to be created manually.
mod_osso agents shipped with 11g OHS cannot be configured to protect the @ context root '/'.
You will get a runtime exception when starting an instance of Access Manager protected by Oracle Entitlements Server. The exception can be ignored.
Register a Webgate with Access Manager using a non-ASCII name. In the Access Tester, enter the valid IP Address, Port, and Agent ID (non-ASCII name), then click Connect.
Connection testing fails.
Configure Access Manager to use Kerberos Authentication Scheme with WNA challenge method, and create a non-ASCII user in Microsoft Active Directory.
Problem ....
Cause
The username in the attribute is passed without modification as a java string.
Solution
Non-ASCII users can access the resource protected by Kerberos WNA scheme by applying the following JVM system property in the startManagedWeblogic.sh script in $DOMAIN_HOME/bin:
-Dsun.security.krb5.msinterop.kstring=true
Simple mode is not supported with JDK 1.6 and on AIX platforms. Use Open or Cert mode instead.
Problem
When you have a Detached Credential Collector-enabled Webgate combined with a resource Webgate, the user might have to provide credentials twice. This can occur when login is triggered with a URL that results in an internal forward by Oracle HTTP Server.
Workaround
To resolve this issue, you can use following workaround:
Edit the httpd.conf file to add rewrite rules that redirect the browser for directory access (before the Webgate configuration include). For example:
RewriteEngine On
RewriteRule ^(.*)/$ "$1/welcome-index.html" [R]
For an SSL-enabled Web server, repeat these rules under the SSL configuration, as sketched below.
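A sketch of the SSL case follows; the port and virtual host layout are illustrative and depend on your Oracle HTTP Server configuration.

<VirtualHost *:443>
    # same redirect for directory access, applied over HTTPS
    RewriteEngine On
    RewriteRule ^(.*)/$ "$1/welcome-index.html" [R]
</VirtualHost>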
This topic describes general issues and workarounds for Oracle Access Management Security Token Service (Security Token Service). It includes the following topics:
Section 5.1.2.1, "STS Does Not Honor The Lifetime Sent In RequestSecurityToken."
Section 5.1.2.2, "Click On Security Token Service Column Throws Exception."
Section 5.1.2.3, "Issues with Searches and Non-English Browser Settings."
Security Token Service does not process the Lifetime sent in the WS-Trust RequestSecurityToken message. Rather, the WS-Trust RequestSecurityTokenResponse contains the Lifetime per the configured token validity time in the Oracle Security Token Service Issuance Template.
When adding a new Attribute Name Mapping during the creation of a New Requester Profile in the Security Token Service section of the Access Manager Administration Console, an error message indicating an Unsupported Operation Exception can be displayed when clicking twice on a column titled Row No.
Security Token Service searches might not return the expected result when the browser language is set to a non-English language. For example, this occurs when setting the:
Partner Type field to
Requester,
Relying Party or
Issuing Authority in the Requesters, Relying Party or Issuing Authorities screens
Token Type to
Username on the Token Issuance Templates screen when the Oracle Access Manager Administration Console browser setting is non-English
Token Type to
Username on the Token Validation Templates screen when the Oracle Access Manager Administration Console browser setting is non-English
When the browser language is English, the search returns expected results.
This topic describes general issues and workarounds for Oracle Access Management Identity Federation (Identity Federation). It includes the following topic:
Section 5.1.3.1, "Errors when Webgate has Credential Collector Option Enabled"
This problem is seen in the following situation:
Webgate fronts a resource.
The "Allow Credential Collector Operations" option is checked for that Webgate.
The resource is protected by a policy using FederationScheme.
Due to this issue, when access to the resource is requested, the server returns a 200 response containing a URL to which the browser then POSTs the request, when the browser should instead have been redirected through a 302.
To resolve this issue, for Webgate agents fronting resources protected with the FederationScheme, disable the "Allow Credential Collector Operations" option.
This topic describes general issue and workarounds for Oracle Access Management Mobile and Social. It includes the following topics:
Section 5.1.4.1, "Mobile and Social Does not Support the Native Android OS Browser"
Section 5.1.4.2, "Internet Explorer Users Need to Enable Protected Mode"
Section 5.1.4.3, "Google Language Menu can Cause the Sign-in Page Flow to Display in Multiple Languages"
Section 5.1.4.4, "The Mobile and Social Settings Pane can be Dragged out of View"
Mobile and Social supports the Mozilla Firefox and Google Chrome browsers on Android devices. The following issues are known to occur if the native Android OS browser is used.
The login web page rendered by the native browser does not allow the user to enter a username or password.
If a mobile single sign-on app is not installed on the mobile client, the native Android browser is unable to redirect the user to a page where the user can authenticate. This is due to a limitation in the native browser's JavaScript support.
Internet Explorer users who do not enable Protected Mode cannot sign in with an Internet Identity Provider. Instead, an empty page will display.
To work around this issue in Internet Explorer versions 8 and 9, enable Protected Mode:
From the Internet Explorer menu choose Tools > Internet Options > Security.
Select Enable Protected Mode and restart the browser.
If a user who signs in with Google selects a different language from the on-screen menu, Google redirects the page request outside of the request flow managed by Mobile and Social. Consequently, the log-in pages that Google generates may be in a different language than the pages generated by Mobile and Social. Mobile and Social provides translated pages based on the browser's language settings. To avoid having pages display in different languages, users should only use their browser's preferred language settings to make changes.
In the Oracle Access Management console, when viewing the "Mobile and Social Settings" tree in the navigation pane, it is possible to click and drag the contents of this pane out of view.
To workaround this issue refresh the page or logout and login again.
This section describes configuration issues and their workarounds organized around specific services. To streamline your experience, only services with an issue are included. For example, Identity Context has no known issues at this time and is not included. The following topics are included:
Section 5.2.1, "Configuration Issues and Workarounds: Access Manager"
Section 5.2.2, "Configuration Issues and Workarounds: Security Token Service"
Section 5.2.3, "Configuration Issues and Workarounds: Identity Federation"
Section 5.2.4, "Configuration Issues and Workarounds: Mobile and Social"
This topic describes configuration issues and workarounds for Oracle Access Management Access Manager (Access Manager). It includes the following topics:
Section 5.2.1.1, "OAM Migration Doesn't Create All Data Sources"
Section 5.2.1.2, "Password Validation Scheme Defaults to LDAP after Upgrade"
Section 5.2.1.3, "Using Plugins Between IBM HTTP Server and WebSphere"
Section 5.2.1.4, "Using ObAccessClient Results in SDK Initialization Failure"
Section 5.2.1.5, "Configuring oamtai.xml for Multiple WebGates"
Section 5.2.1.6, "obLockedOn Attribute Missing From Oracle Internet Directory"
Section 5.2.1.7, "OAM 10g Webgates Used with OAM 11g Need Javascript"
Section 5.2.1.8, "Enabling OpenSSO Agent Configuration Hotswap"
If the OAM 10g environment that is being migrated to 11g has multiple database instances configured in a Directory Server Profile and some of them share the same
displayName value, the migration process does not convert all of the database instances in Data Sources to the new environment. To workaround, rename the 10g environment database instances such that no two instances in the Directory Server Profile have the same
displayName value.
After upgrading Access Manager to version 11gR2 PS1, the Password Validation Scheme is not set to the Password Policy Validation Module. Use the Console to set the Password Validation Scheme to the Password Policy Validation Module.
Communication between the IBM HTTP Server (IHS) and WebSphere Application Server (WAS) is made possible by installing and configuring plugins that are available with IHS. The following steps describe the installation and configuration process.
During IHS installation, install the out-of-the-box plugin.
After installation, navigate to the IHS plugin directory (for example, $IHS_HOME\Plugins\config\webserver1) and verify that the plugin-cfg.xml configuration file is available.
Modify
plugin-cfg.xml as follows and save the file.
Add the virtual host ports from which IHS can be accessed.
<VirtualHostGroup Name="default_host">
   <!-- Include active IHS port details required for connecting to OAM on WAS -->
   <!-- <VirtualHost Name="*:9004"/> -->
   <VirtualHost Name="*:8080"/>
   <VirtualHost Name="*:17777"/>
</VirtualHostGroup>
Add a <ServerCluster> element with the appropriate details, comprising the respective server entries where the resource is deployed.
Add a <UriGroup> tag for the respective server clusters.
<UriGroup Name="oamserver1_Cluster_URIs">
   <Uri Name="/oam/*"/>
</UriGroup>
Add the corresponding <Route> tag for the respective <UriGroup> tag.
<Route ServerCluster="oamserver1_Cluster" UriGroup="oamserver1_Cluster_URIs" VirtualHostGroup="default_host"/>
Add the respective VirtualHost entries in WebSphere by navigating to Environment ->Virtual Hosts -> default_hosts -> Host Alias using the IBM console.
Using an ObAccessClient (created with the 11.1.1.5.0 Access Manager Console) to create the AccessClient for the 11g ASDK (11.1.1.7.0, 11.1.2.0.0 and above) results in the following error because the older
ObAccessClient.xml file has Boolean settings expressed as
true/false rather than numeric:
oracle.security.am.asdk.AccessClient initialize SEVERE: Oracle Access SDK initialization failed.
To workaround, copy the original (older)
ObAccessClient.xml from
DOMAIN_HOME
/output/
AGENT_NAME to the ASDK configuration directory (configLocation). You may also manually edit the newer
ObAccessClient.xml to change the Boolean values ("true/false") to numeric values (0/1).
There is only one
oamtai.xml file for a single WebSphere instance. In a case where the deployment contains multiple WebGate profiles protecting applications deployed on the same WebSphere application server - for example, a mix of 10g and 11g WebGates - the OAM Trust Association Interceptor is required to be configured as below.
Irrespective of the number of Webgates in the deployment, the agent profile defined in the file should be an OAM10g type.
The assertion type should be defined as HeaderBasedAssertion.
After upgrading Access Manager from 11gR2 to 11gR2 PS1, the
obLockedOn attribute will be missing from the Oracle Internet Directory. Use the following steps to add this attribute back to the OID.
Manually add the obLockedOn attribute to the schema.
Import the LDIF to OID using the ldapmodify command.
Edit the
oam_user_write_acl_users_oblockedon_template.ldif to give oamSoftwareUser permission to modify obLockedOn.
Replace %s_UsersContainerDN% with User Search Base and replace %s_GroupsContainerDN% with Group Search Base.
Import the modified
oam_user_write_acl_users_oblockedon_template.ldif.
When Oracle Access Manager 10g Webgates are used with Oracle Access Management 11g, the
webgate_install_directory
/oamsso/logout.html page needs JavaScript code to initiate redirection to the Oracle Access Management 11g server logout page. This page, after logging out with the Webgate cookie also clears the 11g session. When migrating Oracle Access Manager 10g Webgates, follow the procedure documented in the Oracle Fusion Middleware Administrator's Guide for Oracle Access Management.
To enable OpenSSO agent configuration hotswap, make sure the OpenSSO agents have the following properties in the Miscellaneous properties section of the agent's registration in the OpenSSO Proxy on the OAM Server, and that the agent servers are restarted:

J2EE Agents:

com.sun.identity.client.notification.url=http://<AGENT_SERVER_HOST>:<AGENT_SERVER_PORT>/agentapp/notification

Web Agents:

com.sun.identity.client.notification.url=http://<AGENT_SERVER_HOST>:<AGENT_SERVER_PORT>/UpdateAgentCacheServlet?shortcircuit=false

Not Supported for Web Agents:

com.sun.identity.agents.config.change.notification.enable=true

Restart the OAM Server hosting the agent.
This topic describes configuration issues and their workarounds for Oracle Access Management Security Token Service (Security Token Service). It includes the following topics:
Section 5.2.2.1, "Create Like (Duplicate) Does Not Copy All Properties of Original Template"
Section 5.2.2.2, "No Console Support Removing Partner Encryption or Signing Certificates"
Security Token Service Create Like (duplicate) button does not copy some properties on the original Issuing Authority Profile template (the Security and Attribute Mapping sections, for instance).
The Administrator must manually enter the necessary configuration items into the newly created Issuing Authority Profile:
From the Oracle Access Management Console System Configuration tab, Security Token Service section, go to Issuance Templates.
Select an existing Issuance Template, and click the Create Like (duplicate) button.
Create the new copied Issuance Template and manually enter the necessary configuration items in the newly created Template.
Oracle Access Management Console does not provide a way to remove a signing or encryption certificate that was set for an Security Token Service Partner.
The Administrator must manually delete these using the following WLST commands:
To delete the signing certificate of an Security Token Service Partner
deletePartnerSigningCert
To delete the encryption certificate of an Security Token Service Partner
deletePartnerEncryptionCert
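In a WLST session, the invocation might look like the following sketch; the partner name is illustrative, and the exact argument list should be confirmed against the WLST command reference for Security Token Service.

deletePartnerSigningCert('MyRelyingParty')
deletePartnerEncryptionCert('MyRelyingParty')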
This topic describes configuration issues and their workarounds for Oracle Access Management Identity Federation (Identity Federation). It includes the following topics:
Section 5.2.3.1, "Provider Search Text Fields do an Exact Match Search"
Section 5.2.3.2, "Incorrect Error Message when an Invalid Signing Certificate is Uploaded"
Users should be aware that in the Oracle Access Management Console, the Identity Provider search screen does an exact match (==) for the ProviderId and Partner name fields, rather than a "contains" search.
Although it is an exact match, the user can employ "*" as a wild card in searches.
While creating or editing an IdP, if you upload an invalid file for a signing certificate, you will see a null pointer exception error message instead of a proper message indicating that the file does not contain a certificate.
This topic describes configuration issues and their workarounds for Oracle Access Management Mobile and Social (Mobile and Social). It includes the following topics:
Section 5.2.4.1, "Moving Mobile and Social From a Test to Production Environment on IBM WebSphere"
Section 5.2.4.2, "Steps Required to Localize the Register Page"
Section 5.2.4.3, "Mobile Clients do not Translate Error Messages Sent by the Server"
Section 5.2.4.4, "Yahoo Identity Provider Does not Return First Name and Last Name"
Section 5.2.4.5, "Once Set, Jail Breaking "Max OS Version" Setting Cannot be Empty"
Section 5.2.4.6, "Additional Configuration Required After Running Test-to-Production Scripts"
The following steps describe how to copy Mobile and Social from a test environment to a production environment.
Update
oam-config.xml in the production environment with the
secretKey value from the test environment.
In the test environment, use a text editor to open
oam-config.xml in the
fmwconfig directory and, for object
accessgate-oic, copy the value of the
secretKey attribute.
For example:
<Setting Name="accessgate-oic" Type="htf:map"> <Setting Name="ConfigurationProfile" Type="xsd:string">DefaultProfile</Setting> <Setting Name="aaaTimeoutThreshold" Type="xsd:string">-1</Setting>
...
<Setting Name="secretKey" Type="xsd:string">A686408D1020B93EAA8B411EE0137847FD2968D1285A2A37BB0BE0B00238F50464E9C01EB3E5319AED6D7CAC81BD9FF7</Setting>
In the production environment, use a text editor to open
oam-config.xml in the
fmwconfig directory and, for object
accessgate-oic, replace the value of the attribute
secretKey with the value from the test host.
Copy the
idaas.xml,
oauth.xml, and
oic_rp.xml files from the test environment
fmwconfig directory to the production environment
fmwconfig directory.
In the production environment, edit the host and port information as appropriate in
oic_rp.xml.
Search for the name of the test host and replace it with the name of the production host. Verify that the port number is correct for the host URL.
For example:
<SystemConfiguration> <hostURL></hostURL>
Stop the node manager.
Synchronize the node and start the node manager.
Restart the oam_server1 and OracleAdminServer applications.
Because of a design change, attribute names on the Register page are in English and are not localized to other languages. To translate this page, use the following steps to modify the attribute name values using the Oracle Access Management console.
In the Oracle Access Management console, open the Application Profile under Internet Identity Services, for example OAMApplication.
Go to the User Attribute Display Name list in the Registration Service Details with Application User Attribute Mapping section.
Replace the values in English with localized values.
Save your changes by clicking Apply on the OAMApplication page.
Open the Register page and confirm that the page shows the correct localized values.
The Mobile and Social server sends error messages to the mobile clients in the language that is configured in the server locale language settings. The mobile clients cannot translate server error messages to a different language.
The Yahoo Internet identity provider does not return
firstname and
lastname values following user authentication. To work around this issue, change the following Mobile and Social mappings in the Oracle Access Management console:
Open the Application Profile for editing.
Click Next until the Internet Identity Provider configuration page opens.
Open the Application User Attribute Vs Internet Identity Provider User Attributes Mapping section.
In the Attribute Mapping section, click Yahoo to select it in the Internet Identity Provider list.
Configure the values as follows:
Locate firstname in the Application User Attribute column and in the corresponding Internet Identity Provider User Attributes column, choose nickname.
Locate lastname in the Application User Attribute column and in the corresponding Internet Identity Provider User Attributes column, choose fullname.
Save the Application Profile.
Once you assign a value to the Jail Breaking Detection Policy "Max OS Version" setting, you cannot remove the value and leave the field empty. Per the documentation, the Max OS Version field is used to configure the maximum iOS version to which the Jail Breaking policy applies. If the value is empty, a maximum iOS version number is not checked, so the policy applies to any iOS version higher than the value specified for Min OS Version. Once set, however, the value cannot go back to being empty. To work around this issue, set a value for the Max OS Version field, then click Apply.
For example:

http://<host>:<port>/oic_rp/login.jsp
Update the Mobile and Social credential store framework (CSF) entry to point from the test machine to the production machine. To do this, run the following WLST command:
The value to use comes from oam-config.xml, which is located in the domain home /config/fmwconfig directory on the production machine: use the value of the TapCipherKey attribute in the RPPartner entry.
This section documents issues that affect the Oracle Access Management Console. It includes the following topics:
Section 5.3.1, "Messages Sent From the Server to the Client Can Appear in a Foreign Language"
If the OAM Server and the Oracle Access Management Console client are configured for different locales, the server will report error messages to the client in whichever language the server is configured for.
There are no documentation issues for the following books:
Oracle Fusion Middleware Administrator's Guide for Oracle Access Management
Oracle Fusion Middleware Developer's Guide for Oracle Access Management | http://docs.oracle.com/html/E39887_01/acc_mgmnt.htm | CC-MAIN-2014-15 | en | refinedweb |
Popular JavaScript Snippets Tagged 'class'
JavaScript class tooltip saved by 18 people
Tool Tip v1.1
posted on July 21, 2006 by Leech
JavaScript javascript class name saved by 9 people
Get Elements By Class Name
posted on November 20, 2006 by tylerhall
JavaScript class function add saved by 4 people
addClass Function
posted on February 19, 2007 by 1man
JavaScript javascript js class remove add has saved by 3 people
Class handling functions
posted on September 13, 2012 by pmw57
JavaScript javascript select class checkbox all saved by 3 people
Select All Checkboxes Based On Class
posted on September 10, 2008 by aristoworks
JavaScript class search suggest saved by 3 people
Incremental Search v1.3
posted on July 21, 2006 by Leech
JavaScript javascript class template saved by 2 people
Javascript Class Template
posted on October 25, 2012 by XtreamIT
JavaScript class element prototype functions manipulation saved by 2 people
Class Manipulation
posted on April 7, 2012 by ChristianOttinger
JavaScript class constructor saved by 2 people
getClass
posted on February 23, 2009 by Sephr
JavaScript javascript class countDown saved by 2 people
countDown Class in Javascript
posted on October 13, 2008 by gfazioli
JavaScript javascript class simple tabs saved by 2 people
Simple Javascript Tabs, Standalone OOP
posted on September 18, 2008 by MMDeveloper
JavaScript css class alter modify saved by 2 people
Change CSS
posted on March 5, 2008 by leandemon
JavaScript javascript class element id saved by 1 person
Bare-Bones Selector Engine
posted on December 9, 2012 by o0110o
JavaScript css class elements DOM classes saved by 1 person
Add and remove CSS classes from elements
posted on July 9, 2010 by jatkins
JavaScript ajax javascript class request saved by 1 person
Simple Ajax Class
posted on April 2, 2010 by Piotrek290
JavaScript javascript class oop saved by 1 person
Javascript OOP
posted on March 10, 2009 by marcio
JavaScript css class hasclass addclass removeclass saved by 1 person
testing, adding, and removing classes from an html element
posted on January 20, 2009 by jambalaya
JavaScript javascript class oop timer saved by 1 person
Timer
posted on September 16, 2008 by MMDeveloper
JavaScript javascript class object constructor prototype inheritance inherit saved by 1 person
JavaScript Object Inheritance
posted on September 2, 2008 by wizard04
JavaScript class date format saved by 1 person
Date Format v1.0
posted on July 21, 2006 by Leech
JavaScript class input forms restrict saved by 1 person
Restrict v1.0
posted on July 21, 2006 by Leech
JavaScript javascript class members private apphp
How To Define Class Private Members in JavaScript
posted on January 29, 2014 by apphp-snippets
JavaScript class oop yui namespace yui2
Javascript Object Oriented Template
posted on September 27, 2011 by casperskovgaard
JavaScript class object prototype private public
Classes objects prototype and static
posted on July 1, 2011 by devnull69
JavaScript class geolocation
Geolocation
posted on June 21, 2011 by Piotrek290
JavaScript javascript class jquery load body cmndo fido digitalsurgeons
Amazing Website Javascript Initialization Code Block v1
posted on April 22, 2011 by cmndo
JavaScript javascript class object template method pattern variable module scope namespace private public shield revealing
Javascript revealing module pattern template
posted on March 17, 2011 by coprolit
JavaScript javascript class template signature
Static and Instantiable Classes in JavaScript
posted on June 25, 2010 by geekyjohn
JavaScript javascript class object oop classes inheritance
JavaScript Classes
posted on January 28, 2010 by wizard04
JavaScript css class html
Browser Selector with JavaScript
posted on December 29, 2009 by shapeshifta
JavaScript class element prototype hover dhtml
Simple Hovering Element w/ Triggering Element
posted on May 20, 2009 by pdswan
JavaScript class jquery counter sequential
jQuery Sequential List
posted on February 10, 2009 by davebowker
JavaScript javascript class object method delay call setTimeout
JavaScript’s setTimeout and how to use it with your methods
posted on January 8, 2009 by eden159
JavaScript css class list html xhtml DOM name
List all class names used in a DOM tree
posted on May 28, 2008 by noah
JavaScript javascript class toggle
toogle class
posted on February 15, 2008 by marcio
JavaScript div js class changer
div class changer
posted on August 9, 2006 by leoh
Bugs item #854294, was opened at 2003-12-04 15:11
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Bob Ippolito (etrepum)
Assigned to: Nobody/Anonymous (nobody)
Summary: AutoBaseClass doesn't always bring over the right selectors
Initial Comment:
example:
>>> from AppKit import *
>>> NSView.rectForPage_.signature
'{_NSRect={_NSPoint=ff}{_NSSize=ff}}@:i'
>>> class MyNSView(NSView):
... def rectForPage_(self, page):
... return ((0, 0), (0, 0))
...
>>> MyNSView.rectForPage_.signature
'{_NSRect={_NSPoint=ff}{_NSSize=ff}}@:i'
the same "MyNSView" from an AutoBaseClass instead will
have a rectForPage_ signature of "@@:@"
----------------------------------------------------------------------
You can respond by visiting:
On 4-dec-03, at 5:39, Carlos Phillips wrote:
CObject is part of the Python core, but it isn't very well-known (nor
widely used). I've always felt that it needs more support. For one
thing it would be really good if various (all, really) wrapper packages
would
1. Return unknown pointers as CObjects
2. Allow CObjects as initializers to any object creation method.
Together these two would allow you to pass pointers back and forth
between extension modules that are unaware of each other. In your case
you would get the VTK object back as a CObject, which you could then
coerce into a VTK renderer object.
--
Jack Jansen, <Jack.Jansen@...>,
If I can't dance I don't want to be part of your revolution -- Emma
Goldman
On Dec 3, 2003, at 6:34 AM, Carlos Phillips wrote:
> Ideally I would be able to interact with the C++ objects
> pointed to using the already existing python wrappers. I'm not sure
> how this could be done though.
>
> What I have are objective-c++ files containing objective-c classes
> with methods which take and return pointers to c++ classes. VTK has
> its own python wrappers for these c++ classes. This allows for easy
> construction of rendering pipelines. I could wrap the c++ classes in
> objective-c and then wrap the new objective-c classes in python
> through PyObjC. However, this would not allow me to interact with the
> c++ classes directly in python using VTK python classes which is what
> I want to do.
>
> So ideally I would like to be able to see the c++ classes taken and
> returned by my objective-c classes as VTK python wrapper objects.
>
> Example:
> An unwrapped objective-c++ class VTKView has the following method.
> -(vtkRenderer *)renderer;
>
> vtkRenderer is a c++ class. There is a Python wrapper class by the same
> name. I would like to wrap and/or alter VTKView so that I can do the
> following in Python:
>
> v = VTKView.alloc().init()
> v.renderer().AddActor(...)
>
> or be able to use v.renderer() as an argument to some python wrapped
> VTK method.
Actually, if I can't do this, I would settle for simply hiding the c++
pointer attributes and c++ related methods from PyObjC. Can I do that?
Carlos | http://sourceforge.net/p/pyobjc/mailman/pyobjc-dev/?viewmonth=200312&viewday=4 | CC-MAIN-2014-15 | en | refinedweb |
csG2DDrawLine< Tpixel, Tpixmixer > Class Template Reference
Class to draw a line. More...
[Common Plugin Classes]
#include <csplugincommon/canvas/draw_line.h>
Detailed Description
template<class Tpixel, class Tpixmixer>
class csG2DDrawLine< Tpixel, Tpixmixer >
Class to draw a line.
Definition at line 36 of file draw_line.h.
Member Function Documentation
template<class Tpixel, class Tpixmixer>
Draw a line.
Definition at line 40 of file draw_line.h.
References csQfixed16(), csQint(), csGraphics2D::GetPixelAt(), csGraphics2D::LineAddress, and csGraphics2D::Memory.
The documentation for this class was generated from the following file:
- csplugincommon/canvas/draw_line.h
http://www.crystalspace3d.org/docs/online/api-1.2/classcsG2DDrawLine.html | CC-MAIN-2014-15 | en | refinedweb
Writing python scripts for the RFID/NFC expansion
- William Srite last edited by
Hello all, I got the RFID/NFC expansion and I want to write a script that will tell the reader to continuously read any cards it comes into contact with then store those in a file. I know how to get the reader to read by entering:
nfc-list
Which it does, then it exits. So how would I script it in a python file to keep reading and store everything into a file? I know how to code in python fairly well, although it's been a bit since I've done any serious coding, but I know how to read from and write to a file. It's getting it to interact with the RFID/NFC expansion I'm not sure about.
- William Srite last edited by William Srite
@William-Srite
basically what I'm asking is how to control the RFID/NFC expansion from inside a program... any program: Python, Node.js, etc. The documentation shows how to control it via the command line but doesn't explain how to use it with a script, or whether I need to import a library or anything.
I've tried some scripts based on ones I've found online, like this one, but they don't show me what I need to know. Like, how do I keep it in read mode in a loop and then write the results to a file or even an array?
I'm a security researcher and need to build a handheld, battery-powered device to scan a client's employees' badges to do an audit. So, I have the RFID expansion and the battery-powered dock with a LiPo battery, but I'm having trouble writing the script.
- William Srite last edited by William Srite
OK, I figured it out. Here's a test script I wrote real quick to have a proof of concept:
import os
scan = os.system("nfc-list")
def scanFunc():
while 1:
scan
return scan
print(scanFunc())
And the output:
nfc-list uses libnfc v0.2.2
NFC device: Omega NFC Expansion opened
1 ISO14443A passive target(s) found:
ISO/IEC 14443A (106 kbps) target:
ATQA (SENS_RES): 00 44
UID (NFCID1): 04 1e bb 32 ed 4c 80
SAK (SEL_RES): 00
So, after you have this, you can just output it to a text file or JSON file by:
import string
outfile = open("result.txt", "w")
outfile.write(scan)
I'm just starting with the expansion and also finding it frustrating to use. But I had some comments on your thread and thought I'd share what I've figured out in the two days I've been messing with it.
I'm a security researcher and need to build a handheld, battery-powered device to scan a client's employees' badges to do an audit.
Me too! Specifically I do penetration testing of all sorts. I would think you'd want something with a bit more range for an engagement?
basically what i'm asking is how to control the rfid/nfc exp from inside a progra...any program: python node.js, etc.
First, you should forget there are any "tools" installed on the Omega for use with the expansion. And while the documentation is frustrating, it does tell us some useful information that would have taken us a few minutes to figure out (or, if you are me, quite a few minutes):
The Expansion is based on the popular PN532 NFC Chip and communicates with the Omega via UART1.
- PN532 chip
- talks via UART1
# nfc-scan-device
nfc-scan-device uses libnfc reboot-3483-gd1bcad0
1 NFC device(s) found:
- pn532_uart:/dev/ttyS1:
  pn532_uart:/dev/ttyS1
- device is on /dev/ttyS1
error  libnfc.driver.pn532_uart  Serial port already claimed: /dev/ttyS1
nfc-mfultralight: ERROR: Error opening NFC device
- their "tools" are using the libnfc driver
OK, so to get back to answering your question: to control the expansion from inside a program you have two choices:
- Find a library in the language of your choice that uses the libnfc driver and tell it to talk to a pn532 chip on /dev/ttyS1.
- Find a library in the language of your choice that can talk to the pn532 on /dev/ttyS1 directly.
OK, I figured it out. Here's a test script I wrote real quick to have a proof of concept:
import os
scan = os.system("nfc-list")
def scanFunc():
    while 1:
        scan
    return scan
print(scanFunc())

import string
outfile = open("result.txt", "w")
outfile.write(scan)
Even ignoring the issues with the code I probably wouldn't want to use this in any real-world scenario. Did you by chance come up with something else? And honestly for something like what you are trying to do here you could just write a shell script and save yourself the overhead of the python interpreter.
However, if you do want to write something in python, find a library that lets you use one of the above two methods I mentioned and I think you'll be much happier with the expansion. I know I was. | https://community.onion.io/topic/3489/writing-python-scripts-for-the-rfid-nfc-expansion | CC-MAIN-2020-24 | en | refinedweb |
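To give a concrete idea of option 1, here's a minimal polling sketch that just shells out to nfc-list in a loop and logs each new UID it sees. It's only a sketch: the UID parsing is an assumption based on the output format shown above, and the file name and sleep interval are arbitrary.

import re
import subprocess
import time

LOGFILE = "scans.txt"  # arbitrary output file name

def scan_once():
    # Run nfc-list and capture its output; return "" if the tool errors out.
    try:
        return subprocess.check_output(["nfc-list"]).decode("utf-8", "replace")
    except (subprocess.CalledProcessError, OSError):
        return ""

seen = set()
with open(LOGFILE, "a") as log:
    while True:
        out = scan_once()
        # Pull the UID out of output like "UID (NFCID1): 04 1e bb 32 ed 4c 80".
        match = re.search(r"UID \(NFCID1\):\s*([0-9a-f ]+)", out)
        if match:
            uid = match.group(1).strip()
            if uid not in seen:
                seen.add(uid)
                log.write(uid + "\n")
                log.flush()
                print("logged", uid)
        time.sleep(0.5)  # don't hammer the reader

A library that talks to the PN532 directly would still be the better long-term answer, but this at least does what the original poster was after without the broken loop.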
To install the Google Analytics API Java Client, you must download a ZIP file containing all of the JARs, then extract them and copy them into your Java classpath.
- Download the Google Analytics Java client library, which is bundled as a ZIP file with all the required dependencies.
- Extract the ZIP file
- Add all of the JARs within the libs directory to your classpath.
- Add the google-api-services-analytics-v3-[version].jar to your classpath.
Java Environment details
If you are using Eclipse, see here for instructions on adding JARs to your project's classpath.
If you are using NetBeans, see here for instructions on adding JARs to your project's classpath.
If you are using IntelliJ IDEA, see here for instructions on adding JARs to your project's classpath.
If developing from the command line, add -classpath /path/to/directory/with/unzipped/jars to your javac and java command invocations.
Step 3: Setup the sample
You'll need to create a single file named HelloAnalytics.java, which will contain the given sample code.
- Copy or download the following source code to HelloAnalytics.java.
- Move the previously downloaded client_secrets.json into the same directory as the sample code.
- Replace the value of KEY_FILE_LOCATION with the appropriate value from the Developer Console.
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.gson.GsonFactory;
import com.google.api.services.analytics.Analytics;
import com.google.api.services.analytics.AnalyticsScopes;
import com.google.api.services.analytics.model.Accounts;
import com.google.api.services.analytics.model.GaData;
import com.google.api.services.analytics.model.Profiles;
import com.google.api.services.analytics.model.Webproperties;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.GeneralSecurityException;

/**
 * A simple example of how to access the Google Analytics API using a service
 * account.
 */
public class HelloAnalytics {

  private static final String APPLICATION_NAME = "Hello Analytics";
  private static final JsonFactory JSON_FACTORY = GsonFactory.getDefaultInstance();
  private static final String KEY_FILE_LOCATION = "<REPLACE_WITH_JSON_FILE>";

  public static void main(String[] args) {
    try {
      Analytics analytics = initializeAnalytics();
      String profile = getFirstProfileId(analytics);
      System.out.println("First Profile Id: " + profile);
      printResults(getResults(analytics, profile));
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  /**
   * Initializes an Analytics service object.
   *
   * @return An authorized Analytics service object.
   * @throws IOException
   * @throws GeneralSecurityException
   */
  private static Analytics initializeAnalytics()
      throws GeneralSecurityException, IOException {
    HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
    GoogleCredential credential = GoogleCredential
        .fromStream(new FileInputStream(KEY_FILE_LOCATION))
        .createScoped(AnalyticsScopes.all());

    // Construct the Analytics service object.
    return new Analytics.Builder(httpTransport, JSON_FACTORY, credential)
        .setApplicationName(APPLICATION_NAME).build();
  }

  private static String getFirstProfileId(Analytics analytics) throws IOException {
    // Get the first view (profile) ID for the authorized user.
    String profileId = null;

    // Query for the list of all accounts associated with the service account.
    Accounts accounts = analytics.management().accounts().list().execute();
    if (accounts.getItems().isEmpty()) {
      System.err.println("No accounts found");
    } else {
      String firstAccountId = accounts.getItems().get(0).getId();

      // Query for the list of properties associated with the first account.
      Webproperties properties = analytics.management().webproperties()
          .list(firstAccountId).execute();
      if (properties.getItems().isEmpty()) {
        System.err.println("No Webproperties found");
      } else {
        String firstWebpropertyId = properties.getItems().get(0).getId();

        // Query for the list of views (profiles) associated with the property.
        Profiles profiles = analytics.management().profiles()
            .list(firstAccountId, firstWebpropertyId).execute();
        if (profiles.getItems().isEmpty()) {
          System.err.println("No views (profiles) found");
        } else {
          // Return the first view (profile) associated with the property.
          profileId = profiles.getItems().get(0).getId();
        }
      }
    }
    return profileId;
  }

  private static GaData getResults(Analytics analytics, String profileId) throws IOException {
    // Query the Core Reporting API for the number of sessions
    // in the past seven days.
    return analytics.data().ga()
        .get("ga:" + profileId, "7daysAgo", "today", "ga:sessions")
        .execute();
  }

  private static void printResults(GaData results) {
    // Parse the response from the Core Reporting API for
    // the profile name and number of sessions.
    if (results != null && !results.getRows().isEmpty()) {
      System.out.println("View (Profile) Name: "
          + results.getProfileInfo().getProfileName());
      System.out.println("Total Sessions: " + results.getRows().get(0).get(0));
    } else {
      System.out.println("No results found");
    }
  }
}
Step 4: Run the sample
After you have enabled the Analytics API, installed the Google APIs client library for Java, and set up the sample source code the sample is ready to run.
If you're using an IDE, make sure you have a default run target set to the HelloAnalytics class.
Otherwise you can compile and run the application from the command line:
- Compile the sample using:
javac -classpath /path/to/google/lib/*:/path/to/google/lib/libs/* HelloAnalytics.java
- Run the sample using:
java -classpath ./:/path/to/google/lib/*:/path/to/google/lib/libs/* HelloAnalytics
Administrators can perform various tasks in Oracle Business Process Management Workspace (Process Workspace), including setting the display of the Work Items panel, and administering and configuring task-related information.
When an end user launches an application from the Applications panel of the Process Tracking page, the application is launched and a form appears. If the user dismisses that form, then that process instance continues to run as a superfluous instance never to be processed or used. Such superfluous instances can be cleaned up by a background instance that can be specified and scheduled by the administrator of the BPMN service engine. If you are the administrator of the BPMN service engine and want more information about specifying this background instance, see Administering Oracle SOA Suite and Oracle Business Process Management Suite.
This chapter includes the following sections:
Setting the Display of the Work Items Panel in Process Workspace
Administering Approval Groups in Process Workspace
Managing Mapped Attributes (Flex Fields) in Process Workspace
Using Task Configuration in Process Workspace
Using Evidence Search in Process Workspace
Managing Exceptions in Process Workspace
Using the User Metadata Migration Utility
Administrators can specify various application preferences to set the display of the Work Items panel.
To set application preferences:
The Application Preferences window appears.
Figure 9-1 Setting Application Preferences
This section includes the following topics:
How to Specify the Login Page Realm Label in Process Workspace
How to Specify the Resource Bundle in Process Workspace
How to Specify the Language Locale Information in Process Workspace
How to Specify a Branding Logo in Process Workspace
How to Specify the Branding Title in Process Workspace
How to Choose a Skin in Process Workspace
How to Enable Customized Applications and Links in Process Workspace
How to Specify, Show, or Hide the Default Tab Pages in Process Workspace
Specifying Additional Process Workspace Settings
If the identity service is configured with multiple realms, then, when a user logs in to Oracle BPM Worklist, the login page displays a list of realm names.
LABEL_LOGIN_REALM specifies the resource bundle key used to look up the label to display these realms. All settings are application specific, set by the Admin user, and apply to all users.
You can change the term realm to fit the user community—terms such as country, company, division, or department may be more appropriate.
To change the term realm, customize the resource bundle, specify a resource bundle key for the string, and then set the Login page realm label parameter on the Application Preferences page (see Figure 9-1) to point to that resource bundle key.
For information about customizing the resource bundle, see Customizing a Resource Bundle.
For information about specifying a resource bundle, see Customizing a Resource Bundle.
Individual users can set up vacation rules that specify re-assignment or automatic delegation of their tasks when a vacation rule is in effect. Re-assignment/delegation can be enabled or disabled for all users using the Enable Delegate/automatic outcome setting in vacation rules property on the Application Preferences page, as shown in Figure 9-1.
From the Application Preferences page, you can specify how the Process Workspace display language is determined. Information about the language locale can be derived from either the user's browser or the identity provider that stores information about Process Workspace
When you open a task, the task details open in a pop-up window. From the Application Preferences page, you can specify the behavior of the task details pop-up window after the task is completed.
Use Task Form Default: After task is completed, the task details page closes or loads the next task based on settings in each task form.
Load Next Task: After task is completed, the task details page shows the next available task.
Close Task Details: After task is completed, the task details page closes, and takes the screen back to the task list. The task list refreshes to show only the remaining tasks. Select a task to see the details page.
Figure 9-3 Specifying After Task Completion Behavior of Task Details Pop Up Window
From the Application Preferences page, you can specify how the user name format is displayed in the Creator, Assignee or other name columns throughout Process Workspace. You can select to display the user ID (for example, jstein) or the full user name (for example, John Stein).
You reach the Application Preferences page by clicking Administration on the global toolbar at the very top of Process Workspace, then, under Administration Areas, selecting Application Preferences. The Application Preferences panel appears in the right pane. To specify the display language, in the User Name Format field, select User ID or User Name, as shown in Figure 9-4.
Figure 9-4 User Name Format.
From the Choose a Skin list, select one of the default ADF skins.
If you want the Human Task flow application to use the skin selected in Process Workspace, you must verify that the
trinidad-config.xml file (found in the Human Task flow application) includes the following:
<skin-family>#{param.sf}</skin-family>
Upload your own customized skin .css file in a .JAR file and deploy it as part of a shared library. Then, when you restart your application from the console, your custom skin appears in the Choose a Skin list.
You can create a customized skin.
C:\temp\META-INF\adf\oracle\skin\images
META-INF\skins\custom.css
META-INF\trinidad-skins.xml
Note that, in this example, you can change the word custom to the name of your own customized skin.
Edit the trinidad-skins.xml file.
Create the .JAR file by issuing the following command from the c:\temp directory:
jar -cvf customSkin.jar META-INF/
Copy the resulting JAR file to a shared library location, such as /scratch/username/sharedLib.
Note:
Refer to the images in your CSS file this way:
../adf/oracle/skin/images/example.gif (include the two leading dots).
This allows the search for the META-INF root to start one directory above the META-INF/skin directory in which the .css file is located.
For more information about deploying images in a shared library, see Deploying Images and JAR files as Part of a Shared Library JAR file.
In Process Workspace, you can create customized external applications and links that become available in the Links pane. Moreover, in both Process Workspace and the Worklist Application you can specify the columns that appear in the inbox.
To enable customized applications:
Depending on your customization, you can now see its effects. If your customization is for Process Workspace and involves either creating an external application or specifying inbox columns in the Task Details pane or both, you see the following:
Your custom application listed in the Links pane.
The columns of the inbox adjusted according to your specifications.
For information about customizing applications, see Customizing Applications in Process Workspace and the Worklist Application.
A timeout defines the length of time a completed screenflow task will wait for the next task to become available before processing continues. The default wait time is 5000 milliseconds.
Do not use the timeout parameter as an alternative to not setting the ScreenFlowMode parameter in the last task of the screenflow. It should be used only to increase or decrease the wait time for all tasks except the last one in the screenflow.
To set the timeout parameter, specify the milliseconds in the Screenflow Timeout Interval in ms field, as shown in Figure 9-5.
Figure 9-5 Screenflow Timeout Interval
Users with the Administration or Process owner roles can register customized application URLs in Process Workspace. These are displayed in the Applications panel.
To register a custom application URL:
The Custom Application Registration dialog displays.
The Display Name and URL fields are required. If you specify a Category, the URL appears under the Category name in the Applications panel. Optionally, specify a Description.
If required, click Select Users, Groups to launch the Identity Browser, from which you can search for and select Users and Groups.
Depending on your customization, you can now see its effects in the Applications panel.
In Process Workspace, you can select multiple tasks, and then approve or reject them using the Task Actions buttons. These buttons appear as a green check mark or red X on the Tasks page. Each of these icons can be configured by an administrator to perform one or more than one approval or rejection task actions.
To specify the task actions which map to an image:
If you are an administrator, then you can specify whether to show or hide the Process Workspace standard tabs, such as Tasks, Case, Process Tracking, and Dashboards. Also, from the tabs you decide to show, you can select the one to be the default tab when a user logs in.
To show or hide the standard Process Workspace tabs:
Do this by selecting the radio button next to the check box for that tab. If you do not select a default tab, then the standard tabs do not show or hide as you specified.
Figure 9-6 shows the Application Preferences page with the Show Standard Tabs row highlighted.
Figure 9-6 Show Standard Tabs Selection
If you are an administrator, then you can specify whether to show or hide the Actions drop-down in the header area of a screen in the Process Workspace.
To show or hide the Actions drop-down:
Actions that can be performed for that screen are shown or hidden in the header area of the screen under the Actions drop-down, depending on whether you choose the show or hide option. You can also restrict the file types that can be uploaded by a user. Enter the allowed file type extensions in the field.
View task details in separate window - Choose the Enable option if you want the task details to be opened in a separate window.
An approval group consists of a name and a predefined set of users configured to act on a task in a certain pattern. This pattern is similar to a human workflow routing slip pattern where users can act on tasks in serial or in parallel. An approval group also can contain a nested approval group in the pattern.
The name of an approval group is necessary when specifying the approval group list builder. The pattern configured in the approval group is used by default to order the users who must act on the task. However, when creating the list builder, the default pattern can be overridden by specifying the voting method.
Process Workspace enables users with administrator rights to manage approval groups.
This section contains the following topics:
How to Create Mapped Attribute Labels
How to View Approval Groups
How to Search for an Approval Group
How to Add a Static Approval Group
How to Add a New Member to a Static Approval Group
How to Delete a Member from an Approval Group
How to Move an Approval Group's Members
How to Nest Approval Groups
How to Rename an Approval Group
Using Dynamic Approval Groups
How to Delete an Approval Group
Task forms are created in Oracle JDeveloper with Oracle Application Development Framework (Oracle ADF). If you are a process designer and want more information, see Designing Task Display Forms for Human Tasks in Oracle Fusion Middleware Developer's Guide for Oracle SOA Suite.
Task forms are displayed in Process Workspace, which users with administrator rights use to manage tasks. Figure 9-7 shows a task form.
Figure 9-7 Task Form
The Header view, shown in Figure 9-8, is created during design time in Oracle JDeveloper using the header drop handler.
Figure 9-8 Header View in Process Workspace
By default, the JDeveloper drop handler includes the Header fields listed in Table 9-1. However, using JDeveloper, the developer can include or remove any of the fields based on the use case.
Table 9-1 Header Fields
The Header also contains custom and system actions. Custom actions are those that depend on task metadata outcomes. For example, if the metadata contains Approve and Reject outcomes, then Approve and Reject appear in the Header as custom actions. If the metadata contains more than two outcomes, then the custom actions appear in the Header as a drop-down list instead of separate buttons.
System actions, such as Escalate, Suspend, and Resume, always appear in a drop-down list. The actions that appear depend on what the user is doing. For example, after a task has been initiated it can be withdrawn. Subsequently, if a user logs into Process Workspace to view the details of an initiated task, then Withdraw appears in the list containing the available actions.
Table 9-2 lists all the actions the administrator can perform from the Header and their descriptions.
Table 9-2 Header Actions
The Task Payload view, shown in Figure 9-9, displays the task payload data.
Figure 9-9 Task Payload
If you are a process designer and want more information, see Designing Task Display Forms for Human Tasks in Oracle Fusion Middleware Developer's Guide for Oracle SOA Suite.
The Task History view, shown in Figure 9-10, provides a graphical and tabular view of events in the task life cycle. In addition, if the Edit Approver Configuration option was selected in the designer, the tabular view includes special controls that allow future participants to be edited, and those edits are also saved.
Figure 9-10 Task History in Process Workspace
If the Allow all participants to edit future participants option is selected while configuring the approval task, the history region displays additional actions that allow a participant to edit the future participants list.
Figure 9-11 shows the addition of the Apply and Reset buttons.
Figure 9-11 Task History - Additional Actions
Table 9-3 describes all additional approval-task actions.
Table 9-3 Edit Future Participants List Actions
Table 9-4 lists the actions the administrator can perform from the Task History view and their descriptions.
Table 9-4 Task History Actions
The Comments and Attachments view, shown in Figure 9-12, is created in JDeveloper by using the task data control drop handler. It includes a text-entry field in which to enter comments about the task, and the functionality to attach supporting documents.
Figure 9-12 Comments and Attachments
You use mapped attributes to store use-case-specific data, such as data extracted from a task's payload. You can view and create mapped attribute labels on the server by using Process Workspace.
Note:
You must have the workflow.mapping.protectedFlexfield privilege to create protected flex field attributes. The default administrative user, weblogic, has this privilege.
For more information, see How to Specify Mapped Attributes.
Figure 9-13 Flex Field Mapping: Protected
The page displays a list of existing attribute labels. You can filter the list by selecting an attribute type from the Show list. Clicking a specific label displays the list of mappings the attribute uses in the Details panel.
The Create Label dialog displays.
The label is created and is made available for mapping in task components.
Note:
Attribute labels can be deleted only if they are not used in any mappings.
If attribute labels have been defined on one server and must be re-created on another, then you can use the user metadata migration utility in Process Workspace.
For more information, see Using the User Metadata Migration Utility.
When attribute labels are displayed to end users, for example in the task listing page of Process Workspace, the label name that is used is the one specified when the label was created. In cases where users of different nationalities may see the label, a translation of the label name appropriate to the Process Workspace user's locale can be displayed instead. Translations of attribute labels can be customized using the WorkflowLabels.properties resource bundle.
For more information, see How to Specify the Language Locale Information in Process Workspace.
To view approval groups:
The Administration Areas panel appears in the left pane.
The Groups page appears in the right pane.
A details page for that approval group appears in the right pane, similar to the one shown in Figure 9-14.
Figure 9-14 Process Workspace: Approval Group Detail
The figure shows that the Disbursement Team approval group has two users, jcooper and jstein. These users act on a task in a specific sequence configuration.
You can search for an approval group either by user name or group name.
The Administration Areas panel appears in the left pane.
The Approval Groups page appears in the right pane.
You also can perform a wildcard search (*) with a partial user name.
A list of all approval groups to which the user belongs displays in the left pane, as shown in Figure 9-15.
Figure 9-15 User Name Search Results
Clicking the approval group name refreshes the details pane on the right with the structure of that group.
The Administration Areas panel appears in the left pane.
The Groups page appears in the right pane.
You also can perform a wildcard search (*) with a partial group name and click Search. A list of all matching approval groups displays in the left pane.
Clicking the approval group name refreshes the Details pane on the right with the structure of that group.
You can add a static approval group.
The Administration Areas panel appears in the left pane.
The Groups page appears in the middle pane.
Figure 9-16 Create Approval Group: Select Static Group
You can add members to the new approval group.
Members of a static approval group can be either users or other approval groups.
To add a new user member to an approval group:
The Administration Areas panel appears in the left pane.
The Groups page appears in the middle pane.
The other icons enable you to edit, delete, and reorder members in the approval sequence.
The Add to Group dialog box appears.
Enter a full user name and click OK.
The dialog box closes and the new member appears in the Members section of the Details pane.
Click the magnifying glass to search for a user.
If you click the magnifying glass, an Identity Browser pop-up dialog appears.
The Identity Browser dialog refreshes and the search results appear.
The details for that user appear in the Details section of the dialog.
A node representing the selected user appears in the approval group structure in the Members section of the Details pane.
You can add more members to the approval group by repeating the steps above. The resulting approval group structure looks similar to the one shown in Figure 9-17.
Figure 9-17 Approval Group Structure: Multiple Members
You can delete members from approval groups.
To delete a member from an approval group:
The Administration Areas panel appears in the left pane.
The Groups page appears in the middle pane.
The corresponding Details pane appears to the right.
The approval group structure refreshes and the member node has been deleted.
You can change the order of approval groups.
To change the sequence order of an approval group:
The Administration Areas panel appears in the left pane.
The Groups page appears in the right pane.
The corresponding Details pane appears to the right.
Nesting an approval group means making it part of another approval group.
To nest an approval group:
The Administration Areas panel appears in the left pane.
The Groups page appears in the right pane.
The corresponding Details pane appears to the right.
The Add to Group dialog box appears.
Its structure appears in the right pane.
The new approval group appears in the approval group's structure.
You can rename approval groups.
To rename an approval group:
The Administration Areas panel appears in the left pane.
The Groups page appears in the right pane.
The corresponding Details pane appears to the right. is the only input parameter. The primary key list can be obtained from the task: task/systemAttributes/collectionTarget.
Example 9-2 shows a code snippet for a sample dynamic approval group class.
Example 9-2 Code for Dynamic Approval Group Class
/************** IDynamicApprovalGroup.java ******************/
package oracle.bpel.services.workflow.repos.test;

import java.util.ArrayList;
import java.util.List;
import oracle.bpel.services.workflow.WorkflowException;
import oracle.bpel.services.workflow.task.IDynamicApprovalGroup;
import oracle.bpel.services.workflow.runtimeconfig.impl.RuntimeConfigUtil;
import oracle.bpel.services.workflow.runtimeconfig.model.ApprovalGroupMember;
import oracle.bpel.services.workflow.task.model.Task;
import oracle.bpel.services.workflow.IWorkflowConstants;

public class SampleDynAprGrp1 implements IDynamicApprovalGroup {

  public List getMembers(Task task) throws WorkflowException {
    // Todo: add logic to do conditional return of different group members
    // based on some task attribute
    List approversList = new ArrayList();

    ApprovalGroupMember taskAssignee1 =
        RuntimeConfigUtil.getFactory().createApprovalGroupMember();
    taskAssignee1.setMember("jlondon");
    taskAssignee1.setType(IWorkflowConstants.IDENTITY_TYPE_USER);
    taskAssignee1.setSequence(1);
    approversList.add(taskAssignee1);

    ApprovalGroupMember taskAssignee2 =
        RuntimeConfigUtil.getFactory().createApprovalGroupMember();
    taskAssignee2.setMember("wfaulk");
    taskAssignee2.setType(IWorkflowConstants.IDENTITY_TYPE_USER);
    taskAssignee2.setSequence(2);
    approversList.add(taskAssignee2);

    ApprovalGroupMember taskAssignee3 =
        RuntimeConfigUtil.getFactory().createApprovalGroupMember();
    taskAssignee3.setMember("jstein");
    taskAssignee3.setType(IWorkflowConstants.IDENTITY_TYPE_USER);
    taskAssignee3.setSequence(3);
    approversList.add(taskAssignee3);

    ApprovalGroupMember taskAssignee4 =
        RuntimeConfigUtil.getFactory().createApprovalGroupMember();
    taskAssignee4.setMember("achrist");
    taskAssignee4.setType(IWorkflowConstants.IDENTITY_TYPE_USER);
    taskAssignee4.setSequence(4);
    approversList.add(taskAssignee4);

    return approversList;
  }
}
If you are a process designer and want more information, see Oracle Fusion Middleware Developing Business Processes with Oracle Business Process Management Studio.
To make the class file available in a globally well-known directory that is part of the Service Integration class path, put your class files in the following Oracle.
The following procedure explains how to add a dynamic approval group.
The Administration Areas panel appears in the left pane.
The Groups page appears in the right pane.
You can delete members from approval groups.
To delete an approval group:
The Administration Areas panel appears in the left pane.
The Groups page appears in the right pane.
A confirmation dialog box appears.
The approval group is deleted.
Note:
If the approval group you deleted is nested in other approval groups, it also is deleted from those parent groups. specify mapped attributes, see Oracle Fusion Middleware Developer's Guide for Oracle SOA Suite.
An administrator, or users with the necessary privileges, can use mapped attributes, shown in Figure 9-18.
Figure 9-18 Specifying a Mapped Attribute
If you are a process designer and want more information about public and protected mapped attributes, see Oracle Fusion Middleware Developing Business Processes with Oracle Business Process Management Studio.
To create a mapped attribute mapping, an administrator first defines a semantic label, which provides a more meaningful display name for the mapped attribute. Click Add to use the Create Label dialog box shown in Figure 9-19.
Figure 9-19 Creating a Label.
Task Configuration is a web-based application in Process Workspace that enables business users and administrators to review and modify rules that were predefined by the workflow designer. These predefined rules can be changed for a specific customer based on the customer's applicable corporate policies.
For example, suppose that a corporate policy requires two levels of approvals for expense amounts greater than 1000. Suppose further that this policy is changed to require three levels. You can use Task Configuration to change the rule rather than having your IT department modify the rule in the underlying process and then deploy it again. Any change to the rule is applied starting with the next instance, and instances already in progress use the current rule definitions.
Task Configuration enables you to edit the event-driven and data-driven rules associated with an approval flow at runtime—that is, when the workflow has already been deployed.
This section contains the following topics:
How to Access the Task Configuration Page
How to Edit Workflow Task Settings
How to Specify Routing Settings
How to Specify an Expiration Policy
How to Specify an Escalation Policy
How to Extend an Expiration Policy Period
How to Specify Participant Notification Preferences
How to Enable Task Access
How to Specify Actions for Acting Upon Tasks
How to View Settings for Rules
How to Add a Rule to a Ruleset in Process Workspace
How to Add and Customize Decision Tables
How to Add a Variable to a Rule
Use the task configuration tab to set aggregation, access, expiration, and notification settings.
To access the Task Configuration page:
The Administration Panel appears in the left pane.
The Task Configuration page appears in the right pane.
Figure 9-20 Task Configuration: Editable Task Page
The Tasks to be configured section in the middle pane lists all workflow tasks configured to use approval-flow rules. It also provides a search capability. When you click a link to one of the tasks, the corresponding data appears in the right pane in two tab pages: Task, shown in Figure 9-20, which displays the default configuration for the task, and Rules, shown in Figure 9-21, which you can use to override the approval-flow list builder configuration. The rule configurations are displayed graphically based on the stages defined in the approval flow.
Figure 9-21 Task Configuration: Rules Tab Page
This section contains information about workflow task settings (task metadata).
To edit a task setting:
The Administration Panel appears in the left pane.
The search results appear in the Tasks to be configured pane.
The corresponding Task tab page appears in the right pane.
Perform the following steps:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
The Task tab page contains a limited set of the routing options. Approval aggregation requirements can be any of the following:
None
Once per task
Once per stage
Perform the following steps:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
Never Expire
Expire After. Select Fixed Duration to specify the amount of days, hours, and minutes for completing the task. Alternatively, select By Expression to dynamically enter a value as an XPath expression.
You can extend the expiration period when the user does not respond within the allotted time. You do this by specifying the number of times the task can be renewed upon expiration (for example, renew it an additional three times) and the duration of each renewal (for example, three days for each renewal period).
To extend an expiration policy period:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
Perform the following steps:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
Maximum Escalation Levels
Number of management levels to which to escalate the task. This field is required.
Highest Approver Title
The title of the highest approver (for example, self, manager, director, or CEO). These titles are compared against the title of the task assignee in the corresponding user repository. This field is optional.
The escalation policy specifies the number of times the task can be escalated on expiration and the renewal duration.
Notifications indicate when a user or group is assigned a task or informed that the status of the task has changed. Notifications can be sent through email, voice message, instant message, or SMS. Notifications are sent to different types of participants for different actions. Notifications are configured by default with default messages. For example, a notification message is sent to indicate that a task has completed and closed. You can create your own or modify existing configurations.
Note:
Embedded LDAP does not support group email addresses. Therefore, when a task is assigned to a group ID, emails are sent to all of its members instead of to the group email address.
The notification message should be a valid XPath expression. If you must specify a simple string, then specify it in either double quotes or use the string function, for example string('The task requires your attention').
Perform the following steps:
To make email messages actionable:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
This action enables you to perform task actions through email.
Note:
FYI tasks are not actionable and cannot be acknowledged from email messages.
Perform the following steps:
To secure notifications, make messages actionable, and send attachments:
From the Process Workspace toolbar, click Administration.
The Administration Panel appears in the left pane.
In the Administration Panel, under Task Administration, click Task Configurations.
From the Tasks to be configured pane, select a task.
The corresponding Task tab page appears in the right pane.
In the Tasks to be configured pane, click the Edit task icon.
The Task tab page becomes editable as shown in Figure 9-20.
In Task tab page, in the Notification section, select Make notifications secure (exclude details).
If selected, a default notification message is used. There are no HTML worklist task details, attachments, or actionable links in the email. Only the task number is in the message.
You can send task reminders, which can be based on the time the task was assigned to a user or the expiration time of a task. The number of reminders and the interval between the reminders can also be configured.
To set up reminders:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
Three default status types display in the Task Status column: Assign, Complete, and Error. You can select other status types for which to receive notification messages.
Notifications can be sent to users involved in the task in various capacities. When the task is assigned to a group, and there is no notification endpoint available for the group, then each user in the group is sent a notification.
To notify recipients of changes to task status:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
Assign
When the task is assigned to users or a group, this captures the following actions:
Task is assigned to a user.
Task is assigned to a new user in a serial workflow.
Task is renewed.
Task is delegated.
Task is reassigned.
Task is escalated.
Information for a task is submitted.
Complete
Error
Expire
Request Info
Resume
Suspend
Update
Task payload is updated.
Task is updated.
Attachments are added and updated.
Update Outcome
Withdraw
All Other Actions
Any action not covered in the above task types. This includes acquiring a task.
Assignees
The users or groups to whom the task is currently assigned.
Initiator
The user who created the task.
Approvers
The users who have acted on the task up to this point. This applies in a serial participant type in which multiple users have approved the task and a notification must be sent to all of them.
Owner
The task owner.
Reviewer
The user who can add comments and attachments to a task.
A default notification message is available for delivery to the selected recipient. You can modify the default message text.
To edit the notification message:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
This message applies to all the supported notification channels: email, voice, instant messaging, and SMS. Email messages can also include the worklist task detail defined in this message. The channel by which the message is delivered is based upon the notification preferences you specify.
Perform the following steps:
To prevent multiple email notifications for a human task event:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
This action enables you to perform task actions through email.
Access-rule settings can be set to control the actions a user can perform. Content and action permissions can be specified based on the logical role of a user, such as creator (inititator), owner, assignee, and reviewers.
To enable task access:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
Note:
Access rules are always applied on top of what the system permits, depending on who is performing the action and the current state of the task.
You can specify the actions that specific users can perform when acting on the task content that you specified in the Configure Task Content Access dialog in How to Enable Task Access.
To specify actions for acting upon tasks:
The Administration Panel appears in the left pane.
The corresponding Task tab page appears in the right pane.
Perform the following steps:
To view a rule set for a rule-based participant:
The Administration Panel appears in the left pane.
The Tasks to Be Configured page appears in the middle pane.
The corresponding details appear in the Tasks tab page in the right pane.
The Rules tab page displays the Configure Task Approval Rules page, similar to what is shown in the example in Figure 9-22.
Figure 9-22 Editing a Task Rule
The Configure Task Approval Rules page graphically shows the approval flow for the task. The icon in each participant node indicates the type of participant: single, parallel, serial, or FYI.
To see a tool tip displaying the name of the ruleset for a participant in this task, hover over the link for that participant.
Note that value-based participants are shown but are disabled.
You can add a rule to a ruleset, specifying not only conditions, but also a description, the effective date, and priority.
To add rules for a rule-based participant:
Here you can either add or edit rules defined for the ruleset. (You can also add or edit decision tables as explained in How to Add and Customize Decision Tables.)
Figure 9-23 Adding a Rule
This enables you to specify the conditions for the rule as described in How to Define Rule Conditions. To provide a description and specify the effective date and priority, click the Show Advanced Settings icon to the left of the rule field.
The changes are saved to the rule definitions in the rules dictionary. To commit the changes to MDS, click Commit. The task is refreshed and no longer editable.
For more information about adding rules, see Working with Rulesets and Rules in Oracle Fusion Middleware User's Guide for Oracle Business Rules.
You can set the left and right sides of a condition by selecting operands from condition browsers. Clicking the magnifying glass icon displays the browsers.
The operator for comparing the operands of the condition.
You can add actions to a rule.
To add an action:
If there are rules defined for the ruleset, go to step 9.
Figure 9-24 Adding Actions to a Rule
Select the action you want to add from the displayed list as shown in Figure 9-25.
Figure 9-25 Available Actions for Rules
You can add multiple actions for any rule by repeating step 10.
Copy
Cut
Paste
Up and Down (to change the order of the listed actions)
Perform the following steps:
Select the action by clicking the check box next to the action as shown in Figure 9-26.
Figure 9-26 Delete, Up, Down, Copy, Cut and Paste Commands
Select any of the possible actions such as Delete, Cut, Copy or Paste by clicking the corresponding icons.
If you have multiple actions, you can change the order of the actions by using the up and down arrows.
You can customize existing decision tables or add new ones.
You can specify the actions for a ruleset that were specified in a previously created decision table.
To customize an existing decision table in a ruleset in Process Workspace:
The icon to the left of each participant in the flow indicates the type of participant: single, parallel, serial, or FYI.
This displays the ruleset associated with that participant.
The selected decision table appears in the lower half of the page.
Figure 9-27 Adding Actions to a Decision Table
The corresponding actions appear in the table for you to specify.
In the Rules tab, you can edit multiple parameters of an action from the Decision Table.
To edit multiple parameters:
The selected decision table appears in the lower half of the page as shown in Figure 9-28.
Figure 9-28 Decision Table with Editable Actions
You can modify the values available for the action you selected. You can choose not to specify values for all the parameters in the Action Editor dialog box.
Figure 9-29 Action Editor
The values appear in the Decision Table for the selected action for all rules. For more information on Decision Tables, see How to Add and Customize Decision Tables.
You have the option to edit one of the parameters of an action in the Decision Table.
To edit a single cell of any action:
The selected decision table appears in the lower half of the page as shown in Figure 9-28.
This makes the cell editable as shown in Figure 9-30.
Figure 9-30 Single Cell Editing
You can add a table to a ruleset and specify the actions it contains.
To add a new decision table to a ruleset in Process Workspace:
The Administration Panel appears in the left pane.
The Tasks to Be Configured pane appears as the middle pane.
The corresponding details appear in the Tasks tab page in the right pane.
The Rules tab page displays the Configure Task Approval Rules page showing the approval flow for the task. The icon to the left of each participant in the flow indicates the type of participant: single, parallel, serial, or FYI.
This displays the ruleset associated with that participant.
Figure 9-31 Adding a Decision Table
The new decision table appears in the lower half of the page.
The corresponding actions appear in the table for you to specify.
Perform the following steps:
To add a variable to a rule:
The Add Variable window appears.
The types displayed in the list correspond to those that are available in the rule dictionary (including built-in types by default as well as others that have been registered).
The variable can now be used to define conditions.
The evidence store service is used for digital signature storage and nonrepudiation of digitally signed human tasks.
To search the evidence store:
The Administration Areas panel appears in the left pane.
The Evidence Search page appears in the right pane, as shown in Figure 9-32.
Figure 9-32 Searching the Evidence Store
If you are a process designer and want more information about using the User Metadata Migration Utility, see Oracle Fusion Middleware Developing Business Processes with Oracle Business Process Management Studio.
For information about moving Oracle BPM from a test site to a production site, see Oracle Application Server Administrator's Guide.
You can integrate some Oracle Business Process Management functionality into Oracle WebCenter Portal portals and pages, including task lists from which you can view and execute tasks.
WebCenter Portal Integration Architecture — WebCenter Portal
BPM is integrated into WebCenter Portal using a BPEL connection, and a WebCenter Portal resource catalog. With the BPEL connection established, the resource catalog provides task flows that can be embedded in a WebCenter Portal page.
In the WebCenter Portal portal or page, an ADF taskflow contains an in-line frame. The frame’s URL points to a page that provides BPM functionality (such as the Task List). This page is hosted from the BPM Workspace application, which resides on the BPM server.
Figure 9-33 WebCenter Portal BPM Integration Architecture
WebCenter Portal Integration Architecture — BPM
The components needed to integrate BPM Task List functionality into WebCenter Portal are built in to the BPM server application. No additional BPM server configuration is needed.
WebCenter Portal Integration Setup Overview
To set up WebCenter Portal integration with BPM, you create a connection to the BPM BPEL server, load the BPM Process Portal Catalog resource catalog, and then integrate BPM resources into a WebCenter Portal portal. Interactive BPM functionality is then available.
For configuration details and steps, see Configuring Oracle WebCenter Portal Integration. For implementation details, see Integrating BPM Functionality into WebCenter Portal.
Configure BPM Workspace functionality integration into Oracle WebCenter Portal by creating a BPEL connection to the BPM BPEL server, and loading the BPM Process Portal Catalog resource catalog.
A high level overview of the tasks needed to integrate BPM into WebCenter Portal is:
create a BPEL connection to the BPM BPEL server
load the BPM Process Portal Catalog resource catalog
use the BPM Process Portal Catalog resources to integrate BPM functionality into a portal or page
First, create the BPM BPEL service connection as follows:
<FMW_HOME>/wcportal/common/bin/wlst.sh
createBPELConnection, specifying the following parameters:
- appName: 'webcenter'
- name: 'WebCenter-Worklist'
- policy: oracle/wss10_saml_token_client_policy
createBPELConnection(appName='webcenter', name='WebCenter-Worklist', url='', policy='oracle/wss10_saml_token_client_policy')
Next, load the BPM Resource Catalog into your WebCenter Portal portal as follows:
Shared Assets tab.
Resource Catalogs page.
Upload.
<MW_HOME>/oracle/wcportal/common/lib/BPM_Process_Portal_Resource_Catalog.ear.
OK.
WebCenter Portal integration with BPM is configured.
For customization of an integrated BPM task list see Customizing an Integrated BPM Task List in WebCenter Portal. For integration configuration details and steps, see Configuring Oracle WebCenter Portal Integration. For an overview of WebCenter Portal integration architecture, see Integrating with Oracle WebCenter Portal.
To integrate BPM functionality into an Oracle WebCenter Portal portal, drag Process Workspace Resources located in the BPM Process Portal Resource Catalog into the content pane.
Edit Portal.
Settings icon.
Edit Portal icon.
Show Catalog.
Process Workspace Resources catalog.
For an overview of WebCenter Portal integration architecture, see Integrating with Oracle WebCenter Portal. For configuration details and steps, see Configuring Oracle WebCenter Portal Integration.
You can customize which components of the Task List are displayed in integrated BPM functionality in an Oracle WebCenter Portal page, as well as some of their default values.
From the Pages and Portals Actions dropdown menu, select Edit Portal. Click the View Actions dropdown menu from the upper right corner of the panel containing the Task List, and select Parameters.
Table 9-6 Parameters, and Default and Possible Values for Customizing an Integrated BPM Task List
See Integrating with Oracle WebCenter Portal for more information.
Add /workflow to your mod_wl_ohs configuration to view task details (the ADF form) in WebCenter Portal.
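A minimal sketch of such a mapping in the Oracle HTTP Server configuration (the host name and port are placeholders for your own BPM server):

<Location /workflow>
  SetHandler weblogic-handler
  WebLogicHost bpm-host.example.com
  WebLogicPort 8001
</Location>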
Without this mapping, the task details are redirected to the internal URL and frame busting comes into play, preventing the task form from loading. | https://docs.oracle.com/middleware/12211/bpm/bpm-user/GUID-41FAAE91-AF32-47A0-BC1B-92CB960F452F.htm | CC-MAIN-2020-24 | en | refinedweb |
Move just the axis of a polygonal object
I have a list with all the coordinates of the vertexes of a polygonal object, all in global coordinates, that I got with:
op_mg = child.GetMg()
points = op.GetAllPoints()
for p, pt in enumerate(points):
    points[p] = pt * op_mg
After setting the global matrix of my polygonal object to the matrix that I want its axis to be, now I want to relocate all the points the location where they were previously (globally).
But I want it to be able to happen, no matter how deep inside a hierarchy my polygonal object is.
Meaning... I want to be able to relocate the axis of my polygonal object to a specific global location, even if my object is inside a hierarchy, but I want to keep all the vertexes in their initial global location.
How can I do this?!?
I have been working with matrixes but no matter what I do, the points of the object always end up moving.
It's actually the axis and the points that have to move.
Cheers
Manuel
That is what I was trying to do, but it is still not working.
Let's assume I have a list of objects that are all children of a Null. The objects can be children, or children of children, or even deeper, but they are all inside a Null.
I want all the geometry of those children to remain in the same spatial location, but I want all the axes to move to the location of the Null (no matter how deep the objects are in the hierarchy).
My current code is as follows:
main_mg = op.GetMg() # the global matrix of the parent Null
# op_list is a list that contains all the childs of the Null.
# Some of them are childs of childs or even deeper.
doc.StartUndo()
for op in op_list:
    child_mg = op.GetMg()
    pointsG = [p * child_mg for p in op.GetAllPoints()]
    doc.AddUndo(c4d.UNDOTYPE_CHANGE, op)
    op.SetMg(main_mg)
    op.Message(c4d.MSG_UPDATE)
    inv_mg = ~child_mg
    nPoints = [p * inv_mg for p in pointsG]
    doc.AddUndo(c4d.UNDOTYPE_CHANGE, op)
    op.SetAllPoints(nPoints)
doc.EndUndo()
c4d.EventAdd()
But it is not working!!
All the geometry still moves.
@rui_mac said in Move just the axis of a polygonal object:
inv_mg = ~child_mg
Be careful about the matrix you are using as reference to go from global to local.
The object's new matrix is now the same as the Null's. If you use the old one (child_mg), the points will move the same way you moved the axis. But if you use the new one, the points will be converted from global to local and in fact will not move (and that's the goal).
inv_mg = ~main_mg
Forget about the hierarchy in that case.
It's just matrix operations: local to global and global to local.
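As a tiny illustration of those two conversions (the variable names here are just for the example):

    mg = op.GetMg()            # the object's global matrix
    p_global = p_local * mg    # local -> global
    p_local = p_global * ~mg   # global -> local (~ inverts the matrix)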
@m_adam shared his code; he thinks it could help you in your project.
Let me know if it's still not clear.
Cheers
Manuel
Thank you so much. It is clearer now.
It works!!
Spoke too soon.
It works, but not for children of children of children...
This is my current code:
import c4d

op_list = []

def Get_Op(op):
    global op_list
    while (op):
        op_list.append(op)
        Get_Op(op.GetDown())
        op = op.GetNext()
    return None

# Main function
def main():
    global op_list
    selected = doc.GetActiveObjects(c4d.GETACTIVEOBJECTFLAGS_0)
    if len(selected) != 1:
        return
    op = selected[0]
    main_mg = op.GetMg()
    child = op.GetDown()
    op_list = []
    if child != None:
        Get_Op(child)
    if len(op_list) != 0:
        doc.StartUndo()
        for op in op_list:
            if op.GetType() == c4d.Opolygon:
                child_mg = op.GetMg()
                pointsGlob = [p * child_mg for p in op.GetAllPoints()]
                doc.AddUndo(c4d.UNDOTYPE_CHANGE, op)
                op.SetMg(main_mg)
                op.Message(c4d.MSG_UPDATE)
                inv_mg = ~main_mg
                newPoints = [p * inv_mg for p in pointsGlob]
                doc.AddUndo(c4d.UNDOTYPE_CHANGE, op)
                op.SetAllPoints(newPoints)
                op.Message(c4d.MSG_UPDATE)
        doc.EndUndo()
        c4d.EventAdd()

# Execute main()
if __name__ == '__main__':
    main()
And it doesn't work with the Cap1 and Cap2 objects, in this hierarchy:
Everything remains in the same location, with the axis moved to the location of the outermost parent Null, except the Cap1 and Cap2 objects, as their geometry changes location.
After trying several solutions, I've changed your function to the one below for non-recursive iteration, using the same local-to-global and global-to-local idea but with the child's local matrix (the one you get with GetMl).
import c4d

def GetNextObject(op):
    # Non recursive hierarchy iteration
    if op == None:
        return None
    if op.GetDown():
        return op.GetDown()
    while not op.GetNext() and op.GetUp():
        op = op.GetUp()
    return op.GetNext()

def UpdateObject(op, newpos):
    # Update the object by moving it to the new position
    # and moving back the points to their old position
    # Store the old matrix of the object
    old_mg = op.GetMg()
    # Retrieve the points position in global space
    pointsGlob = [p * old_mg for p in op.GetAllPoints()]
    # Add an Undo state to the document
    doc.AddUndo(c4d.UNDOTYPE_CHANGE, op)
    # Update the object position matrix
    op.SetMg(newpos)
    # Inverse the matrix to come back to local space
    inv_mg = ~newpos
    # Retrieve points from global to local space
    newPoints = [p * inv_mg for p in pointsGlob]
    # Add an Undo state to the document
    doc.AddUndo(c4d.UNDOTYPE_CHANGE, op)
    # Update the points
    op.SetAllPoints(newPoints)
    # Notify a change on the object
    op.Message(c4d.MSG_UPDATE)

def UpdateChildsMatrix(op, oldMg, newMg):
    # Update the children matrices after the parent has been moved
    op = op.GetDown()
    # For all children of the object
    while op:
        # Store the local matrix to a global space with the old matrix object
        newChildMg = oldMg * op.GetMl()
        # Calculate the new local matrix of the object
        newChildMg = ~newMg * newChildMg
        # Set the new local matrix of the object
        op.SetMl(newChildMg)
        # Notify a change on the object
        op.Message(c4d.MSG_UPDATE)
        # Get the next object if any
        op = op.GetNext()

# Main function
def main():
    # Get all selected objects
    selected = doc.GetActiveObjects(c4d.GETACTIVEOBJECTFLAGS_NONE)
    if len(selected) != 1:
        return
    mainOp = selected[0]
    op_list = []
    # Get the first object if any
    nextObj = GetNextObject(mainOp)
    while nextObj:
        # Add all objects found to the array
        op_list.append(nextObj)
        # Retrieve the next object
        nextObj = GetNextObject(nextObj)
    # Retrieve the global position of the main object
    main_mg = mainOp.GetMg()
    # If there's no child, just leave
    if len(op_list) == 0:
        return
    # Start the undo process (the initial scene state to be restored after an undo action)
    doc.StartUndo()
    for op in op_list:
        # Check if it's a polygon object
        if op.GetType() == c4d.Opolygon:
            # Save the matrix to have a chance to update the children
            savedMatrix = op.GetMg()
            # Move the object to the new location without moving the points
            UpdateObject(op, main_mg)
            # Because we moved the parent, we should move back the children
            UpdateChildsMatrix(op, savedMatrix, main_mg)
    # End the undo process (the final scene state)
    doc.EndUndo()
    # Update Cinema 4D
    c4d.EventAdd()

# Execute main()
if __name__ == '__main__':
    main()
Cheers
Manuel
Thank you so much, Manuel.
It was, mainly, that children matrix update that was missing.
Now, it works fine
hi,
When you are sure that it's solved, don't forget to mark this thread as solved, please.
Cheers
Manuel | https://plugincafe.maxon.net/topic/11531/move-just-the-axis-of-a-polygonal-object/10 | CC-MAIN-2020-24 | en | refinedweb |
KIPI::ConfigWidget Class Reference (libs/libkipi/src)
#include <configwidget.h>
Detailed Description
Definition at line 45 of file configwidget.h.
Constructor & Destructor Documentation
Default constructor.
Definition at line 104 of file configwidget.cpp.
Definition at line 141 of file configwidget.cpp.
Member Function Documentation
Return the number of plugins activated in the list.
Definition at line 196 of file configwidget.cpp.
Apply all changes about plugins selected to be hosted in KIPI host application.
Definition at line 146 of file configwidget.cpp.
Clear all selected plugins in the list.
Definition at line 183 of file configwidget.cpp.
Return the total number of plugins in the list.
Definition at line 191 of file configwidget.cpp.
Return the current string used to filter the plugins list.
Definition at line 243 of file configwidget.cpp.
Select all plugins in the list.
Reimplemented from QTreeView.
Definition at line 175 of file configwidget.cpp.
Set the string used to filter the plugins list.
signalSearchResult() is emitted when all is done.
Definition at line 222 of file configwidget.cpp.
Signal emitted when filtering is done through slotSetFilter().
The number of plugins found is sent when items relevant to the filtering match the query.
Return the number of visible plugins in the list.
Definition at line 209 of file configwidget.cpp.
The documentation for this class was generated from the following files: configwidget.h and configwidget.cpp.
| https://api.kde.org/4.x-api/kdegraphics-apidocs/libs/libkipi/src/html/classKIPI_1_1ConfigWidget.html | CC-MAIN-2020-24 | en | refinedweb |
Gatsby E-commerce Tutorial
In this advanced tutorial, you’ll learn how to use Gatsby to build the UI for a basic e-commerce site that can accept payments, with Stripe as the backend for processing payments.
- Demo running on Netlify
- Code hosted on GitHub
Why use Gatsby for an E-commerce site?
Benefits of using Gatsby for e-commerce sites include the following:
- Security inherent in static sites.
- Blazing fast performance when your pages are converted from React into static files.
- No server component required with Stripe’s client-only Checkout.
- Cost-efficient hosting of static sites.
Prerequisites
- Since this is a more advanced tutorial, building a site with Gatsby before will likely make this tutorial less time-consuming (see the main tutorial here)
- Stripe account: register for an account here
How does Gatsby work with Stripe?
Stripe is a payment processing service that allows you to securely collect and process payment information from your customers. To try out Stripe for yourself, go to Stripe’s Quick Start Guide.
Stripe offers a hosted checkout that doesn’t require any backend component. You can configure products, SKUs, and subscription plans in the Stripe Dashboard. If you’re selling a single product or subscription (like an eBook) you can hardcode the product’s SKU ID in your Gatsby site. If you’re selling multiple products, you can use the Stripe source plugin to retrieve all SKUs at build time. If you want your Gatsby site to automatically update, you can use the Stripe webhook event to trigger a redeploy when a new product or SKU is added.
Setting up a Gatsby site
Create a new Gatsby project by running the gatsby new command in the terminal and change directories into the new project you just started.
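For example (the project name here is just a suggestion):

gatsby new gatsby-ecommerce-tutorial
cd gatsby-ecommerce-tutorial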
See your site hot reload in the browser!
Run gatsby develop in the terminal, which starts a development server and reloads changes you make to your site so you can preview them in the browser. Open up your browser to http://localhost:8000 and you should see a default homepage.
Stripe provides a JavaScript library that allows you to securely redirect your customer to the Stripe hosted checkout page. Due to PCI compliance requirements, the Stripe.js library has to be loaded from Stripe’s servers. Stripe provides a loading wrapper that allows you to import Stripe.js like an ES module:
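The import is a single side-effect statement:

import '@stripe/stripe-js';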
Stripe.js is loaded as a side effect of the import '@stripe/stripe-js'; statement. To best leverage Stripe’s advanced fraud functionality, ensure that Stripe.js is loaded on every page of your customer’s checkout journey, not just your checkout page. This allows Stripe to detect anomalous behavior that may be indicative of fraud as customers browse your website.
To make use of this, install the stripe-js module.
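Using npm:

npm install @stripe/stripe-js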
Getting your Stripe test keys
View your API credentials by logging into your Stripe account, and then going to Developers > API Keys.
You have 2 keys in both test mode and production mode:
- a publishable key
- a secret key
While testing, you must use the key(s) that include test. For production code, you will need to use the live keys. As the names imply, your publishable key may be included in code that you share publicly (for example, on the frontend, and in GitHub), whereas your secret key should not be shared with anyone or committed to any public repo. It’s important to restrict access to this secret key because anyone who has it could potentially read or send requests from your Stripe account and see information about charges or purchases or even refund customers.
Enabling the “Checkout client-only integration” for your Stripe account
In this tutorial you will be using Stripe Checkout in client-only mode. You need to enable client-only mode in the Checkout settings.
💡 This change will also modify the interface that Stripe provides to administer your products: keep this in mind in case you have previously used this tool. If you have never used the product administrator, you don’t need to worry.
Additionally, you need to set a name for your Stripe account in your Account settings. You can find more configuration details in the Stripe docs.
Examples
You can find an implementation of these examples on GitHub.
Example 1: One Button
If you’re selling a single product, like an eBook for example, you can create a single button that will perform a redirect to the Stripe Checkout page:
Create a product and SKU
To sell your products, you need to create them in your Stripe account using the Stripe Dashboard or the Stripe API. This is required for Stripe to validate that the request coming from the frontend is legitimate and to charge the correct amount for the selected product/SKU. Stripe requires every SKU used with Stripe Checkout to have a name: be sure to add one to all of your SKUs.
You will need to create both test and live product SKUs separately in the Stripe Dashboard. Make sure you toggle to “Viewing test data”, then create your products for local development.
Create a checkout component that loads Stripe.js and redirects to the checkout
Create a new file at src/components/checkout.js. Your checkout component will follow the shape sketched below.
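A minimal sketch (the publishable key, SKU ID, button styles, and redirect URLs are placeholders to replace with your own values):

import React from "react"
import { loadStripe } from "@stripe/stripe-js"

const buttonStyles = {
  fontSize: "13px",
  textAlign: "center",
  color: "#fff",
  padding: "12px 60px",
  backgroundColor: "rgb(255, 178, 56)",
  borderRadius: "6px",
  letterSpacing: "1.5px",
}

// loadStripe is called outside of the component so the Stripe object
// is not recreated on every render
const stripePromise = loadStripe("pk_test_YOUR_PUBLISHABLE_KEY")

const Checkout = () => {
  const redirectToCheckout = async event => {
    event.preventDefault()
    const stripe = await stripePromise
    const { error } = await stripe.redirectToCheckout({
      items: [{ sku: "sku_YOUR_SKU_ID", quantity: 1 }],
      successUrl: `http://localhost:8000/page-2/`,
      cancelUrl: `http://localhost:8000/`,
    })
    if (error) {
      console.warn("Error:", error)
    }
  }

  return (
    <button style={buttonStyles} onClick={redirectToCheckout}>
      BUY MY BOOK
    </button>
  )
}

export default Checkout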
What did you just do?
You imported React, created a function component that returns a button with some styles, and added a redirectToCheckout handler that is executed when the button is clicked. The loadStripe function returns a Promise that resolves with a newly created Stripe object once Stripe.js has loaded.
This identifies you with the Stripe platform, validates the checkout request against your products and security settings, and processes the payment on your Stripe account.
The redirectToCheckout() function validates your checkout request and either redirects to the Stripe hosted checkout page or resolves with an error object. Make sure to replace successUrl and cancelUrl with the appropriate URLs for your application.
Importing the checkout component into the homepage
Now go to your src/pages/index.js file. This is your homepage that shows at the root URL. Import your new checkout component underneath the other imports and add it within your <Layout> element. Your index.js file should now look similar to this:
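A sketch of the result (the surrounding markup comes from Gatsby's default starter and may differ in your project):

import React from "react"

import Layout from "../components/layout"
import Checkout from "../components/checkout"
import SEO from "../components/seo"

const IndexPage = () => (
  <Layout>
    <SEO title="Home" />
    <h1>Hi people</h1>
    <p>Welcome to your new Gatsby site.</p>
    <Checkout />
  </Layout>
)

export default IndexPage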
If you go back to http://localhost:8000 in your browser while gatsby develop is running, you should now see a big, enticing “BUY MY BOOK” button. C’mon and give it a click!
Example 2: Import SKUs via source plugin
Instead of hardcoding the SKU IDs, you can use the gatsby-source-stripe plugin to retrieve your SKUs at build time.
Add the Stripe source plugin
Add the gatsby-source-stripe plugin which you can use to pull in the SKUs from your Stripe account.
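Install it with npm:

npm install gatsby-source-stripe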
Now you can add the plugin configuration in your gatsby-config file.
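A sketch of the plugins entry (the option names follow gatsby-source-stripe and may vary between plugin versions):

{
  resolve: `gatsby-source-stripe`,
  options: {
    objects: ["Sku"],
    secretKey: process.env.STRIPE_SECRET_KEY,
    downloadFiles: false,
  },
},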
To retrieve your SKUs from your Stripe account you will need to provide your secret API key. This key needs to be kept secret and must never be shared on the frontend or on GitHub. Therefore you need to set an environment variable to store the secret key. You can read more about the usage of env variables in the Gatsby docs.
In the root directory of your project add a .env.development file.
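For example (the key value is a placeholder):

STRIPE_SECRET_KEY=sk_test_YOUR_SECRET_KEY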
To use the defined env variable you need to require it in your gatsby-config.js or gatsby-node.js.
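For example, at the top of gatsby-config.js:

require("dotenv").config({
  path: `.env.${process.env.NODE_ENV}`,
})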
Lastly, make sure that your .gitignore file excludes all of your .env.* files.
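For example:

# environment variables
.env.*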
Create a component that lists your SKUs
In your components folder add a new Products folder. This folder will include the components that interact with the Stripe SKUs. First, you need a component that queries and lists your SKUs.
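A sketch of src/components/Products/Skus.js using a StaticQuery (the field names follow what gatsby-source-stripe exposes; verify them in GraphiQL):

import React from "react"
import { graphql, StaticQuery } from "gatsby"

export default props => (
  <StaticQuery
    query={graphql`
      query SkusForProduct {
        skus: allStripeSku {
          edges {
            node {
              id
              currency
              price
              attributes {
                name
              }
            }
          }
        }
      }
    `}
    render={({ skus }) => (
      <div>
        {skus.edges.map(({ node: sku }) => (
          <p key={sku.id}>{sku.attributes.name}</p>
        ))}
      </div>
    )}
  />
)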
You can validate your query and see what data is being returned in GraphiQL, which is available at http://localhost:8000/___graphql when running gatsby develop.
Once you’re happy with your query, create a new page where you can import the newly created Sku component:
When navigating to the new page you should now see a list of paragraphs with your SKU names.
Create a component that presents a single SKU
To make your SKUs more visually appealing and interactive, create a new SkuCard component in your Products folder.
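A sketch of src/components/Products/SkuCard.js (the styles, redirect URLs, and price formatting are illustrative):

import React from "react"

const cardStyles = {
  display: "flex",
  flexDirection: "column",
  justifyContent: "space-around",
  alignItems: "flex-start",
  padding: "1rem",
  boxShadow: "5px 5px 25px 0 rgba(46,61,73,.2)",
  backgroundColor: "#fff",
  borderRadius: "6px",
}

const formatPrice = (amount, currency) => {
  let price = (amount / 100).toFixed(2)
  let numberFormat = new Intl.NumberFormat(["en-US"], {
    style: "currency",
    currency,
    currencyDisplay: "symbol",
  })
  return numberFormat.format(price)
}

const SkuCard = ({ sku, stripePromise }) => {
  const redirectToCheckout = async event => {
    event.preventDefault()
    const stripe = await stripePromise
    const { error } = await stripe.redirectToCheckout({
      items: [{ sku: sku.id, quantity: 1 }],
      successUrl: `http://localhost:8000/page-2/`,
      cancelUrl: `http://localhost:8000/`,
    })
    if (error) {
      console.warn("Error:", error)
    }
  }

  return (
    <div style={cardStyles}>
      <h4>{sku.attributes.name}</h4>
      <p>Price: {formatPrice(sku.price, sku.currency)}</p>
      <button onClick={redirectToCheckout}>BUY ME</button>
    </div>
  )
}

export default SkuCard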
This component renders a neat card for each individual SKU, with the SKU name, nicely formatted pricing, and a “BUY ME” button. The button triggers the redirectToCheckout function with the corresponding SKU ID.
Lastly, you need to refactor your Skus component to call loadStripe, and render SkuCards while handing down the Stripe promise in the props.
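A sketch of the refactored component (the publishable key is a placeholder):

import React from "react"
import { graphql, StaticQuery } from "gatsby"
import { loadStripe } from "@stripe/stripe-js"
import SkuCard from "./SkuCard"

const stripePromise = loadStripe("pk_test_YOUR_PUBLISHABLE_KEY")

export default props => (
  <StaticQuery
    query={graphql`
      query SkusForProducts {
        skus: allStripeSku {
          edges {
            node {
              id
              currency
              price
              attributes {
                name
              }
            }
          }
        }
      }
    `}
    render={({ skus }) => (
      <div style={{ display: "flex", flexWrap: "wrap" }}>
        {skus.edges.map(({ node: sku }) => (
          <SkuCard key={sku.id} sku={sku} stripePromise={stripePromise} />
        ))}
      </div>
    )}
  />
)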
Adding a cart component
You can call redirectToCheckout() providing an array of SKUs and their quantities to charge for multiple items at the same time. Instead of each “BUY ME” button redirecting to the checkout page, you can therefore provide a central “GO TO CHECKOUT” button that uses the state of a cart component. You can see the necessary changes for this example on GitHub.
Testing Payments
In test mode (when using the API key that includes test) Stripe provides test cards for you to test different checkout scenarios. | https://www.gatsbyjs.org/tutorial/ecommerce-tutorial/ | CC-MAIN-2020-24 | en | refinedweb |
CF_HDROP is not specified as a loaded file type, so that's what I was assuming.
Nevermind though, I guess it can't be done.
How are these items getting on to the clipboard? Are you talking about, eg. hitting Ctrl+C on an image file in the filesystem? Or otherwise?
Right click, then copy.
I guess it can't be done
import java.io.File;
import java.io.IOException;

public class TestTemporaryInternetFileRetrieval {
    public static void main(String[] args) throws IOException {
        File file1 = new File("C:\\Documents and Settings\\XXXXX\\Local Settings\\test.txt");
        File file2 = new File("C:\\Documents and Settings\\XXXXX\\Local Settings\\Temporary Internet Files\\blank.gif");
        System.out.println("file1 exists: " + file1.exists() + " (" + file1.getAbsolutePath() + ")");
        System.out.println("file2 exists: " + file2.exists() + " (" + file2.getAbsolutePath() + ")");
    }
}
| https://www.experts-exchange.com/questions/27630421/get-file-path-to-image-loaded-on-clipboard.html | CC-MAIN-2017-47 | en | refinedweb |
sys/ipc.h - XSI interprocess communication access structure
#include <sys/ipc.h>

The <sys/ipc.h> header shall define the ipc_perm structure, which shall include the following members:

uid_t  uid   Owner's user ID.
gid_t  gid   Owner's group ID.
uid_t  cuid  Creator's user ID.
gid_t  cgid  Creator's group ID.
mode_t mode  Read/write permission.
The uid_t, gid_t, mode_t, and key_t types shall be defined as described in <sys/types.h>.
Definitions shall be provided for the following constants:
Mode bits:

IPC_CREAT   Create entry if key does not exist.
IPC_EXCL    Fail if key exists.
IPC_NOWAIT  Error if request must wait.

Keys:

IPC_PRIVATE Private key.

Control commands:

IPC_RMID    Remove identifier.
IPC_SET     Set options.
IPC_STAT    Get options.

The following shall be declared as a function and may also be defined as a macro. A function prototype shall be provided.
key_t ftok(const char *, int);
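An illustrative use of ftok() (not part of the POSIX page; the path and project id are arbitrary):

#include <sys/ipc.h>
#include <stdio.h>

int main(void)
{
    key_t key = ftok("/tmp/somefile", 'A');  /* the path must name an existing file */
    if (key == (key_t)-1)
        perror("ftok");
    else
        printf("key = %ld\n", (long)key);
    return 0;
}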
The following sections are informative.
APPLICATION USAGE

None.

RATIONALE

None.

FUTURE DIRECTIONS

None.
SEE ALSO

<sys/types.h>. | http://manpages.sgvulcan.com/sys_ipc.h.0p.php | CC-MAIN-2017-47 | en | refinedweb |
wcsxfrm(3) BSD Library Functions Manual wcsxfrm(3)
NAME
wcsxfrm, wcsxfrm_l -- transform a wide string under locale
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <wchar.h>

size_t
wcsxfrm(wchar_t *restrict ws1, const wchar_t *restrict ws2, size_t n);

#include <wchar.h>
#include <xlocale.h>

size_t
wcsxfrm_l(wchar_t *restrict ws1, const wchar_t *restrict ws2, size_t n, locale_t loc);
DESCRIPTION
The wcsxfrm() function transforms a null-terminated wide character string pointed to by ws2, according to the current locale's collation order, then copies the transformed string into ws1. No more than n wide characters are copied into ws1, including the terminating null character. If n is set to 0 (it helps to determine an actual size needed for transformation), ws1 is permitted to be a NULL pointer.

Comparing two strings using wcscmp() after wcsxfrm() is equivalent to comparing two original strings with wcscoll().

Although the wcsxfrm() function uses the current locale, the wcsxfrm_l() function may be passed a locale directly. See xlocale(3) for more information.
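A sketch of the two-pass usage implied above: query the required size with n set to 0, then transform (this example is illustrative, not part of the manual page):

#include <stdlib.h>
#include <wchar.h>

wchar_t *
transform(const wchar_t *ws)
{
        size_t n = wcsxfrm(NULL, ws, 0) + 1;     /* size needed, incl. the NUL */
        wchar_t *out = malloc(n * sizeof(wchar_t));
        if (out != NULL)
                wcsxfrm(out, ws, n);
        return out;   /* strings transformed this way can be ordered with wcscmp() */
}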
RETURN VALUES
Upon successful completion, wcsxfrm() returns the length of the transformed string not including the terminating null character. If this value is n or more, the contents of ws1 are indeterminate.
SEE ALSO
setlocale(3), strxfrm(3), wcscmp(3), wcscoll(3), xlocale(3)
STANDARDS
The wcsxfrm() function conforms to ISO/IEC 9899:1999 (``ISO C99'').
BUGS

     Transformed strings compare using only the primary collation weights placed in ws1, whereas wcscoll() compares characters using both primary and secondary weights.

BSD                              October 4, 2002                              BSD
Mac OS X 10.6 - Generated Thu Sep 17 20:23:46 CDT 2009 | http://www.manpagez.com/man/3/wcsxfrm_l/ | CC-MAIN-2017-47 | en | refinedweb |
namespace xyzzy
{
    public partial class Form1 : Form
    {
        // delegate enables asynchronous calls from other threads
        delegate void AppendTextBox1Delegate(string text);

        public Form1()
        {
            InitializeComponent();
        }

        /* Call this method to append text to textBox1 */
        public void AppendTextBox1(string text)
        {
            if (this.textBox1.InvokeRequired) // Check if we are running on a different thread
            {
                // It's on a different thread, so use Invoke. (We call ourself through a delegate.)
                AppendTextBox1Delegate d = new AppendTextBox1Delegate(AppendTextBox1);
                this.textBox1.Invoke(d, new object[] { text });
            }
            else
            {
                // It's on the main UI thread, no need for Invoke
                this.textBox1.AppendText(text + Environment.NewLine);
            }
        }

        private void LogMsg(string text)
        {
            AppendTextBox1(text);
        }

        private void serialPort1_ErrorReceived(object sender, SerialErrorReceivedEventArgs e)
        {
            // I could be on any thread. Can I still call LogMsg?
            LogMsg("serialPort1_ErrorReceived.");
        }
    }
}
| https://www.experts-exchange.com/questions/27313101/Is-it-OK-to-call-my-method-from-any-thread.html | CC-MAIN-2017-47 | en | refinedweb |
Note: This will be the first in a series of beginning XNA articles.
When starting an XNA project, you quickly discover that hard things are easy, and easy things are hard. One of the first things you want to do is to output some text. The TextOutput component presented here is an XNA drawable component that can output that text for you in an efficient batched manner. It was developed mainly for use as a debugging aid.
XNA is a set of software tools that make developing games and multimedia applications a lot easier. XNA is built on top of the .NET 2.0 Framework, but it includes its own environment and additional libraries. Currently only C# is officially supported by XNA, but technically any .NET language should be able to work under it. The neat part is that programs written to XNA can be compiled to run under both Windows and XBox-360.
Believe it or not, the first few versions of XNA had no built-in support for displaying text whatsoever. The development community quickly came up with several solutions. But most of them involved having to run a separate program to compile your own fonts to a bitmap, having to copy that bitmap file to your project, and then using a library to load that bitmap into a spriteBatch so that it could be rendered onto the screen. Luckily, Microsoft decided to include a built-in solution in the latest version of XNA that simplifies things a little by having the font automatically converted by XNA's built-in content management system.
Once you have the latest version of XNA GSE installed, you can go under Help to learn how to draw text on the screen. In the current version it is under: "Help:Contents:XNA Game Studio Express:Programming Guide:How to: Draw Text."
The code example from that help is:
ForegroundBatch.Begin();
// Draw Hello World
string output = "Hello World";
// Find the center of the string
Vector2 FontOrigin = CourierNew.MeasureString( output ) / 2;
// Draw the string
ForegroundBatch.DrawString( CourierNew, output, FontPos, Color.LightGreen,
FontRotation, FontOrigin, 1.0f, SpriteEffects.None, 0.5f );
...
ForegroundBatch.End();
That is not that bad, but it is a little cumbersome. Plus, it can be slow calling the sprite batch many times.
To use the TextOutput component, add the TextOutput.cs file to your project. Note, you will also have to create a file called Arial.SpriteFont (just copy it from the sample project).
Add a using statement to the following namespace:
using UTM.CSIS.Xedge;
Add a member variable for the new object inside of your main game class:
TextOutput textOutput;
Add the component to your game in its constructor:
// Do this in the game's constructor
this.Components.Add(textOutput = new TextOutput(this));
Now we can easily output text to anywhere on the screen by using the WriteAt method, or we can output text that follows the mouse around by using the writeAtMouse method:
textOutput.WriteAt(50, 50, "Hello World!");
textOutput.writeAtMouse("Mouse is\r\n here");
The text will be stored in a list and when the component draws itself, it will automatically draw all of the text you have submitted.
You can also output text in different colors or at different rotations.
textOutput.WriteAt(100, 100, "This will be red.", Color.Red);
This is a component that inherits from DrawableGameComponent. Because of this, once you add this component to your main game, it will automatically Load and Draw itself.
Here is a class diagram that shows how the class is setup. By the way, did you know you can generate these neat diagrams right from Visual Studio just by right-clicking on the class?
The most important method is the WriteAt method. It simply adds the text and attributes to a list as shown below. It is overloaded so that you can call it with or without the color parameter.
public void WriteAt(int x, int y, string s, Color c)
{
//create a new textNode
textNode n = new textNode();
//populate x pos
n.X = x;
//populate y pos
n.Y = y;
//populate string
n.Text = s;
//set color
n.FontColor = c;
//set rotation
n.Rotation = m_FontRotation;
//add to the batch list
m_List.Add(n);
}
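The overload without a color parameter is not shown in the article; a plausible sketch (the default color choice here is an assumption) simply forwards to the full version:

public void WriteAt(int x, int y, string s)
{
    //forward to the full overload using an assumed default color
    WriteAt(x, y, s, Color.White);
}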
The WriteAt method uses the following struct to hold the text and attributes:
struct textNode
{
public int X, Y;
public string Text;
public Color FontColor;
public float Rotation;
}
Now, when it comes time to draw, the code simply loops through the list and outputs all of the text in one fast sprite batch.
public override void Draw(GameTime gameTime)
{
base.Draw(gameTime);
if (m_Enabled)//if we want text rendered
{
m_SpriteBatch.Begin(SpriteBlendMode.AlphaBlend,
SpriteSortMode.FrontToBack, SaveStateMode.SaveState);
//foregBatch.Begin();
// This is quicker, but may cause side-effects
foreach (textNode n in m_List)
{
m_SpriteBatch.DrawString(m_SpriteFont, n.Text,
new Vector2(n.X, n.Y), n.FontColor,
n.Rotation, new Vector2(0, 0), 1.0f, SpriteEffects.None, 0.5f);
}
m_SpriteBatch.End();
m_List.Clear();
}//end if (m_Enabled)
}
Note, the Draw method will be called for us automatically, since we inherited the class from the DrawableGameComponent base class.
| https://www.codeproject.com/Articles/19440/XNA-Batched-Text-Output | CC-MAIN-2017-47 | en | refinedweb |
Hello Everyone,
I need to implement a declarative command pattern that can update a value on a variety of static types, given a string value for the key. These types can be modified as needed to support the pattern, but need to be operated on in a generic way (IE. I can make all these types implement a common interface). Each type must react to changes made to these properties.
Here is what I have so far:

public class Main implements EntryPoint {
    @Override
    public void onModuleLoad() {
        MyView view = new MyView();
        SetPropertyCommand command = new SetPropertyCommand();
        command.target = view;
        command.property = "text";
        command.value = "Hello World!";
        command.execute();
        RootPanel.get().add(view);
        GWT.log("text = " + view.getValue("text"), null);
    }
}

interface IValueTarget {
    public void setValue(String name, Object value);
    public Object getValue(String property);
}

class SetPropertyCommand {
    public String property;
    public IValueTarget target;
    public Object value;

    public void execute() {
        this.target.setValue(this.property, this.value);
    }
}

class MyView extends Html implements IValueTarget {
    private HashMap<String, Object> _properties = new HashMap<String, Object>();

    public void setValue(String name, Object value) {
        _properties.put(name, value);
        // gross!
        if ("text".equals(name)) setHtml((String) value);
    }

    public Object getValue(String property) {
        return _properties.get(property);
    }
}
Is there a way to implement this with strong typing? I was looking at using the new ValueProvider functionality to expose these properties, but then I am still not sure how I would reference them using a String value, and react to changes when a value is set.
Any guidance you can provide will be greatly appreciated.
Thanks for your time! | https://www.sencha.com/forum/showthread.php?196945-Best-way-to-implement-dynamic-property-access-on-a-static-type | CC-MAIN-2017-47 | en | refinedweb |
A plugin that allows external authentication information (commonly headers from Apache, so as to work with most SSOs) to create users and groups.
Introduction
This package proposes a plugin for Zope PAS not only to manage authentication from an external source, mostly HTTP headers provided by some SSO, which it does in a scriptable and highly configurable manner, but also to manage group definitions, group membership, and user properties.
The use case this package was created for was to integrate a Shibboleth SSO for Plone coupled with a group management application known as GROUPER, at the University of Geneva. In our case, Shibboleth, the SSO, fills up headers from GROUPER group definitions, and we needed them in Plone to manage local roles and permissions.
Known Bugs
The principle of the solution relies on the ability of PAS to have multiple sources of user and group plugins. Unfortunately there is a bug in this feature's implementation in both the Zope and Plone rewrites; see bug #12794. This will undoubtedly be corrected, but for those versions of Plone and PAS for which it is not, you can use the following monkey patch in __init__.py (this one is for the Plone GroupsTool):
from Products.PlonePAS.tools.groups import GroupsTool
from AccessControl.requestmethod import postonly

if not hasattr(GroupsTool, '_patched_ea__'):

    @postonly
    def removeGroup(self, group_id, keep_workspaces=0, REQUEST=None):
        """Remove a single group, including group workspace, unless
        keep_workspaces==true.
        """
        retval = False
        managers = self._getGroupManagers()
        if not managers:
            raise NotSupported, 'No plugins allow for group management'
        for mid, manager in managers:
            if manager.getGroupById(group_id):
                if manager.removeGroup(group_id):
                    retval = True
        gwf = self.getGroupWorkspacesFolder()
        if retval and gwf and not keep_workspaces:
            grouparea = self.getGroupareaFolder(group_id)
            if grouparea is not None:
                workspace_id = grouparea.getId()
                if hasattr(aq_base(gwf), workspace_id):
                    gwf._delObject(workspace_id)
        self.invalidateGroup(group_id)
        return retval

    @postonly
    def addPrincipalToGroup(self, principal_id, group_id, REQUEST=None):
        managers = self._getGroupManagers()
        if not managers:
            raise NotSupported, 'No plugins allow for group management'
        for mid, manager in managers:
            if manager.getGroupById(group_id):
                if manager.addPrincipalToGroup(principal_id, group_id):
                    return True
        return False

    @postonly
    def removePrincipalFromGroup(self, principal_id, group_id, REQUEST=None):
        managers = self._getGroupManagers()
        if not managers:
            raise NotSupported, 'No plugins allow for group management'
        for mid, manager in managers:
            if manager.getGroupById(group_id):
                if manager.removePrincipalFromGroup(principal_id, group_id):
                    return True
        return False

    GroupsTool.removeGroup = removeGroup
    GroupsTool.addPrincipalToGroup = addPrincipalToGroup
    GroupsTool.removePrincipalFromGroup = removePrincipalFromGroup
    GroupsTool._patched_ea__ = True
Also, we still need to test the proxy part, which we don't actually use, or remove it. If you experience problems with it you should use the redirect-to-external-URL scheme. Also, with some versions of Python this may not work with HTTPS (because of a bug in old urllib2).
TODO
- Unit tests
- More Documentation
- redirect on logout url doesn’t work
- Consistent profiles for use without Plone.
COPYLEFT
Copyright (C) 2012 Smile Suisse. See COPYING for copyright information and LICENSE.txt for a copy of the GPLv3 license in the source package "docs" directory.
Changelog
0.1 (beta)
- Initial release
pas.plugins.external_auth package installation
We provide standard zc.buildout installation instructions.
- Add pas.plugins.external_auth to the list of eggs to install:
[buildout]
...
eggs =
    ...
    pas.plugins.external_auth
- If you don’t use plone with z3c.autoinclude, tell the plone.recipe.zope2instance recipe to install a ZCML slug:
[instance]
recipe = plone.recipe.zope2instance
...
zcml =
    pas.plugins.external_auth
- Re-run buildout, e.g. with:
$ ./bin/buildout
You can skip the ZCML slug if you are going to explicitly include the package from another package’s configure.zcml file.
Installation in zope
Once this is done you can use the provided profiles to populate your PAS. Two of them are provided, for versions 3 and 4 of Plone, as default unconfigured examples. The one used by CMFQuickInstaller is targeted at the Plone 4 empty profile, so you can just install the product with the Plone control panel.
These profiles are more examples than real use cases. They are intended to be used as a basis for a real profile corresponding to your configuration, unless you prefer to just use TTW ZMI configuration. In any case, installing the product through Zope has little sense for external_auth except for quickly evaluating it.
Once you have configured at least one plugin in acl_users, go into the ZMI management page of this plugin and follow the main page configuration instructions.
| https://pypi.org/project/pas.plugins.external_auth/ | CC-MAIN-2017-47 | en | refinedweb |
INSTRUCTIONS
- Create a WordCounter class with a constructor that takes a file name as a parameter
- The class should have two fields: one for the file name and one for a HashMap to store word count information
- The constructor should call a private method, countWords, that reads in the file and counts the word frequencies
- The class should contain a get method for each field, as well as a print method that prints out the map in the following format: word: frequency
- When printing, the map should be sorted by either the word order or frequency (Hint: see Collections.sort)
- You should include the sample text file on Blackboard
Code :
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.Scanner;

public class WordCounter {
    // 2 fields for file name and hashmap
    private ArrayList<String> fn, hm;
    private Scanner x;

    // constructor
    public WordCounter(String frame) {
        countwords();
    }

    // methods
    private void countwords() {
        try {
            x = new Scanner(new File("test.file.txt"));
            System.out.println(Collections.frequency(fn, hm));
        } catch (Exception e) {
            System.out.println("File not found");
        }
    }

    private void print() {
    }

    public static void main(String[] args) {
        String fname = "test-file.txt";
        if (args.length > 0)
            fname = args[0];
        WordCounter wc = new WordCounter(fname);
        wc.print();
    }
}
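For the counting step itself (instruction items 3 and 4 above), one possible shape, assuming a HashMap<String, Integer> field named counts, is:

while (x.hasNext()) {
    String word = x.next().toLowerCase();
    Integer current = counts.get(word);
    counts.put(word, current == null ? 1 : current + 1);
}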
I still need help with steps 4 and 5. Please help, I'm still learning. | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/36146-word-counter-program-printingthethread.html | CC-MAIN-2017-47 | en | refinedweb |
TypedPhoneGap
TypeScript layer for PhoneGap (Apache Cordova) version 3.4.0 and all core plugins
Motivation
TypeScript definitions allow editing code with some degree of type checking and code completion. It saves time and eliminates bugs.
It turns out that a simple .d.ts is not enough, since the Cordova API is weird. So TypedPhoneGap provides a little JavaScript layer as well. To see why it is worth it, consider for example how Cordova has you write (with the inappbrowser plugin):
window.open('', '_blank', 'location=yes');
Here, what should go into options string? You do not know without reading the docs.
TypedPhoneGap, on the other hand, makes everything pleasantly typed:
import IAB = TypedPhoneGap.InAppBrowser;
var iab = IAB.getPlugin();
iab.openWindow('', IAB.Target.blank, { location: true });
You can examine the Options interface to see what is available. Also, where possible, getPlugin functions do checks and fire informative exceptions when you forget to configure your application with a given plugin.
Status
Rewritten for version 3.4.0 and TypeScript 0.9.7. Currently testing, API subject to change.
Building
Build the project with npm. In the root folder, do:
npm install
This should obtain three files:
build/TypedPhoneGap.d.ts
build/TypedPhoneGap.js
build/TypedPhoneGap.min.js
Usage
In your application, reference the .d.ts for TypeScript development, and include the .js or .min.js after cordova.js.
The project includes a command-line tool to help populate files in a project generated by the cordova tool. Synopsis:
cd myapp
npm install -g typedphonegap
typedphonegap install
This populates your project with:
www/js/TypedPhoneGap.js
www/js/TypedPhoneGap.min.js
www/typings/TypedPhoneGap.d.ts
License
All files are released under Apache 2.0 License.
Contributing
Contributions are welcome, especially for testing the code on real-world examples. | https://bitbucket.org/IntelliFactory/typedphonegap | CC-MAIN-2017-47 | en | refinedweb |
I'm in need of some help. I have a file with an attribute field named SPCOMP and I need to separate it out. The SPCOMP field is populated with a string such as "Sb 90Bf 10" or "Pj 80Sb 20" or "Pj 40Bw 40Sb 20", etc. I'm trying to write code using the field calculator in VBScript that will search through the SPCOMP field for "Sb 90" and populate another field SB with 90, or search for "Sb 60" and put 60 in the SB field. My code is:
Dim SB
if [SPCOMP] = "%Sb 90%" Then
SB = 90
elseif [SPCOMP] = "%Sb 60%" Then
SB = 60
else
SB = 1
end if
When I run this it will populate the SB field with 1 so I think my mistake is with the wildcard search? Can anyone offer suggestions on this or where I could find answers? I'm using ArcGIS advanced 10.3.
Thanks
The Like operator, perhaps? From the 10.3 help files... but nothing has changed; you can try "building a query expression" or "wildcard" as search terms in your help.
Building a query expression—Help | ArcGIS for Desktop
Use the LIKE operator (instead of the = operator) to build a partial string search. For example, this expression would select Mississippi and Missouri among U.S. state names:
STATE_NAME LIKE 'Miss%'
% means that anything is acceptable in its place: one character, a hundred characters, or no character. Alternatively, if you want to search with a wildcard that represents one character, use _.
For example, this expression would find Catherine Smith and Katherine Smith:
OWNER_NAME LIKE '_atherine smith'
The wildcards above work for any file-based data or ArcSDE geodatabase. The wildcards you use to query personal geodatabases are * for any number of characters and ? for one character.
If you are interested in using the Python syntax you could use this:
def getSB(spcomb):
    if 'Sb ' in spcomb:
        n = spcomb.find('Sb ')
        return spcomb[n+4:n+6]
    else:
        return 0

getSB( !SPCOMP! )
and the result (shown as a screenshot of the attribute table in the original post):
Before you try this, you ask this question: Is it always a number between 10 and 99 (two digits)?
... it will fail if the percentage is not exactly two digits (for example "Sb 100" or "Sb 5").
I have tried the LIKE operator in VB but the error message is always 99999 (general error). I tried many different ways but got the same result. If I type the SPCOMP code in exactly, then it works.
(ie: if [SPCOMP] = 'Sb 90Bf 10')
I tried the Python but it does not work either. I have never used Python before and don't understand it. Maybe I should, though.
And yes the string is exact (it's either SB or Sb but never both in the same file.)
In the VB calculator, this is my code:
Dim SB
if [SPCOMP] LIKE '%Sb 100%' Then
SB = 100
elseif [SPCOMP] LIKE '%Sb 90%' Then
SB = 90
elseif [SPCOMP] LIKE '%Sb 80%' Then
SB = 80
elseif [SPCOMP] LIKE '%Sb 70%' Then
SB = 70
elseif [SPCOMP] LIKE '%Sb 60%' Then
SB = 60
elseif [SPCOMP] LIKE '%Sb 50%' Then
SB = 50
elseif [SPCOMP] LIKE '%Sb 40%' Then
SB = 40
elseif [SPCOMP] LIKE '%Sb 30%' Then
SB = 30
elseif [SPCOMP] LIKE '%Sb 20%' Then
SB = 20
elseif [SPCOMP] LIKE '%Sb 10%' Then
SB = 10
else
SB = 0
end if
in the lower box (SB=)
SB
I posted my .gdb here as a zip file.
Use select by attributes to get a selection then use field calculator to populate the intended field
fieldname LIKE '%90%'
Then field calculate the selection set to 90
repeat for 60
Could you attach the ZIP to the thread (click on advanced editor upper right corner and then on Attach in lower right corner), since I wasn't able to download your file:
Thanks, I didn't know how to attach a file.
Wes suggested select by attributes then using the calculator, and I know that works, but there are 14 species and 10 percentages each, so that tends to become tedious after a while. The combination I'm looking for is Sb 90 or SB 80 or PO 20 or LA 70, etc. Hopefully the attached file will help. I can do this in MS Excel but I would like to do it in ArcMap.
The code I posted earlier seems to work on your data:
... however, looking at your data and the fact that you want to apply this to multiple fields, I think using Python code and an update cursor would be easier...
Edit: I see that the result is wrong for "Sb 100"
Yes, this does work, Thanks. I even modified it a bit for other species and it works.
def getSB(spcomb):
    if 'Sb' in spcomb:
        n = spcomb.find('Sb')
        return spcomb[n+3:n+6]
    else:
        return 0
I don't understand what "return spcomb[n+3:n+6]" is but I changed the n+4 to n+3 and the 'Sb ' to 'Sb' and it worked for the Sb 100 so I'm assuming it relates to 6 spaces to the right of Sb and 3 back from that.
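For illustration, this is plain Python string slicing (the sample value here is made up):

spcomp = "Pj 40Bw 40Sb 20"
n = spcomp.find('Sb')    # n = 10
spcomp[n+3:n+6]          # -> '20'  (skips past 'Sb' plus one more character)
# For "Sb 100": n = 0, and spcomp[3:6] -> '100'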
Thanks to everyone for helping me with this.
As mentioned before, I would probably use some Python code to do the job:
This table was updated using this code:
def main():
    import arcpy
    fc = r"C:\GeoNet\SPCOMP\ARC.gdb\SPCOMP"
    fld_in = "SPCOMP"

    # create list of fields with length 2
    flds = [fld.name for fld in arcpy.ListFields(fc) if len(fld.name) == 2]
    flds.insert(0, fld_in)

    # start update cursor
    with arcpy.da.UpdateCursor(fc, flds) as curs:
        for row in curs:
            spcomp = row[0].upper()
            for fldname in flds[1:]:
                if fldname.upper() in spcomp:
                    perc = getPercentage(fldname.upper(), spcomp)
                    row[flds.index(fldname)] = perc
            curs.updateRow(row)

def getPercentage(fldname, spcomp):
    try:
        n = spcomp.find(fldname)
        part = spcomp[n+2:n+6]
        return int(part)
    except:
        return -1

if __name__ == '__main__':
    main()
Some assumptions that seem to be true for this table: | https://community.esri.com/t5/geodatabase-questions/code-help/td-p/824454 | CC-MAIN-2021-04 | en | refinedweb |
Share Code Between Services
In these next couple of chapters we’ll look at how to organize all our business logic services (APIs) in the same repo. We’ll start by attempting to answer the following questions:
- Do I have just one or multiple package.json files?
- How do I share common code and config between services?
- How do I share common config between the various serverless.yml files?
We are using an extended version of the notes app for this section. You can find the sample repo here. Let’s take a quick look at how the repo is organized.
/
  package.json
  config.js
  serverless.common.yml
  libs/
  services/
    notes-api/
      package.json
      serverless.yml
      handler.js
    billing-api/
      package.json
      serverless.yml
      handler.js
    notify-job/
      serverless.yml
      handler.js
1. Structuring the package.json
The first question you typically have is about the package.json. Do I just have one package.json or do I have one for each service? We recommend having multiple package.json files.

We use the package.json at the project root to install the dependencies that will be shared across all the services. For example, the serverless-bundle plugin that we are using to optimally package our Lambda functions is installed at the root level. It doesn’t make sense to install it in each and every service.

On the other hand, dependencies that are specific to a single service are installed in the package.json for that service. In our example, the billing-api service uses the stripe NPM package, so it’s added just to that package.json. Similarly, the notes-api service uses the uuid NPM package, and it’s added just to that package.json.

This setup implies that when you are deploying your app through a CI, you’ll need to do an npm install twice: once at the root level and once in a specific service. Seed does this automatically for you.
You can also use Yarn Workspaces (and Lerna) to manage the dependencies for your monorepo setup. We cover this setup in a separate chapter — Using Lerna and Yarn Workspaces with Serverless.
Usually, you might have to manually pick and choose the modules that need to be packaged with your Lambda function. Simply packaging all the dependencies will increase the code size of your Lambda function and this leads to longer cold start times. However, in our example we are using the serverless-bundle plugin that internally uses Webpack’s tree shaking algorithm to only package the code that our Lambda function needs.
2. Sharing common code and config
The biggest reason you are using a monorepo setup is because your services need to share some common code, and this is the most convenient way to do so.
Alternatively, you could use a multi-repo approach where all your common code is published as private NPM packages. However, this adds an extra layer of complexity and it doesn’t make sense if you are a small team just wanting to share some common code.
In our example, we want to share some common code. We’ll be placing these in a libs/ directory. Our services need to make calls to various AWS services using the AWS SDK. And we have the common SDK configuration code in the libs/aws-sdk.js file.
import aws from "aws-sdk";
import xray from "aws-xray-sdk";

// Do not enable tracing for 'invoke local'
const awsWrapped = process.env.IS_LOCAL ? aws : xray.captureAWS(aws);

export default awsWrapped;
Our Lambda functions will now import this instead of the standard AWS SDK.
import AWS from '../../libs/aws-sdk';
The great thing about this is that we can easily change any AWS related config and it’ll apply across all of our services. In this case, we are using AWS X-Ray to enable tracing across our entire application. You don’t need to do this but we are going to be talking about this in one of the later chapters. And this is a good example of how to share the same AWS config across all our services.
3. Share common serverless.yml config
We have separate serverless.yml configs for our services. However, we end up needing to share some config across all of our serverless.yml files. To do that:

- Place the shared config values in a common yaml file at the root level.
- And reference them in your individual serverless.yml files.

For example, we want to define the current stage and the resources stage we want to connect to across all of our services. These shared values live in a serverless.common.yml at the project root.
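A minimal sketch of the shared block in serverless.common.yml (the exact keys and fallbacks here are assumptions, not the sample repo's file):

custom:
  # The stage is based on what is passed in when running serverless commands,
  # falling back to what is set in the provider section
  stage: ${opt:stage, self:provider.stage}
  # The stage of the resources this service should connect to
  resourcesStage: ${self:custom.stage}

Each service then pulls the shared custom definition into its own serverless.yml: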
custom: ${file(../../serverless.common.yml):custom}
And we include the lambdaPolicyXRay IAM policy:
iamRoleStatements:
  - ${file(../../serverless.common.yml):lambdaPolicyXRay}
You can do something similar for any other serverless.yml config that needs to be shared.
For simplifying our serverless.yml config within a service, we split it up further. In our services/notes-api/serverless.yml in our sample repo you’ll notice the following:
resources:
  # API Gateway Errors
  - ${file(resources/api-gateway-errors.yml)}
  # Cognito Identity Pool Policy
  - ${file(resources/cognito-policy.yml)}
The api-gateway-errors.yml adds the headers for 4xx and 5xx API errors, while the cognito-policy.yml adds the IAM policy for allowing our Cognito authenticated users to access the Notes API.
Statement:
  - Effect: 'Allow'
    Action:
      - 'execute-api:Invoke'
    Resource: !Sub 'arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${ApiGatewayRestApi}/*'
Next, let’s look at what happens when multiple API services need to share the same API endpoint.
| https://serverless-stack.com/chapters/share-code-between-services.html | CC-MAIN-2021-04 | en | refinedweb |
Caching in Spring Boot RESTful Service: Part 1

Querying the database for the same data on every incoming request is expensive. You can address this problem using caching.
In this first part of the series on caching, I will explain how to cache frequently retrieved data in a Spring Boot RESTful API.
The Sample Application
I have a bare minimum Spring Boot REST API that enables users to add products and retrieve all products in one go. As is apparent, I will set up caching for the operation of retrieving all products. I want the application to return all products from the cache instead of querying the database for each request. To enable caching, add the following dependency to your pom.xml file:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
The next step is to enable caching in the application by adding the @EnableCaching class-level annotation:
@EnableCaching
@SpringBootApplication
public class RestServiceForProductApplication {

    public static void main(String[] args) {
        SpringApplication.run(RestServiceForProductApplication.class, args);
    }
}
Implement Caching for Product Retrieval
The application has a
ProductServiceImpl class where we will enable caching. The code that implements caching is this:
ProductServiceImpl.java
@CacheConfig(cacheNames = "product")
@Service
public class ProductServiceImpl implements ProductService {

    private ProductRepository productRepository;

    public ProductServiceImpl() {
    }

    @Autowired
    public void setProductRepository(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }

    @Autowired
    public ProductServiceImpl(ProductRepository productRepository) {
        this.productRepository = productRepository;
    }
}
In the preceding code, the class is marked with the @CacheConfig(cacheNames = "product") annotation. It is a class-level annotation that provides common cache-related settings. It tells Spring where to store the cache for the class. In the example provided above, “product” is the name of the cache.
Now, let’s add a service method addProduct() for adding products to the database:
@Caching(evict = {@CacheEvict(value = "allproductcache", allEntries = true),
        @CacheEvict(value = "productcache", key = "#product.id")})
@Override
public Product addProduct(Product product) {
    return productRepository.save(product);
}
In the preceding code, the @Caching annotation is required when we need both @CachePut and @CacheEvict at the same time. In other words, when we want to use multiple annotations of the same type, we use this annotation. When you want to remove or evict the cache of previously loaded master data, you have to use @CacheEvict. If you want to remove all entries of the cache, then you need to use allEntries = true.
Finally, let’s implement the service method to retrieve all products.
@Cacheable(value = "allproductcache")
@Override
public List<Product> getAllProducts() {
    System.out.println("Data is retrieved from database ");
    return (List<Product>) productRepository.findAll();
}
@Cacheable is a method-level annotation. It defines a cache for a method’s return value. You can also add a cache name by using the value attribute. You can also specify a unique key to identify values in the cache.
Now that our implementation is ready let’s test the caching functionality.
Test for Caching
To test the application, I am using Spring Test with JUnit 5 and Mockito.
The test code is this:
@ExtendWith(MockitoExtension.class)
@SpringBootTest
public class ProductServiceTest {

    @Mock
    private ProductRepository productRepository;

    @Autowired
    @InjectMocks
    private ProductServiceImpl productService;

    private Product product1;
    private Product product2;

    @BeforeEach
    public void setUp() {
        product1 = new Product(1, "Bread", 20);
        product2 = new Product(2, "jam", 140);
    }

    @AfterEach
    public void tearDown() {
        product1 = product2 = null;
    }

    @Test
    void givenCallToGetAllUsersThenShouldReturnListOfAllProduct() {
        productService.addProduct(product1);
        productService.addProduct(product2);
        productService.getAllProducts();
        productService.getAllProducts();
        productService.getAllProducts();
        productService.getAllProducts();
        verify(productRepository, times(1)).findAll();
    }
}
In the test code, we are mocking the product repository. The test case adds two products and makes four calls to retrieve all the products. Without caching, this would have involved four calls to the database. But let's verify that only a single call happens instead of four. This is done in the verify call at the end of the test.
Now, let’s run the test.
When you run it, the test passes: because of caching, only a single call is made to the repository.
Summary
In this post, you saw that once products are added to the database, the cache comes into play: instead of querying the database for each incoming request, products are returned from the cache.
Now, consider a scenario where you need to delete a product. The cache has to reflect the change as well. Else, the deleted product will still be present in the cache and returned to users. The same thing will happen when a product gets updated. I will be discussing how to manage such scenarios in Part 2 of this series on caching.
You can find the source code of this post on Github.
For in-depth knowledge of the Spring Framework check my Udemy Best Seller Course Spring Framework 5: Beginner to Guru | https://springframework.guru/caching-in-spring-boot-restful-service-part-1/ | CC-MAIN-2021-04 | en | refinedweb |
There have been some reasonably big changes to tab-based navigation in Ionic, and one issue that I see pop up quite a lot is creating more complex navigation within individual tabs. Specifically, a lot of people seem to be running into an issue where after navigating to another page inside of a tab, the tab bar will disappear.
In this tutorial, we will be looking at how to create a multi-level master/detail style navigation pattern inside of a tabs layout. This will allow the tab bar to remain in place as pages within a single tab are being navigated, and the current state of a tab will also be remembered when switching back and forth between tabs.
If you would like to follow along with this tutorial step-by-step, I will be using the tabs starter template that Ionic provides. You can create a new project based on this template by running the following command:
ionic start tabs-sub-navigation tabs --type=angular
1. Create the Detail Pages
First, we need to create some pages. We are going to create ProductList and ViewProduct pages. We will have one of our tabs provide the ability to navigate to a page that would theoretically display a list of products, and then that page will be able to further navigate to a page that displays a specific product.
Run the following commands to create the pages:
ionic g page tab2/ProductList
ionic g page tab2/ViewProduct
It isn't important that you generate the pages inside of the folder for the tab they will be used in; I just think this helps to keep things organised.
2. Set up the Routes
By default, when you generate a page it will add the routes automatically for you to src/app/app-routing.module.ts. We don’t want that, so make sure to remove those routes.
Make sure that you remove the routes generated for ProductList and ViewProduct from src/app/app-routing.module.ts:
import { NgModule } from '@angular/core';
import { PreloadAllModules, RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  { path: '', loadChildren: './tabs/tabs.module#TabsPageModule' }
];

@NgModule({
  imports: [
    RouterModule.forRoot(routes, { preloadingStrategy: PreloadAllModules })
  ],
  exports: [RouterModule]
})
export class AppRoutingModule {}
Instead, we are going to add the routes to our tabs routing module.
Modify the routes in src/app/tabs/tabs.router.module.ts to reflect the following:
const routes: Routes = [
  {
    path: 'tabs',
    component: TabsPage,
    children: [
      {
        path: 'tab1',
        children: [
          { path: '', loadChildren: '../tab1/tab1.module#Tab1PageModule' }
        ]
      },
      {
        path: 'tab2',
        children: [
          { path: '', loadChildren: '../tab2/tab2.module#Tab2PageModule' }
        ]
      },
      {
        path: 'tab2/products',
        loadChildren: '../tab2/product-list/product-list.module#ProductListPageModule'
      },
      {
        path: 'tab2/products/:id',
        loadChildren: '../tab2/view-product/view-product.module#ViewProductPageModule'
      },
      {
        path: 'tab3',
        children: [
          { path: '', loadChildren: '../tab3/tab3.module#Tab3PageModule' }
        ]
      },
      { path: '', redirectTo: '/tabs/tab1', pathMatch: 'full' }
    ]
  },
  { path: '', redirectTo: '/tabs/tab1', pathMatch: 'full' }
];
We have left all of the default tabs we had set up unchanged, but we have added in the two routes for our new pages under the routing information for tab2. Note that these are not being added as children routes of tab2; we are just listing them close to the tab2 routing for organisational purposes. Aside from modifying the loadChildren path to correctly locate the modules for the pages we added, there isn't anything special about these routes. We have followed a logical URL progression of tab2 -> tab2/products -> tab2/products/:id but that isn't strictly necessary - you could use whatever you like for the route paths.
3. Implement the Templates
What we have done so far is actually all that is required to set up this style of multi-level tabs navigation. Let’s take a look at implementing the navigation in the templates, though.
Modify src/app/tab2/tab2.page.html to reflect the following:
<ion-header> <ion-toolbar> <ion-title> Tab Two </ion-title> </ion-toolbar> </ion-header> <ion-content padding> <ion-buttonView Products</ion-button> </ion-content>
We have just added a simple button that links to the ProductList page that we created. Now let's take a look at the navigation in that page.
Modify src/app/tab2/product-list/product-list.page.html to reflect the following:
<ion-header> <ion-toolbar> <ion-title>ProductList</ion-title> <ion-buttons <ion-back-button</ion-back-button> </ion-buttons> </ion-toolbar> </ion-header> <ion-content padding> <ion-buttonProduct Detail</ion-button> </ion-content>
Same idea here, except we are linking to a specific product now. We have also added an <ion-back-button> to the header, and it is important that we supply an appropriate defaultHref here in case the user refreshes the application directly to this page (which causes the navigation history to be lost). In that case, the defaultHref will be used when the back button is clicked. If we have a defaultHref of the root of the application, then the application can get in a state where it is stuck. The back button will link back to the default tab page, but the second tab will still be on the ProductList page, and you won't be able to get back to the root tab2 page because the back button will always link back to the tab1 page. Providing an appropriate defaultHref as we have above means we will never get in this situation.
Modify src/app/tab2/view-product/view-product.page.html to reflect the following:
<ion-header> <ion-toolbar> <ion-title>ViewProduct</ion-title> <ion-buttons <ion-back-button</ion-back-button> </ion-buttons> </ion-toolbar> </ion-header> <ion-content padding> </ion-content>
We don't have anything on this page, but again, it is important to make sure to set up that defaultHref correctly.
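Although the page is left empty here, you will typically want to read the :id route parameter on it. A minimal sketch using Angular's ActivatedRoute (the productId property name is our own, not from the starter template):

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';

@Component({
  selector: 'app-view-product',
  templateUrl: './view-product.page.html',
  styleUrls: ['./view-product.page.scss']
})
export class ViewProductPage implements OnInit {
  productId: string;

  constructor(private route: ActivatedRoute) {}

  ngOnInit() {
    // Grab the :id segment defined in tabs.router.module.ts
    this.productId = this.route.snapshot.paramMap.get('id');
  }
}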
Summary
We now have an application where we can navigate within a single tab without breaking the general tabs layout, and each individual tab will also remember its own state/position when navigating between other tabs. The key here is to make sure to define your routes in the routing file for the tabs, not the root routing file for the application. | https://www.joshmorony.com/master-detail-navigation-within-a-tabs-layout-in-ionic/ | CC-MAIN-2021-04 | en | refinedweb |
Working with Date and Time Objects in the Java 8 Date/Time API
Java 8 has introduced a new Date and Time API that is more intuitive and less cumbersome than the standard java.util.Date and java.util.Calendar packages. The new date and time API is located in java.time. You can create a local date/time object that contains both the date and the time (including milliseconds), a local time object containing only the time, and a local date object containing only the date. To obtain the current date and/or time, you will use the now method. To create other times and/or dates, you will use the of method. To learn how to work with date and time objects in Java 8, follow these four steps.
- Open your text editor and create the Java program that will demonstrate working with date and time objects. Type in the following Java statements:
import java.time.*;

public class WorkingWithDateAndTime {
    public static void main (String args[]) {
        // Create a local date time object:
        LocalDateTime today = LocalDateTime.now();
        System.out.format("Current date and time is %s\n", today);

        // Create a local time object:
        LocalTime timeNow = LocalTime.now();
        System.out.format("Current time is %s\n", timeNow);

        // Create a local date object:
        LocalDate dateToday = LocalDate.now();
        System.out.format("Current date is %s\n", dateToday);

        // Create a date time object from a date object and a time object:
        LocalDateTime todayFromDateAndTime = LocalDateTime.of(dateToday, timeNow);
        System.out.format("Current date and time date from date and time objects is %s\n", todayFromDateAndTime);

        // Create a date time of July 4, 1988 4AM:
        LocalDateTime july4_1988_4AM = LocalDateTime.of(1988, Month.JULY, 4, 4, 0, 0);
        System.out.format("Fourth of July, 1988 at 4AM: %s\n", july4_1988_4AM);
    }
}
The LocalDateTime class can be used to create a date time object. The now method returns an object representing the current date and current time. The LocalTime and LocalDate classes also support a now method that returns the current time or current date, respectively. We also create a date/time object from the local date and time objects using the of method. In addition, you can use the of method to create a date/time object with a past or future value. In the program we create a date time object representing July 4, 1988.
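The of method also exists on LocalDate and LocalTime individually. A brief sketch (our own illustration, not part of the lesson's program):

LocalDate moonLanding = LocalDate.of(1969, Month.JULY, 20);
LocalTime noon = LocalTime.of(12, 0);
LocalDateTime moonLandingAtNoon = LocalDateTime.of(moonLanding, noon);
// Prints: Moon landing at noon: 1969-07-20T12:00
System.out.format("Moon landing at noon: %s\n", moonLandingAtNoon);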
- Save your file as WorkingWithDateAndTime.java.
- Compile your program with javac WorkingWithDateAndTime.java.
- Run your program with java WorkingWithDateAndTime. The output shows the values of the various date and time objects that you created. | https://www.webucator.com/how-to/working-with-date-time-objects-the-java-8-date-time-api.cfm | CC-MAIN-2021-04 | en | refinedweb |
Introduction to Pointers
A pointer is nothing more than a variable that holds a memory address. When used properly, a pointer holds a valid memory address that contains an object, which is compatible with the type of the pointer. Like references in C#, all pointers in a particular execution environment have the same size, regardless of the type of data the pointer points to. For example, when a program is compiled for and run on a 32-bit operating system, a pointer will typically be 4 bytes (32 bits).
Pointers can point to any memory address. You can, and frequently will, have pointers to objects that are on the stack. You can also have pointers to static objects, to thread local objects, and, of course, to dynamic (i.e., heap allocated) objects. When programmers with only a passing familiarity with pointers think of them, it’s usually in the context of dynamic objects.
Because of potential leaks, you should never allocate dynamic memory outside of a smart pointer. The C++ Standard Library provides two smart pointers that you should consider: std::shared_ptr and std::unique_ptr.
By putting dynamic duration objects inside one of these, you guarantee that when the std::unique_ptr, or the last std::shared_ptr that contains a pointer to that memory, goes out of scope, the memory will be properly freed with the correct version of delete (delete or delete[]) so it won't leak. That's the RAII pattern from the previous chapter in action.
Only two things can happen when you do RAII right with smart pointers: either the allocation succeeds, and the memory will be properly freed when the smart pointer goes out of scope, or the allocation fails, in which case no memory was allocated and thus there is no leak. In practice, the latter situation should be quite rare on modern PCs and servers due to their large memory and their provision of virtual memory.
If you don’t use smart pointers, you're just asking for a memory leak. Any exception between allocating the memory with new or new[] and freeing the memory with delete or delete[] will likely result in a memory leak. If you aren’t careful, you could accidentally use a pointer that was already deleted, but was not set equal to nullptr. You would then be accessing some random location in memory and treating it like it’s a valid pointer.
The best thing that could happen in that case is for your program to crash. If it doesn’t, then you’re corrupting data in strange, unknown ways and possibly saving those corruptions to a database or pushing them across the web. You could be opening the door to security problems too. So use smart pointers and let the language handle memory-management issues for you.
Const Pointer
A const pointer takes the form SomeClass* const someClass2 = &someClass1;. In other words, the * comes before const. The result is that the pointer itself cannot point to anything else, but the data the pointer points at remains mutable. This is not likely to be very useful in most situations.
Pointer to Const
A pointer to const takes the form const SomeClass* someClass2 = &someClass1;. In this case the * comes after const. The result is that the pointer can point to other things, but you cannot modify the data it points to. This is a common way to declare parameters that you simply want to inspect without modifying their data.
Const Pointer to Const
A const pointer to const takes the form const SomeClass* const someClass2 = &someClass1;. Here, the * is sandwiched between two const keywords. The result is that the pointer cannot point to anything else, and you cannot modify the data it points to.
Const-Correctness and Const Member Functions
Const-correctness refers to using the const keyword to decorate both parameters and functions so the presence or absence of the const keyword properly conveys any potential side effects. You can mark a member function const by putting the const keyword after the declaration of the function’s parameters.
For example, int GetSomeInt(void) const; declares a const member function—a member function that does not modify the data of the object it belongs to. The compiler will enforce this guarantee. It will also enforce the guarantee that when you pass an object into a function that takes it as const, that function cannot call any non-const member functions of that object.
Designing your program to adhere to const-correctness is easier when you start doing it from the beginning. When you adhere to const-correctness, it becomes easier to use multithreading, since you know exactly which member functions have side effects. It’s also easier to track down bugs related to invalid data states. Others who are collaborating with you on a project will also be aware of potential changes to the class’ data when they call certain member functions.
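As a quick illustration (a minimal sketch of ours, not from the book's sample code), the compiler enforces const on member functions:

class Counter
{
    int m_count = 0;
public:
    int GetCount(void) const
    {
        //++m_count; // Illegal: a const member function cannot modify members.
        return m_count;
    }

    void Increment(void) // Non-const: free to modify state.
    {
        ++m_count;
    }
};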
The *, &, and -> Operators
When working with pointers, including smart pointers, three operators are of interest: *, &, and ->.
The indirection operator, *, de-references a pointer, meaning you work with the data that is pointed to, instead of the pointer itself. For the next few paragraphs, let’s assume that p_someInt is a valid pointer to an integer with no const qualifications.
The statement p_someInt = 5000000; would not assign the value 5000000 to the integer that is pointed to. Instead, it would set the pointer to point to the memory address 5000000, 0X004C4B40 on a 32-bit system. What is at memory address 0X004C4B40? Who knows? It could be your integer, but chances are it is something else. If you are lucky, it is an invalid address. The next time you try to use p_someInt properly, your program will crash. If it is a valid data address though, then you will likely corrupt data.
The statement *p_someInt = 5000000; will assign the value 5000000 to the integer pointed to by p_someInt. This is the indirection operator in action; it takes p_someInt and replaces it with an L-value that represents the data at the address pointed to (we'll discuss L-values soon).
The address-of operator, &, fetches the address of a variable or a function. This allows you to create a pointer to a local object, which you can pass to a function that wants a pointer. You don’t even need to create a local pointer to do that; you can simply use your local variable with the address-of operator in front of it as the argument, and everything will work just fine.
Pointers to functions are similar to delegate instances in C#. Given this function declaration: double GetValue(int idx); this would be the right function pointer: double (*SomeFunctionPtr)(int);.
If your function returned a pointer, say like this: int* GetIntPtr(void); then this would be the right function pointer: int* (*SomeIntPtrDelegate)(void);. Don't let the double asterisks bother you; just remember the first set of parentheses around the * and function pointer name so the compiler properly interprets this as a function pointer rather than a function declaration.
The -> member access operator is what you use to access class members when you have a pointer to a class instance. It functions as a combination of the indirection operator and the . member access operator. So
p_someClassInstance->SetValue(10); and
(*p_someClassInstance).SetValue(10); both do the same thing.
L-values and R-values
It wouldn’t be C++ if we didn’t talk about L-values and R-values at least briefly. L-values are so called because they traditionally appear on the left side of an equal sign. In other words, they are values that can be assigned to—those which will survive the evaluation of the current expression. The most familiar type of L-value is a variable, but it also includes the result of calling a function that returns an L-value reference.
R-values traditionally appear on the right side of the equation or, perhaps more accurately, they are values that could not appear on the left. They are things such as constants, or the result of evaluating an equation. For example, a + b where a and b might be L-values, but the result of adding them together is an R-value, or the return value of a function that returns anything other than void or an L-value reference.
References
References act just like non-pointer variables. Once a reference is initialized, it cannot refer to another object. You also must initialize a reference where you declare it. If your functions take references rather than objects, you will not incur the cost of a copy construction. Since the reference refers to the object, changes to it are changes to the object itself.
Just like pointers, you can also have a const reference. Unless you need to modify the object, you should use const references since they provide compiler checks to ensure that you aren’t mutating the object when you think you aren’t.
There are two types of references: L-value references and R-value references. An L-value reference is marked by an & appended to the type name (e.g., SomeClass&), whereas an R-value reference is marked by an && appended to the type name (e.g., SomeClass&&). For the most part, they act the same; the main difference is that the R-value reference is extremely important to move semantics.
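Here is a tiny sketch (our own, not part of the book's sample) of how the two reference kinds bind to overloads:

#include <iostream>
#include <string>

void Process(std::string& s)  { std::cout << "L-value: " << s << std::endl; }
void Process(std::string&& s) { std::cout << "R-value: " << s << std::endl; }

int main()
{
    std::string name = "C++";
    Process(name);          // name is an L-value, so the first overload is called.
    Process(name + "11");   // The temporary result is an R-value, so the second is called.
    return 0;
}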
Pointer and Reference Sample
The following sample shows pointer and reference usage with explanations in the comments.
Sample: PointerSample\PointerSample.cpp
#include <memory> //// See the comment to the first use of assert() in _pmain below. //#define NDEBUG 1 #include <cassert> #include "../pchar.h" using namespace std; void SetValueToZero(int& value) { value = 0; } void SetValueToZero(int* value) { *value = 0; } int _pmain(int /*argc*/, _pchar* /*argv*/[]) { int value = 0; const int intArrCount = 20; // Create a pointer to int. int* p_intArr = new int[intArrCount]; // Create a const pointer to int. int* const cp_intArr = p_intArr; // These two statements are fine since we can modify the data that a // const pointer points to. // Set all elements to 5. uninitialized_fill_n(cp_intArr, intArrCount, 5); // Sets the first element to zero. *cp_intArr = 0; //// This statement is illegal because we cannot modify what a const //// pointer points to. //cp_intArr = nullptr; // Create a pointer to const int. const int* pc_intArr = nullptr; // This is fine because we can modify what a pointer to const points // to. pc_intArr = p_intArr; // Make sure we "use" pc_intArr. value = *pc_intArr; //// This statement is illegal since we cannot modify the data that a //// pointer to const points to. //*pc_intArr = 10; const int* const cpc_intArr = p_intArr; //// These two statements are illegal because we cannot modify //// what a const pointer to const points to or the data it //// points to. //cpc_intArr = p_intArr; //*cpc_intArr = 20; // Make sure we "use" cpc_intArr. value = *cpc_intArr; *p_intArr = 6; SetValueToZero(*p_intArr); // From <cassert>, this macro will display a diagnostic message if the // expression in parentheses evaluates to anything other than zero. // Unlike the _ASSERTE macro, this will run during Release builds. To // disable it, define NDEBUG before including the <cassert> header. assert(*p_intArr == 0); *p_intArr = 9; int& r_first = *p_intArr; SetValueToZero(r_first); assert(*p_intArr == 0); const int& cr_first = *p_intArr; //// This statement is illegal because cr_first is a const reference, //// but SetValueToZero does not take a const reference, only a //// non-const reference, which makes sense considering it wants to //// modify the value. //SetValueToZero(cr_first); value = cr_first; // We can initialize a pointer using the address-of operator. // Just be wary because local non-static variables become // invalid when you exit their scope, so any pointers to them // become invalid. int* p_firstElement = &r_first; *p_firstElement = 10; SetValueToZero(*p_firstElement); assert(*p_firstElement == 0); // This will call the SetValueToZero(int*) overload because we // are using the address-of operator to turn the reference into // a pointer. SetValueToZero(&r_first); *p_intArr = 3; SetValueToZero(&(*p_intArr)); assert(*p_firstElement == 0); // Create a function pointer. Notice how we need to put the // variable name in parentheses with a * before it. void (*FunctionPtrToSVTZ)(int&) = nullptr; // Set the function pointer to point to SetValueToZero. It picks // the correct overload automatically. FunctionPtrToSVTZ = &SetValueToZero; *p_intArr = 20; // Call the function pointed to by FunctionPtrToSVTZ, i.e. // SetValueToZero(int&). FunctionPtrToSVTZ(*p_intArr); assert(*p_intArr == 0); *p_intArr = 50; // We can also call a function pointer like this. This is // closer to what is actually happening behind the scenes; // FunctionPtrToSVTZ is being de-referenced with the result // being the function that is pointed to, which we then // call using the value(s) specified in the second set of // parentheses, i.e. *p_intArr here. 
(*FunctionPtrToSVTZ)(*p_intArr); assert(*p_intArr == 0); // Make sure that we get value set to 0 so we can "use" it. *p_intArr = 0; value = *p_intArr; // Delete the p_intArray using the delete[] operator since it is a // dynamic p_intArray. delete[] p_intArr; p_intArr = nullptr; return value; }
Volatile
I mention volatile only to caution against using it. Like const, a variable can be declared volatile. You can even have a const volatile; the two are not mutually exclusive.
Here’s the thing about volatile: It likely does not mean what you think it means. For example, it is not good for multithreaded programming. The actual use case for volatile is extremely narrow. Chances are, if you put the volatile qualifier on a variable, you are doing something horribly wrong.
Eric Lippert, a member of the C# language team at Microsoft, described the use of volatile as, “A sign that you are doing something downright crazy: You're attempting to read and write the same value on two different threads without putting a lock in place.” He's right, and his argument carries over perfectly into C++.
The use of volatile should be greeted with more skepticism than the use of goto. I say this because I can think of at least one valid general-purpose use of goto: breaking out of a deeply nested loop construct upon the completion of a non-exceptional condition. volatile, by contrast, is really only useful if you are writing a device driver or writing code for some type of ROM chip. On that point, you really should be thoroughly familiar with the ISO/IEC C++ Programming Language Standard itself, the hardware specs for the execution environment your code will be running in, and probably the ISO/IEC C Language Standard too.
Note: You should also be familiar with assembly language for the target hardware, so you can look at code that is generated and make sure the compiler is generating correct code (PDF) for your use of volatile.
I have been ignoring the existence of the volatile keyword and shall continue to do so for the remainder of this book. This is perfectly safe, since:
- It's a language feature that doesn't come into play unless you actually use it.
- Its use can safely be avoided by virtually everyone.
One last note about volatile: The one effect it is very likely to produce is slower code. Once upon a time, people thought volatile produced the same result as atomicity. It doesn't. When properly implemented, atomicity guarantees that multiple threads and multiple processors cannot read and write an atomically accessed chunk of memory at the same time. The mechanisms for this are locks, mutexes, semaphores, fences, special processor instructions, and the like. The only thing volatile does is force the CPU to fetch a volatile variable from memory rather than use any value it might have cached in a register or on a stack. It is the memory fetching that slows everything down.
Conclusion
Pointers and references not only confuse a lot of developers, they are very important in a language like C++. It's therefore important to take your time to grasp the concept so that you don't run into problems down the road. The next article is all about casting. | https://code.tutsplus.com/articles/c-succinctly-pointers-references-and-const-correctness--mobile-22055 | CC-MAIN-2018-30 | en | refinedweb |
An Introduction to Multithreading
A program is ‘multithreading’ or ‘running concurrently’ when it is doing multiple things at once. In a basic program, you will step through one operation at a time. We all have learned basic looping over an array. But what if our program could run two instances at once, each over half the array? If each instance was as fast as the original, they could cover the array together, and in half the time.
A basic looping function in Ruby:
def get_sum_of_two_arrays(a1, a2)
  sum_a1 = 0
  a1.each { |e1| sum_a1 += e1 }

  sum_a2 = 0
  a2.each { |e2| sum_a2 += e2 }

  return sum_a1 + sum_a2
end
But how would this happen with two concurrent threads? Let’s spitball something:
def get_sum_of_two_arrays_with_threads(a1, a2)
  # Does not need a return value immediately
  Thread(get_sum_of_one_array(a1))
  Thread(get_sum_of_one_array(a2))

  # Now we need some way to combine the result of the two
  return result of threads
end
But hold on! That was not necessary, nor was it a good example. Why would we bother doing that for such a small thing? In fact, few simple programmatical examples need multithreading. Multithreading is most interesting, and most useful, in complex scenarios. Many introductions to threads use concurrency in unnecessary situations. I dislike this approach. Let’s try to build something that is simple, but still shows the usefulness of multithreading.
The Thought Process
Consider an online multiplayer game. In a game, there are several things happening at once:
- The game is being rendered
- You are sending data to the server (your actions in-game)
- You are receiving from the server (other people’s actions in-game)
In this example, we *need* to handle multiple things at once. We can’t just run a linear block:
while true
  render_game()
  receive_data() # have to wait for this?
  send_data()
end
Networks are unreliable, and your data from the server will not come in nice, clean increments. The server may send the first packet after 5ms, but then you may lose connection for 1000ms. The player avatar should not freeze in place while you are waiting for data. The game should continue to run in the meantime, then correct things after the lag. We could consider something like this:
while true
  render_game()
  if receive_data() # just wait & try to receive some data
    # => then do something with the received data
  end
  send_data()
end
But we will still have to wait on the receive_data function. It will be difficult to predict how long the transmission takes. How long should we wait?
If we listen for server data on another thread, we can run our tasks independently. The only question is how to get them to communicate. Let’s try some pseudocode:
$game_data = {SOME_INITIAL_VALUE}

tell some thread(wait_for_incoming_data)
  then, using(incoming_data)
    $game_data = incoming_data
  end
end

while true
  render_game($game_data)
  send_data(whatever_the_player_is_doing)
end
Note how our two threads communicate through $game_data. Now if the server or network has some hiccups, so be it! The game will continue to run, and just experience some lag (hopefully for no longer than a few ms). Player experience will be less interrupted.
At the end of this, I will go through an example of multithreading in Ruby. However, I believe that the important aspect is to understand the larger picture. With that in mind, let’s consider some potential problems that can arise.
Common problems
Problems with concurrency can be complex and difficult to debug. Due to multiple sources of modification, you can experience unintuitive issues.
- Security Issues: if a variable is accessible at a high level, this in turn makes it more vulnerable
- Visibility Issues: a thread reads shared data before it is changed, but is unaware of an update to the data
- Access Issue/Race Conditions: occur when multiple threads attempt to change a shared value at one time. This is one of the biggest concerns with multithreading, and its fixes provide their own issues, such as deadlocks and inefficient locking methods.
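The usual remedy for race conditions is a lock. A minimal Ruby sketch (mine, not from the example below) using Mutex:

counter = 0
lock = Mutex.new

threads = 4.times.map do
  Thread.new do
    1000.times do
      # Without the lock, the read-increment-write below could interleave
      # across threads and lose updates.
      lock.synchronize { counter += 1 }
    end
  end
end

threads.each(&:join)
puts counter # => 4000, deterministically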
Ruby example
This example prints a pattern, but accommodates outside changes. It prints linearly increasing values, but changes the delta depending on user input.
$delta = 5

def pattern
  i = 0
  while true
    puts i
    i = i + $delta
    sleep(3) # pauses program for 3 seconds
  end
end

def change_delta
  while true
    i = gets.chomp.to_i
    $delta = i
    puts "Changed delta to: #{i}"
  end
end

t1 = Thread.new { pattern() }
t2 = Thread.new { change_delta() }

t1.join()
t2.join()
Note that our example is still a bit dangerous. We are using a global variable to communicate.
TutorialsPoint has an even more simple example. I use ours to illustrate how different threads can communicate. | https://eking-30347.medium.com/an-introduction-to-multithreading-72d57d075ef1 | CC-MAIN-2021-04 | en | refinedweb |
A tuple is created in a similar way as a list but the values are included in ().
Example:
tup3 = "a", "b", "c", "d";
tup1 = ();
Access Tuple
tup1 = ('a', 'b', 1, 2);
tup2 = (1, 2, 3, 4, 5, 6, 7 );
print "tup1[0]: ", tup1[0];
print "tup2[1:5]: ", tup2[1:5];
Output:
tup1[0]: a
tup2[1:5]: (2, 3, 4, 5)
Tuples are an unchanging sequence of values, each one accessible individually, and a tuple is a basic type in Python. You can recognize tuples when they are created because they are surrounded by '()' parentheses. Tuples contain data such as strings and numbers. Open the python shell and perform the following example.
Program:
>>> filler = ("string", "filled", "by a", "tuple")
>>> print("A %s %s %s %s" % ("string", "filled", "by a", "tuple")
A string filled by a tuple
>>> print("A %s %s %s %s" % filler)
A string filled by a tuple
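One more short example (ours) that shows the defining property of tuples, immutability:

point = (3, 4)
x, y = point          # tuples support unpacking
try:
    point[0] = 10     # tuples cannot be modified in place
except TypeError as e:
    print(e)          # 'tuple' object does not support item assignment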
| https://www.edureka.co/community/39987/what-is-a-tuple-in-python-and-how-to-use-it?show=72872 | CC-MAIN-2021-04 | en | refinedweb |
View Source window accepts commands it shouldn't
RESOLVED FIXED
Status
Toolbars & Menus
People
(Reporter: Stuart Morgan, Assigned: Chris Lawson (gone))
Tracking
({fixed1.8.1.10})
Details
Attachments
(1 attachment, 1 obsolete attachment)
At least two menu commands are being applied to view source when they should go to the frontmost browser window:
- Open File…
- New Tab
We should find out if popups are also affected.
PC running 10.3??? What sort of hardware did you walk off with ;)
(In reply to comment #1) > PC running 10.3??? What sort of hardware did you walk off with ;) The "Hardware" field's default for bugs filed with Intel Macs is "PC", see bug 348770.
... but of course you already know that because you commented on that bug right after your comment on this one. Nevermind! :)
(In reply to comment #0) > At least two menu commands are being applied to view source when they should go > to the frontmost browser window: > - Open File… > - New Tab > We should find out if popups are also affected. Using the popup generated by clicking the link on I can confirm that both the aforementioned commands are applied to the popup rather than the frontmost browser window. Might I suggest, however, that it would be awfully damn weird for either of those to apply to what appears (for all intents and purposes) to be a background window, and that a more logical course of action might be to simply disable these two menu commands when a window other than a browser window is frontmost? cl
I'll take this for 1.6, and unless anyone has objections, I'm going to do what I suggested in comment 4; namely, I plan to disable these two menu items when (frontmost window) != [self getFrontmostBrowserWindow]. cl
Status: NEW → ASSIGNED
Target Milestone: --- → Camino1.6
Assignee: nobody → cl-bugs
Status: ASSIGNED → NEW
Status: NEW → ASSIGNED
Mass un-setting milestone per 1.6 roadmap. Filter on RemoveRedonkulousBuglist to remove bugspam. Developers: if you have a patch in hand for one of these bugs, you may pull the bug back to 1.6 *at that point*.
Target Milestone: Camino1.6 → ---
I don't think we need to worry about Open File until some sort of decision is reached on bug 361157. I have a patch for the New Tab case ready to go, but I'd like to discuss the other bug first so I can submit a combined patch for both Open File and New Tab.
Created attachment 281420 [details] [diff] [review] fix
Fixes both cases as applicable to our current behaviour. If bug 361157 is fixed to match Safari's behaviour (always open new window for Open File regardless), then we won't need to validate the |openFile:| selector any more and that block can be removed.
Comment on attachment 281420 [details] [diff] [review] fix
Oh, and this removes |isMainWindowABrowserWindow|, which wasn't being used anywhere and had a totally different definition of "browser window" than |getFrontmostBrowserWindow|, despite a comment in the code implying otherwise.
Comment on attachment 281420 [details] [diff] [review] fix
>- return (browserController || ![NSApp mainWindow]);
>+ return (((browserController && ([NSApp mainWindow] == [self getFrontmostBrowserWindow])) ||
>+ ![NSApp mainWindow]);
You are missing a parenthesis right before the ; So, change ![NSApp mainWindow]); to ![NSApp mainWindow])); r=me with that change.
Attachment #281420 - Flags: superreview?(mikepinkerton)
Attachment #281420 - Flags: review+
(In reply to comment #10) > (From update of attachment 281420 [details] [diff] [review]) > You are missing a parenthesis right before the ; So I am. Thanks! I'll wait to respin until pink comments for sr, or someone can fix the chicken. cl
Comment on attachment 281420 [details] [diff] [review] fix
sr=pink
Attachment #281420 - Flags: superreview?(mikepinkerton) → superreview+
Created attachment 287284 [details] [diff] [review] patch as checked in
Landed on the trunk and MOZILLA_1_8_BRANCH. Someone should sanity-check this due to the bitrot from de-get-ification; Chris and I talked through it, and it built and ran without any noticeable issues.
Attachment #281420 - Attachment is obsolete: true
Status: ASSIGNED → RESOLVED
Last Resolved: 11 years ago
Keywords: fixed1.8.1.10
Resolution: --- → FIXED | https://bugzilla.mozilla.org/show_bug.cgi?id=353265 | CC-MAIN-2018-30 | en | refinedweb |
bluemix “docker images” results in “json: cannot unmarshal string into Go value of type int”
I have successfully uploaded two images from the docker registry to my IBM Bluemix registry, which I can see when logging into the Bluemix UI. However, I am unable to list the images using the CLIs.
Using the docker command I receive the following:
krsyoung:code krsyoung$ docker images
json: cannot unmarshal string into Go value of type int
And with cf ic:
krsyoung:code krsyoung$ cf ic images
json: cannot unmarshal string into Go value of type int
Invalid IBM-Containers commands or docker commands. Available IBM Containers commands are {login, group, ip, route, volume, namespace, info}
In addition, IBM Containers supports native docker cli passthrough, run cf ic help for more details
Here are the versions of software I’m running:
krsyoung:code krsyoung$ docker --version
Docker version 1.7.0, build 0baf609
krsyoung:code krsyoung$ cf --version
cf version 6.12.0-8c65bbd-2015-06-30T00:05:01+00:00
krsyoung:code krsyoung$ cf plugins
Listing Installed Plugins...
OK
Plugin Name      Version   Command Name   Command Help
IBM-Containers   0.8.598   ic             IBM-Containers Plugin
Adding the debug flag does not help. Any ideas?
Thanks!
One solution:
Please use docker 1.6.2. Docker 1.7 isn't supported at the moment. | http://dockerdaily.com/bluemix-docker-images-results-in-json-cannot-unmarshal-string-into-go-value-of-type-int/ | CC-MAIN-2018-30 | en | refinedweb |
Morning all, I've got a grid of rooms that I need to navigate round using directions in main that's working fine, in main I've put a random starting location that also works.
I've also got a program with two functions in it, a random starting location and a random direction facing, to implement at start-up. What I'd like to do is use the functions, but I have no idea how to combine this into one piece of code. I've tried to figure it out, but I don't know whether I need to return the function by value or try to do it by reference; one function returns a string (facing) while the other (start room) returns a random int. Any suggestions or help? Both bits of code below.
#include <iostream> //#include <fstream.h> using namespace std; void start_direction (){ time_t now; time(&now); srand(now); string start_facing; string str7 ("North"); string str8 ("East"); string str9 ("South"); string str10 ("West"); int facing; facing = rand() % 4 + 1; if (facing == 1) start_facing = str7; else if (facing == 2) start_facing = str8; else if (facing == 3) start_facing = str9; else start_facing = str10; cout << "Facing " << start_facing << endl; // starting direction } void start_room (){ time_t now; time(&now); srand(now); int room_location; room_location = rand() % 27; //0 to 26 cout << "starting room " << room_location << endl; } int main (){ start_direction (); start_room (); // cout << "starting room " << room_location << endl; system ("Pause"); return 0; }
#include <iostream> #include <stdlib.h> using namespace std; //int location; void printInstructions () { cout << "You awake to find yourself locked in a grid of rooms" << endl; cout << "\n" << "Directions of travel are 2 for N, 6 for E, 8 for South, 4 for West," << endl; cout << "1 for UP, 3 for DOWN" << "\n" << endl; cout << "Enter your direction to travel?" << endl; } int main() { //random start location bit srand((unsigned)time(0)); //random time generator int start = rand () % 27; //Placement location (start off) // int facing; // facing = rand() % 4 + 1; int direction; int location, old_location; int map[3][9][10] = { { // 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 {99, 9, 3, 99, 99, 99, 1, 99, 99, 99}, // 0 {99, 10, 4, 99, 0, 99, 2, 99, 99, 99}, // 1 {99, 11, 5, 99, 1, 99, 99, 99, 99, 99}, // 2 {99, 12, 6, 99, 99, 99, 4, 99, 0, 99}, // 3 {99, 13, 7, 99, 3, 99, 5, 99, 1, 99}, // 4 {99, 14, 8, 99, 4, 99, 99, 99, 2, 99}, // 5 {99, 15, 99, 99, 99, 99, 7, 99, 3, 99}, // 6 {99, 16, 99, 99, 6, 99, 8, 99, 4, 99}, // 7 {99, 17, 99, 99, 7, 99, 9, 99, 5, 99} // 8 }, { {99, 18, 12, 0, 99, 99, 10, 99, 99, 99}, // 9 {99, 19, 13, 1, 9, 99, 11, 99, 99, 99}, // 10 {99, 20, 14, 2, 10, 99, 99, 99, 99, 99}, // 11 {99, 21, 15, 3, 99, 99, 13, 99, 9, 99}, // 12 {99, 22, 16, 4, 12, 99, 14, 99, 10, 99}, // 13 {99, 23, 17, 5, 13, 99, 99, 99, 11, 99}, // 14 {99, 24, 99, 6, 99, 99, 16, 99, 12, 99}, // 15 {99, 25, 99, 7, 15, 99, 17, 99, 13, 99}, // 16 {99, 26, 99, 8, 16, 99, 99, 99, 14, 99} // 17 }, { {99, 99, 21, 9, 99, 99, 19, 99, 99, 99}, // 18 {99, 99, 22, 10, 18, 99, 20, 99, 99, 99}, // 19 {99, 99, 23, 11, 19, 99, 99, 99, 99, 99}, // 20 {99, 99, 24, 12, 99, 99, 22, 99, 18, 99}, // 21 {99, 99, 25, 13, 21, 99, 23, 99, 19, 99}, // 22 {99, 99, 26, 14, 22, 99, 99, 99, 20, 99}, // 23 {99, 99, 99, 15, 99, 99, 25, 99, 21, 99}, // 24 {99, 99, 99, 16, 24, 99, 26, 99, 22, 99}, // 25 {99, 99, 99, 17, 25, 99, 99, 99, 23, 99} // 26 } }; location = start; old_location = location; printInstructions (); //call to void function = no return arguement cout << "starting location = room " << location << "\n" << endl; while (location !=26) { cin >> direction; location = map[0][location][direction]; if(location == 99) { cout<<"Incorrect move...!!!... Enter direction again."<< endl; location = old_location; } else { cout << "Your at location " << location << endl; old_location = location; if (location == 26) cout << "You've found the exit" << endl; } } return 0; }
Thanks in advance
Leppie | https://www.daniweb.com/programming/software-development/threads/291464/how-do-i-return-a-rand-function-back-to-main | CC-MAIN-2018-30 | en | refinedweb |
Filling cells and creating data series
dragged the fill handle down. The text in cells A2:A9 was filled in automatically, matching the sequence in the custom list we created.
Figure 8-31 You can insert your own custom lists and sequences that you enter by dragging the fill handle.
To create a custom list, follow these steps:
1. Click the File tab, click Options, and click the Advanced category.
2. Scroll all the way to the bottom, and click the Edit Custom Lists button (under General).
3. With New List selected in the Custom Lists box, type the items you want to include in your list in the List Entries box. Be sure to type the items in the order you want them to appear.
4. Click Add to add the list to the Custom Lists box.
5. Click OK to return to the worksheet.
Importing custom lists
You can also create a custom list by importing the entries in an existing cell range. To import the entries shown in Figure 8-31, we selected a cell range containing the list of names before opening the Excel Options dialog box. When you open the Edit Custom Lists dialog box, the address of the selected range appears next to the Import button, which you can click to add the new list. (You can also select the list after opening the dialog box. You need to click in the edit box next to the Import button, and then you can drag on the worksheet to select the cells.)
| http://jabsto.com/Tutorial/topic-25/Microsoft-Excel-2013-258.html | CC-MAIN-2018-30 | en | refinedweb |
Introduction: Java (Programming Language) for Beginners
This Instructable will show you the wonders of Java (programming language). You will also be able to DIY (Do It Yourself) at home. There is no cost involved within this Instructable. It's very easy, and requires no other programming language at all.
I have spread the main part over steps 3-5. Simply beacuse there is quite a lot of information.
Please rate this Instructable and leave comments, questions or statements. All questions, statements, and comments will be answered.
Step 1: What Is Java?
Java is just one of the hundreds of different programming languages in the world. Java language is an object-orientated programming language which was developed by Sun Microsystems. Java programmes are platform independent which means they can be run on any operating system with any type of processor as long as the Java interpreter is available on that system.
Step 2: What You Will Need
You will need the Java Software Development Kit from Sun's Java site. Follow the instructions on Sun's website to install it. Make sure that you add the java bin directory to your PATH environment variable. To find the Java Software Development Kit, go to the top right-hand corner of the screen and you will see a search bar. Type in: Java Software Development Kit. The the search results appear, find the one that says something along the lines of download.
Step 3: Writing Your First Java Programme:Part 1
You will need to write your Java programs using a text editor. When you type the examples that follow you must make sure that you use capital and small letters in the right places because Java is case sensitive. The first line you must type is:
public class Hello
This creates a class called Hello. All class names must start with a capital letter. The main part of the program must go between curly brackets after the class declaration. The curly brackets are used to group together everything inside them.
public class Hello
{
}
Step 4: Writing Your First Java Programme:Part 2
We must now create the main method which is the section that a program starts.
public class Hello
{
public static void main(String[] args)
{
}
}
You will see that the main method code has been moved over a few spaces from the left. This is called indentation and is used to make a program easier to read and understand.
Here is how you print the words Hello World on the screen:
public class Hello
{
public static void main(String[] args)
{
System.out.println("Hello World");
}
}
Step 5: Writing Your First Java Programme:Part 3
Make sure that you use a capital S in System because it is the name of a class. println is a method that prints the words that you put between the brackets after it on the screen. When you work with letters like in Hello World you must always put them between quotes. The semi-colon is used to show that it is the end of your line of code. You must put semi-colons after every line like this.
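For instance (an extra example of ours, not part of the original steps), calling println more than once prints each message on its own line:

public class Hello
{
public static void main(String[] args)
{
System.out.println("Hello World");
System.out.println("Java is case sensitive");
}
}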
Step 6: Compiling the Programme
What we have just finished typing is called the source code. You must save the source code with the file name Hello.java before you can compile it. The file name must always be the same as the class name.
Make sure you have a command prompt open and then enter the following:
javac Hello.java
If you did everything right then you will see no error messages and your program will be compiled. If you get errors then go through this lesson again and see where your mistake is.
Step 7: Running the Programme
Once your program has been compiled you will get a file called Hello.class. This is not like normal programs that you just type the name to run but it is actually a file containing the Java bytecode that is run by the Java interpreter. To run your program with the Java interpeter use the following command:
java Hello
Do not add .class on to the end of Hello. You will now see the following output on the screen:
Hello World
Congratulations! You have just made your first Java program.
11 Discussions
Your screenshots are of python not java code.
Thanks for that. I didn't pick up on that. I have fixed them up now.
um so I was wondering can u program like your own game with this Java stuff cause I'd like to do that I no Minecraft is based on Java: and was wondering?
I'd suggest you use Eclipse.
I'd suggest NetBeans mainly because by default it generates projects that can be easily compiled anywhere outside the IDE using Ant (instead of Eclipse's just calling the compiler directly) And the last 5 images need to be seriously rethought.
I have rethought the last 5 images and have changed them.
do I download i T for tablet,PC,,my phone,or does it matter?
Yea. I have the programme Eclipse already, but I wanted to do this a different way.
Does it matter if i download java 2 software development kit? are they the same thing?
This might be useful as notes for someone who has taken a class on Java, but as a stand alone work? Not useful. To many presumptions, no definitions, no why something works, no alternatives to make something work... This is 'how to write hello world', not 'java for beginners'.
I second your opinion. | http://www.instructables.com/id/Java-Programming-Language-For-Beginners/ | CC-MAIN-2018-30 | en | refinedweb |
Hello, I have recently started with Aerospike, and I am trying to add a new namespace, but I cannot find aerospike.conf. I searched for solutions, but there are none for the Mac installation. The path /etc/aerospike/aerospike.conf does not exist. Otherwise, everything works fine!
Is it possible at all to change/add new namespace on OSX?
Thanks! | https://discuss.aerospike.com/t/no-aerospike-conf-file-in-osx/3762 | CC-MAIN-2018-30 | en | refinedweb |
I’ve learn a while ago on StackOverflow that we can get the “instance ID” of any resource, for instance:
var_dump(intval(curl_init()));  // int(2)
var_dump(intval(finfo_open())); // int(3)
var_dump(intval(curl_init()));  // int(4)
var_dump(intval(finfo_open())); // int(5)
var_dump(intval(curl_init()));  // int(6)
I need something similar but applied to classes:
class foo
{
    public function __construct()
    {
        ob_start();
        var_dump($this); // object(foo)#INSTANCE_ID (0) { }
        echo preg_replace('~.+#(\d+).+~s', '$1', ob_get_clean());
    }
}

$foo = new foo();  // 1
$foo2 = new foo(); // 2
The above works but I was hoping for a faster solution or, at least, one that didn’t involve output buffers. Please note that this won’t necessarily be used within the constructor or even inside the class itself!
spl_object_hash() is not what I’m looking for because the two objects produce identical hashes:
var_dump(spl_object_hash($foo));  // 000000005111e639000000003a87b42e
var_dump(spl_object_hash($foo2)); // 000000005111e639000000003a87b42e
Casting to int like resources doesn’t seem to work for objects:
Notice: Object of class foo could not be converted to int.
Is there a quick way to grab the same output without using object properties?
Besides var_dump(), I’ve discovered by trial and error that debug_zval_dump() also outputs the object instance, unfortunately it also needs output buffering since it doesn’t return the result.
To the down voters: explain your reasons or, if you think this is a basic question, suggest a solution.
Answers
Have a look at spl_object_hash(). Usage example:
$id = spl_object_hash($object);
Note that you’ll need PHP 5 >= 5.2.0 for that to work.
spl_object_hash() could help you out here. It returns a unique identifier for the object, which is always the same for a given instance.
EDIT after OP comment:
You could implement such a behavior using a static class property, e.g:
class MyClass
{
    private static $_initialized = false;

    public function __construct()
    {
        if (!self::$_initialized) {
            self::$_initialized = true;
            // your run-only-once code
        }
    }
}
But actually this has nothing to do with your original question.
I don’t have the PECL runkit enabled to test this, but this may allow you to remove the constructor code from the class definition after the first time that an instance of the class has been created.
Whether you can remove the constructor from within the constructor would be an interesting experiment.
Well, yes, with an extension.
Note that the handles used for objects that were, in the meantime, destroyed, can be reused.
Build with phpize && ./configure && make && make install
testext.h
#ifndef PHP_EXTTEST_H
# define PHP_EXTTEST_H
# ifdef HAVE_CONFIG_H
#  include <config.h>
# endif
# include <php.h>

extern zend_module_entry testext_module_entry;
#define phpext_testext_ptr &testext_module_entry

#endif
testext.c
#include "testext.h" PHP_FUNCTION(get_object_id) { zval *obj; if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "o", &obj) == FAILURE) { return; } RETURN_LONG(Z_OBJ_HANDLE_P(obj)); } static zend_function_entry ext_functions[] = { PHP_FE(get_object_id, NULL) {NULL, NULL, NULL, 0, 0} }; zend_module_entry testext_module_entry = { STANDARD_MODULE_HEADER, "testext", ext_functions, /* Functions */ NULL, /* MINIT */ NULL, /* MSHUTDOWN */ NULL, /* RINIT */ NULL, /* RSHUTDOWN */ NULL, /* MINFO */ NO_VERSION_YET, STANDARD_MODULE_PROPERTIES }; ZEND_GET_MODULE(testext)
config.m4
PHP_ARG_ENABLE(testext, [Whether to enable the "testext" extension],
[  enable-testext        Enable "testext" extension support])

if test $PHP_EXTTEST != "no"; then
    PHP_SUBST(EXTTEST_SHARED_LIBADD)
    PHP_NEW_EXTENSION(testext, testext.c, $ext_shared)
fi
Test script
<?php $a = new stdclass(); $b = new stdclass(); var_dump(get_object_id($a)); var_dump(get_object_id($b));
Output
int(1)
int(2)
If you don’t want to use output buffering… perhaps use var_export instead of var_dump?
As long as you implement the base class all the classes you’re going to need this from, you can do something like this:
class MyBase
{
    protected static $instances = 0;
    private $_instanceId = null;

    public function getInstanceId()
    {
        return $this->_instanceId;
    }

    public function __construct()
    {
        $this->_instanceId = ++self::$instances;
    }
}

class MyTest extends MyBase
{
    public function Foo()
    {
        /* do something really nifty */
    }
}

$a = new MyBase();
$b = new MyBase();
$c = new MyTest();
$d = new MyTest();

printf("%d (should be 1) \n", $a->getInstanceId());
printf("%d (should be 2) \n", $b->getInstanceId());
printf("%d (should be 3) \n", $c->getInstanceId());
printf("%d (should be 4) \n", $d->getInstanceId());
The output would be:
1 (should be 1)
2 (should be 2)
3 (should be 3)
4 (should be 4)
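As an addendum: if you are on PHP 7.2 or later, the built-in spl_object_id() returns exactly this kind of integer handle, with no base class required:

// Requires PHP >= 7.2
$foo  = new foo();
$foo2 = new foo();
var_dump(spl_object_id($foo));  // e.g. int(1) -- exact numbers can vary, since handles are reused
var_dump(spl_object_id($foo2)); // e.g. int(2)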
What you’re trying to do, is actually Aspect-Oriented Programming (AOP).
There are at least a couple of frameworks available for AOP in PHP at this point:
- seasar (formerly PHPaspect) is a larger framework integrating with Eclipse – the screenshot shows you a little code snippet that answers your question, weaving some code around a particular new statement throughout a project.
- php-aop is a lightweight framework for AOP.
- typo3 has an AOP framework built in.
This may be overkill for your needs, but you may find that exploring the kind of thinking behind ideas like these will lead you down the rabbithole, and teach you new ways to think about software development in general – AOP is a powerful concept, allowing you to program in terms of strategies and concerns, or “aspects”.
Languages like PHP were designed to solve programming tasks – the concept of APO was designed to solve a programmer’s tasks. When normally you would need to think about how to ensure that a particular concern gets fulfilled every time in your codebase, you can think of this as simply an “aspect” of how you’re programming, implement it in those terms directly, and count on your concerns to be implemented every time.
It requires less discipline, and you can focus on solving the practical programming tasks rather than trying to architect your way through high-level structural code requirements.
Might be worth 5 minutes of your time, anyway 😉
Good luck!
Alix, your solution in the question was exactly what I needed, but it actually breaks when there's an object within an object: it returns the last # in the var_dump. I fixed this, made the regex faster, and put it in a nice little function.
/**
 * Get global object ID
 * From:
 * By: Alix Axel, non-greedy fix by Nate Ferrero
 */
function get_object_id(&$obj)
{
    if (!is_object($obj)) {
        return false;
    }
    ob_start();
    var_dump($obj); // object(foo)#INSTANCE_ID (0) { }
    preg_match('~^.+?#(\d+)~s', ob_get_clean(), $oid);
    return $oid[1];
}
This is a bit late to the party, but I didn't see this answer and just recently implemented something similar for a debugging class (to handle circular references). As you may or may not know, the normal printing functions such as var_export have limited or no circular reference support.
As noted, spl_object_hash is unique per instance; the problem I had with it is that it is ugly. It is not really suited to printing for my debugger, as it’s something like 000000006ac56bae0000000044fda36f, which can be hard to compare to, say, 000000006ac56bae0000000044fda35f. So, like the OP stated, what I wanted was just an instance number (I only really needed this on a per-class basis).
Therefore the simple solution for me was to do the following.
$class = get_class($input);
$hash  = spl_object_hash($input);
if (!isset($objInstances[$class])) {
    $objInstances[$class] = array();
}
$output = 'object(%s) #%s (%s){%s}'; // class, instance, prop_count, props
if (false === ($index = array_search($hash, $objInstances[$class]))) {
    $index = count($objInstances[$class]); // set init index for instance
    $objInstances[$class][] = $hash;
    // .... debugging code
    $output = 'debugging result.'; // sprintf
} else {
    $output = sprintf($output, $class, $index, 0, '#_CIRCULAR_REFRENCE_#');
}
Obviously the debugging code is way more complex, but the essential thing here is that by tracking the class and spl hash in $objInstances I can easily assign my own instance numbers outside of the class. This means I don’t need some ugly hack (one that affects the class’s code) to get a reference number, and I don’t need to display the “ugly” spl hash. Anyway, my full code for this outputs something like the following.
$obj = new TestObj();
$obj1 = new TestObj();
$obj->setProProp($obj1);
$obj1->setProProp($obj); // create a circular reference

object(TestObj) #0 (7){
    ["SOME_CONST":const] => string(10) 'some_const',
    ["SOMEOTHER_CONST":const] => string(16) 'some_other_const',
    ["SOME_STATIC":public static] => string(6) 'static',
    ["_PRO_STATIC":protected static] => string(10) 'pro_static',
    ["someProp":public] => string(8) 'someProp',
    ["_pro_prop":protected] => object(TestObj) #1 (7){
        ["SOME_CONST":const] => string(10) 'some_const',
        ["SOMEOTHER_CONST":const] => string(16) 'some_other_const',
        ["SOME_STATIC":public static] => string(6) 'static',
        ["_PRO_STATIC":protected static] => string(10) 'pro_static',
        ["someProp":public] => string(8) 'someProp',
        ["_pro_prop":protected] => object(TestObj) #0 (0){#_CIRCULAR_REFRENCE_#},
        ["_proProp":protected] => string(7) 'proProp'
    },
    ["_proProp":protected] => string(7) 'proProp'
}
As you can see, it’s now very easy to see where object(TestObj) #0 (0){#_CIRCULAR_REFRENCE_#} came from. I wanted to keep this debugging output as close as possible to the native var_dump, which outputs this:
object(TestObj)#7 (3) {
    ["someProp"]=> string(8) "someProp"
    ["_pro_prop":protected]=> object(TestObj)#10 (3) {
        ["someProp"]=> string(8) "someProp"
        ["_pro_prop":protected]=> *RECURSION*
        ["_proProp":protected]=> string(7) "proProp"
    }
    ["_proProp":protected]=> string(7) "proProp"
}
The difference here is that I needed the result returned as a string, not output to the browser. I also wanted to be able to show class constants, static properties, and private properties (with flags to change what the debugger outputs, plus a depth limit). And I wanted a bit more information about what the circular reference was, instead of just *RECURSION*, which doesn’t tell me anything.
Hope it helps someone in the future. | http://w3cgeek.com/get-instance-id-of-an-object-in-php.html | CC-MAIN-2018-30 | en | refinedweb |
Logging messages
When building log messages out of arbitrary objects, consider using Objects.toString(Object).
That method has the advantage of being null-friendly.
import java.util.Objects;

public final class Util {

    /**
     * Surround the result of <tt>Objects.toString(aObject)</tt> with single quotes.
     */
    public static String quote(Object aObject) {
        return "'" + Objects.toString(aObject) + "'";
    }
}
| http://javapractices.com/topic/TopicAction.do;jsessionid=C151B8B556C7200C5DDB4F7134886F2B?Id=204 | CC-MAIN-2018-30 | en | refinedweb |
The strstr() function searches for the given string within the specified main string and returns a pointer to the first occurrence of that string.
C strstr() function declaration
char *strstr(const char *str, const char *searchString)
str – The string to be searched.
searchString – The string that we need to search in string str
Return value of strstr()
This function returns a pointer to the first occurrence of the given string, which means that if we print the return value, it displays the part of the main string starting from the given string through to the end of the main string. If the string is not found, strstr() returns NULL.
Example: strstr() function in C
#include <stdio.h>
#include <string.h>

int main()
{
    const char str[20] = "Hello, how are you?";
    const char searchString[10] = "you";
    char *result;

    /* This function returns the pointer of the first occurrence
     * of the given string (i.e. searchString)
     */
    result = strstr(str, searchString);
    printf("The substring starting from the given string: %s", result);
    return 0;
}
Output:
The substring starting from the given string: you?
As you can see, we are searching for the string “you” in the string “Hello, how are you?” using the function strstr(). Since the function returned a pointer to the first occurrence of “you”, the substring of str starting from “you” has been printed as output. | https://beginnersbook.com/2017/11/c-strstr-function/ | CC-MAIN-2018-30 | en | refinedweb |
A python framework for getting useful stuff out of HAR files
Project description
A Python Framework For Using HAR Files To Analyze Web Pages.
Overview
The haralyzer module contains two classes for analyzing web pages based on a HAR file. HarParser() represents a full file (which might have multiple pages), and HarPage() represents a single page from said file.
HarParser has a couple of helpful methods for analyzing single entries from a HAR file, but most of the pertinent functions are inside of the page object.
haralyzer was designed to be easy to use, but you can also access more powerful functions directly.
Quick Intro
HarParser
The HarParser takes a single argument of a dict representing the JSON of a full HAR file. It has the same properties as the HAR file, EXCEPT that each page in HarParser.pages is a HarPage object:
import json
from haralyzer import HarParser, HarPage

with open('har_data.har', 'r') as f:
    har_parser = HarParser(json.loads(f.read()))

print har_parser.browser
# {u'name': u'Firefox', u'version': u'25.0.1'}

print har_parser.hostname
# 'humanssuck.net'

for page in har_parser.pages:
    assert isinstance(page, HarPage)  # True for each
HarPage
The HarPage object contains most of the goods you need to easily analyze a page. It has helper methods that are accessible, but most of the data you need is in properties for easy access. You can create a HarPage object directly by giving it the page ID (yes, I know it is stupid, it’s just how HAR is organized), and either a HarParser with har_parser=parser, or a dict representing the JSON of a full HAR file (see example above) with har_data=har_data:
import json
from haralyzer import HarPage

with open('har_data.har', 'r') as f:
    har_page = HarPage('page_3', har_data=json.loads(f.read()))

### GET BASIC INFO ###
har_page.hostname
# 'humanssuck.net'
har_page.url
# 'http://humanssuck.net/'

### WORK WITH LOAD TIMES (all load times are in ms) ###
# Get image load time in milliseconds as rendered by the browser
har_page.image_load_time
# prints 713
# We could do this with 'css', 'js', 'html', 'audio', or 'video'

### WORK WITH SIZES (all sizes are in bytes) ###
# Get the total page size (with all assets)
har_page.page_size
# prints 2423765

# Get the total image size
har_page.image_size
# prints 733488
# We could do this with 'css', 'js', 'html', 'audio', or 'video'

# Get the transferred sizes (works only with HAR files generated with Chrome)
har_page.page_size_trans
har_page.image_size_trans
har_page.css_size_trans
har_page.text_size_trans
har_page.js_size_trans
har_page.audio_size_trans
har_page.video_size_trans
MultiHarParser
The MultiHarParser takes a list of dicts, each of which represents the JSON of a full HAR file. The concept here is that you can provide multiple HAR files of the same page (representing multiple test runs) and the MultiHarParser will provide aggregate results for load times:
import json
from haralyzer import MultiHarParser

test_runs = []
with open('har_data1.har', 'r') as f1:
    test_runs.append(json.loads(f1.read()))
with open('har_data2.har', 'r') as f2:
    test_runs.append(json.loads(f2.read()))

multi_har_parser = MultiHarParser(har_data=test_runs)

# Get the mean for the time to first byte of all runs in MS
print multi_har_parser.time_to_first_byte
# 70

# Get the total page load time mean for all runs in MS
print multi_har_parser.load_time
# 150

# Get the javascript load time mean for all runs in MS
print multi_har_parser.js_load_time
# 50

# You can get the standard deviation for any of these as well
# Let's get the standard deviation for javascript load time
print multi_har_parser.get_stdev('js')
# 5
# We can also do that with 'page' or 'ttfb' (time to first byte)
print multi_har_parser.get_stdev('page')
# 11
print multi_har_parser.get_stdev('ttfb')
# 10

### DECIMAL PRECISION ###
# You will notice that all of the results above are whole numbers. That is
# because the default decimal precision for the multi parser is 0. However,
# you can pass whatever you want into the constructor to control this.
multi_har_parser = MultiHarParser(har_data=test_runs, decimal_precision=2)
print multi_har_parser.time_to_first_byte
# 70.15
Advanced Usage
HarPage includes a lot of helpful properties, but they are all easily produced using the public methods of HarParser and HarPage:
import json
from haralyzer import HarPage

with open('har_data.har', 'r') as f:
    har_page = HarPage('page_3', har_data=json.loads(f.read()))

### ACCESSING FILES ###
# You can get a JSON representation of all assets using HarPage.entries
for entry in har_page.entries:
    if entry['startedDateTime'] == 'whatever I expect':
        ... do stuff ...

# It also has methods for filtering assets.
# Get a collection of entries that were images in the 2XX status code range:
entries = har_page.filter_entries(content_type='image.*', status_code='2.*')
# This method can filter by:
# * content_type ('application/json' for example)
# * status_code ('200' for example)
# * request_type ('GET' for example)
# * http_version ('HTTP/1.1' for example)
# It will use a regex by default, but you can also force a literal string
# match by passing regex=False

# Get the size of the collection we just made:
collection_size = har_page.get_total_size(entries)

# We can also access files by type with a property:
for js_file in har_page.js_files:
    ... do stuff ...

### GETTING LOAD TIMES ###
# Get the BROWSER load time for all images in the 2XX status code range:
load_time = har_page.get_load_time(content_type='image.*', status_code='2.*')

# Get the TOTAL load time for all images in the 2XX status code range:
load_time = har_page.get_load_time(content_type='image.*', status_code='2.*', async=False)
This could potentially be out of date, so please check out the sphinx docs.
More…. Advanced Usage
All of the HarPage methods above leverage stuff from the HarParser, some of which can be useful for more complex operations. They either operate on a single entry (from a HarPage) or a list of entries:
import json
from haralyzer import HarParser

with open('har_data.har', 'r') as f:
    har_parser = HarParser(json.loads(f.read()))

for page in har_parser.pages:
    for entry in page.entries:
        ### MATCH HEADERS ###
        if har_parser.match_headers(entry, 'Content-Type', 'image.*'):
            print 'This would appear to be an image'
        ### MATCH REQUEST TYPE ###
        if har_parser.match_request_type(entry, 'GET'):
            print 'This is a GET request'
        ### MATCH STATUS CODE ###
        if har_parser.match_status_code(entry, '2.*'):
            print 'Looks like all is well in the world'
Asset Timelines
The last helper function of HarParser requires its own section, because it is odd, but it can be helpful, especially for creating charts and reports.
It can create an asset timeline, which gives you back a dict where each key is a datetime object, and the value is a list of assets that were loading at that time. Each value of the list is a dict representing an entry from a page.
It takes a list of entries to analyze, so it assumes that you have already filtered the entries you want to know about:
import json
from haralyzer import HarParser

with open('har_data.har', 'r') as f:
    har_parser = HarParser(json.loads(f.read()))

### CREATE A TIMELINE OF ALL THE ENTRIES ###
entries = []
for page in har_parser.pages:
    for entry in page.entries:
        entries.append(entry)

timeline = har_parser.create_asset_timeline(entries)

for key, value in timeline.items():
    print(type(key))
    # <type 'datetime.datetime'>
    print(key)
    # 2015-02-21 19:15:41.450000-08:00
    print(type(value))
    # <type 'list'>
    print(value)
    # Each entry in the list is an asset from the page
    # [{u'serverIPAddress': u'157.166.249.67', u'cache': {}, u'startedDateTime': u'2015-02-21T19:15:40.351-08:00', u'pageref': u'page_3', u'request': {u'cookies':............................
With this, you can examine the timeline for any number of assets. Since the key is a datetime object, this is a heavy operation. We could always change this in the future, but for now, limit the assets you give this method to only what you need to examine.
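As one small illustration of what the timeline makes easy (this example is not from the package docs), you can find the busiest instant, i.e. the moment with the most assets in flight, by reusing the timeline dict built above:

peak = max(timeline, key=lambda t: len(timeline[t]))
print('%d assets in flight at %s' % (len(timeline[peak]), peak))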
| https://pypi.org/project/haralyzer/ | CC-MAIN-2018-30 | en | refinedweb |
Creating Channel Providers for effbot.exe/effnews
January 8, 2003 | Fredrik Lundh
Release 0.9 of the effnews RSS reader adds support for pluggable channel providers. Providers are simply Python scripts that process data from an external source, and present it to the application as if it were an RSS file.
Using a Channel Provider
The provider mechanism is used to control how data is read from a given URL. Each provider is associated with one or more URLs. For example, the Daily Python-URL provider is associated with the URL.
To use an installed provider, just drag the source URI to the EffNews window as usual. EffNews will now use the provider to fetch data, instead of the standard RSS reader.
Writing Simple Channel Providers
Providers should be installed in the c:/effbot.exe/effnews directory, and must use the .provider extension. The actual filename doesn’t matter; the application loads all provider scripts, and uses data in the script to figure out what provider to use for a given URL.
The current version only supports the simpleprovider protocol. This protocol uses the standard HTTP transport to read data from the source, and passes the data to a parser function which turns it into an RSS-style channel header and a list of RSS-style items. To implement a channel provider, create a Python module which defines two names:
- urls
This variable should contain a list of URLs associated with this provider.
- simpleprovider
This function is used to parse the data. It is called with two arguments: a context object, and the text to parse. The context object has a single public method, called push. This method is used to add RSS-style channel and item elements to the internal database.
Example:
import re urls = [ "" ] pattern = r"..." def simpleprovider(context, text): context.push("channel", title="my channel", link=urls[0]) for title, body in re.findall(pattern, text): context.push("item", title=title, description=body)
The push method takes an RSS element name (“channel” or “item”), and one or more keyword arguments which provide RSS subelements.
The channel element can have title, link, and description subelements. All subelements are optional.
The item element can have title, link, and description subelements. You must specify at least one of the title or description elements. The link element is always optional.
Element values should be either ASCII strings, or Unicode strings. Do not use encoded 8-bit strings. Embedded HTML is allowed, but should be avoided.
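Putting these rules together, a complete (if hypothetical) provider might look like the sketch below. The source URL and the regular expression are placeholders, not part of the original article, and note how clearly bad input is rejected by raising an exception:

import re

urls = ["http://www.example.org/news"]

def simpleprovider(context, text):
    if "<h2>" not in text:
        # unexpected data: refuse to push anything
        raise ValueError("unexpected data from %s" % urls[0])
    context.push("channel", title="example channel", link=urls[0])
    for title in re.findall(r"<h2>(.*?)</h2>", text):
        context.push("item", title=title, link=urls[0])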
If the provider cannot parse the input data, it should raise an appropriate Python exception. | http://effbot.org/zone/effnews-provider.htm | crawl-002 | en | refinedweb |
This article describes the making of an external tools add-in for PragmaSQL Editor using the ProcessStartInfo and Process classes from the System.Diagnostics namespace.
Download Source Code
Download PragmaSQL External Tools Add-In Source
Introduction
The PragmaSQL T-SQL editor has very extensive add-in support. The External Tools add-in presented in this article has two goals:
1. Serve as a PragmaSQL add-in development example.
2. Provide a very common feature included in all development IDEs and editors.
IC#Code Add-In Architecture Overview
PragmaSQL makes use of IC#Code's Add-In architecture. This architecture provides fantastic features that enable us to develop plugins for our applications easily, thus making our applications extendable. The IC#Code Add-In architecture basically provides:
1 - Simple and neat XML add-in definition via .addin files
2 - Loading of add-in assemblies
3 - Exposure of add-in functionality through menus and toolbars of the host application
4 - Many utility services, like MenuService and MessageService
For details about the architecture please refer to SODA - SharpDevelop Open Development Architecture by Mike Krueger
PragmaSQL Services Overview
PragmaSQL exposes many of the built-in host features to add-in developers through a core library.
Exposed features include
1 - Host Options
2 - Editor services: access to the T-SQL script editor and text editor
3 - Object Explorer service: access to the database object explorer
4 - Project Explorer service
5 - Shared scripts and code snippets
6 - Internal web browser
7 - Text diff service
8 - Code completion lists
9 - Application messages service
The entry point to all these services is HostServicesSingleton; as its name implies, this class is a singleton.
HostServicesSingleton Usage Example From PragmaSQL.ExternalTools
The example code provided below shows:
1) How we can use the Application Message Service to print messages into the host Application Messages window.
2) How to evaluate macros with the HostServices EvalMacro function and prepare tool arguments.
private void process_Exited(object sender, EventArgs e)
{
  Process p = sender as Process;
  if (p == null)
    return;

  long handle = p.Handle.ToInt64();
  if (!_runningToolDefs.ContainsKey(handle))
    return;

  try
  {
    ExternalToolDef def = _runningToolDefs[handle];

    // Here we clear the application messages window
    if (chkClearOutput.Checked)
      HostServicesSingleton.HostServices.MsgService.ClearMessages();

    bool shallShow = false;
    while (p.StandardOutput.Peek() > -1)
    {
      string info = p.StandardOutput.ReadLine();
      if (!String.IsNullOrEmpty(info))
      {
        // Print info message to the Application Messages window
        HostServicesSingleton.HostServices.MsgService.InfoMsg(info, "Tool : " + def.Title, String.Empty, String.Empty);
        shallShow = true;
      }
    }

    while (p.StandardError.Peek() > -1)
    {
      string error = p.StandardError.ReadLine();
      if (!String.IsNullOrEmpty(error))
      {
        // Print error message to the Application Messages window
        HostServicesSingleton.HostServices.MsgService.ErrorMsg(error, "Tool : " + def.Title, String.Empty, String.Empty);
        shallShow = true;
      }
    }

    if (shallShow == true)
      HostServicesSingleton.HostServices.MsgService.ShowMessages();
  }
  finally
  {
    _runningToolDefs.Remove(handle);
  }
}

private void RenderExternalToolDef(ExternalToolDef exDef)
{
  tbCmd.Text = String.Empty;
  tbArgs.Text = String.Empty;
  tbWorkingDir.Text = String.Empty;

  if (exDef == null)
    return;

  tbCmd.Text = exDef.Command;
  // Here we prepare arguments by evaluating macros
  tbArgs.Text = HostServicesSingleton.HostServices.EvalMacro(exDef.Args);
  tbWorkingDir.Text = exDef.WorkingDir;
}
NOTE: In order to develop PragmaSQL add-ins, you need to get PragmaSQL.Core.dll. All host functionality and many utility classes are hosted in this assembly. Download from here
PragmaSQL.ExternalTools
Add-In Definition
<AddIn name = "External Tools AddIn for PragmaSQL"
author = "Ali Özgür"
description = "Enables you to define external tools for PragmaSQL">
<Manifest>
<Identity name = "PragmaSQL.ExternalTools"/>
</Manifest>
<Runtime>
<Import assembly="PragmaSQL.ExternalTools.dll"/>
</Runtime>
<Path name = "/Workspace/ToolsMenu">
<MenuItem id = "ExtTools.Configure"
label = "External Tools..."
class ="PragmaSQL.ExternalTools.ConfigureTools"/>
<MenuItem id = "ExtTools.Run"
label = "Run External Tool"
shortcut = "Control|Shift|E"
class ="PragmaSQL.ExternalTools.RunExternalTool"/>
</Path>
</AddIn>
In the add-in definition file above, we provide the description of our add-in and how it integrates into PragmaSQL. The most important part of this add-in definition is the Path tag: we provide the predefined host path along with the MenuItems we want created for our add-in.
Another very important tag is Class. The IC#Code Add-In architecture makes use of the Command pattern: we define the commands associated with the specified menu/toolbar items through this tag, using command classes. In the above example, you can see that for the External Tools... menu item we want the PragmaSQL.ExternalTools.ConfigureTools command to be invoked. The ConfigureTools command class inherits from AbstractMenuCommand and has a Run() method.
NOTE: Predefined host paths can be found in the Base.addin file that comes with the PragmaSQL installation.
ExternalTools Add-In Specific Classes
· ExternalToolDef: Serializable class used to hold external tool configuration data for a single tool.
· ExternalToolsCfg: Static class used to load and save serialized tool configuration data from a file into a static IList<ExternalToolDef> instance. Tool configuration items are accessible through the public static Current property.
· ConfigureTools: Command class inherited from AbstractMenuCommand. This command is used to show the tool configuration form.
· RunExternalTool: Command class inherited from AbstractMenuCommand. This command is used to show the run-tool form.
· ConfigForm: The tool configuration form. The form is opened with the ConfigureExternalTools public static method. This method returns one of these DialogResult enumeration values:
OK: User pressed the OK button and changes to the tool configuration were applied
Ignore: User pressed the OK button and no changes to the tool configuration exist
Cancel: User pressed the Cancel button.
· RunToolForm: The form that lists external tool definitions and is used to run a selected tool.
Running a tool by using Process and ProcessStartInfo
The .NET Framework provides the ProcessStartInfo and Process classes under the System.Diagnostics namespace, which can be used to run an external process from our code. The ExternalTools add-in makes use of these classes from RunToolForm.
RunTool function looks like:
private void RunTool()
{
  if (CurrentDef == null)
    return;

  ProcessStartInfo psi = new System.Diagnostics.ProcessStartInfo();
  tbArgs.Text = HostServicesSingleton.HostServices.EvalMacro(CurrentDef.Args);

  psi.FileName = CurrentDef.Command;
  psi.Arguments = tbArgs.Text;
  psi.WorkingDirectory = CurrentDef.WorkingDir;
  psi.RedirectStandardOutput = CurrentDef.UseOuput;
  psi.RedirectStandardError = CurrentDef.UseOuput;
  psi.CreateNoWindow = CurrentDef.UseOuput;
  psi.UseShellExecute = !CurrentDef.UseOuput;

  Process p = new Process();
  p.EnableRaisingEvents = CurrentDef.UseOuput;
  if (CurrentDef.UseOuput)
    p.Exited += new EventHandler(p_Exited);

  p.StartInfo = psi;
  p.Start();

  _runningToolDefs.Add(p.Handle.ToInt64(), CurrentDef);
}
As you can see there is nothing special about starting a process. We simply define the fileName, arguments and working directory of the process with a ProcessStartInfo class and then create a Process instance using this ProcessStartInfo.
Interesting points in this implementation are
1) Standard output and error redirecting.
You can redirect standard output/error from a process anywhere you like, provided that RedirectStandardOutput/RedirectStandardError are set to true and UseShellExecute is set to false, exactly as RunTool does above when the tool is configured to use output.
2) Synchronous vs Asynchronous Output Reading
The Process class supports both synchronous and asynchronous reading of redirected output/error. However, choosing between the two methods makes a big difference in the implementation. In our implementation it may look as if we had chosen synchronous output reading, but that is not true. Take a look at the RunTool method again: we attach to the Exited event of the Process instance with p.Exited += new EventHandler(p_Exited);. Having implemented the handler with synchronous reads in mind, the p_Exited method threw a cross-thread call exception. This exception indicated two problems: the Exited event is raised on a background thread rather than the UI thread, and the PragmaSQL Message Service was not yet safe to call from a non-UI thread.
Simply, I fixed the PragmaSQL Message Service to support cross-thread calls by adding some Invoke()-related code, and that was it.
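(The same trade-off exists outside .NET. For readers more at home in a scripting language, here is a rough Python sketch of the two options, synchronous collection versus pumping the pipe on a worker thread; the command name is hypothetical and this code is not part of the add-in:)

import subprocess
import threading

# Synchronous: block until the tool exits, then read everything at once.
p = subprocess.Popen(["mytool", "--arg"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()

# Asynchronous: read on a worker thread so the UI thread stays responsive.
def pump(pipe):
    for line in iter(pipe.readline, b""):
        print(line)  # a real app would marshal this back to the UI thread

p = subprocess.Popen(["mytool", "--arg"], stdout=subprocess.PIPE)
threading.Thread(target=pump, args=(p.stdout,)).start()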
Conclusion
External tool support was a general requirement for PragmaSQL. In this article we covered IC#Code Add-In support, how PragmaSQL makes use of the IC#Code Add-In architecture, what services PragmaSQL exposes to provide a pluggable/extendable application, and some initial insight into PragmaSQL add-in development, with source code examples. | http://www.c-sharpcorner.com/UploadFile/deadtone/pragmasql_externaltools12062007045612AM/pragmasql_externaltools.aspx | crawl-002 | en | refinedweb |
27 April 2012 09:16 [Source: ICIS news]
TOKYO (ICIS)--Japanese chemical producer Showa Denko said on Friday its first-quarter net profit doubled to yen (Y) 5.33bn ($65.8m) partly on the back of improved performance in the electronics segment as chemical earnings slumped.
This compares with a net profit of Y2.58bn in the same period a year earlier.
Showa Denko’s sales rose by 6% year on year to Y181.4bn in the first three months of 2012, while operating profit grew by 2% to Y9.54bn, the company said in a statement.
Operating profit at the firm’s petrochemical segment fell by 97% to Y73m in the first quarter, while sales declined by 20% to Y47.1bn, it said.
Earnings at the petrochemical segment were weighed down by reduced ethylene output at its Oita plant.
($1 = Y81) | http://www.icis.com/Articles/2012/04/27/9554260/japans-showa-denkos-q1-net-profit-doubles-to-66m.html | CC-MAIN-2015-22 | en | refinedweb |
08 August 2012 09:20 [Source: ICIS news]
MELBOURNE (ICIS)--India's Somaiya has restored operations at its two ethyl acetate (etac) plants in Maharashtra, a company source said.
The company’s 30,000 tonne/year unit at Mahad and 100,000 tonne/year plant at Sakarwadi have been operating at about full capacity from early this week, the source said.
The company was running its two plants at about 50% of capacity in July.
Domestic etac prices in
Other etac | http://www.icis.com/Articles/2012/08/08/9584913/indias-somaiya-restores-maharashtra-etac-plants-op.html | CC-MAIN-2015-22 | en | refinedweb |
in reply to
Re: Convert nested list HTML TOC to JS Tree ?
in thread Convert nested list HTML TOC to JS Tree ?
Thanks for the links. I also found a use.perl.org entry reviewing Pod::POM::Web with a screenshot link. And then found several other solutions on CPAN. I'd forgotten about the Pod::POM namespace. | http://www.perlmonks.org/?node_id=619187 | CC-MAIN-2015-22 | en | refinedweb |
Yeah, the title is cryptic.
The problem (we think) is in here
my @disk_space = qx{df -k .};
map { $_ =~ s/ +/ /g } @disk_space;
It works fine to check the available disk space on *nix servers. But occasionally - just occasionally - it brings up command windows when executed on servers (Apache or IIS) running XP or Win 2003 and stops the script the code is in from working properly.
I read that there are better ways of writing qx{df -k .}, and having a Win32-compatible method of checking disk space would be nice.
The current workaround is simply to not run this section of code when the OS is MSWin32. But there has to be a better way.
(I didn't write this code, I'm just in the process of debugging it.)
Dandelio:
I'd suggest using Win32::DriveInfo for windows machines.
Update: Fixed cpan link.
...roboticus
When your only tool is a hammer, all problems look like your thumb.
I'd use fsutil on windows:
C:\test>fsutil volume diskfree .
Total # of free bytes : 89573777408
Total # of bytes : 627247673344
Total # of avail free bytes : 89573777408
RIP Neil Armstrong
So, what I think is being said is that the script this is in needs an OS checker so it can load a Win32 specific utility to do the job?
The script needs to be cross-platform. 95%+ of the installations are on *nix. Maybe 5% of the Win installs come up with this problem. So it's a pretty rare issue.
So, what I think is being said is that the script this is in needs an OS checker so it can load a Win32 specific utility to do the job?
I have a df utility on my windows system -- part of the UnxUtils package though it segfaults on my 64-bit OS -- but generally most windows systems will not have it, and expecting users to find and install one is naive.
Far simpler I think to use something like:
sub freespace {
if( $^O eq 'MSWin32' ) {
`fsutil volume diskfree .` =~ m[avail free bytes : (\d+)];
return ( $1 // die $! ) / 1024;
}
elsif( $^O eq ... ) {
...
}
else {
`df -k .` =~ m[...];
return $1 // die $!
}
}
...
my $free = freespace();
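(An aside for anyone hitting the same portability problem from Python rather than Perl: the standard library has since grown a genuinely cross-platform call. Minimal sketch below; requires Python 3.3+ and is not from the thread.)

import shutil

usage = shutil.disk_usage(".")          # works on Windows and *nix alike
print("free kB:", usage.free // 1024)   # comparable to df -k / fsutil output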
Inheritance & Polymorphism
John Lockheart
Ranch Hand
Joined: Oct 13, 2006
Posts: 115
posted
Jan 24, 2007 11:14:00
I created the following program, but am having difficulties, most notably with the compareTo() method I have to implement, and also when I have to call a superclass. I commented the problems throughout the program. Any help would be great! (It's kind of long.)
import javax.swing.JOptionPane;

public class A2Q1 {
  public static void main (String args[]){
    // An array of Threats
    Threat[] threats = new Threat[5];
    System.out.println("Threat assessment : ");
    // Construct objects
    threats[0] = new Phishing("Credit card information", "CanadaTrust");
    threats[1] = new Hoax("Wanna Win an Ipod?");
    threats[2] = new Virus("Trojan", "e-mail reproduction");
    threats[3] = new Spyware("WeatherFinder");
    threats[4] = new Spyware("WeatherFinder");
    // Processing
    // This one should be negative
    System.out.println("0 and 1: " + threats[0].compareTo(threats[1]));
    // This one should be positive
    System.out.println("2 and 1: " + threats[2].compareTo(threats[1]));
    // This one should be negative
    System.out.println("2 and 3: " + threats[2].compareTo(threats[3]));
    // This one should be zero
    System.out.println("3 and 4: " + threats[3].compareTo(threats[4]));
    ((Malware)threats[2]).increaseSeverity();
    // This one should be positive
    // (since 2 now has greater severity than 3)
    System.out.println("2 and 3: " + threats[2].compareTo(threats[3]));
    // Output
    // Polymorphism at work!
    for (int i = 0; i < threats.length; i++) {
      System.out.println(threats[i].toString());
    }
  }
}

abstract class Threat{
  public Threat(String new_name){
    name = new_name;
  }
  public String getName(){
    return name;
  }
  public String toString(){
    return "Threat : " + name; //format properly
  }
  public int compareTo(Object other){ //only be overidden in malware class *Causes problems when compiled*
    final int less = -1;
    final int equals = 0;
    final int more = 1;
    int compare;
    if(this == other){
      return equals;
    }
    compare = this.name.compareTo(other.name); //cannot find symbol ERROR!
    if (compare != equals){
      return compare;
    }
  }
  private String name;
}

class Phishing extends Threat{
  public Phishing(String new_name, String new_target){
    super(new_name);
    target = new_target;
  }
  public String toString(){
    return super.toString() + ", " + target;
  }
  public String getTarget(){
    return target;
  }
  private String target;
}

abstract class Malware extends Threat{
  public Malware(String new_name, int new_severity){
    super(new_name);
    severity = new_severity;
  }
  public int getSeverity(){
    return severity;
  }
  public void increaseSeverity(){
    severity++;
  }
  public String toString(){
    return super.toString() + ", " + severity;
  }
  public int compareTo(Object other){ //Comparison, if both are malware compares severity, if severity matches compares names...
    //copied one from threat trying to find out where to begin..?
    final int less = -1;
    final int equals = 0;
    final int more = 1;
    int compare;
    //need a base case incase they do not equal
    if(this == other){
      //if both are malware, compare severity
      //must compare severity here...
      compare = this.severity.compareTo(other.severity);
      if(compare != equals){ //severity doesn't match
        return compare
      }
      return this.name.compareTo(other.name);
    }
    return compare
  }
  private int severity = 1; //supposed to be initialized to 1
}

class Virus extends Malware{
  //public Virus(int new_severity, String new_damage){ *what i had before*
  public Virus(String new_name, String new_damage){
    //super(new_damage, new_severity); *what i had before*
    super(new_name, 1); //where to get the integer from for super class? set it as any value just to compile program
    damage = new_damage;
  }
  public String toString(){
    return super.toString() + ", " + damage;
  }
  public String getDamage(){
    return damage;
  }
  private String damage;
}

class Spyware extends Malware{
  public Spyware(String new_name){
    super(new_name, 1); //where to get the integer from for super class? set it as any value just to compile program
  }
  public String toString(){
    return super.toString();
  }
}

class Hoax extends Threat{
  public Hoax(String new_name){
    super(new_name);
  }
  public String toString(){
    return super.toString();
  }
}
Ernest Friedman-Hill
author and iconoclast
Marshal
Joined: Jul 08, 2003
Posts: 24189
posted
Jan 24, 2007 11:31:00
In your compareTo() method, the "can't find symbol" error happens because "other", correctly, is an Object reference, and Object doesn't have a "name" member. Since "name" is a member of Threat, you need to cast "other" to type Threat before "name" is visible:
compare = this.name.compareTo(((Threat) other).name);
Now, there's going to be another problem in this method: the compiler is going to complain that if "compare" is zero, the method doesn't return a value. Regardless of whether you think this is possible or not, the compiler will think it is, so you have to return a value no matter what. The easiest thing to do would just be to remove that final "if" and return "compare" directly.
If it were me, I'd probably write compareTo() like this:
public int compareTo(Object other) {
    return name.compareTo(((Threat) other).name);
}
Notice that by making it shorter and removing unnecessary logic, it becomes so clear and readable that comments aren't even needed. Also note that if "other" isn't a Threat, then you'll get a ClassCastException. The Javadocs for Comparable explicitly say that this is OK, since it's not really meaningful to impose an ordering on Threats vs. non-Threats.
Now, as to your other questions about the constructor parameters: it depends on what you want to do, really. I've certainly seen plenty of code where a subclass constructor provides default parameters to a superclass constructor. Your default values may well be hard-coded, here based on some notion of how severe a thread "malware" is, for example. So you probably don't want to use "1" everywhere; you probably want to use values that reflect the actual relative threats.
[Jess in Action]
[AskingGoodQuestions]
John Lockheart
Ranch Hand
Joined: Oct 13, 2006
Posts: 115
posted
Jan 25, 2007 13:47:00
Thanks, that was very helpful, although I'm still confused about how to make it work in the Malware class. In the Threat class the compareTo() method compares by names. In the Malware class it's supposed to be overridden: if both objects are Malware, compare them by severity, then by name (last) if severity matches. Any thoughts on that situation? As for the severity when calling the superclass, I just initialized it to 1 to make sure the program works; I think I'll leave it for now until I solve the compareTo() problem.
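(Not from the thread, but the shape of the logic being asked for, comparing the most significant key first and falling back to the next on a tie, is compact enough to sketch; Python is used here purely for brevity, and the attribute names mirror the Java fields above:)

def compare_malware(a, b):
    if a.severity != b.severity:
        return a.severity - b.severity            # order by severity first
    return (a.name > b.name) - (a.name < b.name)  # tie: fall back to name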
| http://www.coderanch.com/t/405944/java/java/Inheritance-Polymorphism | CC-MAIN-2015-22 | en | refinedweb |
Introduction
After formulas are written in Excel, the computed values are displayed in the specified cells. It is possible for formulas to be used wrongly, or to get lost because there are too many of them. Therefore, we sometimes need to read the formula of a cell to make sure it is right. Well then, how do we read an Excel formula?
Read Formula in Microsoft Excel
On the Formulas tab in Excel 2007, we can find a button named Show Formulas. Select the cells whose formulas you want to read, and then click the button. After that, we can see the formulas instead of the values.
Read Excel Formulas via Spire.XLS
Spire.XLS presents an easy way to read a formula in the worksheet: you specify a cell and read its formula. In the demo, we load a workbook from a file named "ReadFormulaSample.xls", which has a formula whose result is shown in sheet["C5"], and we read the formula from that cell. In this example, in order to view it, we write the formula text into sheet["D5"].
The following code displays the method to read formulas for cells with C#/VB.NET:
using Spire.Xls;

namespace ReadFormula
{
    class Program
    {
        static void Main(string[] args)
        {
            //Create a new workbook
            Workbook workbook = new Workbook();
            //Load a workbook from file
            workbook.LoadFromFile("ReadFormulaSample.xls");
            //Initialize the worksheet
            Worksheet sheet = workbook.Worksheets[0];
            //Read the formula
            sheet.Range["D5"].Text = sheet.Range["C5"].Formula;
            //Save the file
            workbook.SaveToFile("Sample.xls");
            //Launch the file
            System.Diagnostics.Process.Start("Sample.xls");
        }
    }
}
Imports Spire.Xls

Module Module1
    Sub Main()
        'Create a new workbook
        Dim workbook As New Workbook()
        'Load a workbook from file
        workbook.LoadFromFile("ReadFormulaSample.xls")
        'Initialize the worksheet
        Dim sheet As Worksheet = workbook.Worksheets(0)
        'Read the formula
        sheet.Range("D5").Text = sheet.Range("C5").Formula
        'Save the file
        workbook.SaveToFile("Sample.xls")
        'Launch the file
        System.Diagnostics.Process.Start("Sample.xls")
    End Sub
End Module
After running the demo, you will find the formula displayed in the cell you specified:
| http://www.e-iceblue.com/Knowledgebase/Spire.XLS/Program-Guide/How-to-Read-Excel-Formulas.html | CC-MAIN-2015-22 | en | refinedweb |
2.6.32-stable review patch. If anyone has any objections, please let us know.------------------From: Alexey Dobriyan <adobriyan@gmail.com>commit 13ccdfc2af03e09e60791f7d4bc4ccf53398af7c upstream.Expectation hashtable size was simply glued to a variable with no codeto rehash expectations, so it was a bug to allow writing to it.Make "expect_hashsize" readonly.Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>Signed-off-by: Patrick McHardy <kaber@trash.net>Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>--- net/netfilter/nf_conntrack_expect.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)--- a/net/netfilter/nf_conntrack_expect.c+++ b/net/netfilter/nf_conntrack_expect.c@@ -569,7 +569,7 @@ static void exp_proc_remove(struct net * #endif /* CONFIG_PROC_FS */ } -module_param_named(expect_hashsize, nf_ct_expect_hsize, uint, 0600);+module_param_named(expect_hashsize, nf_ct_expect_hsize, uint, 0400); int nf_conntrack_expect_init(struct net *net) { | https://lkml.org/lkml/2010/2/19/208 | CC-MAIN-2015-22 | en | refinedweb |
Accessing Custom Assemblies Through Expressions
Once you have created a custom assembly, made it available to Report Designer or the report server, added the appropriate security policy, and added a reference to your custom assembly in your report definition, you can access the members of the classes in your assembly using report expressions. To refer to custom code in an expression, you must call the member of a class within the assembly. How you do this depends on whether the method is static or instance-based.
Static members belong to the class or type itself and not to an instantiated object. These members can be accessed by directly calling them from the class. You should use static members to call custom functions in a report whenever possible, because static members perform best. To call a static member, you need to reference it as an expression that takes the form =Namespace.Class.Method.
To call static members
To call a static member, set your expression equal to the fully qualified name of the member, which includes the namespace, class name, and member name. The following example calls the ToGBP method, which converts the StandardCost field value from dollars to pounds sterling and displays it in a report:
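Such an expression might look like the following (ToGBP and the StandardCost field come from the text above; the CurrencyConversion namespace and DollarCurrencyConversion class names are illustrative placeholders, since the original sample expression is not shown here):

=CurrencyConversion.DollarCurrencyConversion.ToGBP(Fields!StandardCost.Value)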
Important Information Regarding Static Fields and Properties
Currently, all reports are executed in the same application domain. This means that reports with user-specific, static data expose this data to other instances of the same report. This condition might make it possible for the static data of one user to be available to all users currently running a particular report. For this reason, it is highly recommended that you not use static fields or properties in custom assemblies or in the Code element; instead, use instance fields or properties in your reports. Static methods can still be used, because they do not store state or data.
If your custom assembly contains instance members that you need to access in a report definition, you must add an instance name for your class to the report. You can add an instance name for a class using the Code tab of the Report Properties dialog. For more information about adding instances of classes to a report, see Custom Code and Assembly References in Expressions in Report Designer (SSRS).
To call an instance member, you need to reference it as an expression that takes the form =Code.InstanceName.Method.
To call instance members
To call an instance member of a custom assembly, you must reference the Code keyword followed by the instance name and the method. The following example calls an instance method ToEUR which converts the StandardCost field value from dollars to euros and displays it in a report: | https://msdn.microsoft.com/en-us/library/ms154507.aspx | CC-MAIN-2015-22 | en | refinedweb |
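Such an expression might look like the following (ToEUR comes from the text above; the instance name m_myConversion is an illustrative placeholder for whatever name you assigned on the Code tab):

=Code.m_myConversion.ToEUR(Fields!StandardCost.Value)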
Using Windows PowerShell with AD RMS
Published: July 8, 2009
Updated: October 22, 2009
Applies To: Windows Server 2008 R2, Windows Server 2008 R2 with SP1
Active Directory Rights Management Services (AD RMS) for the Windows Server® 2008 R2 operating system is information protection technology that works with AD RMS-enabled applications to help safeguard digital information from unauthorized use, both online and offline, and inside and outside of the firewall. In addition to such graphical user interface (GUI) tools as an installation wizard, a server-role page in Server Manager, and the AD RMS snap-in for Microsoft Management Console (MMC), you can use Windows PowerShell cmdlets to install, configure, and administer AD RMS. This guide provides information about using Windows PowerShell cmdlets to perform these tasks with AD RMS.
This guide explains how to use the Windows PowerShell cmdlets that enable you to install, configure, and administer the AD RMS server role on a computer running Windows Server 2008 R2. It introduces the Windows PowerShell modules that implement AD RMS-specific cmdlets, describes the namespace that these cmdlets work in, and also shows how to use general-purpose cmdlets, such as Set-Itemproperty, to manipulate items in these namespaces that represent AD RMS settings.
AD RMS cmdlets are implemented by two Windows PowerShell modules: the AD RMS deployment Windows PowerShell module (AdRmsInstall) and the AD RMS administrative Windows PowerShell module (AdRmsAdmin). To access cmdlets implemented by these modules, you must import their corresponding modules. To import the deployment module, at a Windows PowerShell prompt, type Import-Module AdRms. To import the administration module, type Import-Module AdRmsAdmin. You can also import these modules by importing all available Windows PowerShell modules.
This document does not provide overview or introductory information about AD RMS or Windows PowerShell. To use this document, you should have enough experience with AD RMS that you can install, configure, and administer it by using GUI tools. You should also have basic experience running Windows PowerShell cmdlets. For general information about AD RMS, see the AD RMS TechCenter. For information about Windows PowerShell, see Scripting with Windows PowerShell. | https://technet.microsoft.com/en-us/library/ee221065(d=printer,v=ws.10).aspx | CC-MAIN-2015-22 | en | refinedweb |
public interface DOMMetaFactory
Extension interface for OMMetaFactory implementations that support OMAbstractFactory.FEATURE_DOM.
Axiom implementations supporting this feature MUST conform to the Axiom API as well as the DOM API, and nodes created by the implementation MUST implement both the corresponding Axiom and DOM interfaces. In particular, elements created using Document.createElementNS(String, String) implement both APIs, as do attributes created using Document.createAttributeNS(String, String) that don't represent namespace declarations. Axiom doesn't use OMAttribute to represent namespace declarations, and OMNamespace instances representing namespace declarations are not expected to implement Attr.
The Axiom API is designed such that nodes are created using a factory (OMFactory or SOAPFactory) that is expected to be a singleton and stateless. On the other hand, in the DOM API, the Document instance plays the role of node factory, and each node (explicitly or implicitly) keeps a reference to the Document instance from which it was created (the owner document). To address this difference in a consistent way and to make it possible to use both the Axiom API and the DOM API on the same object model instance, the implementation MUST conform to the following rules:
- Nodes created using the Axiom API without a parent being specified get their own implicitly created owner document. This applies to the methods in OMFactory that don't have an OMContainer parameter or that are invoked with a null OMContainer, as well as to methods such as OMElement.cloneOMElement().
- Attempting to combine, through the DOM API, nodes that belong to different owner documents triggers a DOMException.WRONG_DOCUMENT_ERR error.
- A node that is detached from its parent gets a new owner document. This applies to:
  - OMNode.detach()
  - OMElement.removeAttribute(OMAttribute)
  - OMElement.setText(String) and OMElement.setText(QName) (in the case where the side effect of the invocation is to detach preexisting nodes)
  - OMElement.addAttribute(OMAttribute) and OMElement.addAttribute(String, String, OMNamespace) (in the case where the new attribute replaces an existing one, which will be removed from its owner)
Document instances created using the DocumentBuilderFactory and DOMImplementation APIs, as well as the Document instances implicitly created (as owner documents) by the Axiom API, will have as their OMFactory (as reported by OMInformationItem.getOMFactory()) the instance returned by OMMetaFactory.getOMFactory(). Any additional nodes created using the DOM API will inherit the OMFactory of the owner document.
The implementation SHOULD instantiate the implicitly created owner documents lazily (typically when explicitly requested using DOM's Node.getOwnerDocument() API) to avoid creating a large number of temporary Document instances when the Axiom API is used. Note however that this has no impact on the behavior visible to the application code.
DocumentBuilderFactory newDocumentBuilderFactory()
Create a new DocumentBuilderFactory. Since Axiom doesn't support non namespace aware processing, the returned factory is always configured with namespaceAware set to true (in contrast to the default settings used by DocumentBuilderFactory.newInstance()).

DOMImplementation getDOMImplementation()
Get the DOMImplementation instance.
Returns: the DOMImplementation instance | http://ws.apache.org/axiom/apidocs/org/apache/axiom/om/dom/DOMMetaFactory.html | CC-MAIN-2015-22 | en | refinedweb |
in reply to
Practical example of "Is Perl code maintainable"
There is "readable" and then there is "rambling and confusing"
I would argue that from a maintainability standpoint, your version may be too terse for a novice, while the original version may be too verbose and full of noise. Might i suggest something like...
sub file_mode {
my ($file) = @_;
-f $file or return -1;
return (stat(_))[2]; # reuse stat from -f
}
[download]
...as the appropriate middle ground.
(NOTE: technically the two versions you posted aren't equivalent, since yours only does one stat op and the original does two, but unless you expect the file perms to change after the "-f" test I'm guessing one stat is fine) | http://www.perlmonks.org/?node_id=632154 | CC-MAIN-2015-22 | en | refinedweb |
Default WSDL
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.
The default WSDL is returned when the argument string wsdl is passed to an HTTP SOAP endpoint that is running on an instance of SQL Server, for example, by appending ?wsdl to the endpoint URL.
For default WSDL documents, the parameter elements contain type mappings from the WSDL sqltypes:type mappings to SQL Server system data types. For information about these mappings, see Data Type Mappings in Native XML Web Services.
The default WSDL document generated by an endpoint specifies parameter type by referencing one of the types defined in Data Type Mappings or by referencing subtypes of those types, and then additionally adding restrictions. For all string and binary types, the default WSDL returned by the endpoint will include a subtype that includes the XSD maxLength facet for specifying number of bytes or characters.
The maxLength facet is not generated for variable-length types that are declared with the (max) specifier. For decimal and numeric data types, the default WSDL will generate a subtype that includes the XSD totalDigits and fractionDigits facets that will describe the precision and scale of the parameter.
Default WSDL instances include support for mapping SQL Server data types to common language runtime (CLR) types that are provided in the Microsoft .NET Framework 2.0. Because some built-in CLR data types do not align well with SQL Server data types, the System.Data.SqlTypes namespace is provided in the .NET Framework 2.0. This enables a managed type system that could specifically handle many of the facets of the SQL Server data type system that otherwise cannot be mapped within XSD types.
For the .NET Framework 2.0 release, the Web Services Definition Language Tool (Wsdl.exe) has built-in support for generating the appropriate SQL Server to CLR type mappings for default WSDL instances. This support is available whether you are using Wsdl.exe at the command prompt or by way of adding a Web Reference to a Visual Studio 2005 project.
In the WSDL instances that are generated by using this type mapping support, all SQL Server data types are encoded by declaring and using a specific namespace URI and its corresponding prefix (sqltypes). The following table shows how the sqltypes:type mappings in WSDL instances generated through the Wsdl.exe tool map to their corresponding CLR types.
Notes from the mapping table:
1. Mapped to a class wrapper for an array of System.Xml.XmlNode objects. The wrapper class is named xml for untyped XML parameters, but uses a generated name for typed XML parameters.
2. Mapped to a wrapper class around a single System.Xml.XmlElement object that uses a generated name.
When xml data type parameters are defined in the default WSDL, they are mapped to the sqltypes:xml WSDL data type. This allows any well-formed XML to be specified without additional content or schema-based validation. For more information, see Implementing XML in SQL Server. For xml data type parameters bound to an XML namespace, the default WSDL document will generate an <xsd:complexType> element that contains restrictions over the sqltypes:xml type; the typed XML, in turn, specifies the xsd:any type as the restriction rule. As shown in the following example, this restriction will specify the processContents attribute, setting its value to strict, and will also specify the namespace attribute, setting its value to the list of XML namespaces associated with the schema collection to which the typed XML is bound. | https://technet.microsoft.com/en-us/library/ms190983(v=sql.105).aspx | CC-MAIN-2015-22 | en | refinedweb |
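A schema fragment of the general shape just described might look like the following (the type name and the namespace URI are illustrative placeholders, not output from an actual endpoint):

<xsd:complexType name="typedXmlParameter">
  <xsd:complexContent>
    <xsd:restriction base="sqltypes:xml">
      <xsd:sequence>
        <xsd:any processContents="strict"
                 namespace="http://example.com/myschemacollection"
                 minOccurs="0" maxOccurs="unbounded" />
      </xsd:sequence>
    </xsd:restriction>
  </xsd:complexContent>
</xsd:complexType>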
For xml data type parameters bound to an XML namespace, the default WSDL document will generate an <xsd:complexType> element that contains restrictions over the sqltypes:xml type. The typed XML in turn, specifies xsd:any type as the restriction rule. As shown in the following example, this restriction in turn will specify the processContents attribute setting its value to strict and will also specify the namespace attribute and set its value to a list of the corresponding XML namespaces associated with the schema collection to which the typed XML is bound. | https://technet.microsoft.com/en-us/library/ms190983(v=sql.105).aspx | CC-MAIN-2015-22 | en | refinedweb |
Highlighting the Current Page of WebGrid
Introduction:
The WebGrid helper makes it very easy to display tabular data quickly in your ASP.NET MVC 3 (or ASP.NET Web Pages) application. This works great in most cases. However, sometimes you may need to customize the WebGrid, for example, to highlight the current page. In this article, I will show you how to highlight the current page by using a very simple trick.
Description:
For example purposes, I am putting all the code in an MVC view (this is not recommended for production use). Just open your MVC view (or your ASP.NET web page) and add these lines:
@using System.Text.RegularExpressions
@functions{
    public class Employee
    {
        public Employee()
        {
            ContactNumbers = new List<string>();
        }
        public string Name { get; set; }
        public Address Address { get; set; }
        public List<string> ContactNumbers { get; set; }
        public Address NotNullableAddress
        {
            get { return Address ?? new Address(); }
        }
        public string Contacts
        {
            get { return string.Join("; ", ContactNumbers); }
        }
    }
    public class Address
    {
        public string City { get; set; }
    }
}
@{
    var list = new List<Employee>();
    for (int i = 1; i < 51; i++)
    {
        list.Add(new Employee
        {
            Name = "MyName" + i,
            Address = new Address { City = "MyCity" + i },
            ContactNumbers = new List<string> { "021-XXXXXXXX" + i, "XXXXXXXX" + i }
        });
    }
    var grid = new WebGrid(source: list);
    var pager = grid.Pager().ToString();
    var newPager = Regex.Replace(pager, "(\\d+) ", "<span class='selected'>$1</span>");
    var gridHtml = grid.GetHtml(columns: grid.Columns(
        grid.Column("NotNullableAddress.City", header: "City"),
        grid.Column("Name"),
        grid.Column("Contacts"))).ToString();
    gridHtml = Regex.Replace(gridHtml, "(<tfoot>\\s*<tr>\\s*<td[^>]*>)([^<]*|<[^/]*|</[^t]*)*(</tfoot>)", "$1" + newPager + "</td></tr>$3");
}
@(new HtmlString(gridHtml))
<style>
    .selected
    {
        background: none repeat scroll 0 0 #267CB2;
        color: #FFFFFF;
        margin: 5px;
    }
</style>
The WebGrid helper allows you to style the pager using the footerStyle parameter, but unfortunately it is very difficult to style the current page, because the current page is rendered as plain text inside the td element. This is why I have used the Regex class above: the code simply wraps a span with the selected class around the current page text. I have also added the selected class style in the view, so you can easily style the selected page text.
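To make the regex trick explicit, here is the same wrap-the-current-page substitution sketched in Python (purely illustrative; the sample pager string is made up, and the C# code above is the real implementation):

import re

pager = "1 <a href='?page=2'>2</a> <a href='?page=3'>3</a> "
new_pager = re.sub(r"(\d+) ", r"<span class='selected'>\1</span>", pager)
# only the bare "1 " (the current page) matches, so only it gets wrapped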
Summary:
WebGrid makes it very easy to show tabular data with paging, sorting, filtering, etc. Sometimes you may need to highlight the current/selected page text, because highlighting the current page improves the user experience. In this article, I showed you how to highlight the current page using a very simple trick. Hopefully you will enjoy this article too. | http://weblogs.asp.net/imranbaloch/highlighting-the-current-page-of-webgrid | CC-MAIN-2015-22 | en | refinedweb |
To get started, add a reference to the following assembly:
System.Management.Automation
Then, you have to add the following 'using' statements to import the required types:
using System.Collections.ObjectModel;
using System.Management.Automation;
using System.Management.Automation.Runspaces;
The following code block shows the RunScript method that does all the hard work: it takes the script text, executes it, and returns the result as a string. The script's output is collected by calling pipeline.Invoke(). If you need to make your own objects visible to the script, you can inject them with runspace.SessionStateProxy.SetVariable("someName", someObject); anything registered through SetVariable() can then be referenced from within the script.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
You can, for instance, hand the current form to the script with runspace.SessionStateProxy.SetVariable("DemoForm", this); additional cmdlets can be registered through runspace.RunspaceConfiguration.Cmdlets.Append().
| http://www.codeproject.com/Articles/18229/How-to-run-PowerShell-scripts-from-C?msg=4346791 | CC-MAIN-2015-22 | en | refinedweb |
NAME
epoll - I/O event notification facility
SYNOPSIS
#include <sys/epoll.h>
DESCRIPTION
epoll is a variant of poll(2) that can be used either as an edge-triggered or a level-triggered interface and scales well to large numbers of watched file descriptors. An application that employs the EPOLLET flag (edge-triggered) should use non-blocking file descriptors to avoid having a blocking read or write starve a task that is handling multiple file descriptors. The suggested way to use epoll as an edge-triggered (EPOLLET) interface is as follows: i) with non-blocking file descriptors; and ii) by waiting for an event only after read(2) or write(2) return EAGAIN. Example for Suggested Usage: While the usage of epoll when employed as a level-triggered interface does have the same semantics as poll(2), the edge-triggered usage requires more clarification to avoid stalls in the application event loop. Q9 Do I need to continuously read/write a file descriptor until EAGAIN when using the EPOLLET flag (edge-triggered behavior)? Receiving an event from epoll_wait(2) should suggest to you that such file descriptor is ready for the requested I/O operation; you must consider it ready until the next (non-blocking) read/write yields EAGAIN. For stream-oriented files (e.g. pipe, FIFO, stream socket), the condition that the read/write I/O space is exhausted can also be detected by checking the amount of data read from / written to the target file descriptor. | http://manpages.ubuntu.com/manpages/hardy/man7/epoll.7.html | CC-MAIN-2015-22 | en | refinedweb |
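For readers who want to experiment with the interface from a high-level language, Python's select.epoll wraps the same kernel facility. The sketch below is a minimal level-triggered echo server; it is illustrative only (not from the man page) and omits error handling:

import select
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 8888))
srv.listen(5)
srv.setblocking(False)          # non-blocking fds, as the page advises

ep = select.epoll()
ep.register(srv.fileno(), select.EPOLLIN)
conns = {}

while True:
    for fd, events in ep.poll():            # blocks until something is ready
        if fd == srv.fileno():
            conn, _ = srv.accept()
            conn.setblocking(False)
            ep.register(conn.fileno(), select.EPOLLIN)
            conns[conn.fileno()] = conn
        elif events & select.EPOLLIN:
            data = conns[fd].recv(4096)
            if data:
                conns[fd].send(data)        # naive echo; real code handles EAGAIN
            else:                           # peer closed the connection
                ep.unregister(fd)
                conns[fd].close()
                del conns[fd]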
The SimpleXMLRPCServer module provides a basic server framework for XML-RPC servers written in Python. By default, the request handler only accepts XML-RPC requests whose URL path portion is listed in rpc_paths, which defaults to ('/', '/RPC2'). New in version 2.5.
Example:
from SimpleXMLRPCServer import SimpleXMLRPCServer

# Create server
server = SimpleXMLRPCServer(("localhost", 8000))
server.register_introspection_functions()

# Register a function under a different name
def adder_function(x, y):
    return x + y
server.register_function(adder_function, 'add')

# Run the server's main loop
server.serve_forever() | http://www.wingware.com/psupport/python-manual/2.5/lib/simple-xmlrpc-servers.html | CC-MAIN-2015-22 | en | refinedweb |
22 January 2008 15:37 [Source: ICIS news]
LONDON (ICIS news)--NYMEX light sweet crude futures recovered some of the losses posted on Monday and earlier on Tuesday after the US Federal Reserve cut interest rates by three quarters of a percentage point in a bid to restore confidence in the US economy.
However, the emergency rate cut seemed to have little effect in reversing the overall downward trend in crude oil prices.
By 14:30 GMT, February NYMEX crude was trading around $88.04/bbl, down $2.52/bbl from the Friday close of $90.57/bbl but up on earlier figures on Tuesday.
Due to the public holiday in the US on Monday, there was no NYMEX settlement that day.
At the same time, March Brent crude on ICE Futures was trading around $86.84/bbl, down $0.67/bbl from the Monday close.
The oil markets have fallen in tandem with global markets over fears that a US recession could curb demand for energy.
Investment banking sources were expecting the Fed to cut interest rates in their official meeting next week by a further half a percentage point, followed by another half point cut at the end of March.
Tony Dillon | http://www.icis.com/Articles/2008/01/22/9094888/nymex-crude-recovers-slightly-after-fed-rate-cut.html | CC-MAIN-2015-22 | en | refinedweb |
11 May 2011 21:04 [Source: ICIS news]
TORONTO (ICIS)--Chemtura will increase production capacity for calcium sulphonate grease at its West Hill plant in Toronto, Canada.
Chemtura expects the expansion to come on stream by the end of the year, it said, without disclosing financial or capacity details.
Calcium sulphonate grease is part of Chemtura’s petroleum additives business. It is used in high-load, high-temperature applications in the marine, steel, paper and food industries.
In 2008, Chemtura announced a 60% capacity expansion at the plant. | http://www.icis.com/Articles/2011/05/11/9459094/chemtura-to-boost-calcium-sulphonate-grease-capacity-at-toronto-plant.html | CC-MAIN-2015-22 | en | refinedweb |
in reply to Variable scope in packages
When you require a script (*.pl), it brings its variables, subs, and all into your namespace, so you can simply access it as %crap. Voila!
NOTE: If you require a package, it will not be the same: it will not be automatically included in your namespace, and you would have to make each call explicitly -- credit goes to Petruchio for the note.
$_.=($=+(6<<1));print(chr(my$a=$_));$^H=$_+$_;$_=$^H;
print chr($_-39); # Easy but its ok.
| http://www.perlmonks.org/?node_id=92788 | CC-MAIN-2015-22 | en | refinedweb |
@Generated(value="OracleSDKGenerator", comments="API Version: 20160918") public class ActivateDomainRequest extends BmcRequest<Void>
Methods inherited from class com.oracle.bmc.requests.BmcRequest: getBody$, getInvocationCallback, getRetryConfiguration, setInvocationCallback, setRetryConfiguration, supportsExpect100Continue
Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait
public ActivateDomainRequest()
public String getDomainId()
The OCID of the identity domain.
public ActivateDomainRequest.Builder toBuilder()
Return an instance of ActivateDomainRequest.Builder that allows you to modify request properties.
public static ActivateDomainRequest.Builder builder()
Return a new builder for this request object; a hedged usage sketch follows this entry. | https://docs.oracle.com/en-us/iaas/tools/java/2.44.0/com/oracle/bmc/identity/requests/ActivateDomainRequest.html | CC-MAIN-2022-40 | en | refinedweb |
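As a usage sketch (the client variable and its activateDomain method are assumptions based on the usual OCI Java SDK one-method-per-request pattern; the OCID is a placeholder):

ActivateDomainRequest request = ActivateDomainRequest.builder()
        .domainId("ocid1.domain.oc1..exampleuniqueID")  // the OCID of the identity domain
        .build();

// Assumed client call; the real method name follows the SDK convention.
client.activateDomain(request);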
The idea of function parameters in Python is to allow a programmer who is using that function to define variables dynamically within that function. For example:
def simple_addition(num1, num2):
    answer = num1 + num2
    print('num1 is', num1)
    print(answer)

simple_addition(5, 3)
Here, we defined our function name as simple_addition. In the function parameters (often called params for short), we specify variables named num1 and num2.
Next, within the function, we say this new answer variable is equal to whatever num1 plus num2 is. We then print out what num1 is, whatever it happens to be. Finally, the last line of this function just prints out the answer variable, which is num1 plus num2.
Now, to run this function and make use of these parameters, we run simple_addition(5,3). This runs the simple_addition function using the parameters of num1=5 and num2=3. Then our program sums 5 and 3 together, then we print out that num1 is 5, and finally we print out the "answer" which was defined already, which is the sum of 5 and 3, which is of course 8.
There is no limit to the amount of function parameters you have. If you want to just specify the definitions of these parameters without saying the parameter, like when we just said 5 and 3, instead of putting the parameter=5, then you must put them in the exact order. If you have a lot of parameters where it might be difficult to remember their order, you could do something like:
simple_addition(num2=3,num1=5)
In that case, when you call the function and define the parameters, you can see how we actually defined num2 before num1, even though in the function definition we ask for them in the other way around. As long as you specify the parameter you are defining, you can jumble them up. Otherwise, you must keep them in order!
Finally, not only must they be in perfect order, but you must not specify too many or too few arguments.
This will not work:
simple_addition(3, 5, 6)
nor will this:
simple_addition(3)
Both calls raise a TypeError; see the sketch after this entry. | http://pythonprogramming.net/function-parameters-python-3-basics/ | CC-MAIN-2022-40 | en | refinedweb |
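In Python 3, both of those calls fail with a TypeError along these lines:

simple_addition(3, 5, 6)
# TypeError: simple_addition() takes 2 positional arguments but 3 were given

simple_addition(3)
# TypeError: simple_addition() missing 1 required positional argument: 'num2'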
Challenge
Christmas movies are the best movies. How much do they make at the box office? Show the power distribution curve with a vertical barchart.
My Solution
We built this one with React hooks because we can. Not a class-based component in sight ✌️
Styled components for styling, D3 for scales and data loading and parsing, hooks to hook it all together.
I looked around for a good data loading hook. None could be found. So we made our own 💪
Not that hard as it turns out. You need a dash of useState to save the data you load, a bit of useEffect to run data loading on component mount aaaand ... that's it. Goes in your App function.
function App() {
  const [data, setData] = useState(null);

  useEffect(() => {
    d3.tsv("/data.tsv", d => {
      const year = Number(d.movie.match(/\((\d+)\)/)[1]);
      return {
        movie: d.movie.replace(/\(\d+\)/, ""),
        year: year,
        per_year: Number(d.box_office) / (2018 - year),
        box_office: Number(d.box_office)
      };
    }).then(setData);
  }, [!data]);
The useState hook takes a default value, and always returns current state - data - and a setter - setData.

useEffect runs our function on every component render. After committing to the DOM, I believe. We use d3.tsv to load and parse our christmas movie dataset, use a parsing function to transform each row into an object with all the info we need, then call setData when we have it.
Each datapoint holds:

- a movie name
- the year a movie was produced, parsed from the movie name with a regex
- the per_year revenue of the movie as a fraction
- the total box_office revenue
Switch display modes with React hooks
Movie box office revenue follows a pretty clear power law distribution. The highest grossing movie or two make a lot more than the next best. Which makes way more than next one down the list, etc.
But how does age factor into this?
Home Alone has had 28 years to make its revenue. Daddy's Home 2 is only a year old.
I decided to add a button to switch modes. From total box_office to per_year revenue. And boy does it change the story. Altho maybe I'm being unfair because how long are theater runs anyway? 🤔
Driving that logic with React hooks looks like this 👇
const [perYear, setPerYear] = useState(false)

const valueFunction = perYear ? d => d.per_year : d => d.box_office

// ...

<Button onClick={() => setPerYear(!perYear)}>
  {perYear ? "Show Total Box Office" : "Show Box Office Per Year"}
</Button>
A useState hook gives us current state and a setter. We use the state, perYear, to define a value accessor function, and a button's onClick method to toggle the value.
We're going to use that value accessor to render our graph. Makes the switch feel seamless.
Render
First you need this bit in your App function. It renders <VerticalBarchart> in an SVG, if data exists.
<Svg width="800" height="600" showKevin={perYear}>{data && (<VerticalBarchartdata={data}width={600}height={600}value={valueFunction}/>)}</Svg>
That data && ... is a common trick. The return value of true && something is something, the return value of false && something is nothing. Means when data is defined, we render, otherwise we don't.
Oh and Svg is a styled SVG component. Gets a nice gif background when showKevin is set to true 😛
The VerticalBarchart itself is a functional component. We said no classes right?
const VerticalBarchart = ({ data, width, height, value }) => {
  const yScale = d3
    .scaleBand()
    .paddingInner(0.1)
    .domain(data.map(d => d.movie))
    .range([0, height]);
  const widthScale = d3
    .scaleLinear()
    .domain([0, d3.max(data, value)])
    .range([0, width]);

  return (
    <g>
      {data.map(d => (
        <React.Fragment key={d.movie}>
          <Bar
            x={0}
            y={yScale(d.movie)}
            height={yScale.bandwidth()}
            width={widthScale(value(d))}
          />
          <Label x={10} y={yScale(d.movie) + yScale.bandwidth() / 2}>
            {d.movie}
          </Label>
        </React.Fragment>
      ))}
    </g>
  );
};
We can define our D3 scales right in the render function. Means we re-define them from scratch on every render and sometimes that's okay. Particularly when data is small and calculating domains and ranges is easy.
Once we have a scaleBand for the vertical axis and a scaleLinear for widths, it's a matter of iterating over our data and rendering styled <Bar> and <Label> components.
Notice that we use the value accessor function every time we need the value of a datapoint. To find the max value for our domain and to grab each individual width. Makes our chart automatically adapt to flicking that perYear toggle 👌
That smooth width transition effect? That's just CSS.
const Bar = styled.rect`
  fill: green;
  transition: width 500ms;
`;
React hooks really do make life easy 🎣
What you learned today
- the useState React hook
- the useEffect React hook
- that it's okay to define D3 stuff in the render method
@Generated(value="OracleSDKGenerator", comments="API Version: 20200430") public class ConnectionSummary extends ExplicitlySetBmcModel
The connection summary object.

@ConstructorProperties({"key","modelVersion","parentRef","name","description","objectVersion","objectStatus","identifier","primarySchema","connectionProperties","isDefault","metadata","keyMap"})
protected ConnectionSummary(String key, String modelVersion, ParentReference parentRef, String name, String description, Integer objectVersion, Integer objectStatus, String identifier, Schema primarySchema, List<ConnectionProperty> connectionProperties, Boolean isDefault, ObjectMetadata metadata, Map<String,String> keyMap)

public Integer getObjectVersion()
The version of the object that is used to track changes in the object instance.

public Schema getPrimarySchema()
public List<ConnectionProperty> getConnectionProperties()
The properties for the connection.
public Boolean getIsDefault()
The default property for the connection.
public ObjectMetadata getMetadata()
public Map<String,String> getKeyMap()
A key map. If provided, key is replaced with generated key. This structure provides mapping between user-provided key and generated key. | https://docs.oracle.com/en-us/iaas/tools/java/2.44.0/com/oracle/bmc/dataintegration/model/ConnectionSummary.html | CC-MAIN-2022-40 | en | refinedweb |
I downloaded Code::Blocks from here:
I'm learning C programming. When I run the following program, I get this error:
iostream: No such file or directory
error: syntax error before "namespace"
warning: type defaults to `int' in declaration of `std'
warning: data definition has no type or storage class
In function `main':
error: `cout' undeclared (first use in this function)
error: (Each undeclared identifier is reported only once
error: for each function it appears in.)
error: `cin' undeclared (first use in this function)
I'm running the following program:
#include <iostream>
using namespace std;

int main()
{
    int x;
    x = 0;
    do {
        // "Hello, world!" is printed at least one time
        // even though the condition is false
        cout << "Hello, world!\n";
    } while (x != 0);
    cin.get();
}
I tried Dev-C++, I get the same error.
How to fix this?
Best Solution
Is this in a file like "program.c" or "program.cpp"? If it's a .c file, then your compiler may be interpreting it as C, and not C++. This could easily cause such an error. It's possible to "force" the compiler to treat either such extension as the other, but by default, .c files are for C, and .cpp files are compiled as C++.
It's either this, or somehow your default "include" directories for the standard library are not set up right, but I don't know how you'd fix that, as that'd be compiler/environment dependent. | https://itecnote.com/tecnote/c-codeblocks-dev-c-error-iostream-no-such-file-or-directory/ | CC-MAIN-2022-40 | en | refinedweb |
Investors in Raytheon Co. (Symbol: RTN) saw new options begin trading today, for the December 6th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the RTN options chain for the new December 6th contracts and identified one put and one call contract of particular interest.
The put contract at the $202.50 strike price has a current bid of $2.74. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $202.50, but will also collect the premium, putting the cost basis of the shares at $199.76 (before broker commissions). To an investor already interested in purchasing shares of RTN, that could represent an attractive alternative to paying $207.65/share today.
Because the $202.50 strike represents an approximate 2% discount to the current trading price of the stock (in other words, it is out-of-the-money by that percentage), there is also the possibility that the put contract would expire worthless. Should that happen, the premium would represent a 1.35% return on the cash commitment, or 11.47% annualized — at Stock Options Channel we call this the YieldBoost.
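The arithmetic behind those figures can be checked directly. A quick Python sketch (the 43-day count from October 24th to the December 6th expiration is my assumption, which is why the annualized number lands a hair above the quoted 11.47%):

strike, bid, price = 202.50, 2.74, 207.65
days_to_expiration = 43  # Oct 24 to Dec 6; an assumption for illustration

cost_basis = strike - bid                           # 199.76
pct_return = bid / strike * 100                     # ~1.35% on the cash commitment
annualized = pct_return * 365 / days_to_expiration  # ~11.5% annualized "YieldBoost"
print(round(cost_basis, 2), round(pct_return, 2), round(annualized, 2))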
Below is a chart showing the trailing twelve month trading history for Raytheon Co., and highlighting in green where the $202.50 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $210.00 strike price has a current bid of $5.05. If an investor was to purchase shares of RTN stock at the current price level of $207.65/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $210.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 3.56% if the stock gets called away at the December 6th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if RTN shares really soar, which is why looking at the trailing twelve month trading history for Raytheon Co., as well as studying the business fundamentals becomes important. Below is a chart showing RTN's trailing twelve month trading history, with the $210.00 strike highlighted in red:
Considering the fact that the $210.00 strike represents an approximate 1% premium to the current trading price of the stock (in other words, it is out-of-the-money by that percentage), there is also the possibility that the covered call contract would expire worthless, in which case the investor would keep both their shares of stock and the premium collected. Should that happen, the premium would represent a 2.43% boost of extra return to the investor, or 20.62% annualized, which we refer to as the YieldBoost.
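The call-side numbers check out the same way (again assuming 43 days to expiration):

strike, bid, price = 210.00, 5.05, 207.65

if_called = (bid + (strike - price)) / price * 100  # ~3.56% total return if called away
yieldboost = bid / price * 100                      # ~2.43% from the premium alone
annualized = yieldboost * 365 / 43                  # ~20.6% annualized
print(round(if_called, 2), round(yieldboost, 2), round(annualized, 2))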
The implied volatility in the put contract example is 29%, while the implied volatility in the call contract example is 28%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $207.65). | https://www.nasdaq.com/articles/rtn-december-6th-options-begin-trading-2019-10-24 | CC-MAIN-2022-40 | en | refinedweb |
#include <rte_flow.h>
Matches RTE_FLOW_ITEM_TYPE_FUZZY: a fuzzy pattern match whose matching accuracy level can be configured by threshold. The driver can divide the range of the threshold and map it to the different accuracy levels that the device supports.
Threshold 0 means perfect match (no fuzziness), while threshold 0xffffffff means fuzziest match.
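As a sketch, a flow pattern that requests the fuzziest match the device supports might be set up as follows (the member name thresh follows DPDK's definition of struct rte_flow_item_fuzzy; the rest of the flow setup is omitted):

#include <rte_flow.h>

/* Ask for the loosest accuracy level: 0xffffffff = fuzziest match. */
struct rte_flow_item_fuzzy fuzzy = { .thresh = 0xffffffff };

struct rte_flow_item pattern[] = {
    { .type = RTE_FLOW_ITEM_TYPE_FUZZY, .spec = &fuzzy },
    /* ... the items to be matched fuzzily ... */
    { .type = RTE_FLOW_ITEM_TYPE_END },
};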
Definition at line 987 of file rte_flow.h.
Accuracy threshold.
Definition at line 988 of file rte_flow.h. | https://doc.dpdk.org/api-19.11/structrte__flow__item__fuzzy.html | CC-MAIN-2022-40 | en | refinedweb |
Building PySide2 against non-gui Qt
Hello there!
I'm trying to build PySide2/Shiboken2 against a minimized Qt installation, that is, a Qt installation that eschews all modules not necessary for our purposes, which are basically all GUI modules.
However, when I try to run the setup.py script from PySide's Git repo, I always end up with the following error message:
$ python3 setup.py build --qmake=/path/to/my/qmake --build-tests --ignore-git --parallel=8 --skip-modules=Gui,Declarative,Widgets,Location,Sensors,GraphicalEffects,QuickControls,WebSockets,QuickControls2,Qml,UiTools,Help,OpenGL,OpenGLFunctions,QuickWidgets,Svg,Quick,X11Extras,Test,Sql,PrintSupport
Project ERROR: Unknown module(s) in QT: gui
Traceback (most recent call last):
  File "setup.py", line 286, in <module>
    from build_scripts.main import get_package_version, check_allowed_python_version
  File "/home/<user>/Projekte/<client>/pyside-setup/build_scripts/main.py", line 289, in <module>
    qtinfo = QtInfo(QMAKE_COMMAND)
  File "/home/<user>/Projekte/<client>/pyside-setup/build_scripts/qtinfo.py", line 61, in __init__
    self._init_properties()
  File "/home/<user>/Projekte/<client>/pyside-setup/build_scripts/qtinfo.py", line 199, in _init_properties
    self._get_other_properties()
  File "/home/<user>/Projekte/<client>/pyside-setup/build_scripts/qtinfo.py", line 184, in _get_other_properties
    self._get_qmake_mkspecs_variables()
  File "/home/<user>/Projekte/<client>/pyside-setup/build_scripts/qtinfo.py", line 208, in _get_qmake_mkspecs_variables
    lines = [s.strip() for s in qmake_output.splitlines()]
AttributeError: 'NoneType' object has no attribute 'splitlines'
The same message appears when I try to build a different project that makes no use of GUI components and disables the GUI module in its QMake project file using QT -= gui. Yet, somehow, QMake seems to be looking for it, if I understand the error message and my earlier Google results correctly.
I'm building on Fedora 31, Qt 5.14.1 was built on the same machine, and the PySide2 version checked out from the repo is also 5.14.1. There are no spaces or similar in the path to the QMake installation (which, due to us using Conan as a package manager, is /home/<user>/.conan/data/qt/5.14.1/<client>/stable/package/<package-hash>/).
SGaist (Lifetime Qt Champion):
Hi,
I am not sure it is currently supported. I'd recommend bringing that question to the PySide mailing list. You'll find there PySide2 developers/maintainers. This forum is more user oriented.
CristianMaureira:
This is the bit where it's failing:

def _get_qmake_mkspecs_variables(self):
    # Create empty temporary qmake project file.
    temp_file_name = 'qmake_fake_empty_project.txt'
    open(temp_file_name, 'a').close()

    # Query qmake for all of its mkspecs variables.
    qmake_output = self._get_qmake_output(['-E', temp_file_name])
    lines = [s.strip() for s in qmake_output.splitlines()]
That means calling qmake failed on the fake empty project.
Can you maybe share the whole output of the build process? Then we can check if something is wrong with the Qt detection.
CristianMaureira:
Quick update: we continued the discussion on the PySide2 Gitter room, and since QtSql has dependencies on QtWidgets, OP will try to build a local qtbase without QtSql and hence without the graphical dependencies, because QtGui was not present in the installation.
The qmake step that produced the error is that we create an empty .txt file and use qmake -E empty.txt to get information from the Qt installation, and that command was complaining that GUI was not found.
Interesting, nice to know.
Although I am not sure why anyone would use QtSql, since straight Python to a database works just fine and the front-end GUI ought to be completely divorced from your back-end data source in almost every application I can think of. This is because the benefits of using this methodology are well worth the implementation; further, implementing and maintaining this methodology is often a lot easier and safer than tightly tying the front-end directly to the back-end, which is what would occur, if I understand its purpose correctly, when you use QtSql.
I've been working on this and actually got Shiboken and PySide to build, though I could only build with the Core, Xml and Network modules included (including any of the other modules we depend upon in our main application would lead to error messages about QtGui or QtWidgets not being found by CMake). Haven't tested them with our bindings yet, as we seem to pull QtGui into them in some subproject.
That said, I needed to patch the pyside2-tools project. As it turns out, pylupdate includes qapplication.h and qmessagebox.h in metatranslator.cpp - to display a message box in case of an error (using a global variable called qApp that I couldn't trace down to decide whether or not to use the message box or stdout).
I'll mark this as resolved as my original problem has been fixed. Thanks again to all who helped!
EDIT: For the record, these are my changes in the pyside2-tools subrepo:
diff --git a/CMakeLists.txt b/CMakeLists.txt
index ce65750..179ce23 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -55,12 +55,14 @@ else()
     set(DESIGNER_PATH "${TOOLS_PATH}/designer${EXE_EXT}")
 endif()

-install(FILES "${UIC_PATH}"
-    DESTINATION bin
-    PERMISSIONS
-    OWNER_EXECUTE OWNER_WRITE OWNER_READ
-    GROUP_EXECUTE GROUP_READ
-    WORLD_EXECUTE WORLD_READ)
+if (EXISTS ${UIC_PATH})
+    install(FILES "${UIC_PATH}"
+        DESTINATION bin
+        PERMISSIONS
+        OWNER_EXECUTE OWNER_WRITE OWNER_READ
+        GROUP_EXECUTE GROUP_READ
+        WORLD_EXECUTE WORLD_READ)
+endif()

 install(FILES "${RCC_PATH}"
     DESTINATION bin
diff --git a/pylupdate/CMakeLists.txt b/pylupdate/CMakeLists.txt
index a46608c..3f67fe7 100644
--- a/pylupdate/CMakeLists.txt
+++ b/pylupdate/CMakeLists.txt
@@ -12,9 +12,7 @@ translator.cpp
 )

 find_package(Qt5Core)
-find_package(Qt5Gui)
 find_package(Qt5Xml)
-find_package(Qt5Widgets)

 set(lupdate_MOC_HEADERS translator.h)
 qt5_wrap_cpp(lupdate_MOC_OUTFILES ${lupdate_MOC_HEADERS})
@@ -24,15 +22,11 @@ include_directories(pyside2-lupdate
     ${CMAKE_CURRENT_SOURCE_DIR}
     ${Qt5Xml_INCLUDE_DIRS}
     ${Qt5Core_INCLUDE_DIRS}
-    ${Qt5Gui_INCLUDE_DIRS}
-    ${Qt5Widgets_INCLUDE_DIRS}
 )

 target_link_libraries(pyside2-lupdate
     ${Qt5Core_LIBRARIES}
     ${Qt5Xml_LIBRARIES}
-    ${Qt5Gui_LIBRARIES}
-    ${Qt5Widgets_LIBRARIES}
 )

 install(TARGETS pyside2-lupdate RUNTIME DESTINATION bin)
diff --git a/pylupdate/metatranslator.cpp b/pylupdate/metatranslator.cpp
index 8a8ac8e..2588895 100644
--- a/pylupdate/metatranslator.cpp
+++ b/pylupdate/metatranslator.cpp
@@ -25,10 +25,8 @@
 #include "metatranslator.h"

-#include <qapplication.h>
 #include <qbytearray.h>
 #include <qfile.h>
-#include <qmessagebox.h>
 #include <qtextcodec.h>
 #include <qtextstream.h>
 #include <qxml.h>
@@ -204,11 +202,7 @@ bool TsHandler::fatalError( const QXmlParseException& exception )
     const QString msg = QString::asprintf( "Parse error at line %d, column %d (%s).",
         exception.lineNumber(), exception.columnNumber(),
         exception.message().toLatin1().data() );
-    if ( qApp == 0 )
-        fprintf( stderr, "XML error: %s\n", msg.toLatin1().data() );
-    else
-        QMessageBox::information(0,
-            QObject::tr("Qt Linguist"), msg );
+    fprintf( stderr, "XML error: %s\n", msg.toLatin1().data() );
 }
 return false;
| https://forum.qt.io/topic/113116/building-pyside2-against-non-gui-qt | CC-MAIN-2022-40 | en | refinedweb |
June 2007
Introduction
Sometimes there's no substitute for seeing what goes over the wire. This is especially true when writing software which uses web services like the Google Data APIs, where lots of operations involve making HTTP requests. When all else fails, you can verify that your program is doing what you'd expect by seeing the actual transmitted and received bytes. Many of the client libraries for the Google Data APIs have a debugging mode which displays the HTTP traffic. This is especially useful when you don't have access to a packet sniffer like WireShark or Fiddler.

I can't count the number of times that I could have sworn my program was correct, only to find upon inspecting a packet trace that there was an extra newline character, or a misnamed HTTP header. Programming against a web service without looking at the HTTP traffic can be like trying to thread a needle with your eyes glued shut.

However, you may find yourself in a situation where a packet sniffer is unavailable or is inadequate to deal with encrypted packets. Never fear: you can get around this limitation by leveraging some in-program logging mechanisms. By utilizing these logging facilities, you can see some, if not all, of the exchanged data, even for encrypted HTTPS data or remotely running code.
You can use the
java.util.logging classes to set the logging
levels (and consequently expose traffic data) for a couple of key objects in
the client library. In the example below, I chose to look at the HTTP headers
and the activities of the XML parser to get a complete view of what is
traveling over the wire.
The Google Data Java client library has separate classes to
handle HTTP requests and XML parsing; thus, I need to create two Logger objects,
one for each class:
com.google.gdata.client.http.HttpGDataRequest handles the HTTP
traffic while
com.google.gdata.util.XmlParser is responsible for
XML parsing.
The logger instances will record activities for
HttpGDataRequest and
XmlParser, and you can control
the level of detail of each logger's output. For
this demonstration, I've chosen to view all of the events produced by the
HttpGDataRequest and
XmlParser objects.
Once I've created
and configured my Loggers, I need to tell them what to do when they receive an event from their classes. For now, I want to write all logging information out
to the console, so I create a
ConsoleHandler and add it to both of my Loggers.
Here's my sample code:
import com.google.gdata.client.spreadsheet.*;
import com.google.gdata.data.spreadsheet.*;
import com.google.gdata.util.*;
import java.io.*;
import java.net.URL;
import java.util.*;
import java.util.logging.*;
public class PrintSpreadsheetsWithLogging {
public static void main(String [] args) throws AuthenticationException,
ServiceException, IOException {
// Configure the logging mechanisms.
Logger httpLogger = Logger.getLogger("com.google.gdata.client.http.HttpGDataRequest");
httpLogger.setLevel(Level.ALL);
Logger xmlLogger = Logger.getLogger("com.google.gdata.util.XmlParser");
xmlLogger.setLevel(Level.ALL);
// Create a log handler which prints all log events to the console.
ConsoleHandler logHandler = new ConsoleHandler();
logHandler.setLevel(Level.ALL);
httpLogger.addHandler(logHandler);
xmlLogger.addHandler (logHandler);
SpreadsheetService service = new SpreadsheetService("testing-loggingExampleApp-1");
service.setUserCredentials(email, password);
// Get a list of your spreadsheets.
URL metafeedUrl = new URL(" ");
SpreadsheetFeed feed = service.getFeed(metafeedUrl, SpreadsheetFeed.class);
// Print the title of each spreadsheet.
List spreadsheets = feed.getEntries();
for (int i = 0; i < spreadsheets.size(); i++) {
SpreadsheetEntry entry = (SpreadsheetEntry)spreadsheets.get(i);
System.out.println("\t" + entry.getTitle().getPlainText());
}
}
}
When you run this program, you'll see something like this on the console (I cut out some of the less interesting parts):
Jun 7, 2007 10:24:50 AM ...HttpGDataRequest setPrivateHeader
FINER: Authorization: <Not Logged>
Jun 7, 2007 10:24:50 AM ...HttpGDataRequest setHeader
FINER: User-Agent: ...
...
Jun 7, 2007 10:25:20 AM ...HttpGDataRequest execute
FINE: 200 OK
Jun 7, 2007 10:25:20 AM ...HttpGDataRequest execute
FINER: Date: Thu, 07 Jun 2007 17:25:24 GMT
Jun 7, 2007 10:25:20 AM ...HttpGDataRequest execute
FINER: null: HTTP/1.1 200 OK
Jun 7, 2007 10:25:20 AM ...HttpGDataRequest execute
FINER: Content-Type: application/atom+xml; charset=UTF-8
Jun 7, 2007 10:25:20 AM ...HttpGDataRequest execute
FINER: Last-Modified: Thu, 07 Jun 2007 17:25:22 GMT
...
Jun 7, 2007 10:25:20 AM ...XmlParser startElement
FINE: Start element id
Jun 7, 2007 10:25:20 AM ...XmlParser endElement
FINE: End element id
...
Jun 7, 2007 10:25:20 AM ...XmlParser startElement
FINE: Start element title
Jun 7, 2007 10:25:20 AM ...XmlParser startElement
FINER: Attribute type='text'
Jun 7, 2007 10:25:20 AM ...XmlParser endElement
FINE: End element title
...
Jun 7, 2007 10:25:20 AM ...XmlParser endElement
FINE: End element entry
...
Jun 7, 2007 10:25:20 AM ...XmlParser endElement
FINE: End element feed
These logs can get quite large, so you might want to be more selective in setting the Loggers' levels. You could also create a FileHandler instead of a ConsoleHandler to allow you to store the log data for later use.
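For example, swapping in a FileHandler takes only a few lines (the log file name here is just an illustration; the FileHandler constructor can throw an IOException, which the main method above already declares):

// Write all log events to a file instead of the console.
FileHandler fileHandler = new FileHandler("gdata-http.log");
fileHandler.setLevel(Level.ALL);
fileHandler.setFormatter(new SimpleFormatter());
httpLogger.addHandler(fileHandler);
xmlLogger.addHandler(fileHandler);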
Of course, if Java isn't your bag, you could try .NET.
.NET
To capture the HTTP traffic in the .NET client library, you can replace the default request factory in the client with a GDataLoggingRequestFactory.

The HTTP requests in the .NET library are created by the GDataRequestFactory which is inside each Service object. The normal request factories don't perform any logging, but the GDataLoggingRequestFactory, which is a subclass of the GDataRequestFactory, has logging built in. You can specify the full path of the log file by setting CombinedLogFileName.

After setting up your request factory, you need to replace the request factory in your Service object by setting the RequestFactory property of the service object.
Your code might look something like this:
using System;
using Google.GData.Client;
using Google.GData.Extensions;
using Google.GData.Spreadsheets;
namespace LogginTest
{
class Program
{
static void Main(string[] args)
{
SpreadsheetsService service = new SpreadsheetsService("-exampleApp-1");
service.setUserCredentials(email, password);
Google.GData.Client.GDataLoggingRequestFactory factory = new GDataLoggingRequestFactory("wise", "SpreadsheetsLoggingTest");
factory.MethodOverride = true;
factory.CombinedLogFileName = "c:\\temp\\xmllog.log";
Console.WriteLine("Log file name:" + factory.CombinedLogFileName);
service.RequestFactory = factory;
SpreadsheetQuery query = new SpreadsheetQuery();
SpreadsheetFeed feed = service.Query(query);
Console.WriteLine("Your spreadsheets:");
foreach (SpreadsheetEntry entry in feed.Entries)
{
Console.WriteLine(entry.Title.Text);
}
Console.ReadKey();
}
}
}
The resulting log file contains the XML requests and responses. Here's an abbreviated example which I've formatted using tidy.
<?xml version='1.0' encoding='utf-8'?>
<feed xmlns='' xmlns:
  <id></id>
  <updated>2007-06-07T22:05:02.674Z</updated>
  <link rel='self' type='application/atom+xml' href=''>
  </link>
  ...
  <entry>
    <updated>2007-03-28T17:28:57.250Z</updated>
    <category scheme='' term=''>
    <title type='text'>events</title>
    <content type='text'>events</content>
    ...
  </entry>
  <entry>
    <updated>2007-05-25T22:11:08.200Z</updated>
    <category scheme='' term=''>
    </category>
    <title type='text'>UnitTest</title>
    <content type='text'>UnitTest</content>
    ...
  </entry>
  ...
</feed>
But perhaps you are really into scripting languages, and you prefer using Python.
Python
To capture the HTTP traffic in the Python client library, you can echo the HTTP header traffic to the console by turning on debug mode in the HTTP client. The service object has a debug member which you can set to True.
Setting debug to true will set the debug flag in the underlying HTTPRequest object which is contained in the service object.
Here's an example which will echo the HTTP headers sent from the spreadsheets server when you ask for a list of your spreadsheets.
#!/usr/bin/python
import gdata.spreadsheet.service
client = gdata.spreadsheet.service.SpreadsheetsService()
client.debug = True
client.ClientLogin(email, password)
feed = client.GetSpreadsheetsFeed()
for entry in feed.entry:
print entry.title.text
And you will see something like this on your console:
reply: 'HTTP/1.1 200 OK\r\n'
header: Content-Type: application/atom+xml; charset=UTF-8
header: Last-Modified: Thu, 07 Jun 2007 18:22:35 GMT
header: Cache-Control: max-age=0, must-revalidate, private
header: Transfer-Encoding: chunked
...
header: Date: Thu, 07 Jun 2007 18:22:35 GMT
header: Server: GFE/1.3
As you perform additional operations, such as an insert or update, you'll see corresponding request data echoed to your console.
Conclusion
This brief tutorial has illustrated how you can add basic logging functionality into a Java, .NET, or Python program which uses the Google Data API client libraries. These techniques can be useful if you need to debug HTTP exchanges, but don't have access to a packet sniffer. I've only scratched the surface with these examples. Many of the logging mechanisms present in these languages are much more powerful than what is shown here. If you'd like more information on logging or the Google Data APIs, check out the list of resources below.
Client libraries covered in this article can be found on these pages:
Related knowledge base items:
- How do I get HTTP logging information in the Java client library?
- How do I get HTTP logging information in the .NET client library?
- What are some good tools for HTTP debugging?
- What is the Google Spreadsheets API?
Discussion groups: We have quite a few, with more coming as more Google Data APIs are rolled out. We actively monitor the groups.
If you have questions or suggestions, I'd enjoy hearing from you. Hop on the discussion group and start posting. | https://developers-dot-devsite-v2-prod.appspot.com/gdata/articles/debugging_client_libs | CC-MAIN-2022-40 | en | refinedweb |
3D and 4D niimgs: handling and visualizing#
Here we discover how to work with 3D and 4D niimgs.
Downloading tutorial datasets from Internet#
Nilearn comes with functions that download public data from Internet
Let’s first check where the data is downloaded on our disk:
from nilearn import datasets

print('Datasets are stored in: %r' % datasets.get_data_dirs())
Datasets are stored in: ['/home/yasmin/nilearn_data']
Let’s now retrieve a motor contrast from a Neurovault repository
['/home/yasmin/nilearn_data/neurovault/collection_658/image_10426.nii.gz']
motor_images is a list of filenames. We need to take the first one
Visualizing a 3D file#
The file contains a 3D volume, we can easily visualize it as a statistical map:
from nilearn import plotting

plotting.plot_stat_map(tmap_filename)
/home/yasmin/nilearn/nilearn/nilearn/plotting/img_plotting.py:300: FutureWarning: Default resolution of the MNI template will change from 2mm to 1mm in version 0.10.0 anat_img = load_mni152_template() <nilearn.plotting.displays._slicers.OrthoSlicer object at 0x7f1274daf910>
Visualizing works better with a threshold
plotting.plot_stat_map(tmap_filename, threshold=3)
<nilearn.plotting.displays._slicers.OrthoSlicer object at 0x7f12752f0430>
Visualizing one volume in a 4D file#
We can download resting-state networks from the Smith 2009 study on correspondence between rest and task
rsn = datasets.fetch_atlas_smith_2009()['rsn10']
rsn
'/home/yasmin/nilearn_data/smith_2009/PNAS_Smith09_rsn10.nii.gz'
It is a 4D nifti file. We load it into the memory to print its shape.
from nilearn import image

print(image.load_img(rsn).shape)
(91, 109, 91, 10)
We can retrieve the first volume (note that Python indexing starts at 0):
first_rsn = image.index_img(rsn, 0)
print(first_rsn.shape)
(91, 109, 91)
first_rsn is a 3D image.
We can then plot it
<nilearn.plotting.displays._slicers.OrthoSlicer object at 0x7f1272b9a5c0>
Looping on all volumes in a 4D file#
If we want to plot all the volumes in this 4D file, we can use iter_img to loop on them.
Then we give a few arguments to plot_stat_map in order to have a more compact display.
for img in image.iter_img(rsn):
    # img is now an in-memory 3D img
    plotting.plot_stat_map(img, threshold=3, display_mode="z",
                           cut_coords=1, colorbar=False)
Looping through selected volumes in a 4D file#
If we want to plot selected volumes in this 4D file, we can use index_img with the slice constructor to select the desired volumes.
Afterwards, we’ll use iter_img to loop through them following the same formula as before.
selected_volumes = image.index_img(rsn, slice(3, 5))
If you’re new to Python, one thing to note is that the slice constructor uses 0-based indexing. You can confirm this by matching these slices to the previous plot above.
plotting.show is useful to force the display of figures when running outside IPython
To recap, neuroimaging images (niimgs as we call them) come in different flavors:
3D images, containing only one brain volume
4D images, containing multiple brain volumes.
More details about the input formats in nilearn for 3D and 4D images are given in the documentation section: Inputing data: file names or image objects.
Functions accept either 3D or 4D images, and we need to use on the one hand nilearn.image.index_img or nilearn.image.iter_img to break down 4D images into 3D images, and on the other hand nilearn.image.concat_imgs to group a list of 3D images into a 4D image; a short sketch follows.
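For instance, going back the other way with the volumes from this example (the resulting shape follows from the data loaded above):

# Group two 3D volumes back into a single 4D image.
first_two = [image.index_img(rsn, 0), image.index_img(rsn, 1)]
img_4d = image.concat_imgs(first_two)
print(img_4d.shape)  # (91, 109, 91, 2)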
Total running time of the script: ( 0 minutes 22.865 seconds)
Estimated memory usage: 135 MB
Gallery generated by Sphinx-Gallery | https://nilearn.github.io/stable/auto_examples/00_tutorials/plot_3d_and_4d_niimg.html | CC-MAIN-2022-40 | en | refinedweb |
Introduction (linked article is in Dutch).
I’ve written my experiences in how to develop a CDM and how to use it at run time in three succeeding blog posts:
– part I: Standards & Guidelines
– part II: XML Namespace Standards
– part III: Dependency Management & Interface Tailoring
But first, let us start at the beginning.
<location>
   <street>A-Street</street>
   <number>123</number>
   <zipcode>90210</zipcode>
   <city>A-City</city>
   <state>CA</state>
   <country>United States</country>
   <continent>North America</continent>
</location>

<address street="B-Street" no="456" zip="12345" city="B-City" state="NY"/>

"C-Straat";"789";"1234 AB";"C-City";"Nederland"

{ "address": { "street": "D-Straat", "city": "D-City", "postalcode": "9876 ZX", "country": "nl", "number": "456b"} }
As you can see, there are not only four different ways of data modelling (two XML types, a CSV type and a JSON type), but also four different values for the country data item. The second example does not even have a value for the United States, because it defaults to "us".
Despite the differences, these examples of different data models contain the same information. When a CDM is defined, it should be able to contain all data of these models. Note that the data items continent, state and zipcode do not exist in all the data models. Also note that there is no value for state in case of a Dutch address (example 3).
P.S. There might even be more connecting systems that do not do anything with addresses, so their data model does not contain address data.
Creating a Canonical Data Model
When a CDM model is created, it is wise to be flexible and ready for future changes and extensions. Create a CDM that fits best in the integration software being used. Most likely this will be an XML type data model. However, JSON is increasingly supported by integration software and is becoming more popular because of its reduced size and the fact that it is used in front-end technology, especially for mobile devices.
Let’s select XML for the CDM in this example and English based, which makes it easier in case non-Dutch developers have to work with it.
In our example the address data in our CDM can look like this:
<Addresses>
   <Address>
      <Street>A-Street</Street>
      <Number>123</Number>
      <ZipCode>90210</ZipCode>
      <City>A-City</City>
      <State>CA</State>
      <Country>United States</Country>
      <Continent>North America</Continent>
   </Address>
</Addresses>
This XML data structure (model) contains all the data items available in our examples. When it comes to flexibility, it is wise to use elements only and no attributes in XML. Usage of elements only makes the model more flexible and ready for future changes. Do not use ‘mixed content’ elements, meaning elements with data as well as child elements. An element is either a container element containing child elements or an element only containing data. Create a ‘plural container’ element for all elements that might (in future) occur more than once. Make the plural element single and obligated (min=1, max=1) and its child elements optional (min=0, max=unbounded). This keeps your model backwards compatible.
It is wise to have standards for the CDM and one person (or a group in a large project) who is responsible for maintaining the CDM model. In the XSD you can see that in this CDM example all the data elements are optional. You could argue there should at least be a street or a city. But what if there is a system that deals with addresses being created, so between the screens there is only half the data of an address present? Or a system that uses only a part or maybe even one data item of an address?
First benefit of using a CDM: Less translations
Now why would you introduce another extra data model, when you already have to deal with existing data models? Can’t we just choose one of them and use it as the central ‘canonical’ data model? Or can’t we just translate data of the existing data models when they connect to each other?
I will start with the last question. When there are only two systems that are connected to each other and there are no future plans to connect them with other systems, that is a good option; it is overkill to introduce a CDM. But when there are three systems that connect to each other, you already benefit from a CDM. Three systems have a maximum of 6 translations: A-B, B-C and C-A (and vice versa). When using an interconnecting CDM, you also have a maximum of 6 translations: A-CDM, B-CDM and C-CDM (and vice versa).
When there are more than three connecting systems, the difference in the number of translations between using a CDM or not increases fast in favor of using a CDM:
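Assuming every system exchanges data with every other system, point-to-point integration needs up to n × (n - 1) translations (one per direction per connection), while a CDM needs only 2 × n:

3 systems: 6 translations without a CDM, 6 with a CDM
4 systems: 12 without, 8 with
5 systems: 20 without, 10 with
6 systems: 30 without, 12 with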
Even when not all the systems are connected, the use of a CDM quickly results in fewer translations.
To give a graphical example of six connecting systems, but not all connecting with each other (it is even quite limited):
Connections without a CDM
Connections with a CDM
In this example, you need 16 translations when you do not use a CDM. With a CDM, you need only 12.
Second benefit of using a CDM: Translation maintenance
There is a second reason for using a CDM related to translations. What happens when the data model of a connected system changes? For example, when a system is replaced by another system or when a system is updated to a newer version. In the last case, the changes most likely will be minor, but they still have to be checked at every connection point, i.e. in each translation to and from that system.
Let’s use the graphical picture above and assume that system E is replaced by system X.
When no CDM is used, there are four connections, with system A, B, C and D. This means there are 8 translations that have to be changed, two per system: to and from system X. For example when A is calling X, the request is a translation from A to X and the response from X to A. When a CDM is used, only two translations have to be changed: from CDM to X and from X to CDM.
Graphically explained:
Maintenance without a CDM
Maintenance with a CDM
Third benefit of using a CDM: Logic maintenance
Often the integration software that connects the systems also has some logic or orchestration (e.g. with BPEL). For example: when a message from system A arrives and it is an order, then the order has to be routed to the ERP and to the financial system. And if the order is for a registered customer, the order has to be routed to the CRM system as well. This kind of rule means there is some logic: the integration layer asks the CRM system whether the customer of the order is a registered customer and, depending on the answer, the order is routed to the CRM system or not. When this logic uses the data model of the connected systems, there is a dependency between the logic and the connecting system. So when one of the connecting systems changes, you need to check all logic to see if it uses (some part of) the data model of the connecting system. And if so, the logic has to be adjusted or rewritten. When a CDM is used, all logic (assuming this is done right) is written against the data model of the CDM. Thus there is no dependency, and a change of a connecting system does not affect the business logic in the integration layer.
Let’s take the previous pictures as example again and assume there is business logic written in BPEL at three places: business logic related to systems A, D and E, business logic related to systems B and E and business logic related to systems B and F. Now again: What happens when system E is replaced by system X. This means that BPEL1 and BPEL2 have to be adjusted or even rewritten (and tested) whereas with a CDM you do not have to do anything!
Graphically explained:
Logic maintenance without a CDM
Logic maintenance with a CDM
Existing Data model as CDM?
At the start of this blogpost, I raised the question whether an existing data model of a connecting system can be used as the CDM. In theory this is possible. Mostly there will be one large central system, most likely the ERP, that covers all or almost all kinds of data. It may be tempting to use that model as the CDM. But what if, somewhere in the future, the ERP is replaced by a new version? Even minor differences can cause problems. You might be tempted to take the old data model as the CDM and make translations from the new model to the CDM, the old data model. When using XML and the new and the old one have different namespaces, this is even possible. But still, you are bound to some old data model of an outdated system. Mostly that is not what you want. It might even cause problems with licenses, especially in case the system from which the data model is taken as CDM is replaced by a system of another vendor.
Another disadvantage is that it could be confusing for developers of the system, especially future developers who are confronted with multiple data models of which two are quite similar. Mistakes are easily made. And what if a new system is connected and new data elements have to be added to the model? How flexible is it? Can it easily be changed and extended with backwards compatibility? That is why I advise to create your own CDM!
Conclusion
It is quite clear that using a Canonical Data Model in an integration layer or SOA environment soon pays off. You can summarize this into decoupling the external systems (by their data models) from the integration layer or SOA environment, so in fact decouple them from each other!
How do you do this? How do you set up a CDM which is flexible, so it can be changed and extended easily while staying backward compatible? And how do you keep the data model fitting into the interface descriptions of systems (wsdl) without it getting so big that it becomes functionally meaningless? This means it must be able to be tailored, so the interface (wsdl) reflects its functionality.
Another topic is standards and best practices about data, or specifically XML, usage. Which standards are useful and why? When using XML, should you use a predefined XML 'flavor' like "Russian Doll", "Venetian Blind", "Salami Slice" or "Garden of Eden"? How about run time dependencies? Should you use a central run time CDM with versioning, or only a central design time CDM which does not exist at run time but only acts as a copy-paste reference for development? In my next blogpost I will share my experiences with these questions and give valuable advice which prevents the problems we have run into.
10 thoughts on "Benefits of a Canonical Data Model (CDM) in a SOA environment"
To maintain consistency and standards, I recommend having a central person maintain the CDM. He should also act as a 'mediator' between teams when there are conflicting demands.
Another important conclusion is that an entity (Person, Order) is completely different between domains. It has a different meaning and thus different data items. Of course you should never use the same data entity for those different domains. You can make the data items domain specific by naming (Order -> PurchaseOrder, SalesOrder, etc). In large enterprises you can even decide to have a CDM per domain.
He also argues that you will end up with a large CDM with lots of data items (attributes/elements) per entity (complex element), of which 90% are not used for any specific interface. I've also experienced this in a large project, but we were able to solve it!
To solve this (and also the runtime dependency problem), I've described a solution of using an off-line CDM which the developer copies from (and applies a specific namespace to) and tailors by removing all the data items that are not used in the interface he is working on. This is described in my last blog post, which is (in my opinion) the most interesting and important one! See
The offline way of working provides a very flexible CDM which can start quite small and grow as needed without limitation, due to the backward compatibility.
Regards,
Emiel
This blog explains why to avoid a canonical data model: | https://technology.amis.nl/architecture/soa-benefits-of-a-canonical-data-model/ | CC-MAIN-2022-40 | en | refinedweb |
#include <POA.h>
Inheritance diagram for TAO_POA:
Implementation of the PortableServer::POA interface.
[protected]
[virtual]
This method is used to downcast safely an instance of PortableServer::POA to an instance of TAO_POA when RTTI is not enabled.
Implements PortableServer::POA.
This method returns the adapter_name as a sequence of strings of length one or more or just a fixed name depending on the Object Adapter. Added wrt to ORT Spec.
Method to notify the IOR Interceptors when there is a state changed not related to POAManager.
Add the given tagged component to all profiles.
Add the given tagged component to all profiles matching the given ProfileId.
Accessor for cached POA policies.
[static]
This method gives the policies that are exposed to the client. These policies are shipped within the IOR.
Call the IORInterceptor::components_established() method on all registered IORInterceptors.
[static, protected]
Access the list of default POA policies. This list is used as a prototype for creating new POA's. It should
This method calls IORInterceptor::establish_components() method on all registered IORInterceptors, and IORInterceptor::components_established() once the former is completed.
Accessor methods to POA state.
The POA can be in one of HOLDING, ACTIVE, DISCARDING, INACTIVE and NON_EXISTENT states.
Accessor methods to ObjectReferenceTemplate.
Accessor methods to ObjectReferenceTemplate, non locked version.
Accessor methods to PortableInterceptor::ObjectReferenceFactory.
ImplRepo helper method, notify the ImplRepo on shutdown.
ImplRepo helper method, notify the ImplRepo on startup.
Wrapper for the ORB's key_to_object that will alter the object pointer if the ImplRepo is used.
Create the correct stub, properly initialized with the attributes and policies attached to the current POA.
[protected, virtual]
Like key_to_stub() but assume that the ORB is not shutting down.
Template method for creating new POA's of this type.
Utility functions for the other.
ORB Core for POA.
Reimplemented from CORBA::Object.
Get the ORT adapter; in case there is no adapter yet, this method will try to create one and hold the POA lock.
Get the ORT adapter; in case there is no adapter yet, this method will try to create one but assumes the POA lock is already held.
Calls protected static method used when POACurrent is not appropriate.
Accessor for POA policies.
Store the given TaggedComponent for eventual insertion into all object reference profiles.
Store the given TaggedComponent for eventual insertion into all object reference profiles with the given ProfileId.
Set the object reference factory.
Accessor for the current thread policy of this POA.
[friend]
Adapter can be accepting, rejecting etc.
The object adapter we belong to.
Initial value: { 024, 001, 017, 000 }
Pointer to the object reference template adapter.
Implementation Repository Server Object.
Flag for whether the ImR should be used or not. | https://www.dre.vanderbilt.edu/Doxygen/5.4.4/html/tao/portableserver/classTAO__POA.html | CC-MAIN-2022-40 | en | refinedweb |
Borislav Hadzhiev
Last updated: May 1, 2022.
Here is an example of how the error occurs.
def example():
    example()

# ⛔️ RecursionError: maximum recursion depth exceeded
example()
We call the function, which then calls itself until the recursion limit is exceeded.
You can get the current value of the recursion limit by using the sys.getrecursionlimit() method.
import sys

# 👇️ 1000
print(sys.getrecursionlimit())

# 👇️ set recursion limit to 2000
sys.setrecursionlimit(2000)

# 👇️ 2000
print(sys.getrecursionlimit())
The getrecursionlimit method returns the maximum depth of the Python interpreter stack.
You can use the setrecursionlimit method if you need to update this value.
To solve the error from the example, we have to specify a condition at which the function stops calling itself.
counter = 0

def example(num):
    global counter
    if num < 0:
        return  # 👈️ this stops the function from endlessly calling itself
    counter += 1
    example(num - 1)

example(3)
print(counter)  # 👉️ 4
This time, on every invocation, we check whether the function was invoked with a number that is less than 0. If the number is less than 0, we simply return from the function so we don't exceed the maximum depth of the Python interpreter stack.
If the passed-in value is not less than zero, we call the function with the passed-in value minus 1, which keeps us moving toward the case where the if check is satisfied.
You might also get this error if you have an infinite loop that calls a function somewhere.
def do_math(a, b):
    return a + b

while True:
    result = do_math(10, 10)
    print(result)
The while loop keeps calling the function, and since we don't have a condition that would exit the loop, we eventually exceed the interpreter stack.
This works in a very similar way to a function calling itself without a base condition.
Here's an example of how to specify a condition that has to be met to exit the loop.
def do_math(a, b):
    return a + b

total = 0
i = 10

while i > 0:
    total += do_math(5, 5)
    i = i - 1

print(total)  # 👉️ 100
If the i variable is equal to or less than 0, the condition in the while loop is not satisfied, so we exit the loop.
If you can't track exactly where your error occurs, look at the error message.
The screenshot above shows that the error occurred on line 84 in the example() function. You can also see that the error occurred in the main.py file.
This article describes Tekton, an open-source framework for creating CI/CD systems, and explains how to deploy microservices built with Eclipse MicroProfile on Kubernetes and OpenShift. | http://heidloff.net/article/deploying-microprofile-microservices-tekton/ | CC-MAIN-2019-35 | en | refinedweb |
The biggest companies engaged in this project at this point are Google, CloudBees, IBM and Red Hat. Because of its importance, the project has been split out of Knative, which is focused on scale-to-zero capabilities.
Tekton comes with a set of custom resources to define and run pipelines:
- Pipeline: Pipelines can contain several tasks and can be triggered by events or manually
- Task: Tasks can contain multiple steps. Typical tasks are 1. source to image and 2. deploy via kubectl
- PipelineRun: This resource is used to trigger pipelines and to pass parameters like location of Dockerfiles to pipelines
- PipelineResource: This resource is used, for example, to pass links to GitHub repos
MicroProfile Microservice Implementation
I’ve created a simple microservice which is available as open source as part of the cloud-native-starter repo.
The microservice contains the following functionality:
- Image with OpenJ9, OpenJDK, Open Liberty and MicroProfile: Dockerfile
- Maven project: pom.xml
- Open Liberty server configuration: server.xml
- Health endpoint: HealthEndpoint.java
- Kubernetes yaml files: deployment.yaml and service.yaml
- Sample REST GET endpoint: AuthorsApplication.java, GetAuthor.java and Author.java
If you want to use this code for your own microservice, remove the three Java files for the REST GET endpoint and rename the service in the pom.xml file and the yaml files.
Setup of the Tekton Pipeline
I’ve created five yaml files that define the pipeline to deploy the sample authors microservice.
1) The file task-source-to-image.yaml defines how to 1. build the image within the Kubernetes cluster and 2. how to push it to a registry.
For building the image kaniko is used, rather than Docker. For application developers this is almost transparent though. As usual images are defined via Dockerfiles. The only difference I ran into is how access rights are handled. For some reason I couldn’t write the ‘server.xml’ file into the ‘/config’ directory. To fix this, I had to manually assign access rights in the Dockerfile first: ‘RUN chmod 777 /config/’.
The source to image task is the first task in the pipeline and has only one step. The screenshot shows a representation of the task in the Tekton dashboard.
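For orientation, here is a rough sketch of what such a kaniko-based build-and-push task could look like (illustrative only; the parameter names are my assumptions, not copied from the repo):
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: source-to-image
spec:
  inputs:
    resources:
      - name: git-source
        type: git
    params:
      - name: pathToContext
      - name: pathToDockerFile
      - name: imageUrl
      - name: imageTag
  steps:
    # kaniko builds the Dockerfile and pushes the result, all without a Docker daemon
    - name: build-and-push
      image: gcr.io/kaniko-project/executor
      command: ["/kaniko/executor"]
      args:
        - "--dockerfile=${inputs.params.pathToDockerFile}"
        - "--context=/workspace/git-source/${inputs.params.pathToContext}"
        - "--destination=${inputs.params.imageUrl}:${inputs.params.imageTag}"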
2) The file task-deploy-via-kubectl.yaml contains the second task of the pipeline which essentially only runs kubectl commands to deploy the service. Before this can be done, the template yaml file is changed to contain the full image name for the current user and environment.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: deploy-via-kubectl
spec:
  inputs:
    resources:
      - name: git-source
        type: git
    params:
      - name: pathToDeploymentYamlFile
        description: The path to the yaml file with Deployment resource to deploy within the git source
      ...
  steps:
    - name: update-yaml
      image: alpine
      command: ["sed"]
      args:
        - "-i"
        - "-e"
        - "s;authors:1;${inputs.params.imageUrl}:${inputs.params.imageTag};g"
        - "/workspace/git-source/${inputs.params.pathToContext}/${inputs.params.pathToDeploymentYamlFile}"
    - name: run-kubectl-deployment
      image: lachlanevenson/k8s-kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "/workspace/git-source/${inputs.params.pathToContext}/${inputs.params.pathToDeploymentYamlFile}"
3) The file pipeline.yaml basically only defines the order of the two tasks as well as how to pass parameters between the different tasks.
The screenshot shows the pipeline after it has been run. The output of the third and last steps of the second task ‘deploy to cluster’ is displayed.
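To give an idea of the structure of that pipeline.yaml, here is an illustrative sketch (not copied from the repo; parameter wiring is omitted for brevity):
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: pipeline
spec:
  resources:
    - name: git-source
      type: git
  tasks:
    - name: source-to-image
      taskRef:
        name: source-to-image
      resources:
        inputs:
          - name: git-source
            resource: git-source
    # runAfter enforces the ordering of the two tasks
    - name: deploy-to-cluster
      taskRef:
        name: deploy-via-kubectl
      runAfter:
        - source-to-image
      resources:
        inputs:
          - name: git-source
            resource: git-source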
4) The file resource-git-cloud-native-starter.yaml only contains the address of the GitHub repo.
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: resource-git-cloud-native-starter
spec:
  type: git
  params:
    - name: revision
      value: master
    - name: url
      value:
5) The file pipeline-account.yaml is necessary to define access rights from Tekton to the container registry.
Here are the complete steps to set up the pipeline on the IBM Cloud Kubernetes service. Except for the login steps, the same instructions should work for Kubernetes services on other clouds and for the Kubernetes distribution OpenShift as well.
First get an IBM lite account. It’s free and there is no time restriction. In order to use the Kubernetes service you need to enter your credit card information, but there is a free Kubernetes cluster. After this create a new Kubernetes cluster.
To create the pipeline, invoke these commands:
$ git clone
$ cd cloud-native-starter
$ ROOT_FOLDER=$(pwd)
$ REGISTRY_NAMESPACE=<your-namespace>
$ CLUSTER_NAME=<your-cluster-name>
$ cd ${ROOT_FOLDER}/authors-java-jee
$ ibmcloud login -a cloud.ibm.com -r us-south -g default
$ ibmcloud ks cluster-config --cluster $CLUSTER_NAME
$ export <output-from-previous-command>
$ REGISTRY=$(ibmcloud cr info | awk '/Container Registry / {print $3}')
$ ibmcloud cr namespace-add $REGISTRY_NAMESPACE
$ kubectl apply -f deployment/tekton/resource-git-cloud-native-starter.yaml
$ kubectl apply -f deployment/tekton/task-source-to-image.yaml
$ kubectl apply -f deployment/tekton/task-deploy-via-kubectl.yaml
$ kubectl apply -f deployment/tekton/pipeline.yaml
$ kubectl apply -f deployment/tekton/pipeline-account.yaml
Execute the Tekton Pipeline
In order to invoke the pipeline, a sixth yaml file pipeline-run-template.yaml is used. As stated above, this file needs to be modified first to contain the exact image name.
The pipeline-run resource is used to define input parameters like the Git repository, location of the Dockerfile, name of the image, etc.
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: pipeline-run-cns-authors-
spec:
  pipelineRef:
    name: pipeline
  resources:
    - name: git-source
      resourceRef:
        name: resource-git-cloud-native-starter
  params:
    - name: pathToContext
      value: "authors-java-jee"
    - name: pathToDeploymentYamlFile
      value: "deployment/deployment.yaml"
    - name: pathToServiceYamlFile
      value: "deployment/service.yaml"
    - name: imageUrl
      value: <ip:port>/<namespace>/authors
    - name: imageTag
      value: "1"
    - name: pathToDockerFile
      value: "DockerfileTekton"
  trigger:
    type: manual
  serviceAccount: pipeline-account
Invoke the following commands to trigger the pipeline and to test the authors service:
$ cd ${ROOT_FOLDER}/authors-java-jee/deployment/tekton
$ REGISTRY=$(ibmcloud cr info | awk '/Container Registry / {print $3}')
$ sed "s+<namespace>+$REGISTRY_NAMESPACE+g" pipeline-run-template.yaml > pipeline-run-template.yaml.1
$ sed "s+<ip:port>+$REGISTRY+g" pipeline-run-template.yaml.1 > pipeline-run-template.yaml.2
$ sed "s+<tag>+1+g" pipeline-run-template.yaml.2 > pipeline-run.yaml
$ cd ${ROOT_FOLDER}/authors-java-jee
$ kubectl create -f deployment/tekton/pipeline-run.yaml
$ kubectl describe pipelinerun pipeline-run-cns-authors-<output-from-previous-command>
$ clusterip=$(ibmcloud ks workers --cluster $CLUSTER_NAME | awk '/Ready/ {print $2;exit;}')
$ nodeport=$(kubectl get svc authors --output 'jsonpath={.spec.ports[*].nodePort}')
$ open http://${clusterip}:${nodeport}/openapi/ui/
$ curl -X GET "http://${clusterip}:${nodeport}/api/v1/getauthor?name=Niklas%20Heidloff" -H "accept: application/json"
After running the pipeline you’ll see two Tekton pods and one authors pod in the Kubernetes dashboard.
Try out this sample yourself! | http://heidloff.net/article/deploying-microprofile-microservices-tekton/ | CC-MAIN-2019-35 | en | refinedweb |
Explore Vala!
CHALLENGE:
Try out Vala by making a simple app of any kind.
INTRODUCE THE LANGUAGE/TECHNOLOGY: Vala is a relatively new programming language (currently 0.7.9) that is designed as a wrapper for C and GLib/other GNOME technology. It has support for OOP, generic programming, and assisted memory management. The language is very easy to read and write, and even 'noobs' could be writing programs in no time at all. Vala is also unique in that it borrows much of its syntax from both C#/Java as well as C++. This gives the language the ability to make developing easier/faster, without giving up any power.
IDEAS:
- Port pre-existing code
- Write an algorithm (see the sketch right after this list)
- Test out its many library bindings such as Gtk+ and OpenGL
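For instance, a tiny algorithm in Vala could look like this (my own sketch, not part of the original post):
// Iterative Fibonacci, showing Vala's C-like structure and C#-style var inference.
int fib (int n) {
    int a = 0;
    int b = 1;
    for (int i = 0; i < n; i++) {
        int tmp = a + b;
        a = b;
        b = tmp;
    }
    return a;
}

int main (string[] args) {
    for (var i = 0; i <= 10; i++) {
        stdout.printf ("fib(%d) = %d\n", i, fib (i));
    }
    return 0;
}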
RESOURCES:
The mother lode of information is on the official Vala website:
Tutorials for both noobs and experienced:
Link to compiler/good IDE downloads:
HOW TO GET STARTED:
I'm currently working on a tutorial for the Vala language that I plan to submit here on D.I.C. later on. Here is the first lesson (greatly shortened). It explains setting up Vala, and compiling a Hello World program.
Setting Up The Development Environment
Setting Up The Vala Compiler
If you already have valac installed, you can skip this section
The first step to setting up valac is to download it. Here is the direct link for the most current version (at the time of this writing) for Windows:
If you are using a different OS, or would like to see what else is available, you can look here (there might even be a newer one).
*Five Minutes Later....*
Ok, now that you got it on your hard drive, we're ready to install. Go ahead and run the .EXE. The process for Windows 7 (and most other versions of Windows I assume) is as such:
- Select Language
- Next to start
- The license, accept it
- It will ask you what you want to install. Select what you want and continue.
- Choose the destination folder and click install
- You're done
To make sure it really worked, open a terminal and type 'valac'. As long as it doesn't say that the command wasn't found, you're good to go.
Compiling Vala
Now my conscience won't let me go any farther without a hello world example (don't ask why).
using GLib; // not required

public class HelloObject : GLib.Object {

    public static int main(string[] args) {
        stdout.printf("Hello World!");
        return 0;
    }
}
OK, valac compiles '.vala' files, so you'll need to paste this snippet and then save it as nameOfFile.vala. Compiling it is pretty straightforward. Open a terminal and run the following commands:
$ cd directory_where_vala_file_is
$ valac nameOfFile.vala
If there were errors, you'll need to fix them and recompile. Pretty easy huh? (I know I'm cool.)
$ nameOfFile.exe | https://www.dreamincode.net/forums/topic/185751-week-%2328-vala/ | CC-MAIN-2019-35 | en | refinedweb |
How To: Use the arcpy.CreateUniqueName function with a tool that has the output name and output location parameters separated
Summary
The arcpy.CreateUniqueName data function creates a unique output name by adding integers to the output name. This helps avoid errors or unintentionally overwriting existing datasets. The return format of the data function includes the full path name, and some geoprocessing tools require that an output location and an output name are specified in separate parameters. The Feature Class to Feature Class geoprocessing tool is one example of this. In these instances, the unique name must be used without the full path name, and this can be done with Python.
Procedure
The following code example shows how to eliminate the full path name from the return value and pass it into a tool such as the Feature Class to Feature Class tool.
Code:
#import arcpy and the OS module
import arcpy, os
from arcpy import env
#set workspace parameter
ws = env.workspace = r"C:\Temp\scratch.gdb"
#set input/output variables
inFeatures = "Customers"
outOriginal = "CustomerLocations"
#create unique name
unique_name = arcpy.CreateUniqueName(outOriginal)
#grab the basename of the full path returned
outName = os.path.basename(unique_name)
#pass outName into Feature Class to Feature Class
arcpy.FeatureClassToFeatureClass_conversion(inFeatures, ws, outName)
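To make the behavior concrete, here is a hypothetical interactive session (the exact numeric suffix depends on which datasets already exist in the workspace):
>>> unique_name = arcpy.CreateUniqueName("CustomerLocations")
>>> unique_name
'C:\\Temp\\scratch.gdb\\CustomerLocations0'
>>> os.path.basename(unique_name)
'CustomerLocations0'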
Related Information | https://support.esri.com/en/technical-article/000011813 | CC-MAIN-2019-35 | en | refinedweb |
An interactive piano keyboard for React
react-piano
An interactive piano keyboard component.
Supports custom sounds, touch/click/keyboard events, and fully configurable styling.
Try it out on CodeSandbox.
Installing
yarn add react-piano
Alternatively, you can download the UMD build from unpkg.
Usage
You can view or fork the CodeSandbox demo to get a live version of the component in action.
Import the component and styles:
import { Piano, KeyboardShortcuts, MidiNumbers } from 'react-piano';
import 'react-piano/dist/styles.css';
Importing CSS requires a CSS loader (if you're using create-react-app, this is already set up for you). If you don't have a CSS loader, you can alternatively copy the CSS file into your project from src/styles.css.
Then to use the component:
function App() {
  const firstNote = MidiNumbers.fromNote('c3');
  const lastNote = MidiNumbers.fromNote('f5');
  const keyboardShortcuts = KeyboardShortcuts.create({
    firstNote: firstNote,
    lastNote: lastNote,
    keyboardConfig: KeyboardShortcuts.HOME_ROW,
  });

  return (
    <Piano
      noteRange={{ first: firstNote, last: lastNote }}
      playNote={(midiNumber) => {
        // Play a given note - see notes below
      }}
      stopNote={(midiNumber) => {
        // Stop playing a given note - see notes below
      }}
      width={1000}
      keyboardShortcuts={keyboardShortcuts}
    />
  );
}
Implementing audio playback
react-piano does not implement audio playback of each note, so you have to implement it with the playNote and stopNote props. This gives you the ability to use any sounds you'd like with the rendered piano. The react-piano demo page uses @danigb's excellent soundfont-player to play realistic-sounding soundfont samples. Take a look at the CodeSandbox demo to see how you can implement that yourself.
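As a rough illustration (my sketch, not from the react-piano docs), a dependency-free implementation of these two props could use the Web Audio API:
// Bare-bones playNote/stopNote pair built on the Web Audio API, so the
// piano is audible without any soundfont dependency.
const audioContext = new (window.AudioContext || window.webkitAudioContext)();
const oscillators = {};

// Convert a MIDI note number to a frequency in Hz (A4 = 69 = 440 Hz).
function midiToFrequency(midiNumber) {
  return 440 * Math.pow(2, (midiNumber - 69) / 12);
}

function playNote(midiNumber) {
  const osc = audioContext.createOscillator();
  osc.frequency.value = midiToFrequency(midiNumber);
  osc.connect(audioContext.destination);
  osc.start();
  oscillators[midiNumber] = osc;
}

function stopNote(midiNumber) {
  const osc = oscillators[midiNumber];
  if (osc) {
    osc.stop();
    delete oscillators[midiNumber];
  }
}

These two functions can then be passed directly as the playNote and stopNote props in the example above.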
Props
Recording/saving notes
You can "record" notes that are played on a
<Piano> by using
onPlayNoteInput or
onStopNoteInput, and you can then play back the recording by using
activeNotes. See this CodeSandbox which demonstrates how to set that up.
Customizing styles
You can customize many aspects of the piano using CSS. In JavaScript, you can override the base styles by creating your own set of overrides:
import 'react-piano/dist/styles.css';
import './customPianoStyles.css'; // import a set of overrides
In the CSS file you can do things like:
.ReactPiano__Key--active {
  background: #f00; /* Change the default active key color to bright red */
}

.ReactPiano__Key--accidental {
  background: #000; /* Change accidental keys to be completely black */
}
See styles.css for more detail on what styles can be customized.
Browser compatibility
To support IE, you'll need to provide an Array.find polyfill.
Visual Studio Toolbox
Premature optimization may be the root of all evil, but these tools will make sure your code is clear, clean and secure.
Testing your application before shipping is an important part of the development process, and an important part of testing is making sure your application executes efficiently and responsively. That's where code analysis and profiling tools and techniques come into play.
Prior to Visual Studio 2012, most of these kinds of code analysis and testing tasks needed third-party tools and manual build/test/analyze/repeat tasks for the developer. Integrated performance tooling debuted in the Performance and Diagnostics Hub in Visual Studio 2013, which was further refined and advanced in the Performance and Diagnostic page and Diagnostic Tools debugger window in Visual Studio 2015.
With Visual Studio 2017, these tools are so integrated into the IDE that they no longer have a fancy name, but they continue to be extended and enhanced nonetheless. You'll find excellent documentation and tutorials on Microsoft Docs starting with "Getting Started with Performance Tools" and "Beginner's Guide to Performance Profiling in Visual Studio." You'll find information about runtime data collection and profiling not just for traditional .NET Framework applications but also JavaScript, ASP.NET and Web sites, High-Performance Computing (HPC) clusters, load testing and even energy use.
Another tool I'm going to include here is Vance Morrison's PerfView CPU and memory performance-analysis tool. Morrison is a senior architect at Microsoft and wrote PerfView for internal performance analysis and tuning by the teams building the .NET Framework and Visual Studio. It's now an open source tool still under active development. For further details see Brandon Bray's blog post, "Improving Your App’s Performance with PerfView."
Building extensions for Visual Studio? Take a look at "How to: Diagnose Extension Performance," which specifically employs PerfView to measure the impact of extensions when they load. And for a great example of how PerfView continues to be applicable to new types of development projects, see "Using PerfView with Azure Service Fabric Event Source Messages" on the Microsoft Premier Developer blog.
More Profiling and Debugging Tools
Beyond the tools available from Microsoft are third-party tools designed to meet your development fine-tuning needs.
JetBrains dotTrace Profiler helps you trace execution time, garbage collection, workload distribution, I/O performance and more, supporting .NET Framework and .NET Core applications including Windows Presentation Foundation (WPF) and Universal Windows Platform (UWP), ASP.NET, Windows and Windows Communication Foundation (WCF) services, and Silverlight. You can profile SQL queries and HTTP requests. You can even profile unit tests.
You can license dotTrace on its own, and there's a 10-day trial version available for you to run through its paces. dotTrace is also available as part of the comprehensive ReSharper Ultimate package, which adds ReSharper code analysis, style, and refactoring tools for .NET Framework and C++, dotCover unit testing and code coverage analysis, dotMemory memory profiling, and a dotPeek decompiler and assembly browser.
Redgate ANTS Performance Profiler is another popular tool for .NET Framework-based desktop, ASP.NET and ASP.NET MVC projects. ANTS provides the same code timing analysis as other tools, but also digs deeper into database request performance with support for enhanced data access profiling, with support for SQL Server, Oracle, MySQL, MariaDB, and PostgreSQL. Redgate's ANTS Memory Profiler finds memory leaks and helps you optimize the memory usage in C# and Visual Basic code. Both products are included in the .NET Developer Bundle, which also includes .NET Reflector.
DevExpress CodeRush is another analysis and refactoring tool for C#, Visual Basic and XAML code bases. The CodeRush analysis tools not only work with your core solutions, but also have built-in unit test integration supporting NUnit, xUnit, MSpec and MSTest frameworks, as well as CoreCLR test cases in the DNX environment. Licensing is available on a yearly subscription basis (monthly updates seem to make this worthwhile), and a 30-day trial is available for download from the DevExpress Web site.
Tools from Microsoft DevLabs
Microsoft DevLabs often releases extensions for preview tooling ideas being considered for future Visual Studio releases. For example, the Microsoft Code Analysis 2017 extension gives you built-in access to more than 100 of the most popular FxCop rules as live analyzers. The analyzers watch your C# or Visual Basic code as you type and provide tips on performance, security and best practices, as well as access to a dictionary of code quick fixes.
Microsoft DevSkim is a more comprehensive and flexible framework of plug-ins and code analyzers focused on inline security analysis of code as you type. Potential security issues are highlighted in your code with links to more information and, when available, one-click access to safe alternative code. DevSkim is designed to work with multiple development environments including Visual Studio 2015 and 2017, Visual Studio Code, Sublime Text, and others.
Analyzers from the Visual Studio Community
The open source .NET Compiler Platform "Roslyn" has opened up the possibilities for interactive code analysis to anyone who can write an extension, so let's take a look at some of the most recent free analyzers being released for Visual Studio 2017.
Austin Luo's C# Code Analyzer is a simple analyzer meant to check your string formatting for standard strings, console output, debug output and StringBuilder. This kind of analysis is really handy because we often overlook the details of strings used for edge cases and debugging, where errors make troubleshooting the rest of the application so much more difficult.
CommentCop, by Michael Reukauff, is a similar, simple analyzer for checking XML comments and region/endregion pragmas. The rules, based on StyleCop, are completely configurable and customizable.
Josef Pihrt has created a suite of useful Roslyn-based analyzer extensions starting with Roslynator for Visual Studio 2015 and Roslynator 2017 for Visual Studio 2017, which contain collections of more than 190 analyzers and 180 refactorings for C#. You can create rulesets for enabling or disabling individual analyzers or setting them as messages, warnings, or errors. Code refactorings are also completely configurable and customizable. Additional extensions by Pihrt provide just the analyzer or refactoring tools.
Hot Commands for Visual Studio, by Justin Clareburt, gives you a small collection of useful refactoring and coding tools such as toggling comments, duplicating selected code, joining lines, formatting code, moving between members, extracting class or namespace names and more.
AsyncFixer analyzes your C# Async/Await constructions for common errors and anti-patterns and suggests revisions that avoid common asynchronous code errors and increase application performance. AsyncFixer focuses on the five most common Async/Await problems seen across hundreds of projects reviewed by the tool’s developers. These include unnecessary Async/Await methods, long-running operations, fire-and-forget void methods, fire-and-forget calls in a using block, and implicit downcasting. The analyzer is part of your project so it can be available to all project team members and can run at build time on a continuous integration (CI) server. For more details on the research that identified these anti-patterns, see "LearnAsync.NET - A Study and Toolkit for Asynchronous Programming in C#." Fascinating stuff!
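To illustrate two of those anti-patterns, here is my own sketch (not taken from the article; the file APIs assume .NET Core):

// Illustrative only: two anti-patterns AsyncFixer flags, with typical fixes.
using System;
using System.IO;
using System.Threading.Tasks;

class AsyncPatterns
{
    // 1) Unnecessary async/await: the extra state machine buys nothing here.
    static async Task<string> ReadFileBad(string path) =>
        await File.ReadAllTextAsync(path);

    // Fix: return the task directly.
    static Task<string> ReadFileGood(string path) =>
        File.ReadAllTextAsync(path);

    // 2) Fire-and-forget async void: the caller cannot await completion,
    // and an exception here can crash the process.
    static async void SaveBad(string path) =>
        await File.WriteAllTextAsync(path, "data");

    // Fix: return Task so callers can await and observe failures.
    static async Task SaveGood(string path) =>
        await File.WriteAllTextAsync(path, "data");

    static async Task Main()
    {
        await SaveGood("demo.txt");
        Console.WriteLine(await ReadFileGood("demo.txt"));
    }
}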
Code Cracker for C# and Code Cracker for Visual Basic are available as either project-based analyzers that you install as NuGet packages or as Visual Studio extensions. These are community-developed toolsets coordinated by Giovanni Bassi that include, so far, more than 40 Roslyn-based code analyzers for C# and Visual Basic along with a small library of refactorings.
JavaScript isn't forgotten here; Rich Newman's TypeScript Analyzer is a static code analyzer for TypeScript using TSLint and is based on Mads Kristensen's Web Analyzer for Visual Studio 2015 extension. It's been updated for Visual Studio 2017 and supports the latest TSLint. You can also attempt to fix errors in your code using the TSLint fixers. In addition, the tests are flexible and can be run on a build, run on the contents of a tsconfig.json file rather than the project file, and more.
More Analysis and Refactoring Extensions
SharpDevelop's Refactoring Essentials for Visual Studio 2017 is an updated version of its popular open source C# and Visual Basic refactoring library and analyzers. See the Refactoring Essentials for Visual Studio Web site for a full list of features.
U2U Consult Performance Analyzers for C# 7 provides code analysis specifically for the latest version of C#, with 40 specific analyzers so far. This is one of the few analyzer packages that isn't open source or a public project, but also isn’t a commercial package. It does, however, represent the expertise of the U2U Consult team's consulting and development services. If you like what you see here, maybe give them a call.
SonarLint for Visual Studio, from SonarSource, is a free, open source code analyzer and linter extension for C#, Visual Basic, JavaScript and C/C++ code. Straightforward enough. Things get more interesting, however, when you hook SonarLint up to a SonarQube, which gives SonarLint access to shared rulesets, continuous inspection and integration tests, and more.
Speaking of integration, and looping back to the string analysis tools mentioned earlier, Serilog Analyzer provides handy Roslyn-based analysis of error and logging strings to ensure your output matches the expected syntax for the Serilog .NET logging library.
I have a problem with super and sub classes.
Suppose we have the following code:
public class Animal { }

public class Cat extends Animal { }

public class Dog extends Animal { }

public class AnimalHotel {
    private ArrayList<Animal> animals;
}
If we accept that an AnimalHotel can have only one species but we don't know which one until we create one, how can we define its constructor properly?
public class AnimalHotel {
    private ArrayList<Animal> animals;

    public AnimalHotel(String selectedAnimal) {
        if (selectedAnimal.equals("cat"))
            this.animals = new ArrayList<Cat>();
        else
            this.animals = new ArrayList<Dog>();
    }
}
Or should I change the program structure by creating more classes? | https://www.dreamincode.net/forums/topic/416739-inheritence-problem/ | CC-MAIN-2019-35 | en | refinedweb |
In my previous article, I gave an example of how to build your own image with InterSystems Caché. Now it's time to launch a project in a container.
To launch a Caché project in a container, we will use the image from the previous article as the source for the new one.
But before it can be used, it must be published publicly or privately in a repository. In our company, we use GitLab as the storage for the sources of all of our projects, and it can be used as a registry for Docker images as well. We must have authorization to push our image to any registry.
To authorize on a default registry, use this command.
docker login
Or specify the address of the registry that will be used to store the image.
docker login registry.gitlab.com
Authorization is also needed to fetch images from the registry on another server (e.g. if they are not public).
Build the image from the previous article with a new name. The name can include the registry server name (if it is not hub.docker.com, which is used by default), followed by the name of the user or organization, and the name of the particular image with a version.
docker build -t daimor/ensemble:2016.2 .
As you may already know, a new Ensemble version, 2017.1, has been released since the previous article was published. Here is the command to build a container with version 2017.1.
docker build -t daimor/ensemble:2017.1 --build-arg WRC_USERNAME=****** --build-arg WRC_PASSWORD=****** --build-arg cache=ensemble-2017.1.0.792.0 -f Dockerfile.WRC .
After the build, push our image to the registry.
docker push daimor/ensemble:2016.2
docker push daimor/ensemble:2017.1
Version 2017.1 will be tagged as latest as well.
docker tag daimor/ensemble:2017.1 daimor/ensemble:latest
docker push daimor/ensemble:latest
Installing an application
Now it's time to install an application that will live in our Caché container. Let's start with a fairly simple application to install - Caché WebTerminal.
# Use our image with Ensemble as a source; we can pin a particular version,
# or just latest, which is also used if the version is omitted.
FROM daimor/intersystems-ensemble:latest
# FROM daimor/intersystems-ensemble:2016.2
# FROM daimor/intersystems-ensemble:2017.1
# FROM daimor/intersystems-ensemble is equal to FROM daimor/intersystems-ensemble:latest

# Version of WebTerminal to install
ARG TerminalVersion=4.2.14

# Create temporary folder
RUN mkdir /tmp/webterminal \
# Download installation xml from github
 && curl -o /tmp/webterminal/webterminal.xml \
# Start Caché instance
 && ccontrol start $ISC_PACKAGE_INSTANCENAME \
# Generate login and password for csession if needed, and load the downloaded xml with compilation;
# WebTerminal will be installed during the compilation process
 && printf "_SYSTEM\n$ISC_PACKAGE_USER_PASSWORD\n" \
  | csession $ISC_PACKAGE_INSTANCENAME -UUSER "##class(%SYSTEM.OBJ).Load(\"/tmp/webterminal/webterminal.xml\",\"cdk\")" \
# Stop Caché instance
 && ccontrol stop $ISC_PACKAGE_INSTANCENAME quietly \
# Clean temporary folder
 && rm -rf /tmp/webterminal/
Build and launch.
docker build -t terminal .
docker run -d -p 57772:57772 --name webterminal terminal
And finally, we are ready to open our application at the link 'localhost:57772/terminal/'.
It is also possible to get access to csession inside of our container.
> docker exec -it webterminal csession ensemble

Node: 34272368af61, Instance: ENSEMBLE

Username: _system
Password: *********
USER>
USER>w $zv
Cache for UNIX (Red Hat Enterprise Linux for x86-64) 2017.1 (Build 792U) Mon Mar 20 2017 19:19:52 EDT
USER>
Advanced installation
Thanks to @Nikita Savchenko who developed WebTerminal and made it so easy to install. Let's install another app which calls for a few more steps to be taken.
Next I will install DeepSeeWeb, which depends on another project, MDX2JSON, which obviously has to be installed before DeepSeeWeb.
First we should prepare an Installer manifest. The installer, by the way, is quite simple: prepare a namespace, load the sources for both projects, and launch their installers.
Class DSWMDX2JSON.Installer
{

XData setup [ XMLNamespace = INSTALLER ]
{
<Manifest>
    <Var Name="Namespace" Value="MDX2JSON"/>
    <Var Name="Import" Value="0"/>
    <If Condition='(##class(Config.Namespaces).Exists("${Namespace}")=0)'>
        <Log Text="Creating namespace ${Namespace}" Level="0"/>
        <Namespace Name="${Namespace}" Create="yes" Code="${Namespace}" Ensemble="" Data="${Namespace}">
            <Configuration>
                <Database Name="${Namespace}" Dir="${MGRDIR}/${Namespace}" Create="yes" Resource="%DB_${Namespace}" PublicPermissions="RW" MountAtStartup="true"/>
            </Configuration>
        </Namespace>
        <Log Text="End Creating namespace ${Namespace}" Level="0"/>
    </If>
    <Namespace Name="${Namespace}">
        <Import File="/tmp/deps/Cache-MDX2JSON-master/MDX2JSON/" Flags="ck" Recurse="1"/>
    </Namespace>
    <Namespace Name="${CURRENTNS}">
        <Import File="/tmp/deps/Cache-MDX2JSON-master/MDX2JSON/Installer.cls.xml" Flags="ck"/>
        <Import File="/tmp/deps/deepseeweb.xml" Flags="ck"/>
        <RunInstall Class="MDX2JSON.Installer" Method="setup"/>
        <RunInstall Class="DSW.Installer" Method="setup"/>
    </Namespace>
</Manifest>
}

ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
    do %code.WriteLine($char(9)_"set pVars(""CURRENTCLASS"")="""_%classname_"""")
    do %code.WriteLine($char(9)_"set pVars(""CURRENTNS"")="""_$namespace_"""")
    #; Let our XGL document generate code for this method.
    Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "setup")
}

}
Save this class as Installer.cls. Then we need a Caché ObjectScript script which will call that installer. Save it as install.scr.
// install charset for CSP files as recommended
set ^%SYS("CSP","DefaultFileCharset")="utf-8"
// Load Installer
do $system.OBJ.Load("/tmp/deps/Installer.cls","ck")
// Setup
do ##class(DSWMDX2JSON.Installer).setup(.vars,3)
halt
And finally our Dockerfile
FROM daimor/intersystems-ensemble:latest

# version DeepSeeWeb
ARG DSW_VERSION=2.0.22

COPY cache.key /opt/ensemble/mgr/
COPY install.scr /tmp
COPY Installer.cls /tmp/deps/

# Temporary folder
RUN mkdir -p /tmp/deps \
 && cd /tmp/deps \
# Download MDX2JSON, just master branch from github as archive
 && curl -L -q | tar xvfzC - . \
# Download DeepSeeWeb from releases
 && curl -L -q{DSW_VERSION}/DSW.Installer.${DSW_VERSION}.xml -o deepseeweb.xml \
# Start Caché
 && ccontrol start ensemble \
# add login and password for csession in our installer script
 && sed -i "1s/^/_SYSTEM\n$ISC_PACKAGE_USER_PASSWORD\n/" /tmp/install.scr \
# run install script
 && csession ensemble < /tmp/install.scr \
# Stop Caché
 && ccontrol stop ensemble quietly \
# clean temporary folder
 && rm -rf /tmp/deps

WORKDIR /opt/deepsee
And everything is ready to build and run
docker build -t deepseeweb .
docker run -d -p 57775:57772 deepseeweb
We are now ready to open the installed DeepSeeWeb.
That's it. As a result we have a Docker image which includes WebTerminal and DeepSeeWeb. Sources are available on GitHub.
@Dmitry: Thanks for sharing it. Good work and helpful to the community.
In the Dockerfile to add Web Terminal I had to combine the two RUN commands into one to make it behave properly; the ccontrol start seemed to complete only within its own RUN command.
Sure, I forgot to fix this in the first Dockerfile. It is needed because Docker has a caching mechanism for layers: when each command becomes a different layer, on every subsequent build Docker tries to optimize the process and skips all unchanged layers. So 'ccontrol start' could be skipped, and the commands after it would fail.

I have fixed it in the article now.
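To make the failure mode concrete, here is a sketch of the problematic shape (an assumed illustration, not the original Dockerfile): each RUN is its own layer and its own build container, so the instance started in the first line is either not running any more, or skipped entirely by the layer cache, when the second line executes.

RUN ccontrol start ensemble
RUN csession ensemble < /tmp/install.scr    # fails: Caché is not running in this layer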
As a general rule, any RUN command in a Dockerfile that involves a running Caché instance (like calling $system.OBJ.Load()) should be called in the form of:
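Presumably something like this (a sketch; the instance name ensemble and the script path are carried over from the example above, not prescribed):

RUN ccontrol start ensemble \
 && csession ensemble < /tmp/install.scr \
 && ccontrol stop ensemble quietly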
Other commands can be mixed in, but best practice is to start Caché, use Caché, and stop Caché, in that order, all in a single RUN command. More complex examples could look like this:
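For instance (again a sketch under the same assumptions, with hypothetical script names):

RUN mkdir -p /tmp/deps \
 && cd /tmp/deps \
 # Start Caché
 && ccontrol start ensemble \
 # Use Caché: configure, then install
 && csession ensemble -U %SYS < /tmp/configure.scr \
 && csession ensemble < /tmp/install.scr \
 # Stop Caché
 && ccontrol stop ensemble quietly \
 && rm -rf /tmp/deps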
Hope that helps. | https://community.intersystems.com/post/containerization-cach%C3%A9-lets-add-our-application | CC-MAIN-2019-35 | en | refinedweb |
Kind Polymorphism
Normally type parameters in Scala are partitioned into kinds. First-level types are types of values. Higher-kinded types are type constructors such as List or Map. The kind of a type is indicated by the top type of which it is a subtype. Normal types are subtypes of Any, covariant single argument type constructors such as List are subtypes of [+X] =>> Any, and the Map type constructor is a subtype of [X, +Y] =>> Any.
A type can be used only as prescribed by its kind. Subtypes of Any cannot be applied to type arguments, whereas subtypes of [X] =>> Any must be applied to a type argument, unless they are passed to type parameters of the same kind.
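As an assumed illustration (these lines are not from the original page):

val n: Int = 1              // fine: Int is a subtype of Any, a type of values
type Names = List[String]   // fine: List is applied to a type argument
// val xs: List = ???      // error: List must be applied to a type argument first
def g[F[_]] = ()
val u = g[List]             // fine: List is passed to a type parameter of the same kind
// g[Int]                  // error: Int has the wrong kind for F[_]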
Sometimes we would like to have type parameters that can have more than one kind, for instance to define an implicit
value that works for parameters of any kind. This is now possible through a form of (subtype) kind polymorphism.
Kind polymorphism relies on the special type scala.AnyKind that can be used as an upper bound of a type.
def f[T <: AnyKind] = ...
The actual type arguments of f can then be types of arbitrary kinds. So the following would all be legal:
f[Int]
f[List]
f[Map]
f[[X] =>> String]
We call type parameters and abstract types with an AnyKind upper bound any-kinded types.
Since the actual kind of an any-kinded type is unknown, its usage must be heavily restricted: an any-kinded type can be neither the type of a value, nor can it be instantiated with type parameters. So about the only thing one can do with an any-kinded type is to pass it to another any-kinded type argument.
Nevertheless, this is enough to achieve some interesting generalizations that work across kinds, typically through advanced uses of implicits.
(todo: insert good concise example)
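Pending that, here is one minimal sketch of what such a cross-kind generalization can look like, in current Scala 3 syntax (the KindName type class and its instances are my own illustration, not part of the library):

trait KindName[T <: AnyKind]:
  def name: String

object KindName:
  // T is any-kinded, and is only ever passed to another any-kinded position
  def of[T <: AnyKind](using kn: KindName[T]): String = kn.name

  given KindName[Int] with
    def name = "Int"
  given KindName[List] with
    def name = "List"
  given KindName[Map] with
    def name = "Map"

@main def demo(): Unit =
  println(KindName.of[Int])   // prints: Int
  println(KindName.of[List])  // prints: List
  println(KindName.of[Map])   // prints: Map

The point is that KindName never uses T as the type of a value; it only passes T along to another any-kinded parameter, which is exactly what the restrictions above permit, so one definition covers plain types and type constructors alike.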
Some technical details:

- AnyKind is a synthesized class just like Any, but without any members. It extends no other class.
- It is declared abstract and final, so it can be neither instantiated nor extended.
- AnyKind plays a special role in Scala's subtype system: it is a supertype of all other types no matter what their kind is, and it is also assumed to be kind-compatible with all other types.
- Furthermore, AnyKind is treated as a higher-kinded type (so it cannot be used as a type of values), but at the same time it has no type parameters (so it cannot be instantiated).
Note: This feature is considered experimental but stable, and it can be disabled under a compiler flag (i.e. -Yno-kind-polymorphism). | https://dotty.epfl.ch/docs/reference/other-new-features/kind-polymorphism.html | CC-MAIN-2019-35 | en | refinedweb |
Jeff Prosise presents great tips for Silverlight development, which, while it's gaining wide adoption, still needs more documentation and best practices so developers can make the most of the dazzling new features.
Jeff Prosise
MSDN Magazine Launch 2008
This month Inbar Gazit shows readers how to get the most out of the System.Collections namespace.
Inbar Gazit
MSDN Magazine August 2007
Mark Seemann
MSDN Magazine July 2006
| http://www.dotnetspark.com/links/134-12-asp-net-mvc-best-practices.aspx | CC-MAIN-2017-22 | en | refinedweb |
odoo 8 automatic logout / session expiry after 10 min if browser is idle
Hello,
odoo 8 automatic logout / session expiry after 10 min if the browser is idle.
Thanks in advance...
See (and modify) method session_gc in http.py, like this:
def session_gc(session_store):
    if random.random() < 0.01:  ###!!! 0.001
        # we keep sessions 10 minutes (originally one week)
        last_week = time.time() - 60*10  ###!!! 60*60*24*7
        for fname in os.listdir(session_store.path):
            path = os.path.join(session_store.path, fname)
            try:
                if os.path.getmtime(path) < last_week:
                    os.unlink(path)
            except OSError:
                pass
@zbik,
Thanks for the code, it's very useful. Do you have any suggestions on how this code might be modified so that no core files are modified?
I think this would be incredibly useful as a standalone module, and this would allow it to be upgrade-proof as well.
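One hedged sketch of how that might look as a standalone module (untested; the module layout and the idea of rebinding http.session_gc at import time are my assumptions, not something confirmed in this thread):

# __init__.py of a hypothetical auto_logout module for Odoo 8
import os
import random
import time

from openerp import http

SESSION_LIFETIME = 60 * 10  # seconds; the 10-minute expiry discussed above

def session_gc(session_store):
    # same logic as the core function, with a shorter lifetime
    if random.random() < 0.01:
        cutoff = time.time() - SESSION_LIFETIME
        for fname in os.listdir(session_store.path):
            path = os.path.join(session_store.path, fname)
            try:
                if os.path.getmtime(path) < cutoff:
                    os.unlink(path)
            except OSError:
                pass

# rebind the module-level function so core http.py calls this version,
# leaving the core sources untouched
http.session_gc = session_gc

Because the core http.py calls session_gc through its module-level name, rebinding that name should take effect without editing any core file.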
Hello zbik, can you please explain exactly what modification I need to make to set a 10 minute logout? Thanks.
Hello zbik, thanks for your answer. I tested this code and it works fine, but only once: after I updated the code and restarted the server it worked, but now it's not working. Can you please help me figure out what I am missing? Thanks.
After updating the Odoo code from the repository, the file http.py (and the method session_gc) is replaced and reset to the original.
I replaced the code as you posted in your answer. For testing I set it to 1 minute, so it should log out after 1 minute if the browser is idle. It logged out the first time after 1 minute, but after I logged in again it no longer logs out...
Set the random threshold higher, such as 0.1 or 1.0, and test (with 0.1, on average one request in 10 triggers the cleanup and the log-off); the logout time itself is set to 10 minutes.
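In other words, two values are involved (shown here with the test settings; the originals are in the comments, and the variable names match the code above):

if random.random() < 0.1:            # original: 0.001; how often the cleanup runs
    last_week = time.time() - 60*10  # original: 60*60*24*7; how old a session may get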
Thanks zbik
| https://www.odoo.com/forum/help-1/question/odoo-8-automatic-logout-session-expiry-after-10-min-if-browser-is-ideal-86313 | CC-MAIN-2017-22 | en | refinedweb |
I am working on a problem and am having some difficulty starting it. If anyone has time, it would be greatly appreciated if they had a quick look at my code and at what I need to do. Here is what I'm trying to do:

A simulator program for a robot designed to move packages around in a warehouse environment.
The input to your program (from standard input) will contain a map of the environment in its original state, followed by a blank line, followed by a sequence of instructions to be performed by the robot.
The map specifies the size, shape and initial positions of all the packages in the environment, as well as the positions of the walls, and the initial position and orientation of the robot. The walls will be represented by the "star" character *. The robot will be represented by the characters ^, v, < or >, depending on which direction it is facing. There will be at most 26 "packages" in the environment, labeled with the letters A,B,C, etc. (note that packages may vary in size and shape).
The robot is capable of four basic operations, as follows:
L turn left
R turn right
F (try to) move forward
P print the state of the environment
The instructions for the robot will consist of an ordered sequence of these four basic instructions L,R,F,P enclosed in parentheses.
When it executes an L or R instruction, the robot remains in the same location and only its orientation changes. When it executes the F instruction, the robot attempts to move a single step in whichever direction it is facing.
If there is a package immediately in front of the robot when it tries to move forward, the package will move with the robot (in the same direction). If there are one or more packages immediately in front of that package, they will also move, as well the packages immediately in front of them, and so on. We assume the robot is strong enough to push any number of packages in front of it. Since the walls are immovable, however, there will be some situations where the robot tries to move forward but fails. This will happen if there is a wall immediately in front of the robot, or if there is a wall immediately in front of a package being pushed by the robot (either directly or indirectly). In these cases, the F instruction has no effect on the environment, and the robot continues to the next instruction. (Part of the challenge of the project is to determine which packages are being pushed, and whether or not a wall is being pushed.)
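Since working out which packages move is the crux here, below is one hedged sketch of that check in C. It is my own guess at an approach, not a known solution: the names and the grid encoding (NONE, WALL, package ids 0..25, ROBOT) mirror the skeleton posted further down, and bounds checks plus trail bookkeeping are omitted, relying on the surrounding walls to keep the i+dr / j+dc accesses in range.

Code:

/* Try to move the robot at (r,c) one step in direction (dr,dc), pushing
 * whole packages. A worklist grows the set of pushed packages; the move
 * fails if the robot or any pushed cell would hit a wall. */
int try_forward( int nrows, int ncols[MAX_ROWS],
                 int object[MAX_ROWS][MAX_COLS],
                 int r, int c, int dr, int dc )
{
    int pushed[26] = {0};
    int queue[26], head = 0, tail = 0;
    int ahead = object[r+dr][c+dc];

    if ( ahead == WALL )
        return 0;                         /* wall directly ahead */
    if ( ahead >= 0 && ahead < 26 ) {     /* a package: seed the worklist */
        pushed[ahead] = 1;
        queue[tail++] = ahead;
    }

    while ( head < tail ) {               /* grow the set of pushed packages */
        int p = queue[head++];
        for ( int i = 0; i < nrows; i++ )
            for ( int j = 0; j < ncols[i]; j++ ) {
                if ( object[i][j] != p )
                    continue;
                int next = object[i+dr][j+dc];
                if ( next == WALL )
                    return 0;             /* this package is against a wall */
                if ( next >= 0 && next < 26 && !pushed[next] ) {
                    pushed[next] = 1;     /* it pushes another package */
                    queue[tail++] = next;
                }
            }
    }

    /* commit the move: record pushed cells, clear them, redraw shifted */
    int pr[26*128], pc[26*128], pv[26*128], n = 0;
    for ( int i = 0; i < nrows; i++ )
        for ( int j = 0; j < ncols[i]; j++ ) {
            int v = object[i][j];
            if ( v >= 0 && v < 26 && pushed[v] ) {
                pr[n] = i; pc[n] = j; pv[n] = v; n++;
                object[i][j] = NONE;
            }
        }
    object[r][c] = NONE;                  /* robot vacates its old cell */
    for ( int k = 0; k < n; k++ )
        object[pr[k]+dr][pc[k]+dc] = pv[k];
    object[r+dr][c+dc] = ROBOT;
    return 1;
}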
When a P instruction is executed, the current state of the environment should be printed, followed by a blank line. The robot leaves a trail behind it wherever it goes. So, when the environment is printed, places where there is no package or wall should be indicated by: a dot '.' if the robot has been there at some time during its path, and a blank space ' ' otherwise.
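And a matching hedged sketch of the printing step (again my own guess; it assumes a separate visited array would track the robot's trail, which the skeleton below does not have yet):

Code:

void print_state( int nrows, int ncols[MAX_ROWS],
                  int object[MAX_ROWS][MAX_COLS],
                  int direction )
{
    static const char robot_ch[4] = { '>', '^', '<', 'v' };  /* EAST..SOUTH */

    for ( int i = 0; i < nrows; i++ ) {
        for ( int j = 0; j < ncols[i]; j++ ) {
            int v = object[i][j];
            if ( v == WALL )              putchar( '*' );
            else if ( v == ROBOT )        putchar( robot_ch[direction] );
            else if ( v >= 0 && v < 26 )  putchar( 'A' + v );
            else                          putchar( ' ' );  /* TODO: '.' if visited */
        }
        putchar( '\n' );
    }
    putchar( '\n' );
}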
Attached is an example (Attachment 6806).
The environment might not be rectangular, but you may assume that it is entirely surrounded by walls, so there is no danger of the robot falling off the edge of the environment. Packages may have any shape, but you can assume that all characters belonging to a single package are contiguous (i.e. connected to each other). You may assume the environment is no larger than 80 x 80 (including walls) and that each package consists of no more than 128 characters. (Note: the program can be written to handle environments and packages of arbitrary size, but we do not force you to do so.)
Here is what I have done and where I need help. Thanks a lot for your help.

Code:
#include <stdio.h>

#define ROBOT 26
#define WALL 27
#define NONE -1

#define MAX_ROWS 80
#define MAX_COLS 80

#define EAST 0
#define NORTH 1
#define WEST 2
#define SOUTH 3

void scan_state(
    int *pnrows,
    int ncols[MAX_ROWS],
    int object[MAX_ROWS][MAX_COLS],
    int *pdirection
);

void print_state(
    int nrows,
    int ncols[MAX_ROWS],
    int object[MAX_ROWS][MAX_COLS],
    int direction
);

int main( void )
{
    int nrows;                      // number of rows in environment
    int ncols[MAX_ROWS];            // number of columns in each row
    int object[MAX_ROWS][MAX_COLS]; // object at each location
    int direction;                  // which way the robot is facing

    // read the initial map, then (eventually) process the instructions
    scan_state( &nrows, ncols, object, &direction );
    print_state( nrows, ncols, object, direction );

    return 0;
}

void scan_state(
    int *pnrows,
    int ncols[MAX_ROWS],
    int object[MAX_ROWS][MAX_COLS],
    int *pdirection
)
{
    int ch;
    int r, c;

    r = 0;
    c = -1;
    // read characters up to the '(' that starts the instruction list
    while (( ch = getchar() ) != '(' ) {
        c++;
        if ( ch == '\n' ) {
            ncols[r++] = c;
            c = -1;
        }
        // else if { ...
        // ... Not sure what goes in here ...
        // }
    }
    *pnrows = r;
}

void print_state(
    int nrows,
    int ncols[MAX_ROWS],
    int object[MAX_ROWS][MAX_COLS],
    int direction
)
{
    // need help here too plz
} | https://cboard.cprogramming.com/c-programming/83393-help-moving-robot-maze-printable-thread.html | CC-MAIN-2017-22 | en | refinedweb |