Christianity and his support for President Donald Trump, with producers openly saying they don’t want him and his Bible around. Baldwin’s interview with The Hollywood Reporter on Tuesday served to promote his “Great American Pilgrimage” TV series, where the actor travels across the U.S. talking to Americans about the state of the union, and delved into his support for Trump and his faith. Baldwin said that he’s known as the “Jesus freak of Hollywood,” which both does and doesn’t bother him. “It’s unfortunate that because I have believed in Jesus for 15 years that there are many in Hollywood who are unwilling to work with me. That’s not a guess. Casting people and producers have told me that they’ve brought up my name in a room and the response was, ‘No way, we’re not bringing that guy and his Bible over here,'” the actor said. Later he responded specifically to the question: "The answer is, of course, Jesus. But President Trump is a not-too-far-behind, very close second." The actor, known for hit films such as "Born on the Fourth of July" and "The Usual Suspects," among others, said that there is a surprising number of people in Hollywood who do support Trump, though they can only do so secretly. "I have really smart friends who are succeeding quite well as producers, writers and financiers, and they hold conservative views but they cannot speak their minds — at all," he argued. "There's a large constituency in Hollywood who voted for Trump but will never admit to that." Baldwin, who became a born-again Christian after the 9/11 terror attacks, told The Christian Post back in April that he believes Trump will do great things in office. "The truth is I've said a whole bunch of things about President Trump before he was elected and after. I just have always believed that he was the guy that God had in mind for this time," Baldwin told CP at the time, while promoting his film "Heaven, How I Got Here: A Night With the Thief on the Cross."
Other Christian actors in Hollywood, such as Jim Caviezel, who famously played the role of Jesus Christ in Mel Gibson's epic "The Passion of the Christ," have also said that they've been rejected by the industry. "All of the sudden I stopped being one of five most popular actors in the studio, and I hadn't done anything wrong. I just played Jesus," Caviezel told Polish journalist and film critic Lukasz Adamski in an interview earlier this year.
COVID-19: Excess mortality statistics and their comparability across countries - open-source-ux https://ourworldindata.org/covid-excess-mortality ====== tomohawk Related: [https://ourworldindata.org/excess-mortality-covid](https://ourworldindata.org/excess-mortality-covid) If you jump down to the Sweden section, you can see that Sweden reached zero excess mortality in the last part of June. Excess mortality is a much better measure than just looking at 'covid deaths' (whatever those are). It can show whether what a government counts as covid deaths accounts for all of the unexpected deaths. It can also account for the fact that practically any decision made by the government to fight covid may have unintended consequences (more suicides due to isolation from lockdown). ------ thepangolino This looks like a lot of hand waving to explain away the differences in reported excess mortality resulting from the different government responses to the pandemic (lockdown/light lockdown/no lockdown)
All hosted ENCODE data are available at <https://www.encodeproject.org/>. Vignette data are taken from public NCBI GEO repository GSE85331. > This is a *PLOS Computational Biology* Software paper. Introduction {#sec001} ============ Understanding and contextualizing public data is critical in many research projects. It can focus early hypothesis generation or bolster experimental observations. Advancing technologies and the lowering costs of next generation sequencing (NGS) have led to an exponential increase in available data. Scientists face a volume of public data with a size and complexity that make it difficult to interpret and manage. It is often difficult to find relevant data for even simple queries, such as data originating from a given cell type. Complex queries considering multiple metadata fields, such as ChIP antibody target and cell type, can be even more difficult. With the rapid proliferation of public genomics data, curation is a persistent and increasingly challenging problem. Outside of large consortial datasets, where consistent protocols and standards are often used, there is little assurance of quality and consistency, even for published data. Many tools have been developed to evaluate data quality, either by calculating sequence quality metrics \[[@pcbi.1007571.ref001]\] or by comparisons against validated data \[[@pcbi.1007571.ref002]\]. Productive use of data by independent groups may indicate sufficient data quality and be considered as additional validation of a dataset. However, use of data following release is not systematically documented. Citations are indirect, pointing to associated publications and not datasets themselves, and may not describe whether raw or processed data from the original publication are used. 
To enhance access and utility of public genomics datasets, we present ORSO (Online Resource for Social Omics), a web-based platform for data discovery and evaluation within a social network framework ([Fig 1](#pcbi.1007571.g001){ref-type="fig"}). Using an advanced search engine supporting complex queries over multiple metadata fields, users can find and favorite datasets relevant to their interests. Social interactions, such as favoriting, are used by ORSO to direct a user to new data. First, ORSO presents the number of favorites for each dataset alongside metrics such as gene-by-gene and average coverage values. Data favorited by many users may be amenable to different applications, and these data may be prioritized for interpretation and comparison. Second, social interactions are used in a novel recommendation engine designed to connect users with datasets based on past activity. Inspired by similar recommendation systems used in ecommerce, ORSO presents new datasets to the user based on their interactions with hosted data. Upon favoriting a dataset, similar datasets in the network will be recommended to the user. Dataset similarities are evaluated using primary read coverage values and annotated metadata. Unlike other services \[[@pcbi.1007571.ref003]\], ORSO leverages machine learning applications to minimize curation requirements. In addition to direct recommendations, ORSO provides graph-based views of the data network, with datasets as nodes and similarities as edges. These views allow users to explore similarities predicted by ORSO. ![Overview of the ORSO framework.\ ORSO constructs a data-driven network based on social interactions and identified similarities between datasets. (1) Both validated public data and user-submitted data are hosted by ORSO. (2) Metadata and primary read coverage values are used to construct a data network, where connections represent similarities between datasets. 
(3) Based on data similarities, individual datasets are recommended to a user based on that user's interests. (4) User interests are gauged by social interactions with datasets, such as favoriting and following. These interactions are in turn used to connect datasets in the network and impact the data recommended to other users.](pcbi.1007571.g001){#pcbi.1007571.g001} ORSO is designed to be an evolving resource. ORSO hosts data from major biomedical consortia, including ENCODE \[[@pcbi.1007571.ref004]\], NIH Roadmap \[[@pcbi.1007571.ref005]\], modENCODE \[[@pcbi.1007571.ref006],[@pcbi.1007571.ref007]\], and others \[[@pcbi.1007571.ref008]\]. In total, ORSO provides access to over 30,000 datasets from human, mouse, *D*. *melanogaster*, and *C*. *elegans* (summarized in [S1 Table](#pcbi.1007571.s002){ref-type="supplementary-material"}). Users can add additional datasets to ORSO. Any dataset added to ORSO is compared in a pair-wise fashion to all other datasets, incorporating it into the ORSO data network, and similar datasets are recommended to the user. Processing and pair-wise analysis have been optimized for rapid integration into ORSO and take only a few minutes. Users may optionally elect to make their data public, allowing their data to be discovered by other users through search functions or the recommendation engine. As social interactions and other data accumulate through community use, social network-based recommendations will become more meaningful, leveraging applications traditionally used for business analytics \[[@pcbi.1007571.ref009]\]. Research is an inherently social enterprise, with dissemination of results a critical step in any research project. To date, results have largely been disseminated through publication in scientific journals. The acceleration of data generation and technological advances, such as cloud-based computing, are pushing research toward direct dissemination of datasets.
Recent changes in publication models acknowledge that data are a key product of research and are in many ways as important as the analysis applied to those data. ORSO anticipates this trend, using a data-centric interconnected network and user-friendly format to empower dataset discovery and contextualization. Design and implementation {#sec002} ========================= Overview of the ORSO web application {#sec003} ------------------------------------ ORSO provides a web-based interface to access hosted data and analytics. These views are organized in a tab-based layout ([S1 Fig](#pcbi.1007571.s001){ref-type="supplementary-material"}). The "Experiments" and "Users" tabs provide different views to access hosted data and public user profiles. Both tabs provide search functions that allow for complex multi-field queries. The "Explore" tab gives views that allow exploration of all datasets and predicted similarities. These views include (1) a network view with all datasets shown as nodes in a graph with similarities drawn as connections, (2) a PCA view with dataset read coverage values transformed by principal component analysis, and (3) a dendrogram based on hierarchical clustering of dataset similarities. ORSO is implemented using Python and JavaScript for the backend and frontend, respectively. The web application uses the Django framework with data passed to the client via a REST application programming interface (API) \[[@pcbi.1007571.ref010]\]. Analytics are performed using NumPy and SciPy \[[@pcbi.1007571.ref011]\], with machine learning applications performed using scikit-learn \[[@pcbi.1007571.ref012]\] and Keras \[[@pcbi.1007571.ref013]\]. Data visualization and graph-based views are implemented using Plot.ly \[[@pcbi.1007571.ref014]\] and Sigma \[[@pcbi.1007571.ref015]\], respectively. Visualizations made using Plot.ly \[[@pcbi.1007571.ref014]\] allow integrated image capture and sharing.
All plots use a consistent color scheme for cell types and protein targets. The color scheme can be found at <https://github.com/niehs/orso/tree/master/data/colors>. Adding data to the ORSO framework {#sec004} --------------------------------- ORSO is designed to accommodate NGS data from multiple technologies, including RNA-seq, ChIP-seq, ATAC-seq \[[@pcbi.1007571.ref016]\], and others. Users can add new datasets to ORSO using a browser-based form. This form includes fields for required metadata, such as associated assembly and cell type. Primary data in the form of read coverage are also required to create a new dataset. ORSO does not accept direct uploads of primary data. Instead, a user provides an HTTPS-accessible URL to bigWig files containing NGS read coverage \[[@pcbi.1007571.ref017]\]. This requirement is consistent with practices common in the field. Popular sequence aligners will generate whole-genome read coverage as optional output files \[[@pcbi.1007571.ref018]\], and other tools, such as the popular UCSC Genome Browser, require users to provide HTTPS-accessible bigWig files to display coverage data \[[@pcbi.1007571.ref019]\]. Once a dataset is added, ORSO will download the bigWig file and find read coverage values across internally maintained lists of genomic features, including genes and enhancers. Transcripts per million (TPM) values at each feature are saved for downstream analysis. Only feature coverage and metadata fields are saved to the ORSO database; raw coverage information is not retained after processing. After feature counts are found, a dataset is compared against other data from the same experimental technique and organism to find similar datasets. Genomic features are taken from validated sources in the literature. Feature lists of promoter regions, gene bodies, and mRNA transcripts are taken from RefSeq alignments \[[@pcbi.1007571.ref020]\].
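The TPM computation described above can be sketched in a few lines of Python. This is a minimal illustration under the assumption that per-feature read counts and feature lengths have already been extracted from the bigWig file (for example with a library such as pyBigWig); it is not ORSO's actual implementation:

```python
def tpm(counts, lengths_bp):
    """Convert per-feature read counts to transcripts per million (TPM).

    counts: read count per genomic feature (hypothetical inputs).
    lengths_bp: feature lengths in base pairs.
    """
    # Length-normalize: reads per kilobase of feature.
    rates = [c / (l / 1000.0) for c, l in zip(counts, lengths_bp)]
    total = sum(rates)
    # Scale so TPM values sum to one million across all features.
    return [r / total * 1e6 for r in rates]
```

Because TPM values are normalized within a dataset, they can be compared across datasets of the same assembly, which is what makes the downstream pair-wise comparisons possible.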
Enhancer lists for human and mouse are taken from the VISTA database \[[@pcbi.1007571.ref021]\]. Enhancer lists for *D*. *melanogaster* and *C*. *elegans* are taken from validated genome-wide assays \[[@pcbi.1007571.ref022],[@pcbi.1007571.ref023]\]. ORSO uses a hierarchical relationship to accommodate replicate datasets from the same sample. In short, an "experiment" describing a single biological sample may have one or more "datasets." This is consistent with the ENCODE project schema \[[@pcbi.1007571.ref004]\]. When adding data to ORSO, users have the option of adding multiple datasets to the same experiment. These datasets could be replicates of the same biological sample, or they may be the same replicate aligned to different genome assemblies. When data are added to ORSO, they may be set as public or private. If a dataset is set to public, that dataset may be discovered by other users, either through the search function or by the recommendation engine. If set to private, a dataset may not be found or accessed by others. However, private datasets are still considered by the recommendation engine, allowing a user to find data similar to an unpublished dataset. Dataset comparison methodology {#sec005} ------------------------------ When a dataset is added to ORSO, it is compared against all other datasets from the same organism and experiment type (RNA-seq, ChIP-seq, etc.) to find similar datasets. Independent comparisons are performed using primary read coverage and annotated metadata; datasets may be considered similar by primary data, metadata, or both. Comparisons are knowledge-based, using associations and parent-child relationships from biological ontologies. Different ontologies are used to evaluate different metadata fields, as described below. For each ontology, we selected important parent classes where children of that class would generally be considered similar. 
Due to inherent structural differences in each ontology, selections were made manually based on known biology. For instance, parent classes in the BRENDA Tissue and Enzyme Source Ontology were selected to reflect organ system-level organization. Our selections included the parent classes "cardiovascular system" and "neurological system." Classes such as "head" and "limb" are included to facilitate comparisons across small organisms, such as *D*. *melanogaster* and *C*. *elegans*, where the size of the organisms prevents excision of individual tissues. Parent classes in the Gene Ontology \[[@pcbi.1007571.ref024]\] were selected to aid in the identification of transcription factors. A complete list of the key parent classes that were selected can be found in [S2 Table](#pcbi.1007571.s003){ref-type="supplementary-material"}. To facilitate comparisons of histone modifications, we created a custom ontology of epigenetic modifications for use in ORSO. The ontology was adapted from literature \[[@pcbi.1007571.ref025]\] and organizes histone modifications based upon genomic locations of enrichments. For example, histone modification H3K4me3 has parent classes "At active promoters" and "At poised promoters". This custom ontology is available for download at <https://github.com/NIEHS/orso/raw/master/ontologies.tar.gz>. Evaluating metadata similarity {#sec006} ------------------------------ To evaluate similarity by user-annotated metadata, ORSO considers the fields "cell/tissue type" and "target." The "target" field is a context specific field that depends on experiment type. For instance, in a ChIP-seq experiment, the "target" field would describe the target of the antibody pulldown. For an shRNA knockdown experiment, this would describe the shRNA knockdown target. For some experimental methods, such as RNA-seq or ATAC-seq, the "target" field is not relevant. For two datasets to be considered similar, they must have similar cell types and, if applicable, protein targets. 
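The parent-class comparison against this custom ontology can be sketched as below. The parent classes for H3K4me3 are taken from the text; the other two entries are illustrative placeholders, not ORSO's actual ontology:

```python
# Hypothetical slice of the histone-modification ontology: each mark maps
# to parent classes describing where its enrichment is typically found.
HISTONE_PARENTS = {
    "H3K4me3": {"At active promoters", "At poised promoters"},  # from the text
    "H3K27ac": {"At active promoters"},                         # illustrative
    "H3K27me3": {"At repressed regions"},                       # illustrative
}

def marks_similar(a, b):
    """Two marks are similar if they share at least one parent class."""
    return bool(HISTONE_PARENTS.get(a, set()) & HISTONE_PARENTS.get(b, set()))
```

Under this rule, H3K4me3 and H3K27ac would be considered similar (both enriched at active promoters), while H3K4me3 and H3K27me3 would not.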
Metadata comparisons consider key parent classes in the ontology ([S2 Table](#pcbi.1007571.s003){ref-type="supplementary-material"}). To evaluate cell type fields, ORSO considers the BRENDA Tissue and Enzyme Source ontology \[[@pcbi.1007571.ref026]\]. If two compared cell types share organ system-level parent classes in the ontology, the cell types are considered similar. To evaluate protein target fields, ORSO considers the Gene Ontology \[[@pcbi.1007571.ref024]\] and a custom-made histone modification ontology. If key parent classes are shared, the protein targets are considered similar. Because of the nature of this comparison, similarities are recorded and presented in ORSO as categorical values. Many transcription factors are described in the Gene Ontology. If the two protein target fields are considered transcription factors, ORSO will additionally compare the two targets against the STRING interaction network \[[@pcbi.1007571.ref027]\]. Two transcription factors are considered similar only if they show evidence for interaction in STRING. Metadata evaluation requires user-provided terms to be matched to those in validated ontologies. To mitigate the impact of clerical errors, ORSO uses a "fuzzy" string matching system \[[@pcbi.1007571.ref028]\]. In short, the system scores a comparison of characters in two terms. If no direct match is found in an ontology for a given term, ORSO considers ontology terms that exceed a comparison score threshold. Evaluating primary data similarity {#sec007} ---------------------------------- Similarity based on primary read coverage values is evaluated independently of annotated metadata comparisons. Conceptually, ORSO evaluates similarity of primary data by first predicting metadata classes based on read coverage values and then comparing the predicted metadata of two datasets.
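The fuzzy matching of user-provided terms to ontology entries might look roughly like the following sketch, which uses Python's standard-library difflib in place of ORSO's actual scoring system; the 0.8 threshold is an arbitrary assumption:

```python
import difflib

def match_ontology_term(term, ontology_terms, threshold=0.8):
    """Return the closest ontology term, or None if no term scores
    above the threshold. Scores compare characters in the two terms."""
    def score(candidate):
        return difflib.SequenceMatcher(
            None, term.lower(), candidate.lower()).ratio()
    best = max(ontology_terms, key=score)
    return best if score(best) >= threshold else None
```

For example, a clerical variant such as "HeLa-S3" would still resolve to the ontology entry "HeLa S3", while an unrecognized term returns no match rather than a spurious one.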
Classification models consider the same key ontology parent classes as used in annotated metadata comparisons, meaning that cell type predictions perform organ system-level classification and protein target predictions differentiate between transcription factor and histone modification classes. Metadata is predicted using a multi-layer perceptron (MLP) neural network \[[@pcbi.1007571.ref029]\]. The MLP is trained using read coverage values at genomic features, such as genes. Before application to the MLP, a filtering step is used to reduce the number of features, removing genes that are constitutively or lowly expressed. Filtering is performed by considering importance in training a random forest classifier to predict cell type or protein target. From an importance-ranked list, the top 1,000 features are taken forward. For each assembly and experiment type, distinct MLPs are trained for each metadata field (e.g. cell type and protein target). To ensure data quality of the initial training set, validated ENCODE data were used to train each MLP model. Training was only performed if at least 100 validated datasets were available. During training, 20% of the datasets were reserved for validation and testing (80%, 10%, and 10% for training, validation, and testing, respectively). Test accuracies are listed in [S3 Table](#pcbi.1007571.s004){ref-type="supplementary-material"}. Variation in MLP test accuracies reflects differences in dataset availability for different experiment types ([S1 Table](#pcbi.1007571.s002){ref-type="supplementary-material"} and [S3 Table](#pcbi.1007571.s004){ref-type="supplementary-material"}). For instance, only 100 datasets from experiments where CRISPR genome editing was followed by RNA-seq were available for hg19. Training and testing of the MLP associated with these data were performed with only 80 and 10 datasets, respectively. As more datasets are added to ORSO, we expect prediction accuracies to increase.
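The importance-based filtering step can be sketched as below. The per-feature importance scores are assumed to come from a previously fitted random forest classifier (for example, scikit-learn's feature_importances_ attribute); only the top-k selection itself is shown:

```python
def top_k_features(coverage_rows, importances, k=1000):
    """Keep only the k most important columns of a coverage matrix.

    coverage_rows: list of per-dataset feature coverage lists.
    importances: one score per feature, e.g. from a random forest.
    Returns the filtered matrix and the kept column indices.
    """
    # Rank feature indices from most to least important.
    ranked = sorted(range(len(importances)),
                    key=lambda i: importances[i], reverse=True)
    keep = sorted(ranked[:k])  # preserve original column order
    return [[row[i] for i in keep] for row in coverage_rows], keep

# Toy example: keep the 2 most important of 4 features.
rows = [[5, 1, 9, 2], [4, 0, 8, 3]]
filtered, kept = top_k_features(rows, [0.1, 0.5, 0.05, 0.4], k=2)
```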
To evaluate primary data similarity, read coverage values for two datasets are applied to trained MLPs, and cell type and protein target categories are predicted from lists of key ontological classes ([S2 Table](#pcbi.1007571.s003){ref-type="supplementary-material"}). Like comparisons of user-annotated metadata, evaluated similarities based on primary data are categorical. If read coverage-predicted cell type and protein target categories are shared, the two datasets are considered similar. Recommending datasets through the ORSO data network {#sec008} --------------------------------------------------- After a user adds a dataset to ORSO, that dataset will be compared against all other datasets and incorporated in the ORSO data network. Any similar datasets found will be recommended to the user. Because primary data and metadata are evaluated independently, a dataset may be similar by read coverage, annotated metadata, or both. Users may filter their recommendations such that only datasets with read coverage or metadata similarities are displayed. This may be helpful in situations where annotated metadata may miss important aspects of the experiment design, such as transformation studies where a cell type transition is induced. Incorporating user interactions {#sec009} ------------------------------- If a user adds a dataset to ORSO, similar datasets will be recommended to the user. Additional datasets from the ORSO network are also recommended to a user based upon their social interactions. If a user favorites a dataset, datasets similar to the favorite will be recommended to the user. If a user follows another user, that user's datasets will also be recommended. Users may only be followed by other users if they set their accounts to public. Users may still use ORSO even if they opt out of using social features. Search functions are available to private users, and private datasets are still considered by the recommendation engine to direct users to similar data.
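Putting the pieces of this section together, the recommendation logic can be sketched as a single pass over similarity edges. This is a simplified stand-in for ORSO's engine, assuming similarities are stored as undirected dataset pairs:

```python
def recommend(user_datasets, favorites, edges):
    """Recommend datasets similar to anything the user added or favorited.

    user_datasets, favorites: dataset identifiers seeding the recommendation.
    edges: undirected similarity pairs from the data network.
    """
    seeds = set(user_datasets) | set(favorites)
    recs = set()
    for a, b in edges:  # each edge links two similar datasets
        if a in seeds:
            recs.add(b)
        if b in seeds:
            recs.add(a)
    return recs - seeds  # never recommend what the user already has
```

A real engine would additionally track which edges came from read coverage versus metadata so that recommendations can be filtered, as described above.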
Visualization of datasets and data networks {#sec010} ------------------------------------------- ORSO provides several views for users to browse datasets and perform summary-level evaluations of associated data. For each individual dataset, ORSO provides plots of the average coverage at transcripts, gene bodies, promoters, and enhancers. Read coverage values for each feature are also shown on a scatter plot where they are compared to median values across all datasets of the same assembly and experiment type. ORSO also provides network-level views to rapidly contextualize datasets. The network is displayed in a graphical map where nodes correspond to datasets and edges correspond to identified similarities. Datasets with the same annotated cell type and protein target are collapsed together, and node size reflects the number of datasets with a given cell type and target. Positions of individual nodes are determined using a force-directed layout algorithm \[[@pcbi.1007571.ref030]\] such that connected nodes are brought closer together. To provide an additional view, similarity values are applied to a hierarchical clustering procedure, and the resulting dendrogram displays hierarchical relationships between datasets. Additionally, all ENCODE datasets from the same assembly and data type are used to fit a PCA model, and the resulting projection can be used to compare datasets. Results {#sec011} ======= Recapitulation of known protein interactions and cell type associations {#sec012} ----------------------------------------------------------------------- To evaluate the value of ORSO to potential users, we verified that the connections within its network and the content of its displays correctly recapitulated underlying biology. To do this, we used ORSO's visualization functions to identify meaningful associations in its data network through PCA, network, and dendrogram displays.
We first evaluated human RNA-seq data (an overview of hosted datasets is given in [S1 Table](#pcbi.1007571.s002){ref-type="supplementary-material"}). When projected into PCA space, datasets cluster by cell type ([Fig 2A](#pcbi.1007571.g002){ref-type="fig"}). Often, proximity between cell types reflects relevant biology. For instance, datasets from muscle tissue or muscle-derived cells ([Fig 2A](#pcbi.1007571.g002){ref-type="fig"}, green) are proximal to datasets derived from heart (cyan). Related cell types are also co-localized in the network view, indicating that topology of the network map is consistent with biology ([Fig 2B](#pcbi.1007571.g002){ref-type="fig"}). The dendrogram of hierarchical clustering results shows organization of datasets into meaningful clusters ([Fig 2C](#pcbi.1007571.g002){ref-type="fig"}). In both network and dendrogram views, there is evidence of association between organ-derived samples and epithelial cell lines ([Fig 2B and 2C](#pcbi.1007571.g002){ref-type="fig"}, blue and green, respectively). This may reflect the high composition of epithelial cells in samples derived from organs such as kidney and stomach. Co-localization of neuronal and pluripotent cells ([Fig 2B and 2C](#pcbi.1007571.g002){ref-type="fig"}, purple and red, respectively) may indicate similarities in transcriptional programs between these two cell groups. ![Recapitulation of biological associations in ORSO network and PCA views.\ All plots were taken from ORSO without modification, except for label overlays. (A) PCA view of human RNA-seq data (hg19 assembly; 1,180 ENCODE datasets). The PCA was constructed considering read coverage values across models of mRNA transcripts. Similar cell types cluster in the same location in PCA space. (B) Network view of human RNA-seq data (805 experiments). Network topologies reflect similarities across cell types. The network layout was generated using a force-directed algorithm that minimizes the distances between connected nodes. 
(C) Dendrogram view of human RNA-seq data (805 ENCODE experiments). Network similarities were used in hierarchical clustering to create a dendrogram of biologically relevant cell type clusters. (D) PCA view of human ChIP-seq data (hg19 assembly; 4,502 ENCODE datasets). Similar protein targets, including histone modifications, are grouped together in a PCA created using promoter read coverage values. (E) Co-localization of histone modifications associated with active genomic regions in the human ChIP-seq PCA. (F) Co-localization of histone modifications with relevant protein targets in the human ChIP-seq PCA.](pcbi.1007571.g002){#pcbi.1007571.g002} We then evaluated human ChIP-seq data. The human ChIP-seq PCA space ([Fig 2D](#pcbi.1007571.g002){ref-type="fig"}) is dominated by histone modification datasets, with 44% of datasets from experiments targeting histones (at the time of manuscript preparation, 1995 of 4502 datasets in hg19). Mutually exclusive histone modifications H3K27ac and H3K27me3 segregate to opposite sides of the PCA plot ([Fig 2D](#pcbi.1007571.g002){ref-type="fig"}, yellow and cyan, respectively; detailed in [Fig 2E](#pcbi.1007571.g002){ref-type="fig"}). Other active marks, such as H3K4me3 (in red), aggregate with H3K27ac, while marks associated with repressed gene promoters, such as H3K4me1 (in orange), aggregate with H3K27me3 \[[@pcbi.1007571.ref025]\]. Taken together, these marks create an activity axis in PCA space, with active marks on one side and repressive marks on the other. Proximity in PCA space also recapitulates known protein interactions. H3K27me3 datasets co-localize with EZH2 (blue), a member of Polycomb Repressive Complex 2 (PRC2), which deposits this mark ([Fig 2F](#pcbi.1007571.g002){ref-type="fig"}). Associated with active transcription, H3K4me3 associates with RNA polymerase II ([Fig 2F](#pcbi.1007571.g002){ref-type="fig"}, green). 
Evaluating changes during cellular differentiation {#sec013} -------------------------------------------------- ORSO allows rapid evaluation and contextualization of user-added datasets in top-down PCA, network, and dendrogram views. To demonstrate the utility of this feature, we added RNA-seq datasets from a time-course experiment \[[@pcbi.1007571.ref031]\] measuring differentiation of human embryonic stem cells (hESCs) into cardiomyocytes ([Fig 3A](#pcbi.1007571.g003){ref-type="fig"}; datasets detailed in [S4 Table](#pcbi.1007571.s005){ref-type="supplementary-material"}). Each timepoint (0, 2, 4, and 30 days) was added as a distinct dataset. Clear and consistent evidence of differentiation can be seen in the PCA view ([Fig 3B](#pcbi.1007571.g003){ref-type="fig"}). Early timepoints at day 0 (outlined in red) are near other hESC datasets, while later timepoints at day 30 (outlined in cyan) are near datasets associated with the cardiovascular system, including those from heart and cardiomyocytes. ![Application of RNA-seq data from a hESC to cardiomyocyte differentiation time course to ORSO.\ All plots were taken from ORSO without modification, except for label and transparency overlays. (A) Schematic describing the differentiation time course. (B) Differentiation datasets after integration in the human RNA-seq PCA. Early timepoints co-localize with hESCs while later timepoints co-localize with heart muscle samples. (C) Network view after integration of time course data with 805 ENCODE experiments. Localization of timepoints near hESC and heart data points reflect similarities predicted by the ORSO recommendation system.](pcbi.1007571.g003){#pcbi.1007571.g003} Based on primary coverage information, early timepoints were predicted to be similar to embryonic stem cells while later timepoints were predicted to be similar to samples derived from muscle and heart tissue. 
These similarities are reflected as connections and placement within the topology of the network view ([Fig 3C](#pcbi.1007571.g003){ref-type="fig"}). Early timepoints are positioned near other embryonic stem cells while the 30-day timepoint is positioned near heart and muscle samples. These network connections are in turn used to recommend datasets to the user (similar datasets are detailed in [S5 Table](#pcbi.1007571.s006){ref-type="supplementary-material"}). Upon adding data from early and late differentiation timepoints, ORSO would recommend data to the user from hESCs and cardiovascular cells, respectively. Availability and future directions {#sec014} ---------------------------------- ORSO is publicly available at [https://orso.niehs.nih.gov](https://orso.niehs.nih.gov/). Detailed documentation and the complete source code are available at <https://github.com/niehs/orso>. Included are instructions for Docker-based deployment of an ORSO instance. ORSO is open source software released under the MIT License. We anticipate that ORSO will be refined through continuous development. Given the modular nature of its codebase, additional features may be easily added within the ORSO framework. To greatly expand the number of datasets hosted by ORSO, we hope to develop data validation and natural language processing systems to regularize and incorporate datasets from repositories such as the GEO databank. Current development efforts will introduce gene-based views that allow analysis of enrichment across all datasets and will expand comparison considerations to include fields such as chemical treatment. Additional views could allow the integration of data from multiple experiment types, enabling comprehensive views of the interplay between the epigenome and transcriptome. Though its social functions are rudimentary compared to commercial social networks, ORSO makes an important step in recognizing the value of measuring the consumption and use of data by scientists.
Dataset usage could be valuable in resource allocation and future experiment design. Usage can also be used in a feed-forward strategy to refine ORSO's recommendation engine. Social interactions may ultimately be combined with data and metadata into a single model that more accurately recapitulates scientists' usage patterns.

Supporting information {#sec015}
======================

###### The ORSO web interface.

The ORSO interface uses a tab-based organization. Each tab brings the user to a collection of views. The "Experiments" tab presents a list of recommended experiments and allows the user to search all experiments hosted by ORSO. Through the "Users" tab, all public user accounts may be searched and accessed. The "Explore" tab gives the user multiple top-down views to explore hosted data. These include a PCA view as well as dendrograms and graph networks constructed from identified similarities between datasets. On all ORSO pages, a "Help" button provides a link to documentation for users and developers. (TIF)

###### Overview of ENCODE data hosted by ORSO.

Datasets are organized by assembly and experiment type. For each assembly and experiment type, the number of datasets and distinct cell types and protein targets are given. (XLSX)

###### Ontology classes used in metadata comparisons.

Listed are key ontology classes whose children are considered similar by ORSO. Parent classes were selected to reflect the biological organization of samples. For instance, classes in the BRENDA Tissue and Enzyme Source Ontology were selected to reflect organ system-level organization. For each selected ontology class, the class ID, class name, and ontology are given. (XLSX)

###### Test accuracies of trained MLP neural networks.
MLP models were trained to predict which key parent ontology classes (described in [S2 Table](#pcbi.1007571.s003){ref-type="supplementary-material"}) describe a given dataset. Independent models were trained for each combination of experiment type, metadata field, assembly, and genomic feature type. Only validated ENCODE datasets were used to train each model. Datasets were split into training, validation, and test sets using an 80/10/10 split. (XLSX)

###### Details of an RNA-seq time course of hESC differentiation into cardiomyocytes.

The timepoint and cell line are taken from associated GEO repository entries \[[@pcbi.1007571.ref031]\]. For each dataset, the ORSO cell type and target metadata fields are given as used in the example vignette. (XLSX)

###### Dataset recommendations based on time course data.

Experiment name and cell type are given for each recommended experiment. Recommended experiments may be similar to multiple experiments from the RNA-seq time course. Multiple datasets are given in a comma-separated list; additional details about these datasets can be found in [S4 Table](#pcbi.1007571.s005){ref-type="supplementary-material"}. (XLSX)

We gratefully acknowledge Nicole Kleinstreuer, Jason Li, Charles Schmidt, and members of the NIEHS Integrative Bioinformatics group for thoughtful insight during tool planning and development. Jeremy Archuleta and Robert Patton provided important feedback on the application framework. We also thank Paul Wade and Guang Hu for critical comments during review of the manuscript and Adam Burkholder for assistance during manuscript preparation.

[^1]: The authors have declared that no competing interests exist.
The exhibition “Solomon Yudovin. Graphic Works from the Besieged Leningrad” from the collection of Evgeny Gerasimov and the Russian Museum is timed to coincide with the 75th anniversary of the lifting of the siege of Leningrad. The artist witnessed and experienced the siege, one of the most tragic events of the Second World War, firsthand. Yudovin worked on the graphic series “Leningrad in the Days of the Great Patriotic War” for several years between 1941 and 1947. He created a kind of chronicle of the life of the besieged city struggling for existence, pushing the limits of human capabilities... The artist experienced the first, most severe winter in besieged Leningrad when the city residents were starving under ceaseless fire in terrible cold. And it was precisely at this time that a significant part of his ‘siege series’ was created: drawings, monotypes, and engravings, many of which were later reworked and became the basis of the widely distributed series of prints. Looking at the graphic chronicle of those days, it is very important to understand the context of time, the specific moment in which it was created. The modern viewer looks at it from the distance that history provides: we know how long the siege lasted, and when the victory over Nazism ended the war. But the residents of besieged Leningrad were tormented not only by wartime and proximity of death but also by the unknown. No one could know what the turning point of the war would be, what challenges were waiting for them ahead... And it was at this time that Yudovin was working hard, capturing everything he could see and feel. In this scrupulous observation of everyday life, the artist combines the belief that those visual records will “survive” because “victory will be ours!” and an attempt to escape from the horror of war, being engaged, like in peacetime, with one’s true calling. The exhibition features about 150 works by the artist, most of which are on display for the first time. 
The exhibition, held in the Year of the Theater in Russia, is dedicated to the wonderful, mysterious and strange world of images of Alexander Tyshler (1898–1980). The Russian Museum has an extensive collection of paintings and graphic works by Tyshler, which were donated to the museum by the master and his wife, and also by G.M. Levitin, a close friend of many Russian theater scenic artists. A Leningrad-born survivor of Nazi concentration camps and a member of the New York avant-garde scene of the 1950s and 1960s, Boris Lurie touches upon the most sensitive, problematic and charged issues, such as consumer society, Nazi crimes against humanity, and the reflection of sexuality in mass consciousness.
American Indian suicide--Fact and fantasy. This is an epidemiology report on American Indian suicide patterns in the Pacific Northwest. The purpose of this report is to: (1) describe the first three years of a pilot project in suicide epidemiology, (2) demonstrate significant differences in tribal rates, (3) show that the total American Indian population has equally significant differences in tribal comparisons, and (4) clarify previous misconceptions about the "American Indian suicide phenomenon".
Psoralen and isopsoralen, two coumarins of Psoraleae Fructus, can alleviate scopolamine-induced amnesia in rats. In this study we found that the crude extract of Psoraleae Fructus inhibited acetylcholinesterase (AChE) activity in vitro and ameliorated the impairment of the inhibitory avoidance response and of water maze spatial performance caused by scopolamine in rats. Among all fractions, the chloroform fraction showed the strongest inhibitory effect on AChE activity and reduced the scopolamine-induced impairment of the inhibitory avoidance response. Psoralen and isopsoralen, two major constituents of the chloroform fraction of Psoraleae Fructus identified by high-performance liquid chromatography, also reduced the extent of the inhibitory avoidance response impairment. The results suggest that psoralen and isopsoralen are the major active ingredients of Psoraleae Fructus responsible for the reversal of scopolamine-induced amnesia, and that their effects are partially associated with inhibition of AChE activity and hence activation of the central cholinergic system.
Homemade 'taser' in Gowrie drug house

A 33-year-old man will face court tomorrow on drug-related charges after a search warrant was executed on a house in Gowrie today (Tuesday, 29 May). During the search, members of ACT Policing's Crime Targeting Team seized what is suspected to be a quantity of methamphetamine, as well as suspected cannabis plants and dried cannabis. Police also seized several items suspected of being stolen property, including a motorcycle and approximately $5000 worth of tools. A homemade 'taser'-type device was also seized. The man was arrested at the end of the search and taken to the ACT Watch House, where he is expected to be charged with trafficking in a controlled drug.

While I partly agree with the sentiment, did you have to go all Clive Palmer and bring the CIA into it? Clive Palmer did not have decades of research and reports showing the agency caught red-handed in the act. His unsupported allegations do not make all other allegations against the agency also unfounded. As I said, the CIA has been proven to traffic drugs, time and again, in countries right around the world. Snitches get stitches; dogging a supplier is worse than getting busted. I have a lot of sympathy for drug dealers; they don't create the demand, they just supply it.
When the police and physicians won't let you get high, there is always a friendly neighbourhood dealer risking their neck to get you a little happier. Besides, if you go far enough up the food chain, it goes way outside the AFP's jurisdiction. Who is running those opium fields in Afghanistan? The fields guarded by American troops, the fields whose production has increased tenfold since the US invaded. Who controls the coca fields in South America? That same cocaine which has been *proven* time and again to be smuggled around the world by the CIA. Busting small-time dealers for doing their mates a favour. Real impressive work there. But then, there's not a lot the Canberra cops can do about the drugs coming in in diplomatic pouches, is there? It's a lot easier to bust meth dealers and potheads than to go after the bankers, lawyers and judges who snort coke for breakfast. So yes, boo hoo, poor dealer, taking the fall because we keep criminalising natural human behaviour. There have been drugs longer than there have been police, and there will still be drugs long after there are police. The usage rate doesn't change when it is made legal, and after a hundred years of drug war, the usage rate is still no different. So why is it still a crime to get high?
Product Description

Gloss & Glamour. The Dimora Black collection brings all the class of Italian-style furnishings home to you in an ultra-modern look that's on trend. Black lacquered melamine veneers combine with satin aluminum hardware for a sleek, sophisticated effect, while the padded faux leather headboard makes sure comfort isn't sacrificed. Seven-piece package includes complete Queen bed, dresser with deck, mirror, nightstand and chest, as shown. (Mattress set and pillows are not included.)

Finish: Black
Material: Hardwood Solids and Melamine Veneers

Unique Features:
The nicely cushioned, leather-look headboard is ideal for watching television or reading a book.
Satin aluminum hardware and a framed mirror with etched edging provide extra panache to the sleek, modern look.
Dresser includes removable media deck, perfect for your television or decorations.
This bed does require the use of a foundation below the mattress.
Fully upholstered platform bed adds contemporary Italian style to your bedroom.
The high-gloss, black lacquer finish brings a gorgeous luster to your décor.

Care Instructions: Wipe with a damp, clean cloth, then buff immediately with a dry, soft cloth.

Drawers:
Chest features five spacious storage drawers with satin aluminum hardware.
Nightstand includes two convenient bedside storage drawers.
Dresser features three long drawers with divider inserts to create six drawers inside.
[MUSIC PLAYING] THERON TINGSTAD: Hello, everybody. Welcome to Talks at Google. We've got an incredible speaker today. I'd like to introduce Captain David Marquet. Captain Marquet is best known for the work he did with the USS Santa Fe, where he took that nuclear-powered submarine from worst to first. And it ended up generating more leaders than the Navy has ever seen come out of the submarine program. He's going to share some of the insights today that he learned over his time in the Navy, and his alternative take on leadership. He shares these insights with companies around the world. And I think we're very lucky to have him here at Google today. He's documented a lot of his story and his insights in the book, "Turn the Ship Around-- A True Story of Turning Followers into Leaders." So without further ado, Captain Marquet. DAVID MARQUET: Thank you. Thank you. [APPLAUSE] Thanks! It's great to be here. So thanks a lot, thanks for having me. Yeah, we advertise the book on AdWords a lot. Sell a bunch. So what does running a nuclear submarine have to do with working here at Google? Well, hopefully by the end of this session, you'll have some connections. What I want to talk about today are seven myths of leadership, seven things that I thought I was dead sure I knew about leadership, which I now think are actually unhelpful or just basically wrong. So here we are with the crew on the submarine. Submarines spend most of their time underwater. And even when the submarine is on the surface, most of it's underwater, like an iceberg. But here, the submarine's in dry dock. There's a crew of about 135; the average age is 27 years old. So it's probably not much different than what we have here. These are the officers. These are the only guys with college degrees. Then the technicians, the blue shirts, are the guys who do all the work. And then over here, these are the chiefs, so they've been technicians.
They did a good job, they got promoted. They've been in the Navy for 6, 8, 10 years. We promote them, we let them wear the same highly fashionable brown uniform that the officers wear. And we turn them into leaders. We say they're leaders. Now there's not a lot of room on a submarine, because we're going to put all these people inside the submarine and most of the submarine is taken up by equipment. So there's not a lot of exercise space. For example, one of the things we do is we take a SEAL team, and we have to deliver them to wherever they're going to go. Any SEALs out here, Special Forces people? I know I've got at least one Marine. OK, so these SEALs are super fit individuals. They're in the gym all the time. But there's no gym on a submarine, so their bodies deteriorate every passing day. So the Navy says, you can't bring these SEALs on board until the very last minute, because the submarine is a toxic environment for these elite individuals, these elite athletes. Now this was always troublesome for me, because we'd live there for 180 days. No one cared about that! So here's what happens. They say, we've got to pick up a SEAL team. So we take the submarine away from the shoreline. We surface it, which we don't like to do. It puts us in a vulnerable position. And we want to be there as short a time as possible. Then the helicopter shows up with the SEAL team. They're going to come down this rope. These guys in orange, this is the submarine crew. I'm up here, in charge of this deal, right? And then we've got to find some young guy to go back here and hold on to this rope. Because it's important to tether the rope at the bottom. The SEAL team comes down one at a time. Now this young man (in my case they were all men; we now have women on submarines, which I think is great) has got to make a bunch of snap decisions. Because we're moving together.
And if the helicopter hovers and it starts moving away, do I let go of the rope, do I hold on to the rope? A wave comes and could knock someone overboard. And this person is going to make a bunch of decisions. They've got a helicopter hovering right over their head. [MAKING HELICOPTER SOUNDS] Even if they had time, no one could hear him. What do you want me to do about-- [MAKING HELICOPTER SOUNDS] And he couldn't hear the response. So for this to be successful, we need to train a team that knows what they need to do and makes decisions without being told. Without being told. And oh, by the way, this is my view from up there on the bridge. I can't even see what's going on. And we think, oh, when I get into the moment of crisis, when the disruption comes, we're going to win because we really want to win. But wanting to win has nothing to do with winning. It's: are you willing to do the hard work before you get here? Have you created a team that, when this happens, you don't need to tell them what to do? That's going to determine success in this event. But this picture I think is exactly relevant for you guys. Exactly relevant, because you can walk down the hallway, you can stand behind an engineer, you can stand behind one of the salespeople. And you can watch them: move the mouse right, move it left, click enter, closed bracket. We get a sense, oh, I can control it. But the most important things that you all bring to work every day, which are your creativity and your passion, are just like this. Invisible. And the degree to which we go out there and try and tell people what to do, we're just throwing cold water on that spark inside of every human being. We're throwing cold water on their passion. This is a hard lesson that I learned. Because for me, leadership was all about telling people what to do for a long time. Seven myths of leadership. What is leadership?
Now I hope to have this be a little bit interactive. I know we're broadcasting it, so everyone can participate, including the remote people. Hopefully you have phones. And we're going to go to this website called P-O-L-L-E-V. And I've got about half a dozen of these throughout the presentation, just to get a sense of what we all think here. Or use any web-enabled device, like the computers. So go ahead and type this in: P-O-L-L-E-V.com/intent. And I'm going to bring up the first question. Here's a warm-up question. All right, good, we're getting there. OK, so you see it right up here, P-O-L-L-E-V.com/intent. What's your hometown? What city did you grow up in? Good. Good, good, good. I figured you guys at Google would figure this out. [LAUGHTER] I have some audiences that need a little help. Let's see. I'm guessing that's Ann Arbor. Don't put a space in it. It treats every word like a separate response. Now we're building a word cloud, which I know you've seen. The idea is the more people type in the same word, the bigger that particular word gets. So we see Detroit, which makes sense, since I'm here in Ann Arbor. Ann Arbor, Chicago, Dunes, Ogden, Boston, Burlington. Da Nang, I saw. Johannesburg, Pittsburgh, Lambertville. All right, so that's great. We got that. That's our warm-up question. Now I'm going to ask you a very serious question. And I'd like you to take a stand here, one side or the other, if you had to: would more business value be created here at Google by A, people independently thinking, or B, people being better at doing what they're told? So we're gonna get a bunch of results here and then we'll go ahead and expose the results. OK. You guys are pretty far over on displaying independent thinking. That's good, because that's what this whole talk is about. Creating a team that displays independent thinking. Now it didn't used to be like this.
For your parents, grandparents, great-great-grandparents, work used to look like this for the last several hundred years of our human existence. We've come through this thing called the Industrial Revolution. And work during the Industrial Revolution was, for most people, what we call neck-down work. It's what I do with my hands. This is a radio factory outside of Philadelphia just before World War II. These people are hired for what's happening here with their hands, not what's happening up here. One person in the back of the factory has done the thinking and the deciding for these people, and these people are all in the doing part. And it turns out that this legacy of work is an anchor. Because the language and the structure that we use for our organizations still, in many ways, hearken back to this legacy. Consider the fact that we even say we come to work and "do" our jobs, because most of you actually don't do anything. You think your jobs, I would say, right? But we don't say that, because it sounds weird. That's because this legacy has influenced the way we talk. And so that's influencing our behaviors. And schools were designed to create people who were comfortable going into environments like that. So the schools were about conformity and compliance. Now this group of sad people is the 1977 Concord-Carlisle High School math team. And this is me. I was a mathlete. [LAUGHTER] Yeah, I have some weird-- I can't even look at the camera. I have weird social issues. Any mathletes out there? Yes? All right! Thanks for-- good! Yes! I knew Google would have at least a couple brave people. So I was on the math team and the chess team and I was in the computer club. And my high school was one of the few public high schools with a computer. We got ours back in 1976. It was this big machine. And we fed in these tapes, this pink tape that had holes punched in it.
And we had the computer do amazing things, like count to 10 and calculate square roots. And you'd feed the tape in in the evening, and you'd push play. You'd watch it-- [MAKING WORKING SOUND] And then you'd go home and you'd come back the next morning, and find it hung two minutes after you left and you'd have to start over again. And that was me in high school. And it was the 70s, and we were in this tough time in the country. I felt I wanted to do something about that. And I was a geeky, introverted kid, so for me that meant going into the military. I had no military in my family, but I was like, I'm going to be a submarine commander. Why? I don't know. It popped into my head. I think it was because if you're an introvert and you want to go in the military, it's where you can hide. So I set myself down that path. And you know what? It actually happened. So here's my book. Now remember, as a geek, I took my studies very seriously. Here's my leadership book from the United States Naval Academy. Here is what it says: Leadership can be defined as directing the thoughts, plans, and actions of others, so as to obtain and command their obedience, their confidence, their respect, and their loyal cooperation. Now what do you think of that? I want you to just talk to your neighbor for 30 seconds about this. How would it feel to work in this environment? Then we're gonna have a short conversation. Then we're gonna keep going. Go. OK, five, four, three, two, one. Does anyone have any words that kind of come to mind to describe this? I'll repeat them. Just shout them out. Micromanage, good. Rigid. Prescriptive. Deadweight loss. Intimidating. Tell me more about intimidating. Scary, don't mess up, right. It's a fear-based environment. Very good, what else? So this is predicated on the assumption that, first of all, the leader knows all the answers. And your job is just to show up and do what you're told. Then we say, well, where's your passion? Where's your engagement?
Do you guys do these employee engagement surveys? You do? Yeah, OK. I'm not engaged. You made me feel this way! So we're going to fix this. I call it know all, tell all leadership. And one of the questions that I'm gonna leave you with is where in this quadrant you want to operate as a leader. I thought it was know all, tell all. The leader knows all the answers, gives all the orders. That was the best place you could be as a leader. I got ordered to be the captain of the USS Olympia 17 years after I graduated from the Naval Academy. I was super excited about it. The Navy took me out of my job and they sent me to school for 12 months so I could learn every single detail of this ship. It'd be like having the CEO come in here and stand behind your desk and fix your code. Freaky. Anyway, there was another submarine, the USS Santa Fe. And the Santa Fe was the Enron of submarines. The Santa Fe was a submarine where, you see that picture of sailors? Every year, a quarter of them come and a quarter leave. And when they leave, that's about 35 people, we say, hey, how was your Navy experience? Would you like to stay in? Three of them said yes. That's how bad it was. And the captain of the Santa Fe was supposed to be there for another year. So we were all like, oh, who's going to get the poor job of being the Santa Fe's captain? Because it was a year ahead of time, and that's when the Navy announced the captain. But you know what? He quit early, and they said, well, we can't have a submarine without a captain, so: Marquet, Santa Fe. That was my oh, sugar moment. Because now I'm about to be a know not leader. Because the problem wasn't the bad morale and [INAUDIBLE]. The problem was the Santa Fe was a totally different submarine, one of the newest submarines in the fleet. So here's my question. Because I'm going to take over a broken team.
And my question is, and this is quoting some work that you guys have done here at Google: what's the most important determinant of team performance? Who's on the team, what positions they're in, or how the team interacts? So let's see if you guys eat your own dog food on this one. Yes, very good. Exactly. And it turned out for me, I didn't know any of this. Because Google wasn't even invented then. But I couldn't control who came to the ship. The Navy decided who came to the ship. And I couldn't change the positions they were in. The only thing I could play with was basically how we talked to each other. That was the only variable I had. And it turned out, just by luck, to be the most important one. And the other cool thing about it is it's the thing that everybody on the team can do. Everyone participates. So here I am, I didn't know how this was going to work. Because I'd never taken over a situation where I didn't basically know all the answers. So my world was turned upside down here. My hat would fall off. Down in the sonar room. So there are no windows on the submarine. What we learn about the outside world, we listen to. You hear and see in the movies this pinging, pinging, that sonar? We don't do any of that pinging. That pinging would give us away. We convert the sound to these yellow lines. And we analyze them and we say, oh, that's a surface ship, that's a school of shrimp. That's a distant oil well. That's a submarine. And it's important to know what's what. And then we have all these buttons. And this is how we interact with our equipment. Now on the Olympia I would have known how every button worked. But on the Santa Fe, I didn't know. So I walked down the ship, I'd just taken over, and I said, hey, tell me about your buttons here. And the sailors would be like, well, this button does this, this button does that. They're all confident about what they're doing.
And then there'd be this button off to the side, like over here. OK, what about this one? I noticed the sailor would avoid that button, so I knew: hey, what about this one? I forget. That was a no-no. Because they expected, well, I'll tell you what it does. But I didn't know either. So the first thing was, oh man, I shouldn't have asked that question. Because now they're all looking at me. And it was very scary. Going into combat for me was not scary, but this was very scary. I wanted to pretend I knew, but I couldn't. First of all, the clock was ticking in my head. Finally I said, I don't know either. But you're a submarine captain! Yeah, go figure. [LAUGHTER] And I said, hey, let's press it and see what happens. So I'm going to start getting to some of these myths now. Myth number one: good leaders know all the answers. Wrong. Fact: good leaders say I don't know. I don't know opens the door to learning. Even when you know, it's helpful, I think, to just say, I don't know, what do you guys think? As long as you think you know and you keep saying you know stuff, you're not going to create a team that's curious and has a learning mindset. So it's OK to say you don't know something. It's going to break you out of the normal know all, tell all paradigm. The next thing that happened had to do with this knob. This is how we control the speed of the submarine. Now I'd just taken over the worst performing ship in the fleet. And what we're going to do is our favorite exercise: we're going to shut down the reactor, and we're going to run on an electric backup motor. This is not like a Tesla, right? This is a 300 horsepower electric motor, but it's in a 6,000 ton submarine. So this electric motor just barely pushes the submarine through the water. And there's two speeds. So what's happening is when you're operating the electric motor with the reactor shut down, you're draining the battery pretty quickly. And there's a race to get the reactor started.
So here we are, we're running on the electric motor. We're at ahead 1/3. We shut down the reactor, the very first drill. I'm standing in the control room. The officer who's been on the submarine the longest, Bill Green is his name, is controlling this. And he's doing the right thing. He's at ahead 1/3, conserving the battery. Now on every other submarine I've been on, there were two speeds to this electric motor. But unbeknownst to me, on the Santa Fe, there's only one. So I'm thinking, hey, if we speed up, it's going to drain the battery faster, it's going to put some stress on the team. Train harder. And so I suggest, hey, why don't we speed up on this electric motor? And he gives the order. But the sailor sitting at this panel does nothing, actually kind of goes like this. I said, hey, what's going on? He says, captain, on this ship, unlike your other ships, there's just one speed on the electric motor. That was embarrassing. And I thought about Bill, and I said, Bill, did you know about this? And he said, yes, sir, I did. Really? Well, riddle me this. Why did you order it? What do you guys think he said? Exactly, because it's all about telling people what to do and doing what you're told. And this was like a hammer blow to my head. Because my whole leadership training was about being really good at telling people what to do. So I said, look, I'm going to stop telling you guys what to do. I'm going to stop giving you guys orders. I'm never going to give another order as captain of this submarine. And they were like, OK. No one knew what that would look like, but it was better than dying, which is what would have happened eventually. So we say good leaders give good orders; that's the myth. I now think good leaders actually don't give orders. They create a team that doesn't need to be told what to do. So now I've decided not to give my guys any instructions. I still don't know the submarine. So now I'm a know not, tell not leader.
If you're brand new to an organization, maybe this is where you need to live for a few days, weeks, or months. But eventually people are going to stop paying you for being down here. So it's not a good long-term strategy. Now the torpedoes, here's how they work. First of all, there's a long wire that pays out through the ocean. So when we shoot the torpedo, it stays connected back to the submarine and it sends signals back saying, here's what I'm seeing as I'm out here looking for the bad guy. And we can steer it. We can say, turn right 20 degrees, turn left, whatever. We can chase down the enemy. So they're pretty potent. And they don't hit the ship like you see in the movies. They actually go under, and then they detonate. And what we're doing is blowing a hole in the ocean. Why? Because then the ship falls into that hole. Breaks in half. And you end up right here. So that's what sinks the ship. So these torpedoes are big deals. Here, we're loading a torpedo in Japan. Now I was all about empowering my team. I got on this kick of I'm not going to give any orders. Like, what do you think? And it was all about that. I was really good about it. In fact, I was too good about it. Because I did it so well that we were setting up to do this and they made a mistake. Before we do it with the torpedo, we do it with a shape, a concrete shape, which is the same shape and weight. But it's inert. And we ended up dropping it. We almost could've killed somebody. And I was really scared. I was like, oh, this is wrong. This is the wrong way to do business. I need to go back and be in control of everything. But I talked to my team and we said, you know what, it's not that. We're just missing something. And here's the model we came up with. We said, I've given too much control. I was irresponsibly giving control. And what I really needed to do was tune the level of control to how much they knew about their jobs and their clarity of purpose. This is the why that Simon Sinek talks about.
Because if you say, you get to make a decision on which customer you call, you can talk, forget the script, just talk to him, then you need to know what we're trying to achieve. And so we now say leaders tune. The word is tune. We tune the level of control and we invite the team to higher and higher levels. So we don't empower teams. First of all, teams are already empowered. It's inside every human being. But what we do is we tune the empowerment to the levels of competence and clarity. Otherwise, it's just irresponsibility. Now, this is Dr. Stephen Covey. He wrote "The 7 Habits of Highly Effective People," which is an awesome book, and I was a huge Covey fan. And what happened on the submarine was things started going really, really well. Every single sailor in the next 12 months re-enlisted, 35 out of 35. And we were evaluated by the Navy, and the crew of the Santa Fe got the highest score in the history of the inspection team. They had never recorded a higher score. And everyone was really confused. How did that crew that was so bad get so good? And the rest of the Navy thought I was just giving some really great orders. My peers would call me and say, congratulations, you must be giving some really good orders. I'd be like, I'm actually trying this new thing. I'm not giving orders, not telling them what to do. They're like, what? Like, yeah, never mind. Because from their paradigm, it just didn't work. But what we had done, I realized later, is gone from one leader and 134 followers, one thinker and 134 doers, just like that picture of the factory, to 135 active, thinking, passionate, creative people, which I call leadership. My role as the captain was simply to create this environment where people could come to work and just be their best. That's all I did. All day long, all day long. And it worked amazingly. And so Dr. Covey rode the ship. And now we were about intent.
I'm stopping telling you what to do, but you've got to come to me to tell me what you intend to do. If I'm not telling you what to do and you're not doing anything, then I'm leaning back and you're not leaning in to me. So I actually think leaders lean back and the team leans forward. So Dr. Covey saw how we talked to each other, and he said, this is how I think you guys are doing it. When people come up and say, well, tell me what to do, we resist telling them what to do. The instinct is you want to, because you know the answer, and it's so psychologically fulfilling to say, yes, do this. Boy, I solved so many people's problems today. Well, don't I feel good about that. Instead we say, well, what do you see? This is description. What do you think? What do you think we should do? Hey, what do you intend to do? Maybe you should just do it. Depends on what it is. We would just invite people up, and we'd have them climb the ladder, and you guys have a card that describes how this went. But this is how you tune it. This is why you can dial it exactly. Question for you. So you've got someone on your team and they're down there at tell me what to do. You probably don't have any of these people at Google, but just imagine you did. They're like, well, what do I do here? And you're inviting them. You say, well, what do you think? What should we do? And they just seem stuck at tell me what to do. What keeps people stuck at tell me what to do? Go ahead and send in a bunch of words, let's see what we get. And you can type in a bunch of words and hit enter and have all the words come up. Get a whole bunch of words. Let's see what we've got. Confusion, confidence, disillusionment, understanding, inexperience, mistakes, pride, misunderstanding, status, superiority. And fear. Fear is always the biggest one. I've done this with over 100 audiences in 20 different countries, and every single time, fear has been the biggest word. In fact, if I aggregate those 100 speeches, this is what it looks like.
The reason people are not saying what they think is not because they don't know. It's because they're afraid of being wrong, afraid of being laughed at, afraid of being the person who thinks differently from everybody else, afraid of taking responsibility, whatever it happens to be. And this gets me to the next leadership myth. I thought my job as a leader was to quote "motivate" my team. And a lot of times, motivate meant add stress. Come on, guys, we can do it. What leaders need to do is make it feel safe. Because fear is the problem. So the antidote to fear is safety. So my job all day long was to say, yeah, it's OK. Express that in probabilities. Just give me a scale, 1 to 10, how do you think about that? And really ask questions in a way that made it feel safe. I'll give you a very small example. People would come up to me and they'd say, well, I think we should do this. Are you sure? No, that doesn't feel safe. No, I'm not sure, to be honest. Or they have this false bravado, yes, I'm sure, which is always wrong. So, what's your sense of enthusiasm over this? And we'd ask questions. If you want a clue for asking a question, always try to put how at the beginning. How likely is it we're going to launch the product on time? Not, will we launch on time? We just make it safe. And all day long, I was trying to make it safe for my people to share what they thought, even if it was potentially wrong or different. Especially if it was different. Here we are, we're fighting a fire. We were not very good at this when I first got to the ship. And then we'd sit in a room, we'd do a retrospective or a critique or a fact finding, whatever you want to call it. And I heard a lot of they. I was listening to the language and they would say, well, they didn't pressurize the hose, they didn't change the batteries on the thermal imager. They hung the gear up twisted, so it took me longer to get there than it should have.
There was all this they, referring to all these other people on board the submarine. There was they by rank, there was they by department. And I got upset one day with this they. Because it didn't feel like a team. I said, there's no more they on Santa Fe. It rhymed. So that was good. You can only use the word we. The very next day, the engineer walked up to me. Now he's in charge of engineering, and he wants to tell me that the supply department has ordered the wrong part. You know what I'm talking about. So he comes up to me. I'm kind of hanging out in the control room, like I would usually do. He says, Captain, I've got bad news. I said, yeah, what do you got? He says, Captain, we can't fix the pump, because th-- th-- he wants to say they, but he can't. So he says, because we ordered the wrong part. I kind of look at him, he looks at me, I look at him, and he just goes like this. It was super awesome. There's no blame in recommendations. So when I go into organizations, I listen for what I call the we-they boundary. Like, where is it we? Like, we're in engineering, but they're in marketing. And where does it go from we to they? Because as soon as you go from we to they, that's where the team boundary ends. And the problem is most people say, well, think like a team. Here's some posters that tell us to. Send some emails to encourage people, whatever. That doesn't make team performance happen. We just said the word we. And six months later, we had rewired our brains and it felt like a team. People would come down to the ship and say, oh, it's amazing, it feels like a team. They wouldn't even know why. But it was because we had now become one big we. So that's the no they on Santa Fe. Myth number five, which is teams think their way to new action. Any change management that I've seen starts with, well, we've got to create a mindset and blah, blah, blah, blah, blah.
That's, I'm going to think my way to new behavior, which I don't think is the way it works. We behave our way to new thinking. A perfect example is in your lunchroom. Google did a study, because they wanted more team interaction. They found out they had all these little small tables in all the lunchrooms. So when people went to lunch, they'd only sit with two or three people at most. You replace them, and now you have those long tables. Because when you go to lunch, now you can sit with bigger groups of people. We don't give people a lecture. We don't annoy them with a whole bunch of emails. We just put some big tables in. They sit with more people, they get to know more people. It feels like more of a team. So we act our way to new thinking. So at the end of the day, partly because I was curious and not afraid to say I don't know, I learned the ship. I learned it pretty well. My temptation was to go back to being the old know-all, tell-all leader. Because I was firmly rooted in that behavior. When I got stressed out, I would always default back to that behavior. That was my default wiring. If I hadn't slept well, didn't eat well. My boss yelled at me, so of course, I had to yell at my people. But I'd seen the power of not telling my people what to do. Because I'd seen this explosion in creativity and performance in the team. I saw the excitement in their eyes. So I really resisted it. So now I tried as much as possible to live over here. Even when I knew the answer and they came up to me and said, well, what do you want us to do here, I would really resist telling them. Because over here, you focus on ownership. Over here, you focus on the long term. Over here, you're focusing on your people and developing them into leaders. And yeah, some days you might need to operate over here. But here's task accomplishment. This is short term. This is where, unfortunately, a lot of leaders live all the time. You should decide where you want to be here.
You should decide. It shouldn't be determined just by whether you know the answer or not. Most leaders, if they know the answer, operate here. If they don't, they operate here. They're down here, right? I can't tell you, because I don't know. Or sometimes they operate here: I don't know, but I'm still going to tell you. Myth number six, leaders know all, tell all. The fact is, the right place for a leader to be is to still know your job. I'm not saying the lesson is don't know your job. That's what it took for me to understand the power of this, though. But resist giving your team the answer. So one last poll, one last poll. A lot of what I'm talking about is about giving up control. So I want you to think, and again push in a bunch of words. In fact, take 30 seconds, talk to your neighbor. And then together, let's put in a bunch of words about what it feels like to give up control. Here's the thing I tell the CEOs I'm coaching to do. I say, next time you go out to eat, you don't get to order. I want you to just turn to the waiter or the waitress and say, pick my meal. They're all control freaks, so this freaks them out. So I want you to get that in your head and think about that, and send up a bunch of words. See what we've got. Yeah, this is coming out. I love it. Because look what we got. Scary, risk, losing. We're going to lose control. At the same time, I have trust and freedom and other good things. They're all coming up on the screen. These words here, this is how I felt every day as a submarine captain. Every day when I was trusting my team, every day when I was saying, you guys get to choose, this is how I felt. I felt for a long time that these were the wrong feelings. That I was doing something wrong, because I felt this way. I was a little bit nervous and scared. I call it the suck-air-through-teeth maneuver. That's your call. See what you guys do with that.
But I now think this is actually the way you're supposed to feel every day as a leader. Every day, if you don't feel like you're on the verge of this, then you're playing it too close to the vest. You're too in control. You're too comfortable and you're not building a team. It's just about you. So the final thing here is, we talk about, oh, trust your gut, trust your instincts. But part of leadership is feeling the way you're programmed as a human to feel. Because you're programmed to want to be in control and to reduce uncertainty. It's wading into that, leaning into that discomfort, and acting contrary to where your gut might take you many times. Because it's not normal to put your life in the hands of some other person. You're not biologically wired to like that. But I think this is the real way. This is why leadership is hard. That's why we have so few really good leaders. Because you have to act contrary to that. So we go back to this. Here's my plug to you guys, OK? You guys are some of the most creative, talented, smart people on the planet, working for an amazing organization. What I worry about when I interact with-- you're young people to me, sorry-- young people, millennials, or coders is this attitude of, I don't care about that leadership stuff. Because we've associated a bad word with leadership. Because leadership means telling people what to do. We have so much baggage around leadership. So, I don't want to be a leader, I just want to be a coder. I just want to do my job. The world needs you. The world needs you to be leaders, not in the traditional sense of I'm going to go out and tell a bunch of people what to do, but in the sense where everybody in the organization says, I'm going to listen to the diverse opinions, I'm going to encourage someone to say something. I'm going to be part of making it safe to say something that's different from what everybody else thinks. So that's my ask of you guys. Don't shy away from that.
Yeah, you've got to do your job, and you've got to do that. But start bringing in this leadership piece. You're all leaders, and we need it. The world needs it, the planet needs it. Problems are too complex, too hard, too sticky, too thorny. If you're not convinced at this point that no group of quote "experts" is gonna solve these problems [INAUDIBLE], I don't know what's going to. So we need you. The world needs you guys. One other thing, we have a channel. It's on YouTube, of course. It's called "Leadership Nudges," where you can see there's 150 of them now. They're typically 60 to 90 seconds. I just talk about one little thing, one of the things we talked about today. And sometimes weird things. I'm going to show you one, and then I'm going to tell you how you can enroll in these. I taped this when I was in [INAUDIBLE] a couple weeks ago. In the hotel bathroom, I saw something that was really interesting to me. So I'm in a bathroom here. [VIDEO PLAYBACK] - Today I'm going to talk about mechanisms. Mechanisms are a great way of influencing human behavior in organizations. The way mechanisms work is the system is designed so that you can't help but comply with the desired behavior. In this case, it's putting the toilet seat down. [INAUDIBLE] It works like this. The toilet seat, when you raise it, covers the button to flush the toilet. So you can't push the button to flush the toilet without putting the seat down. I love it. I'm David Marquet, and that's your Leadership Nudge. [LAUGHTER] [END PLAYBACK] DAVID MARQUET: Yeah, so I had fun with that one, obviously. You've got your phones out, you can just text the word Nudge to 44144, and we'll enroll you in the nudges. You can also sign up for them on the website, because I think this texting is a US-only thing. So you go to 44144, type the word nudge. We're not even going to ask for your name. All we need is your email, and then you'll get on the nudge list.
But also, you can go back to the YouTube channel and look at some of the old ones. If you want to share some of what we talked about today with your team or anybody else, they're all on that little YouTube channel as little 60 to 90 second things. That's it. I'm going to have a short conversation here with Theron. Are you going to come on up? But remember my ask: I need you guys to be leaders. I need you to think of it. Don't shy away from stepping up and being leaders. [APPLAUSE] THERON TINGSTAD: We just have time here for about one question for the film piece. So I wanted to ask you, you mentioned that the average age on the Santa Fe was 27 years old. The average age on our sales floor here, if you average in people like me, is about 27 years old. So I skew the average up a little bit. The tech industry on the whole skews younger. Do you feel that there are, you mentioned millennials, do you feel that there are generational differences as far as how the leadership philosophy should be applied? DAVID MARQUET: Actually, no. I think, just like all these myths, there's another myth. One of the myths is millennials are different from everybody else, somehow they're aliens or weird. I have three kids, they're all in their 20s now. So they're all millennials. So I speak a little millennial. Here's what I think. I think millennials are just like our parents and our grandparents and our great-great-grandparents in terms of their genetic wiring and what they want as human beings. What I think has changed is the ability to say, this job isn't worth being treated badly. In the United States, when I was growing up, the average new house was 1,500 square feet. The average new house size today is 2,500 square feet. The average family size in a household went from 2.5 to 1.5. I grew up in a family of four. I couldn't wait to get out of there. Now it's changed a little bit. So I would have put up with any amount of pain to be gone and be out on my own.
Now it's different, because there's not so much pain back home, which I think is awesome. So my deal is, the way that millennials are saying, look, I want a job that matters. I want to feel valued. I want to be part of a team. I want some degree of control over when and how I work and what I work on. That's how we should treat everyone. So they're just telling us how we should treat everybody. THERON TINGSTAD: Well, I just wanted to thank Captain Marquet for coming here today. Please feel free to come up and ask him some questions in person here. But as far as the film portion goes: again, Captain David Marquet, author of "Turn the Ship Around," featured by Stephen Covey as just an incredible example of leadership in what he's done and what he continues to do with companies around the world. Thank you so much for coming today. DAVID MARQUET: Thanks, everyone, for coming out. [APPLAUSE]
2024-07-25T01:26:51.906118
https://example.com/article/2395
Nick Cannon announced his split from Mariah Carey in August 2014, but they had actually separated several months before. Mariah Carey is engaged to billionaire businessman James Packer, and the American singer seems to have every intention of marrying her fiancé this year. But, keeping in mind the controversy surrounding her divorce from former partner Nick Cannon, Carey might have to reconsider her upcoming wedding. The We Belong Together singer recently sat down with Sharon Osbourne, The Talk co-host, to talk about her upcoming docu-series Mariah's World, her engagement to Australian entrepreneur Packer, and her adorable kids. Carey, 46, gushing about her fiancé and the upcoming nuptials, admitted that getting married this year "is the goal". However, a 2016 wedding might be too much to expect. "There's a lot on my plate," she said, suggesting to Osbourne during the interview for Entertainment Tonight that there might be some delay before the two get hitched. When Osbourne prodded her further about Packer, and whether it was love at first sight, the mother-of-two asserted she wasn't "on the prowl" for a new man, but indeed felt something for her soon-to-be husband. "We were talking and having a good time and I was like, 'This guy is really fun.' So it was nice," Carey said. Meanwhile, the actress-cum-singer has also been keeping a strict watch on her diet. "I've been pretty consistent with this bleak diet that I am on," she explained to Osbourne. However, she quickly added that it's not her wedding alone that is inspiring her to watch her diet. "You know, it was the whole situation that made me want to try and be better for all," she told the talk show host, adding on a note of humour, "And then we do have to get fittings and dresses!" Carey has been in the news of late for her relationship and the controversy surrounding her divorce from Nick Cannon. It was reported that the America's Got Talent host had allegedly refused to sign the divorce papers.
Cannon, on the other hand, cleared the air regarding the rumours, saying in an interview with Extra, "There's nothing to tell… the media has a problem of sensationalism... me and Mariah get along great, and it's a process, nobody is holding nothing up. Why would I hold it up? I want her to be happy."
2024-01-27T01:26:51.906118
https://example.com/article/4720
Q: Finding the orthogonal complement of a set in functional analysis

I am self-studying functional analysis from Kreyszig's Introductory Functional Analysis with Applications and I need help with Ex. 3.3, Question 5: If $X = \mathbb{R}^2$, find the orthogonal complement of $M = \{x\}$, where $x = (e_1, e_2)$ and $x$ is nonzero. Please give some hints.

A: So $M^{\perp}$ is the set of all $y=(f_1,f_2)\in \mathbb{R}^2$ such that $\langle x,y\rangle=0$. Now, the inner product on $\mathbb{R}^2$ is given by $$ \langle x,y\rangle=e_1f_1+e_2f_2, $$ which is $0$ when? This amounts to solving an equation.
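For what it's worth, here is how the hint plays out; the explicit parametrization below goes one step beyond what the answer deliberately leaves open, so treat it as a spoiler:

```latex
\langle x, y\rangle = e_1 f_1 + e_2 f_2 = 0
\quad\Longleftrightarrow\quad
(f_1, f_2) = t\,(-e_2,\; e_1), \qquad t \in \mathbb{R},
```

so $M^{\perp} = \operatorname{span}\{(-e_2, e_1)\}$, the line through the origin perpendicular to $x$. Sanity check: since $x \neq 0$, this subspace is one-dimensional, which matches $\dim M^{\perp} = 2 - \dim \operatorname{span}\{x\} = 1$.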
2024-07-02T01:26:51.906118
https://example.com/article/5663
Benefits of Cryotherapy Chronic, rampant inflammation throughout the body ages us and is the precursor to a myriad of diseases, including heart disease, cancer, Alzheimer's, arthritis and diabetes, to name a few. With Whole Body Cryotherapy, the flame of inflammation throughout the entire body is reduced and, in some instances, put out. With Local Cryotherapy, any individual area of the body can be treated immediately, and the anti-inflammatory benefits can be both seen and felt right away. Weight Loss Successful, permanent weight loss only occurs with an increase in metabolism. In the Whole Body Cryochamber, the temperature of the outer layer of skin is reduced to between 41°F and 50°F. Your body responds with a boost in your metabolism and "burns" up calories by speeding up to literally warm you up. Repeated exposure to our Whole Body Cryotherapy results in an elevated metabolism 24/7. Translation: you are burning MORE calories each and every day. Muscle and Joint Repair Whole Body Cryotherapy reduces inflammation, promotes rapid healing and minimizes pain. You may add a second procedure with Local Cryotherapy to provide more direct cooling to the specific area, which accelerates healing to get you back to full mobility as soon as possible. Fighting Depression Numerous studies show that just 2-3 minutes of Whole Body Cryotherapy is beneficial to those who suffer from depression, anxiety or both. This response is attributed to hormonal benefits that occur as a result of the Whole Body Cryotherapy treatments. Collin County Cryo now serves the following North Texas communities from our Celina, TX office. DISCLAIMER: All information and content contained on this website and on all Collin County Cryo, Inc. printed materials are for informational purposes only. Statements regarding products and services within this site have not been evaluated or approved by the FDA.
Our products and services are not intended to diagnose, treat, cure or prevent any disease. Use at your own risk.
2023-11-22T01:26:51.906118
https://example.com/article/5704
Telecommunications technicians, such as so-called “Installation and Maintenance” (I&M) technicians, may visit customer sites to install new equipment, set up new services, or to service existing equipment or services. Frequently an I&M technician needs to gather local or district-specific information to complete a “job order” or task. For example, an I&M technician may need to know cross-box locations, pricing information, service information, cable records, plat information, or other information needed to carry out his or her assignment. For many telephone companies, including Regional Bell Operating Companies (RBOCs), such local information is generally not stored on centralized legacy systems. Accordingly, although I&M technicians can presently access information stored on these central legacy systems using portable laptops and custom software, they are unable to remotely access the local information using their portable laptops. According to the conventional approach to this problem, an I&M technician seeking local information must make one or more telephone calls to local offices of his or her employer. Several calls may be required. The I&M technician may be put on hold as the call attendant collects the information or tends to other business. The time the I&M technician must spend in collecting local information reduces his or her job efficiency and may increase costs to customers. Furthermore, miscommunications between individuals may cause incorrect information to be transferred. For example, the data retrieved by the call attendant may not be accurately interpreted by the call attendant, who has a lower level of technical expertise than the I&M technician. These are significant drawbacks to the current approach.
2023-08-13T01:26:51.906118
https://example.com/article/9358
The Best Baseball Players NOT in the Hall of Fame List Rules Only players who have not been inducted into the Baseball Hall of Fame. The best baseball players not in the Hall of Fame are here on this list, though the 2020 Baseball Hall of Fame ballot has room for some of these stars. In order to be considered as slighted by the Baseball Hall of Fame, a player must be non-active and have had the ability to appear on the Hall of Fame ballot at least once. Some of these men are considered to be among the greatest baseball players of all time, yet remain off of the "official" list of the best MLB players ever at the Baseball Hall of Fame in Cooperstown. What keeps a player out of the Baseball Hall of Fame? Some of these players, like Barry Bonds and Mark McGwire, have been involved in PED (performance enhancing drug) scandals; others broke the cardinal rule against betting on the sport (Pete Rose); and still others were involved in a baseball scandal, like Shoeless Joe Jackson, who is said to have contributed to his team throwing the World Series. Some of the best baseball players are simply not on the list because they haven't yet acquired the 75% of votes necessary for induction. Who should be in the Baseball Hall of Fame? What top baseball players are not in the Hall of Fame? Who are the best players not in the Baseball Hall of Fame? These are the questions addressed here in the list of the greatest baseball players not in the Hall of Fame. If your favorite player isn't on the list, make sure to add them so others can agree or disagree.
2023-12-22T01:26:51.906118
https://example.com/article/2428
/*
 * Copyright 2008-present MongoDB, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.mongodb

import com.mongodb.async.FutureResultCallback
import com.mongodb.client.model.Collation
import com.mongodb.client.model.CollationAlternate
import com.mongodb.client.model.CollationCaseFirst
import com.mongodb.client.model.CollationMaxVariable
import com.mongodb.client.model.CollationStrength
import com.mongodb.client.test.CollectionHelper
import com.mongodb.client.test.Worker
import com.mongodb.client.test.WorkerCodec
import com.mongodb.connection.ConnectionDescription
import com.mongodb.connection.ServerConnectionState
import com.mongodb.connection.ServerDescription
import com.mongodb.connection.ServerType
import com.mongodb.connection.ServerVersion
import com.mongodb.internal.binding.AsyncConnectionSource
import com.mongodb.internal.binding.AsyncReadBinding
import com.mongodb.internal.binding.AsyncReadWriteBinding
import com.mongodb.internal.binding.AsyncSessionBinding
import com.mongodb.internal.binding.AsyncSingleConnectionBinding
import com.mongodb.internal.binding.AsyncWriteBinding
import com.mongodb.internal.binding.ConnectionSource
import com.mongodb.internal.binding.ReadBinding
import com.mongodb.internal.binding.ReadWriteBinding
import com.mongodb.internal.binding.SessionBinding
import com.mongodb.internal.binding.SingleConnectionBinding
import com.mongodb.internal.binding.WriteBinding
import com.mongodb.internal.bulk.InsertRequest
import com.mongodb.internal.connection.AsyncConnection
import com.mongodb.internal.connection.Connection
import com.mongodb.internal.connection.ServerHelper
import com.mongodb.internal.connection.SplittablePayload
import com.mongodb.internal.operation.AsyncReadOperation
import com.mongodb.internal.operation.AsyncWriteOperation
import com.mongodb.internal.operation.InsertOperation
import com.mongodb.internal.operation.ReadOperation
import com.mongodb.internal.operation.WriteOperation
import com.mongodb.internal.session.SessionContext
import com.mongodb.internal.validator.NoOpFieldNameValidator
import org.bson.BsonDocument
import org.bson.Document
import org.bson.FieldNameValidator
import org.bson.codecs.DocumentCodec
import spock.lang.Shared
import spock.lang.Specification

import java.util.concurrent.TimeUnit

import static com.mongodb.ClusterFixture.TIMEOUT
import static com.mongodb.ClusterFixture.checkReferenceCountReachesTarget
import static com.mongodb.ClusterFixture.executeAsync
import static com.mongodb.ClusterFixture.getAsyncBinding
import static com.mongodb.ClusterFixture.getBinding
import static com.mongodb.ClusterFixture.getPrimary
import static com.mongodb.ClusterFixture.loopCursor
import static com.mongodb.WriteConcern.ACKNOWLEDGED
import static com.mongodb.internal.operation.OperationUnitSpecification.getMaxWireVersionForServerVersion

class OperationFunctionalSpecification extends Specification {

    def setup() {
        CollectionHelper.drop(getNamespace())
    }

    def cleanup() {
        CollectionHelper.drop(getNamespace())
        checkReferenceCountReachesTarget(getBinding(), 1)
        checkReferenceCountReachesTarget(getAsyncBinding(), 1)
        ServerHelper.checkPool(getPrimary())
    }

    String getDatabaseName() {
        ClusterFixture.getDefaultDatabaseName()
    }

    String getCollectionName() {
        getClass().getName()
    }

    MongoNamespace getNamespace() {
        new MongoNamespace(getDatabaseName(), getCollectionName())
    }

    void acknowledgeWrite(final SingleConnectionBinding binding) {
        new InsertOperation(getNamespace(), true, ACKNOWLEDGED, false,
                [new InsertRequest(new BsonDocument())]).execute(binding)
        binding.release()
    }

    void acknowledgeWrite(final AsyncSingleConnectionBinding binding) {
        executeAsync(new InsertOperation(getNamespace(), true, ACKNOWLEDGED, false,
                [new InsertRequest(new BsonDocument())]), binding)
        binding.release()
    }

    CollectionHelper<Document> getCollectionHelper() {
        getCollectionHelper(getNamespace())
    }

    CollectionHelper<Document> getCollectionHelper(MongoNamespace namespace) {
        new CollectionHelper<Document>(new DocumentCodec(), namespace)
    }

    CollectionHelper<Worker> getWorkerCollectionHelper() {
        new CollectionHelper<Worker>(new WorkerCodec(), getNamespace())
    }

    def execute(operation, boolean async) {
        def executor = async ? ClusterFixture.&executeAsync : ClusterFixture.&executeSync
        executor(operation)
    }

    def executeWithSession(operation, boolean async) {
        def executor = async ? ClusterFixture.&executeAsync : ClusterFixture.&executeSync
        def binding = async ? new AsyncSessionBinding(getAsyncBinding()) : new SessionBinding(getBinding())
        executor(operation, binding)
    }

    def execute(operation, ReadWriteBinding binding) {
        ClusterFixture.executeSync(operation, binding)
    }

    def execute(operation, AsyncReadWriteBinding binding) {
        ClusterFixture.executeAsync(operation, binding)
    }

    def executeAndCollectBatchCursorResults(operation, boolean async) {
        def cursor = execute(operation, async)
        def results = []
        if (async) {
            loopCursor([cursor], new Block<Object>() {
                void apply(Object batch) {
                    results.addAll(batch)
                }
            })
        } else {
            while (cursor.hasNext()) {
                results.addAll(cursor.next())
            }
        }
        results
    }

    def next(cursor, boolean async, int minimumCount) {
        List<BsonDocument> retVal = []
        while (retVal.size() < minimumCount) {
            retVal.addAll(next(cursor, async))
        }
        retVal
    }

    def next(cursor, boolean async) {
        if (async) {
            def futureResultCallback = new FutureResultCallback<List<BsonDocument>>()
            cursor.next(futureResultCallback)
            futureResultCallback.get(TIMEOUT, TimeUnit.SECONDS)
        } else {
            cursor.next()
        }
    }

    def tryNext(cursor, boolean async) {
        def next
        if (async) {
            def futureResultCallback = new FutureResultCallback<List<BsonDocument>>()
            cursor.tryNext(futureResultCallback)
            next = futureResultCallback.get(TIMEOUT, TimeUnit.SECONDS)
        } else {
            next = cursor.tryNext()
        }
        next
    }

    def consumeAsyncResults(cursor) {
        def batch = next(cursor, true)
        while (batch != null) {
            batch = next(cursor, true)
        }
    }

    void testOperation(Map params) {
        params.async = params.async != null ? params.async : false
        params.result = params.result != null ? params.result : null
        params.checkCommand = params.checkCommand != null ? params.checkCommand : true
        params.checkSlaveOk = params.checkSlaveOk != null ? params.checkSlaveOk : false
        params.readPreference = params.readPreference != null ? params.readPreference : ReadPreference.primary()
        params.retryable = params.retryable != null ? params.retryable : false
        params.serverType = params.serverType != null ? params.serverType : ServerType.STANDALONE
        testOperation(params.operation, params.serverVersion, params.expectedCommand, params.async, params.result,
                params.checkCommand, params.checkSlaveOk, params.readPreference, params.retryable, params.serverType)
    }

    void testOperationInTransaction(operation, List<Integer> serverVersion, BsonDocument expectedCommand, boolean async,
                                    result = null, boolean checkCommand = true, boolean checkSlaveOk = false,
                                    ReadPreference readPreference = ReadPreference.primary(), boolean retryable = false,
                                    ServerType serverType = ServerType.STANDALONE) {
        testOperation(operation, serverVersion, ReadConcern.DEFAULT, expectedCommand, async, result, checkCommand,
                checkSlaveOk, readPreference, retryable, serverType, true)
    }

    void testOperation(operation, List<Integer> serverVersion, BsonDocument expectedCommand, boolean async,
                       result = null, boolean checkCommand = true, boolean checkSlaveOk = false,
                       ReadPreference readPreference = ReadPreference.primary(), boolean retryable = false,
                       ServerType serverType = ServerType.STANDALONE, Boolean
activeTransaction = false) { testOperation(operation, serverVersion, ReadConcern.DEFAULT, expectedCommand, async, result, checkCommand, checkSlaveOk, readPreference, retryable, serverType, activeTransaction) } void testOperation(operation, List<Integer> serverVersion, ReadConcern readConcern, BsonDocument expectedCommand, boolean async, result = null, boolean checkCommand = true, boolean checkSlaveOk = false, ReadPreference readPreference = ReadPreference.primary(), boolean retryable = false, ServerType serverType = ServerType.STANDALONE, Boolean activeTransaction = false) { def test = async ? this.&testAsyncOperation : this.&testSyncOperation test(operation, serverVersion, readConcern, result, checkCommand, expectedCommand, checkSlaveOk, readPreference, retryable, serverType, activeTransaction) } void testOperationRetries(operation, List<Integer> serverVersion, BsonDocument expectedCommand, boolean async, result = null, Boolean activeTransaction = false) { testOperation(operation, serverVersion, expectedCommand, async, result, true, false, ReadPreference.primary(), true, ServerType.REPLICA_SET_PRIMARY, activeTransaction) } void testRetryableOperationThrowsOriginalError(operation, List<List<Integer>> serverVersions, List<ServerType> serverTypes, Throwable exception, boolean async) { def test = async ? this.&testAyncRetryableOperationThrows : this.&testSyncRetryableOperationThrows test(operation, serverVersions as Queue, serverTypes as Queue, exception) } void testOperationSlaveOk(operation, List<Integer> serverVersion, ReadPreference readPreference, boolean async, result = null) { def test = async ? 
this.&testAsyncOperation : this.&testSyncOperation test(operation, serverVersion, ReadConcern.DEFAULT, result, false, null, true, readPreference) } void testOperationThrows(operation, List<Integer> serverVersion, boolean async) { testOperationThrows(operation, serverVersion, ReadConcern.DEFAULT, async) } void testOperationThrows(operation, List<Integer> serverVersion, ReadConcern readConcern, boolean async) { def test = async ? this.&testAsyncOperation : this.&testSyncOperation test(operation, serverVersion, readConcern, null, false, null, false, ReadPreference.primary(), false, ServerType.STANDALONE, false) } def testSyncOperation(operation, List<Integer> serverVersion, ReadConcern readConcern, result, Boolean checkCommand=true, BsonDocument expectedCommand=null, Boolean checkSlaveOk=false, ReadPreference readPreference=ReadPreference.primary(), Boolean retryable = false, ServerType serverType = ServerType.STANDALONE, Boolean activeTransaction = false) { def connection = Mock(Connection) { _ * getDescription() >> Stub(ConnectionDescription) { getMaxWireVersion() >> getMaxWireVersionForServerVersion(serverVersion) getServerType() >> serverType } } def connectionSource = Stub(ConnectionSource) { getConnection() >> { connection } getServerDescription() >> { def builder = ServerDescription.builder().address(Stub(ServerAddress)).state(ServerConnectionState.CONNECTED) if (new ServerVersion(serverVersion).compareTo(new ServerVersion(3, 6)) >= 0) { builder.logicalSessionTimeoutMinutes(42) } builder.build() } } def readBinding = Stub(ReadBinding) { getReadConnectionSource() >> connectionSource getReadPreference() >> readPreference getSessionContext() >> Stub(SessionContext) { hasSession() >> true hasActiveTransaction() >> activeTransaction getReadConcern() >> readConcern } } def writeBinding = Stub(WriteBinding) { getWriteConnectionSource() >> connectionSource getSessionContext() >> Stub(SessionContext) { hasSession() >> true hasActiveTransaction() >> activeTransaction 
getReadConcern() >> readConcern } } if (retryable) { 1 * connection.command(*_) >> { throw new MongoSocketException('Some socket error', Stub(ServerAddress)) } } if (checkCommand) { 1 * connection.command(*_) >> { assert it[1] == expectedCommand if (it.size() == 9) { SplittablePayload payload = it[7] payload.setPosition(payload.size()) } result } } else if (checkSlaveOk) { 1 * connection.command(*_) >> { it[4] == readPreference result } } 0 * connection.command(_, _, _, _, _, _) >> { // Unexpected Command result } if (retryable) { 2 * connection.release() } else { 1 * connection.release() } if (operation instanceof ReadOperation) { operation.execute(readBinding) } else if (operation instanceof WriteOperation) { operation.execute(writeBinding) } } def testAsyncOperation(operation = operation, List<Integer> serverVersion = serverVersion, ReadConcern readConcern, result = null, Boolean checkCommand = true, BsonDocument expectedCommand = null, Boolean checkSlaveOk = false, ReadPreference readPreference = ReadPreference.primary(), Boolean retryable = false, ServerType serverType = ServerType.STANDALONE, Boolean activeTransaction = false) { def connection = Mock(AsyncConnection) { _ * getDescription() >> Stub(ConnectionDescription) { getMaxWireVersion() >> getMaxWireVersionForServerVersion(serverVersion) getServerType() >> serverType } } def connectionSource = Stub(AsyncConnectionSource) { getConnection(_) >> { it[0].onResult(connection, null) } getServerDescription() >> { def builder = ServerDescription.builder().address(Stub(ServerAddress)).state(ServerConnectionState.CONNECTED) if (new ServerVersion(serverVersion).compareTo(new ServerVersion(3, 6)) >= 0) { builder.logicalSessionTimeoutMinutes(42) } builder.build() } } def readBinding = Stub(AsyncReadBinding) { getReadConnectionSource(_) >> { it[0].onResult(connectionSource, null) } getReadPreference() >> readPreference getSessionContext() >> Stub(SessionContext) { hasSession() >> true hasActiveTransaction() >> 
activeTransaction getReadConcern() >> readConcern } } def writeBinding = Stub(AsyncWriteBinding) { getWriteConnectionSource(_) >> { it[0].onResult(connectionSource, null) } getSessionContext() >> Stub(SessionContext) { hasSession() >> true hasActiveTransaction() >> activeTransaction getReadConcern() >> readConcern } } def callback = new FutureResultCallback() if (retryable) { 1 * connection.commandAsync(*_) >> { it.last().onResult(null, new MongoSocketException('Some socket error', Stub(ServerAddress))) } } if (checkCommand) { 1 * connection.commandAsync(*_) >> { assert it[1] == expectedCommand if (it.size() == 10) { SplittablePayload payload = it[7] payload.setPosition(payload.size()) } it.last().onResult(result, null) } } else if (checkSlaveOk) { 1 * connection.commandAsync(*_) >> { it[4] == readPreference it.last().onResult(result, null) } } 0 * connection.commandAsync(_, _, _, _, _, _, _) >> { // Unexpected Command it[5].onResult(result, null) } if (retryable) { 2 * connection.release() } else { 1 * connection.release() } if (operation instanceof AsyncReadOperation) { operation.executeAsync(readBinding, callback) } else if (operation instanceof AsyncWriteOperation) { operation.executeAsync(writeBinding, callback) } try { callback.get(1000, TimeUnit.MILLISECONDS) } catch (MongoException e) { throw e.cause } } def testSyncRetryableOperationThrows(operation, Queue<List<Integer>> serverVersions, Queue<ServerType> serverTypes, Throwable exception) { def serverVersionSize = serverVersions.size() def connection = Mock(Connection) { _ * getDescription() >> Stub(ConnectionDescription) { getMaxWireVersion() >> { getMaxWireVersionForServerVersion(serverVersions.poll()) } getServerType() >> { serverTypes.poll() } } } def connectionSource = Stub(ConnectionSource) { getConnection() >> { if (serverVersions.isEmpty()){ throw new MongoSocketOpenException('No Server', new ServerAddress(), new Exception('no server')) } else { connection } } } def writeBinding = Stub(WriteBinding) 
{ getWriteConnectionSource() >> connectionSource getSessionContext() >> Stub(SessionContext) { hasSession() >> true hasActiveTransaction() >> false getReadConcern() >> ReadConcern.DEFAULT } } 1 * connection.command(*_) >> { throw exception } if (serverVersionSize == 2) { 1 * connection.release() } else { 2 * connection.release() } operation.execute(writeBinding) } def testAyncRetryableOperationThrows(operation, Queue<List<Integer>> serverVersions, Queue<ServerType> serverTypes, Throwable exception) { def serverVersionSize = serverVersions.size() def connection = Mock(AsyncConnection) { _ * getDescription() >> Stub(ConnectionDescription) { getMaxWireVersion() >> { getMaxWireVersionForServerVersion(serverVersions.poll()) } getServerType() >> { serverTypes.poll() } } } def connectionSource = Stub(AsyncConnectionSource) { getConnection(_) >> { if (serverVersions.isEmpty()) { it[0].onResult(null, new MongoSocketOpenException('No Server', new ServerAddress(), new Exception('no server'))) } else { it[0].onResult(connection, null) } } } def writeBinding = Stub(AsyncWriteBinding) { getWriteConnectionSource(_) >> { it[0].onResult(connectionSource, null) } getSessionContext() >> Stub(SessionContext) { hasSession() >> true hasActiveTransaction() >> false getReadConcern() >> ReadConcern.DEFAULT } } def callback = new FutureResultCallback() 1 * connection.commandAsync(*_) >> { it.last().onResult(null, exception) } if (serverVersionSize == 2) { 1 * connection.release() } else { 2 * connection.release() } operation.executeAsync(writeBinding, callback) callback.get(1000, TimeUnit.MILLISECONDS) } @Shared Collation defaultCollation = Collation.builder() .locale('en') .caseLevel(true) .collationCaseFirst(CollationCaseFirst.OFF) .collationStrength(CollationStrength.IDENTICAL) .numericOrdering(true) .collationAlternate(CollationAlternate.SHIFTED) .collationMaxVariable(CollationMaxVariable.SPACE) .normalization(true) .backwards(true) .build() @Shared Collation caseInsensitiveCollation = 
Collation.builder() .locale('en') .collationStrength(CollationStrength.SECONDARY) .build() static final FieldNameValidator NO_OP_FIELD_NAME_VALIDATOR = new NoOpFieldNameValidator() static boolean serverVersionIsGreaterThan(List<Integer> actualVersion, List<Integer> minVersion) { new ServerVersion(actualVersion).compareTo(new ServerVersion(minVersion)) >= 0 } }
2024-06-19T01:26:51.906118
https://example.com/article/2940
Lupus erythematosus and other autoimmune diseases related to statin therapy: a systematic review. Statins have been increasingly associated with drug-induced autoimmune reactions, including lupus erythematosus. The aim was to identify and determine the clinical and biological characteristics of statin-induced autoimmune reactions. The MEDLINE database (1966 to September 2005) was used to identify all reported cases of statin-induced autoimmune diseases. The keywords used were statins, 3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors, adverse effects, autoimmune disease, lupus erythematosus, dermatomyositis and polymyositis. Twenty-eight cases of statin-induced autoimmune diseases have been published so far. Systemic lupus erythematosus was reported in 10 cases, subacute cutaneous lupus erythematosus in three cases, dermatomyositis and polymyositis in 14 cases and lichen planus pemphigoides in one case. Autoimmune hepatitis was observed in two patients with systemic lupus erythematosus. The mean time of exposure before disease onset was 12.8 ± 18 months (range: 1 month to 6 years). Systemic immunosuppressive therapy was required in the majority of cases. In many patients, antinuclear antibodies were still positive many months after clinical recovery. A lethal outcome has been recorded in two patients despite aggressive immunosuppressive therapy. Long-term exposure to statins may be associated with drug-induced lupus erythematosus and other autoimmune disorders. Fatal cases have been reported despite early drug discontinuation and aggressive systemic immunosuppressive therapy.
2024-01-13T01:26:51.906118
https://example.com/article/9623
Samuel Fournier Samuel Fournier (born January 28, 1986 in Lacolle, Quebec) is a retired professional Canadian football fullback who played for the Montreal Alouettes of the Canadian Football League. He was drafted 19th overall by the Hamilton Tiger-Cats in the 2010 CFL Draft. Fournier has also been a member of the Edmonton Eskimos. He played college football for the Laval Rouge et Or. Fournier signed with the Tiger-Cats as a free agent on June 2, 2012. He spent the entire 2011 season on the Eskimos' injured list. In 2010 he was selected by the Tiger-Cats in the third round (19th overall) of the CFL Canadian Draft. He joined the Laval Rouge et Or in 2006, was named Rookie of the Year, and won the Vanier Cup in both 2006 and 2008. He played in the East-West Bowl in 2009. Over four seasons, Fournier played in 45 of the team's 46 games. He carried the ball 97 times for 621 yards and eight touchdowns and also had 14 receptions for 141 yards. He started his football career in Saint-Jean-sur-Richelieu, Québec, joining Les Géants du Cégep St-Jean, where he broke the all-time rushing record with 1,534 yards in eight games in 2004. He was his team's MVP and the league MVP in 2004–2005. During those same years, he played with Team Canada Junior to represent his country at the NFL Global Junior Championships. He won the World Title twice, in 2005 and 2006, and was twice named Team Canada MVP, in 2004 and 2006. References External links Just Sports Stats Montreal Alouettes bio Category:1986 births Category:Living people Category:Canadian football fullbacks Category:Edmonton Eskimos players Category:Hamilton Tiger-Cats players Category:Montreal Alouettes players Category:People from Montérégie Category:Laval Rouge et Or football players
2023-09-25T01:26:51.906118
https://example.com/article/4276
Q: Calling async method in IEnumerable.Select I have the following code, converting items between the types R and L using an async method:

class MyClass<R, L>
{
    public async Task<bool> MyMethodAsync(List<R> remoteItems)
    {
        ...
        List<L> mappedItems = new List<L>();
        foreach (var remoteItem in remoteItems)
        {
            mappedItems.Add(await MapToLocalObject(remoteItem));
        }

        //Do stuff with mapped items
        ...
    }

    private async Task<L> MapToLocalObject(R remoteObject);
}

Is it possible to write this using an IEnumerable.Select call (or similar) to reduce lines of code? I tried this:

class MyClass<R, L>
{
    public async Task<bool> MyMethodAsync(List<R> remoteItems)
    {
        ...
        List<L> mappedItems = remoteItems.Select<R, L>(async r => await MapToLocalObject(r)).ToList<L>();

        //Do stuff with mapped items
        ...
    }
}

But I get the error: "Cannot convert async lambda expression to delegate type 'System.Func<R,int,L>'. An async lambda expression may return void, Task or Task<T>, none of which are convertible to 'System.Func<R,int,L>'."

I believe I am missing something about the async/await keywords, but I cannot figure out what. Does anybody know how I can modify my code to make it work?

A: You can work this out by considering the types in play. For example, MapToLocalObject - when viewed as an asynchronous function - does map from R to L. But if you view it as a synchronous function, it maps from R to Task<L>. Task is a "future", so Task<L> can be thought of as a type that will produce an L at some point in the future. So you can easily convert from a sequence of R to a sequence of Task<L>:

IEnumerable<Task<L>> mappingTasks = remoteItems.Select(remoteItem => MapToLocalObject(remoteItem));

Note that there is an important semantic difference between this and your original code. Your original code waits for each object to be mapped before proceeding to the next object; this code will start all mappings concurrently. Your result is a sequence of tasks - a sequence of future L results.
To work with sequences of tasks, there are a few common operations. Task.WhenAll and Task.WhenAny are built-in operations for the most common requirements. If you want to wait until all mappings have completed, you can do:

L[] mappedItems = await Task.WhenAll(mappingTasks);

If you prefer to handle each item as it completes, you can use OrderByCompletion from my AsyncEx library:

Task<L>[] orderedMappingTasks = mappingTasks.OrderByCompletion();
foreach (var task in orderedMappingTasks)
{
    var mappedItem = await task;
    ...
}
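As an aside (not from the original thread), the same sequential-versus-concurrent distinction exists in other async ecosystems. The sketch below shows the three patterns in Python's asyncio: the sequential await loop, gather-all (analogous to Select + Task.WhenAll), and handle-as-completed (analogous to OrderByCompletion). The mapper `map_to_local` is a made-up stand-in for an async mapping call like MapToLocalObject:

```python
import asyncio

async def map_to_local(remote):
    # Stand-in for an async mapping call (e.g. a network round trip).
    await asyncio.sleep(0)
    return remote * 2

async def main():
    remote_items = [1, 2, 3]

    # 1. Sequential: await each mapping before starting the next,
    #    like the original foreach/await loop.
    sequential = [await map_to_local(r) for r in remote_items]

    # 2. Concurrent, wait for all: start every mapping, then gather,
    #    like Select(...) followed by Task.WhenAll.
    concurrent = list(await asyncio.gather(*(map_to_local(r) for r in remote_items)))

    # 3. Handle each result as soon as it completes, like OrderByCompletion.
    #    Sorted here only to make the output deterministic.
    as_done = sorted([await t for t in asyncio.as_completed(
        [map_to_local(r) for r in remote_items])])

    return sequential, concurrent, as_done

print(asyncio.run(main()))
```

Pattern 2 and 3 start all mappings concurrently, so they carry the same semantic caveat the answer notes for the C# version.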
2024-07-04T01:26:51.906118
https://example.com/article/4060
Dietary L-arginine prevents fetal growth restriction in rats. Alterations in maternal plasma arginine concentration accompany normal pregnancy. Nitric oxide is synthesized from L-arginine and influences fetal growth. We hypothesized that L-arginine would influence fetal growth and hypoxia-induced uricemia in a maternal hypoxia-induced fetal growth restriction model. Fetal growth on day 21 of gestation was assessed in timed pregnant Wistar rats with or without exposure to maternal hypobaric hypoxia. Animals exposed to hypoxia received either no supplement or supplementation of drinking water with 0.2% L-arginine, 2% L-arginine, or 2% glycine. On day 21 of gestation, fetuses were delivered by hysterotomy and fetal and placental weights were obtained. Maternal and fetal plasma were assayed for uric acid as an index of tissue hypoxia. Xanthine oxidase and xanthine dehydrogenase, precursors of uric acid and reactive oxygen species, were assayed in maternal tissue. Results were analyzed by analysis of variance with correction for multiple comparisons. Exposure of rats on normal diets to hypoxia resulted in a 30% reduction in fetal weights. L-Arginine, 2% or 0.2%, prevented the reduction in fetal weight (p < 0.0001). Isocaloric and isonitrogenous supplementation with glycine did not influence hypoxia-induced fetal growth restriction. L-Arginine, but not glycine, ameliorates maternal hypoxia-induced fetal growth restriction in the rat.
2023-08-07T01:26:51.906118
https://example.com/article/7661
Balcony decorations Rue des quatre vents, Molenbeek When I first saw the flat building on the Vierwindenstraat / Rue des quatre vents (see post below), I was amazed by the uniformity of the decoration of the balconies of all the separate flats. It looked like the owner had distributed the same rush mats to all tenants, as a form of protection against the sun and to increase privacy. Over the years, they have worn, and now the facade of the flat building is much more varied and diverse due to all kinds of different solutions, and also due to the different degrees of aging of the mats.
2023-08-08T01:26:51.906118
https://example.com/article/5715
# This class takes advantage of the fact that all formats v0, v1 and v2 of
# message storage have the same byte offsets for the Length and Magic fields.
# Let's look closely at what leading bytes all versions have:
#
# V0 and V1 (Offset is the MessageSet part, other bytes are Message ones):
#     Offset => Int64
#     BytesLength => Int32
#     CRC => Int32
#     Magic => Int8
#     ...
#
# V2:
#     BaseOffset => Int64
#     Length => Int32
#     PartitionLeaderEpoch => Int32
#     Magic => Int8
#     ...
#
# So we can iterate over batches just by knowing the offset of Length. Magic is
# used to construct the correct class for the Batch itself.

import struct

from aiokafka.errors import CorruptRecordException
from aiokafka.util import NO_EXTENSIONS

from .legacy_records import LegacyRecordBatch
from .default_records import DefaultRecordBatch


class _MemoryRecordsPy:

    LENGTH_OFFSET = struct.calcsize(">q")
    LOG_OVERHEAD = struct.calcsize(">qi")
    MAGIC_OFFSET = struct.calcsize(">qii")

    # Minimum space requirements for Record V0
    MIN_SLICE = LOG_OVERHEAD + LegacyRecordBatch.RECORD_OVERHEAD_V0

    def __init__(self, bytes_data):
        self._buffer = bytes_data
        self._pos = 0
        # We keep one slice ahead so `has_next` will return very fast
        self._next_slice = None
        self._remaining_bytes = 0
        self._cache_next()

    def size_in_bytes(self):
        return len(self._buffer)

    # NOTE: we cache offsets here as kwargs for a bit more speed, as cPython
    # will use the LOAD_FAST opcode in this case
    def _cache_next(self, len_offset=LENGTH_OFFSET, log_overhead=LOG_OVERHEAD):
        buffer = self._buffer
        buffer_len = len(buffer)
        pos = self._pos
        remaining = buffer_len - pos
        if remaining < log_overhead:
            # Will be re-checked in Fetcher for remaining bytes.
            self._remaining_bytes = remaining
            self._next_slice = None
            return

        length, = struct.unpack_from(">i", buffer, pos + len_offset)

        slice_end = pos + log_overhead + length
        if slice_end > buffer_len:
            # Will be re-checked in Fetcher for remaining bytes
            self._remaining_bytes = remaining
            self._next_slice = None
            return

        self._next_slice = memoryview(buffer)[pos: slice_end]
        self._pos = slice_end

    def has_next(self):
        return self._next_slice is not None

    # NOTE: same cache for LOAD_FAST as above
    def next_batch(self, _min_slice=MIN_SLICE, _magic_offset=MAGIC_OFFSET):
        next_slice = self._next_slice
        if next_slice is None:
            return None
        if len(next_slice) < _min_slice:
            raise CorruptRecordException(
                "Record size is less than the minimum record overhead "
                "({})".format(_min_slice - self.LOG_OVERHEAD))
        self._cache_next()
        magic = next_slice[_magic_offset]
        if magic >= 2:  # pragma: no cover
            return DefaultRecordBatch(next_slice)
        else:
            return LegacyRecordBatch(next_slice, magic)


if NO_EXTENSIONS:
    MemoryRecords = _MemoryRecordsPy
else:
    try:
        from ._crecords import MemoryRecords as _MemoryRecordsCython
        MemoryRecords = _MemoryRecordsCython
    except ImportError:  # pragma: no cover
        MemoryRecords = _MemoryRecordsPy
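The offset arithmetic the class relies on can be checked in isolation. This standalone sketch (no aiokafka required; the header values are fabricated) packs a V2-style batch header and reads the Length and Magic fields back at the same offsets the class computes:

```python
import struct

# Same offsets as in the class above:
LENGTH_OFFSET = struct.calcsize(">q")   # 8: Length follows the Int64 (Base)Offset
LOG_OVERHEAD = struct.calcsize(">qi")   # 12: offset + length prefix
MAGIC_OFFSET = struct.calcsize(">qii")  # 16: skip offset, length, and CRC/epoch

# Fabricated header: offset=0, length=5, crc_or_epoch=0, magic=2,
# followed by 4 dummy payload bytes.
header = struct.pack(">qiib", 0, 5, 0, 2) + b"\x00" * 4

# Length is read relative to the batch start, exactly as in _cache_next.
length, = struct.unpack_from(">i", header, LENGTH_OFFSET)
# Magic is a single byte, exactly as in next_batch.
magic = header[MAGIC_OFFSET]

print(length, magic)  # 5 2
```

Because the three formats share these leading offsets, the same two reads work whether the batch turns out to be a LegacyRecordBatch (magic 0 or 1) or a DefaultRecordBatch (magic 2).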
2023-09-20T01:26:51.906118
https://example.com/article/9644
The Dallas Morning News and Houston Chronicle have teamed up this week to share items about the growing baseball rivalry between the Texas Rangers and Houston Astros. Please check back each day for new items from both newspapers about the two AL West favorites. Click here to check out the five reasons why the Houston Chronicle believes the Astros will win the AL West. 1. Pocket aces Over the final two months, the teams meet nine times. Expect six of those games to be started by Cole Hamels and a getting-stronger Yu Darvish. And it's not out of the realm of possibility that, by August 1, the Rangers could have a third top-tier starter (yes, it's my Sonny Gray man-crush showing again). The Rangers could - could - run No. 1 quality starters out there in all nine meetings. Houston's Cy Young-winning Dallas Keuchel can only pitch so often and is likely to be closing in on 650 innings over the last three years. That is a lot of wear and tear for a guy this early in his career. 2. The bullpen Yes, the Rangers bullpen has gotten off to an unexpectedly rough start with a 5.22 ERA (24th in the majors), but so have the Astros (No. 26 at 6.16). The bottom line is the Rangers' bullpen is deeper and, thus, should be fresher. With Matt Bush lurking, a healthy Luke Jackson and a recovered Tanner Scheppers, the club should have strong reinforcements for an already strong group. This should be one of the best bullpens in the AL. By September, there is every reason to believe the rough start will be a distant memory. 3. Strong finishers The Rangers have three of the majors' best offensive finishers over the last five years. Three of MLB's top September-October OPS marks since 2011 belong to Rangers: No. 3 Shin-Soo Choo (.969); No. 5 Adrian Beltre (.958) and No. 7 Prince Fielder (.956).
Choo should be fresher after missing a month early in the season; Beltre is driven by the pursuit of that elusive ring; and Fielder should be readjusted to the major-league grind after struggling last September, when he was coming back from missing a year. 4. The schedule The Rangers finish with 12 of their final 15 in Arlington, which should create a home-field advantage in a tight race. The opponents are Oakland, the Angels, Milwaukee and Tampa Bay, most of which should be doing nothing but playing for the future. Houston: Nine on the road (including the last series of the season) vs. seven at home. Oh, and the Rangers play six vs. Oakland in the final 15; Houston plays seven against the Angels, who still could have a glimmer of hope. 5. The manager At some point, Jeff Banister will find a reason to rush the field, bulge his neck veins and look like John Wayne cornering a bad guy in Rio Bravo. If the Rangers need a galvanizing moment, this will do the trick. It did last year. Click here to check out the five reasons why the Houston Chronicle thinks the Astros will win the AL West.
2024-03-19T01:26:51.906118
https://example.com/article/2789
I accidentally used library.properties and not library.json at first but recently re-registered the JSON file. Now the tags for the library on the website don’t match the tags in the manifest. How do I refresh this?
2024-03-17T01:26:51.906118
https://example.com/article/9947
Q: customizing registration fields for new users in phpbb3 I am new to phpBB. I have a requirement where I need to remove some fields from the registration page (some are mandatory and some are not). Can anybody please suggest which file I should edit? I already changed the HTML file styles/themename/templets/ucp_register_original.html, but nothing is affecting the main interface.

<tr>
    <td class="row1"><b class="genmed">{L_LANGUAGE}: </b></td>
    <td class="row2"><select name="lang" onchange="change_language(this.value); return false;">{S_LANG_OPTIONS}</select></td>
</tr>

I have disabled this, for example, but it is still showing.

A: Apart from the answer given by Damien Keitel, one more crucial thing you have to do is to delete cache/tpl_prosilver-se_ucp_register.html.php; otherwise it will keep on showing the previous fields which you might have deleted.
2023-11-27T01:26:51.906118
https://example.com/article/9979
Q: If $g(z) = f(1/z)$ and $g$ has a pole at $0$ and $f$ entire.Then there is a $z_0$ such that $f(z_0) = 0$. I figured I can somehow use Liouville's theorem. I don't really know what the method is for solving or where to begin. Suppose that $f: \mathbb{C} \to \mathbb{C}$ is an entire function. Let $g(z) = f(\frac{1}{z})$. Prove that if $g$ has a pole at $0$ then there is a $z_{0}$ in $\mathbb{C}$ such that $f(z_{0}) = 0$. Can assume $g$ is holomorphic on $\mathbb{C} \setminus \{0\}$. A: Since $f$ is entire, there is a power series $\sum_{n=0}^\infty a_nz^n$ centered at $0$ such that$$(\forall z\in\Bbb C):f(z)=\sum_{n=0}^\infty a_nz^n$$and therefore$$(\forall z\in\Bbb C\setminus\{0\}):g(z)=a_0+\frac{a_1}z+\frac{a_2}{z^2}+\cdots$$So, since $g$ has a pole at $0$, there is some $N\in\Bbb N$ such that $n>N\implies a_n=0$. So, $f$ is a polynomial function and therefore you can use the Fundamental Theorem of Algebra to prove that it has a zero.
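The Liouville approach the asker had in mind also works; here is one way to arrange it (a sketch added for completeness, not part of the original answer). Since $g$ has a pole of order $N$ at $0$, there is a $c>0$ such that $|g(z)|\ge c|z|^{-N}$ for all sufficiently small $|z|\ne 0$; equivalently, $|f(w)|\ge c|w|^{N}\to\infty$ as $|w|\to\infty$. Now suppose $f$ has no zero. Then $h=1/f$ is entire; it is bounded on any closed disk by continuity, and it tends to $0$ at infinity by the estimate above, so $h$ is bounded on all of $\Bbb C$. By Liouville's theorem $h$ is constant, hence $f$ is constant, contradicting $|f(w)|\to\infty$. Therefore some $z_{0}$ with $f(z_{0})=0$ must exist.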
2024-07-20T01:26:51.906118
https://example.com/article/1629
Q: Can't populate JSON to ListView

I'm using Volley and GSON to parse a remote JSON. Here is my Fragment that does it:

public class LatestFragment extends ListFragment implements OnScrollListener {

    @Override
    public void onActivityCreated(Bundle savedInstanceState) {
        super.onActivityCreated(savedInstanceState);
        arrItemList = new ArrayList<ItemListModel>();
        va = new LatestAdapter(getActivity(), arrItemList);
        lv = getListView();
        setListAdapter(va);
        lv.setOnScrollListener(this);
        loadItemList(1);
    }

    private void loadItemList(int page) {
        mRequestQueue = Volley.newRequestQueue(getActivity());
        GsonRequest<LatestContainer> myReq = new GsonRequest<LatestContainer>(
                Method.GET, url, LatestContainer.class,
                createMyReqSuccessListener(), createMyReqErrorListener());
        mRequestQueue.add(myReq);
    }

    private Response.Listener<LatestContainer> createMyReqSuccessListener() {
        return new Response.Listener<LatestContainer>() {
            @Override
            public void onResponse(LatestContainer response) {
                try {
                    for (int i = 0; i < response.getResults().size(); i++) {
                        ItemListModel ilm = new ItemListModel();
                        ilm.setCategory(response.getResults().get(i).getCategory());
                        ilm.setItem_id(response.getResults().get(i).getItem_id());
                        ilm.setName(response.getResults().get(i).getName());
                        ilm.setPrice(response.getResults().get(i).getPrice());
                        ilm.setUser_id(response.getResults().get(i).getUser_id());
                        arrItemList.add(ilm);
                    }

LatestAdapter:

public class LatestAdapter extends BaseAdapter {

    public LatestAdapter(Context context, ArrayList<ItemListModel> items) {
        this.arrItemList = items;
        this.context = context;
    }

    @Override
    public View getView(int i, View view, ViewGroup viewGroup) {
        ViewHolder vh;
        lf = (LayoutInflater) context
                .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        if (view == null) {
            vh = new ViewHolder();
            view = lf.inflate(R.layout.row_latest_listview, null);
            vh.tvName = (TextView) view.findViewById(R.id.txtItemName);
            vh.tvCategory = (TextView) view.findViewById(R.id.txtCategory);
            vh.tvPrice = (TextView) view.findViewById(R.id.txtPrice);
            vh.tvThumbnail = (ImageView) view.findViewById(R.id.imgPhoto);
            view.setTag(vh);
        } else {
            vh = (ViewHolder) view.getTag();
        }
        ItemListModel nm = arrItemList.get(i);
        vh.tvCategory.setText(nm.getCategory());
        vh.tvPrice.setText("RM " + nm.getPrice());
        vh.tvName.setText(nm.getName());
        return view;
    }

However, after I run the code, the listview doesn't seem to be populated. But the parsing is successful. I can see the parsed string in the logcat. So deserialization is not an issue here. What did I do wrong now?

A: Make sure you call va.notifyDataSetChanged() after modifying the adapter's dataset.
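The reason the answer works can be shown without Android at all: a ListView only redraws when the adapter fires notifyDataSetChanged(), so mutating the backing list by itself leaves the view stale. The MiniAdapter class below is a hypothetical stand-in for that contract (not a real Android API), sketched to make the behavior testable in plain Java:

```java
import java.util.ArrayList;
import java.util.List;

// MiniAdapter is a made-up stand-in for Android's BaseAdapter/ListView pair:
// the "view" only re-reads the dataset when notifyDataSetChanged() is called.
class MiniAdapter {
    private final List<String> items;
    private int renderedCount = 0; // what the simulated ListView last drew

    MiniAdapter(List<String> items) {
        this.items = items;
    }

    // In real Android this triggers a redraw; here we just resync the count.
    void notifyDataSetChanged() {
        renderedCount = items.size();
    }

    int renderedCount() {
        return renderedCount;
    }
}

public class NotifyDemo {
    public static void main(String[] args) {
        List<String> data = new ArrayList<>();
        MiniAdapter adapter = new MiniAdapter(data);

        data.add("item-1"); // parsing succeeded and the list was mutated...
        System.out.println(adapter.renderedCount()); // prints 0 -- view still empty

        adapter.notifyDataSetChanged(); // ...but the view only updates here
        System.out.println(adapter.renderedCount()); // prints 1
    }
}
```

This mirrors the question's bug exactly: arrItemList.add(ilm) in onResponse changes the data, but without va.notifyDataSetChanged() the ListView never learns about it.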
Foraminotomy What is a foraminotomy? A foraminotomy is a surgical procedure. It enlarges the area around one of the compressed nerves in your spinal column. Your spinal column is made up of a chain of bones called vertebrae. The intervertebral disks sit above and below the flat portion of each vertebra and act as a cushion. Your spinal column houses your spinal cord and helps protect it from injury. The spinal cord sends sensory information from the body to the brain. The spinal cord also sends commands from the brain to the body. Nerves spread out from the spinal cord, sending and receiving this information. They exit the spinal column through small holes (intervertebral foramen) that lie between the vertebrae. Sometimes these openings can become too small. When that happens, the compressed nerve can cause symptoms such as pain, tingling in the arms and legs, and weakness. The exact symptoms depend on the location of the compressed nerve along the spinal column. (For example, a compressed nerve in the neck may lead to neck pain and tingling and weakness in the hand and arm.) During your foraminotomy, your surgeon will make a cut (incision) on your back or neck and expose the affected vertebra. Then he or she can surgically widen your intervertebral foramen, removing whatever blockages are present. Why might I need a foraminotomy? Blockages that narrow the spinal column or block an intervertebral foramen are called spinal stenosis or foraminal stenosis. Various processes can block the intervertebral foramen and compress the nerve leaving the spinal cord. 
Conditions that can cause spinal stenosis include:

- Degenerative arthritis of the spine (spondylosis), which can cause bony spurs
- Degeneration of the intervertebral discs, which can cause them to bulge into the foramen
- Enlargement of the nearby ligament
- Spondylolisthesis
- Cysts or tumors
- Skeletal disease (like Paget disease)
- Congenital problems (like dwarfism)

Degenerative arthritis of the spine (from old age) is one of the most common causes. This nerve compression can happen along any part of your spinal column. Your compressed nerve may start to cause symptoms, like pain in the affected region and tingling and weakness in the affected limb. You might need a foraminotomy if you've already tried other treatments, such as physical therapy, pain medicines, and epidural injections, without success. Usually, your surgeon can do the surgery as an elective procedure to help relieve these symptoms. You might need to have an emergency foraminotomy if your symptoms are quickly getting worse, or if you have problems with your bladder because of the compressed nerve.

What are the risks of a foraminotomy?

Foraminotomy is successful in most people, but complications can occasionally happen. Most of these are rare. Some possible complications include:

- Infection
- Too much blood loss
- Nerve damage
- Damage to the spinal cord
- Stroke
- Complications from anesthesia

There is also a small risk that the procedure will not relieve your pain. Your own risk of complications may vary depending on:

- Your age
- The location and anatomy of your intervertebral foramen
- The type of foraminotomy performed
- Your other medical conditions

Ask your provider about the risks that most apply to you.

How do I get ready for a foraminotomy?

Talk to your provider about how to get ready for your surgery. Ask if you should stop taking any medicines ahead of time, like blood thinners. You'll need to avoid eating or drinking anything after midnight the night before your procedure.
Before your surgery, your provider may order additional imaging tests to get more information about your spinal column and nerves. The most common test in this setting is an MRI.

What happens during a foraminotomy?

Your healthcare provider can help explain the details of your particular surgery. (The following outlines a minimally invasive type of foraminotomy. Incisions are wider in a traditional foraminotomy.) A neurosurgeon and a team of specialized nurses and healthcare providers will perform the surgery. The whole surgery will take a couple of hours. In general, you can expect the following:

- During the procedure, you'll lie on your stomach.
- You will be given medicine (anesthesia) to put you to sleep through the surgery. You won't feel any pain or discomfort during the procedure.
- Your surgeon will make a small incision just beside your spine on the side you have your symptoms. He or she will make the incision at the level of your affected vertebra.
- Your surgeon will use X-rays and a special microscope to guide the surgery.
- Using special tools, your surgeon will push away the back muscles around the spine to expose the blocked intervertebral foramen.
- Your surgeon will use small tools to remove the blockage inside the intervertebral foramen. The blockage may be a bone spur or a bulging disk. This will relieve pressure on the nerves.
- In some cases, your surgeon might do another procedure at this time, like a laminectomy. This removes part of the vertebra.
- The team will remove the tools and put your back muscles back in place. Someone will then close the small incision in your skin.

What happens after a foraminotomy?

Talk to your healthcare provider about what to expect after your foraminotomy. Within a couple of hours, you should be able to sit up in bed. You might have a little pain, but you can have pain medicines to ease the pain. You should be able to eat a normal diet. You'll need to move the affected area carefully.
You will be told if you should avoid certain movements for a while. (For example, you might need to avoid bending your neck if your foraminotomy was in this region.) You'll also likely need a soft neck collar if your surgery was in your neck. You should be able to go home a day or two after your surgery. Be sure to follow all of your provider's instructions about medicines, physical activity, and wound care. You may be able to do light work in a few weeks, but you may need to avoid heavier work for a few months. Some people might need physical therapy as they recover. Your provider can give you a realistic idea of what to expect after your surgery. Remember to keep all follow-up appointments. Most people will see a real improvement in their symptoms. Be sure to tell your provider if you don't get better, or if you have new or worsening symptoms.

Next steps

Before you agree to the test or the procedure, make sure you know:

- The name of the test or procedure
- The reason you are having the test or procedure
- What results to expect and what they mean
- The risks and benefits of the test or procedure
- What the possible side effects or complications are
- When and where you are to have the test or procedure
- Who will do the test or procedure and what that person's qualifications are
- What would happen if you did not have the test or procedure
- Any alternative tests or procedures to think about
- When and how you will get the results
- Who to call after the test or procedure if you have questions or problems
Mechanism of the stimulation of Ca2+-dependent ATPase of skeletal muscle sarcoplasmic reticulum by protein kinase. Sarcoplasmic reticulum isolated from moderately fast rabbit skeletal muscle contains intrinsic adenosine 3',5'-monophosphate (cAMP)-independent protein kinase activity and a substrate of 100 000 Mr. Phosphorylation of skeletal sarcoplasmic reticulum by either endogenous membrane bound or exogenous cAMP-dependent protein kinase results in stimulation of the initial rates of Ca2+ transport and Ca2+-ATPase activity. To determine the molecular mechanism by which protein kinase-dependent phosphorylation regulates the calcium pump in skeletal sarcoplasmic reticulum, we examined the effects of protein kinase on the individual steps of the Ca2+-ATPase reaction sequence. Skeletal sarcoplasmic reticulum vesicles were preincubated with cAMP and cAMP-dependent protein kinase in the presence (phosphorylated sarcoplasmic reticulum) and absence (control sarcoplasmic reticulum) of adenosine 5'-triphosphate (ATP). Control and phosphorylated sarcoplasmic reticulum were subsequently assayed for formation (5-100 ms) and decomposition (0-73 ms) of the acid-stable phosphorylated enzyme (E approximately P) of Ca2+-ATPase. Protein kinase mediated phosphorylation of skeletal sarcoplasmic reticulum resulted in pronounced stimulation of initial rates and levels of E approximately P in sarcoplasmic reticulum preincubated with either ethylene glycol bis(beta-aminoethyl ether)-N,N'-tetraacetic acid (EGTA) prior to assay (Ca2+-free sarcoplasmic reticulum), or with calcium/EGTA buffer (Ca2+-bound sarcoplasmic reticulum). These effects were evident within a wide range of ionized Ca2+. Phosphorylation of skeletal sarcoplasmic reticulum by protein kinase also increased the initial rate of E approximately P decomposition. 
These findings suggest that protein kinase-dependent phosphorylation of skeletal sarcoplasmic reticulum regulates several steps in the Ca2+-ATPase reaction sequence which result in an overall stimulation of the active calcium transport observed at steady state.
/******************************************************************************

  This source file is part of the Avogadro project.

  Copyright 2013 Kitware, Inc.

  This source code is released under the New BSD License, (the "License").

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
  WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
  License for the specific language governing permissions and limitations
  under the License.

******************************************************************************/

#include <gtest/gtest.h>

#include <avogadro/core/color3f.h>
#include <avogadro/core/mesh.h>
#include <avogadro/core/molecule.h>
#include <avogadro/core/vector.h>

// Assert that two molecules are structurally identical: atoms, bonds,
// attached data, and all meshes.
void assertEqual(const Avogadro::Core::Molecule& m1,
                 const Avogadro::Core::Molecule& m2)
{
  EXPECT_EQ(m1.atomCount(), m2.atomCount());
  EXPECT_TRUE(m1.atomicNumbers() == m2.atomicNumbers());
  EXPECT_TRUE(m1.atomPositions2d() == m2.atomPositions2d());
  EXPECT_TRUE(m1.atomPositions3d() == m2.atomPositions3d());
  EXPECT_EQ(m1.data("test").toString(), m2.data("test").toString());
  EXPECT_TRUE(m1.bondPairs() == m2.bondPairs());
  EXPECT_TRUE(m1.bondOrders() == m2.bondOrders());
  EXPECT_EQ(m1.meshCount(), m2.meshCount());
  for (size_t i = 0; i < m1.meshCount(); i++) {
    const Avogadro::Core::Mesh* mesh1 = m1.mesh(i);
    const Avogadro::Core::Mesh* mesh2 = m2.mesh(i);
    EXPECT_TRUE(mesh1->vertices() == mesh2->vertices());
    EXPECT_TRUE(mesh1->normals() == mesh2->normals());
    EXPECT_EQ(mesh1->name(), mesh2->name());
    EXPECT_EQ(mesh1->isoValue(), mesh2->isoValue());
  }
}
738 A.2d 435 (1999) COMMONWEALTH of Pennsylvania, Appellee, v. Saharris ROLLINS, Appellant. Supreme Court of Pennsylvania. Submitted March 1, 1999. Decided September 29, 1999. Reargument Denied November 12, 1999. *439 Edward M. Dunham, Philadelphia, Daniel W. Cantu'-Hertzler, Robert Brett Dunham, Philadelphia, for Saharis Rollins. Catherine Marshall, Philadelphia, for Com. Robert A. Graci, Harrisburg, for Office of Atty. Gen. Before FLAHERTY, C.J., and ZAPPALA, CAPPY, CASTILLE, NIGRO, NEWMAN and SAYLOR, JJ. *436 *437 *438 OPINION CAPPY, Justice. Saharris Rollins ("Appellant") appeals from the denial of his petition filed pursuant to the Post Conviction Relief Act ("PCRA"), 42 Pa.C.S. § 9541 et seq. For the reasons that follow, we affirm.[1] The facts of this matter are laid forth in detail in this court's opinion on direct appeal. Commonwealth v. Rollins, 525 Pa. 335, 580 A.2d 744 (1990). In brief, Appellant arrived at the home of Violeta Cintron ("Violeta") at approximately one o'clock in the morning on January 22, 1986. Appellant had come to Violeta's house looking for Violeta's husband, Jose Carrasquillo ("Carrasquillo") with whom Appellant had conducted drug deals in the past. Appellant requested some cocaine from Violeta. When Violeta was about to hand over the cocaine, however, Appellant announced that he wished to trade methamphetamine for the cocaine rather than pay cash. Violeta refused this offer, and Appellant left the premises. *440 Appellant returned to Violeta's house a few minutes later, this time armed with an automatic handgun and demanded the cocaine from Violeta. Raymond Cintron ("Raymond"), Violeta's brother, dropped Violeta's one year old son whom he had been holding and began wrestling with Appellant for control of the gun. Several shots were fired in the ten by eleven foot room, hitting Raymond as well as a stereo speaker, a lamp and a wall. 
Raymond fell to the floor after which Appellant picked him up and fired more shots into Raymond's body. Appellant fled the scene. While fleeing, Appellant came face-to-face with Dalia Cintron ("Dalia"), one of Violeta's sisters, and pointed his gun at her as he made his escape. Raymond subsequently died from these gunshot wounds. Appellant was arrested three days after he killed Raymond as the result of his involvement in another shooting incident. On January 25, 1986, Appellant arrived at the home of Richard Campbell ("Campbell"). Campbell, who had been warned of Appellant's arrival, greeted Appellant with a shotgun; a gunfight immediately ensued in which Appellant was wounded. Appellant was picked up by police a short distance from the Campbell residence. Ballistic tests later revealed that the weapon Appellant used in the Campbell shooting was the same one used to kill Raymond. Appellant was tried before a jury for crimes stemming from the shooting of Raymond; he was found guilty of murder in the first degree, robbery and possession of an instrument of crime. A penalty hearing was subsequently convened. The jury found two aggravating circumstances: that the killing was committed while in the perpetration of another felony,[2] and the killing created a grave risk of harm to others.[3] The jury also found one mitigating circumstance: that Appellant had no significant history of prior criminal convictions.[4] The jury determined that the aggravating circumstances outweighed the mitigating circumstance and sentenced Appellant to death. This court affirmed the judgment of sentence on direct appeal. Rollins, supra. Appellant next filed the instant PCRA petition on November 12, 1996 which the PCRA court denied without holding a hearing.[5] The appeal to this court then followed. His first claim is that the PCRA court erred when it denied him relief without a hearing. 
Appellant acknowledges that a PCRA judge may dispose of a PCRA petition without a hearing pursuant to Pa. R.Crim.P. 1507(a) when the petition raises no "genuine issues concerning any material fact...."[6] Yet, he contends that the PCRA judge below erred as this petition did indeed raise such genuine issues of material fact. Appellant baldly contends that the issues he raises in his voluminous brief will support his contention. As we find that none of these issues, which will be discussed in full infra, raises a genuine issue of material fact, we deny Appellant's first claim. Appellant's remaining claims are of trial court error, prosecutorial misconduct, *441 and ineffective assistance of counsel. As Appellant's issues of trial court error and prosecutorial misconduct were not raised on direct appeal, they are waived. See Commonwealth v. Albrecht, 554 Pa. 31, 720 A.2d 693, 700 (1998). The only issues that remain are the claims of ineffective assistance of counsel.[7] As the starting point for our review of an ineffective assistance of counsel claim, we presume that counsel is effective. Commonwealth v. Cross, 535 Pa. 38, 634 A.2d 173 (1993). To overcome this presumption, Appellant must establish three factors. First, he must show that the underlying claim has arguable merit. Commonwealth v. Travaglia, 541 Pa. 108, 661 A.2d 352, 356 (1995). Second, Appellant must prove that counsel had no reasonable basis for his action or inaction. Id. In determining whether counsel's action was reasonable, we do not question whether there were other more logical courses of action which counsel could have pursued; rather, we must examine whether counsel's decisions had any reasonable basis. Commonwealth v. Pierce, 515 Pa. 153, 527 A.2d 973, 975 (1987). 
Finally, Appellant must establish that he has been prejudiced by counsel's ineffectiveness; in order to meet this burden, he must show that "but for the act or omission in question, the outcome of the proceedings would have been different." Travaglia, 661 A.2d at 357. "If it is clear that Appellant has not met the prejudice prong of the ineffectiveness standard, the claim may be dismissed on that basis alone and the court need not [initially] determine whether the first and second prongs have been met." Id. Appellant's first ineffectiveness claim is a broad one. He contends that the inexperience of his trial counsel, in itself, is sufficient to establish that counsel was ineffective. We reject this claim as we have previously stated that the mere inexperience of counsel is not equivalent to ineffectiveness. Commonwealth v. Williams, 537 Pa. 1, 640 A.2d 1251, 1264 (1994). Rather, Appellant must make out all three prongs of an ineffectiveness claim in order to be granted relief. Appellant next raises a series of ineffectiveness claims related to the selection of his jury. His first such claim is that trial counsel failed to "life-qualify" the jurors.[8] Although trial counsel is permitted to life qualify the jury, "such questions... are not required and counsel is not ineffective for failing to pose them." Commonwealth v. Hardcastle, 549 Pa. 450, 701 A.2d 541 (1997). Furthermore, where counsel fails to life-qualify jurors, counsel is not ineffective where jurors assured counsel and the court they would follow the dictates of the law. Commonwealth v. Carpenter, 533 Pa. 40, 617 A.2d 1263, 1269 (1992). Appellant claims that by failing to life-qualify the jury, counsel allowed an unfair and partial jury to be impaneled. Appellant identifies four jurors who expressed beliefs which should have indicated to counsel that they were biased. 
Appellant's ineffectiveness claim must fail since all of these jurors stated that they could decide this matter fairly in accordance with the law. N.T. 2/17/87 at 683 (Juror Mary McMenamin) and at 730 (Juror Helen Megrail); N.T. 2/18/87 at 773 (Juror Malaysia Williams) and at 872-873 (Juror Charles Hengstler). Thus, pursuant to Carpenter, supra, counsel was not ineffective. *442 Next, Appellant contends that trial counsel was ineffective in exercising his peremptory strikes where he should have instead struck the jurors for cause, thus preserving his peremptory strikes. Of the twenty-one peremptory strikes that Appellant utilized[9], Appellant contends that five jurors indicated that they had drug-related biases that would affect their ability to determine the matter and thus could have been stricken for cause. Appellant is incorrect. All five of these jurors indicated that they could be fair and impartial and would not decide to convict Appellant merely because of the drug-related nature of this crime. N.T. 2/10/87 at 184-185 (venireperson Charles Rudio); N.T. 2/11/87 at 320-21 (venireperson Theresa Pirrone); N.T. 2/13/87 at 477-79 (venireperson Joanne Marks) and at 511 (venireperson Drew Mihicko); N.T. 2/18/87 at 786-87 and 794-95 (venireperson William Stone). Thus, this claim has no merit. Next, Appellant claims that counsel was ineffective for failing to pursue the claim that the trial court judge erred when he struck five prospective jurors for cause due to their inability to follow the law regarding the death penalty. It is within the trial court's discretion to strike a juror for cause, and such a decision will not be disturbed absent a showing of abuse of discretion. Commonwealth v. Fisher, 545 Pa. 233, 681 A.2d 130 (1996). All five jurors indicated that their views on the death penalty were such that they would be unable to apply the law as instructed to them by the judge. N.T. 
2/13/87 at 427-28 (venireperson Regina Waiters) and at 458-59 (Ruth McAdams) and at 492-95 (venireperson Florine Freeland); N.T. 2/18/87 at 765-66 (Alan Solomon) and at 806-08 (John Bazzani). It is axiomatic that where a prospective juror expresses views that would "prevent or substantially impair the performance of his [or her] duties as a juror in accordance with his instructions and his oath," that venireperson may be dismissed for cause. Commonwealth v. Holland, 518 Pa. 405, 543 A.2d 1068, 1073 (1988) (citing Wainwright v. Witt, 469 U.S. 412, 105 S.Ct. 844, 83 L.Ed.2d 841 (1985)). Appellant argues, however, that it was error for the trial court not to allow the rehabilitation of these venirepersons so that it could have been established that they could have performed their duty as jurors in accordance with the law. The law, however, imposes no such duty on the trial court. Commonwealth v. Robinson, 554 Pa. 293, 721 A.2d 344, 354 (1998) (the trial court does not have a duty to rehabilitate a potential juror where it appears that the potential juror would not be able to follow the instructions on the law). Thus, this claim must be denied. Next, Appellant contends that counsel was ineffective for failing to raise the claim that the Commonwealth exercised its peremptory strikes in a racially discriminatory manner, thus violating the dictates of Batson v. Kentucky, 476 U.S. 79, 106 S.Ct. 1712, 90 L.Ed.2d 69 (1986). To establish that this claim has any merit, Appellant must establish a prima facie case of improper use of peremptory challenges. 
In order to do so, the defendant must establish that (1) the defendant is a member of a cognizable racial group and the prosecutor exercised peremptory challenges to remove members of the defendant's race from the venire; (2) the defendant can then rely on the fact that the use of peremptory challenges permits "those to discriminate who are of a mind to discriminate"; and (3) the defendant, through facts and circumstances, must raise an inference that the prosecutor excluded members of the venire on account of their race. The third prong requires defendant to make a record specifically identifying the race of all the venirepersons removed by the prosecution, *443 the race of the jurors who served and the race of the jurors acceptable to the Commonwealth who were stricken by the defense. Commonwealth v. Thomas, 552 Pa. 621, 717 A.2d 468, 475 (1998) (citing Commonwealth v. Simmons, 541 Pa. 211, 662 A.2d 621, 631 (1995)). In the matter sub judice, Appellant attempts to meet this burden by contending that "[m]ore than half of the peremptories used by the prosecutor—including the first seven—served to exclude African Americans from service on the jury, for no apparent reason other than their race...." Appellant's brief at 28. This claim must fail. Appellant's myriad citations to the record in support of this claim reveals the race of only one of the venirepersons whom the Commonwealth peremptorily struck. N.T. 2/11/87 at 243 (venireperson Lucille Goldsmith, an African-American). Clearly, this does not meet Appellant's burden to identify specifically "the race of all the venirepersons removed by the prosecution, the race of the jurors who served and the race of the jurors acceptable to the Commonwealth who were stricken by the defense." Thomas, 717 A.2d at 475. 
Thus, this claim fails as it lacks merit.[10] In a related issue, Appellant makes the novel claim that his own counsel violated Batson when he exercised peremptory strikes in a racially discriminatory fashion, and thus provided him with ineffective assistance of counsel. As noted above, Appellant has not shown "the race of all the venirepersons removed by the prosecution, the race of the jurors who served and the race of the jurors acceptable to the Commonwealth who were stricken by the defense" as required by Thomas, supra. Thus, this Batson claim fails as well. Appellant's final ineffectiveness claim relating to voir dire is that his trial counsel ineffectively implied to prospective jurors that the only biases which were relevant were the ones which could affect the guilt phase, and it was unimportant whether the venirepersons had biases with respect to the penalty phase. Our review of the record establishes that this claim is groundless. Thus, Appellant's ineffectiveness claim must fail. Appellant's next ineffectiveness claim concerns a suppression issue. Appellant asserts that counsel was ineffective for failing to pursue the claim that identification testimony of several witnesses of the Campbell shooting should have been suppressed as it was the result of an extremely suggestive show-up.[11] Even assuming arguendo that the show-up was suggestive, that would still not preclude the admission of this identification testimony. Rather, the query is whether there exists a basis for identification which is independent of the allegedly suggestive show-up. The factors to be considered in such a query are: "(1) the opportunity of the witness to view the criminal at the time of the crime; (2) the witness' degree of attention; (3) the accuracy of the witness' prior description of the criminal; (4) the level of certainty demonstrated by the witness at the confrontation; and (5) the length of time between the crime and the confrontation." Commonwealth v. Carter, 537 Pa. 
233, 643 A.2d 61, 71 (1994). Here, the witnesses all had ample opportunity to observe Appellant in a well-lit location and *444 identified him at the hospital only a couple of hours after the shooting. Appellant points to nothing in the record which would indicate that the witnesses' identifications were anything but certain. Finally, it is reasonable to assume that witnesses to a gun battle would be attentive to their surroundings. Thus, as we cannot say that the trial judge abused his discretion when he denied this suppression motion, this claim has no merit. Appellant's next series of ineffectiveness claims relate to alleged incidents of prosecutorial misconduct during the guilt phase of the trial. In order to establish that these claims have merit, Appellant must show that "the unavoidable effect of the contested comments was to prejudice the jury, forming in their minds fixed bias and hostility towards the accused so as to hinder an objective weighing of the evidence and impede the rendering of a true verdict. In making such a judgment, we must not lose sight of the fact that the trial is an adversary proceeding and the prosecution, like the defense, must be accorded reasonable latitude in fairly presenting its version of the case to the jury." Travaglia, 661 A.2d at 360-61 (citations omitted). Appellant's first ineffectiveness claim relative to the prosecutor's alleged misconduct concerns statements made by the prosecutor in his opening statement. Appellant claims that the prosecutor made statements which communicated to the jury that his personal belief was that the Commonwealth's witnesses were credible and that Appellant was guilty. Appellant, citing Commonwealth v. Gilman, 470 Pa. 179, 368 A.2d 253, 259 (1977), states that a prosecutor is forbidden to express a personal opinion about a defendant's guilt or the credibility of a witness, and that where he does, the conviction should be reversed and a new trial awarded. 
Appellant's statement of the law is correct: a prosecutor may not offer his personal beliefs concerning the case to the jury. Yet, this standard avails Appellant nought as a review of the complained of statements reveals that the prosecutor in this matter did not cross into this forbidden territory. In the first complained of statement, the prosecutor was merely informing the jury of the role of a prosecutor, stating that a prosecutor is not like a bounty hunter, that [he does not] get paid for the convictions.... [T]he oath that I swore as an attorney and as a person was to seek justice in every case. Sometimes that means not guilty in certain cases and sometimes that means standing before a jury such as yourselves and seeking to persuade you through evidence and under the law that an individual who is charged, as [Appellant] is charged, did in fact commit the crimes and should be found guilty. N.T. 2/19/87 at 29. We can discern no impropriety in this comment. In the next complained of passage from the Commonwealth's opening statement, the prosecutor proclaimed to the jury that "I say very solemnly, because what is ultimately proven in this case is not what I think is proven. It's what you collectively and individually believe is proven. So I'm not going to invade your jury box.... But I am permitted to tell you what I expect to prove, and I can tell you that because I've spoken to the witnesses and I've reviewed this case very carefully." N.T. 2/19/96 at 36-37. Rather than improperly giving the jury his personal view of Appellant's guilt, as Appellant would have us believe, the prosecutor actually informed the jury that it is not his beliefs, but the decision of the jury, which matters. Appellant next complains about comments the prosecutor made in his closing argument during the guilt phase. 
First, he contends that in several instances, the prosecutor engaged in misconduct by personally vouching for the credibility of the *445 witnesses.[12] We have reviewed these half dozen complained of statements, and none of them reveals a proclamation of personal belief on the part of the prosecutor. Rather, each was a permissible summary of different aspects of the case. Thus, as we conclude that there is no merit to Appellant's claims that the prosecutor improperly expressed his personal views to the jury, these ineffectiveness claims must fail. Appellant's final claim concerning statements made during closing argument at guilt phase is that counsel was ineffective when he failed to pursue the claim that the prosecutor committed misconduct when he asked the jury to speculate on evidence outside of the record. Appellant's claim is in reference to comments made by the prosecutor concerning a "jeff cap" which had fallen off of Appellant's head at the scene of the shooting. The jeff cap was tested and it was revealed that the sweat found inside of it was secreted from an individual with type A blood; medical records, however, had established that Appellant had type O positive blood. At trial, defense counsel utilized this evidence to support the claim that Appellant could not possibly have been the killer. It is the prosecutor's remarks in response to this defense argument that Appellant finds objectionable. The prosecutor stated that [w]e have this glitch in the case about blood types. Is there an explanation from the evidence? You bet there is. What about the size of [the] hat? You know, these jeff caps, they're supposed to fit a little tight, you know, so they don't blow off in the wind. And yet from being pushed up against the wall in the TV set area, this hat fell right off [Appellant's] head. Do you think it was a little large for him, like it belonged to someone else? N.T. 3/04/87 at 1614. This statement was not improper. 
"A prosecutor is entitled to refer to the evidence, may argue any legitimate inferences from that evidence, and must be free to present his or her arguments with logical force and vigor." Commonwealth v. Cox, 556 Pa. 368, 728 A.2d 923, 931 (1999) (citations omitted). The statement made by the prosecutor in this matter was merely a permissible inference based upon the evidence. Appellant's next claim is that counsel was ineffective for failing to discover that Appellant had type O positive blood; Appellant contends that such a lapse was particularly egregious in a case where blood type was relevant. This argument is mystifying, for the defense counsel and the prosecutor stipulated at trial that Appellant had type O positive blood. N.T. 3/03/87 at 1495-97. Also, as noted supra, trial counsel fully exploited this evidence in arguing that Appellant could not possibly have been the shooter as his blood type did not match the blood type of the physical evidence found at the crime scene. Thus, we find that this claim has no merit.[13] Next, Appellant contends that counsel was ineffective for failing to pursue the claim that the Commonwealth coerced Ramon Negron ("Negron") into giving false testimony against Appellant. Appellant contends that Negron had been prepared to testify in Appellant's favor, but instead gave evidence which inculpated Appellant; Appellant asserts that Negron changed his testimony only after he had been threatened by the Commonwealth. *446 Shortly after Raymond's murder, Negron had been arrested on charges that he had intimidated a witness to the killing. After his arrest, Negron gave a statement to a police detective in which he stated that Appellant had indeed killed Raymond. At trial, however, Negron recanted this statement and testified that Appellant had never told him anything about his involvement in the crime. 
A sidebar conference was immediately convened during which it was revealed that Negron had altered his testimony in favor of Appellant because he feared that if he testified against Appellant, he would be assaulted in prison. After Negron had an opportunity to consult with his attorney, the Commonwealth's direct examination of Negron resumed. At that juncture, Negron testified that Appellant had indeed told him he had killed Raymond. Negron also specifically testified that his testimony against Appellant was not motivated by any threat from anyone and was not due to the promise of some benefit other than being transferred from his current prison location. N.T. 2/27/87 at 1044-1047. As Appellant's claim regarding Negron's testimony is belied by the record, we hold that it is without merit. Appellant's next claim of ineffective assistance at the guilt phase of his trial relates to the testimony proffered by Roberta Burke ("Burke"), a crime lab technician who testified for the Commonwealth. Appellant has two claims with respect to Burke's testimony. First, he claims that trial counsel should have objected both when the prosecutor posed the question as to how perspiration secreted by a person with blood type A could have come to be on the jeff cap worn by Appellant during the murder, and when Burke answered that a person with blood type A could have worn it prior to Appellant wearing it. N.T. 3/04/87 at 1548-49. This claim must fail as there was no basis on which trial counsel could have objected. It is axiomatic that an expert may give an opinion in response to a hypothetical, provided that the set of facts assumed in the hypothetical is eventually supported by competent evidence. Commonwealth v. LaCava, 542 Pa. 160, 666 A.2d 221, 236 (1995).
Here, the hypothetical assumed the following facts: that Appellant had worn the jeff cap to the scene of the murder; that the jeff cap, which is normally worn tight-fitting, had effortlessly been knocked off of Appellant's head during the murder; that perspiration found in the jeff cap had been secreted by a person with type A blood; and that Appellant had type O positive blood. These facts were all established by competent evidence. Thus, this line of questioning was not improper. Appellant also baldly contends that trial counsel should have conducted his own investigation in order to present other evidence to rebut Burke's testimony. Appellant, however, does not identify what this other evidence would have been. Such vague assertions are insufficient to establish that this claim has any merit. Appellant next claims that trial counsel was ineffective for failing to investigate forensic evidence that would show that Raymond was killed when Appellant shot wildly during a struggle. Appellant claims that this evidence would have established that he did not have the requisite intent to kill Raymond, and thus should not have been convicted of first degree murder. Such a defense, however, would have run directly counter to Appellant's defense that he was an innocent man who had been misidentified. Where trial counsel fails to put forth inconsistent defenses, especially when one defense would undermine the other, we have held that such a decision is reasonable and cannot form the basis for an ineffectiveness claim. Commonwealth v. Legg, 551 Pa. 437, 711 A.2d 430, 436 (1998). Next, Appellant contends that trial counsel was ineffective for failing to uncover forensic evidence which would have established that Violeta's account of the shooting could not be correct.
Specifically, Appellant claims that forensic evidence *447 would have shown that he did not pick up Raymond after he fell in order to shoot him three more times as only two wounds could have possibly come from such point-blank shooting and there was no accounting for the third bullet Appellant allegedly fired into Raymond at such close range. Even assuming arguendo that Appellant could have presented this testimony at his trial, it is difficult to comprehend how impeaching Violeta over whether two or three shots were fired at her brother at point-blank range would have changed the jury's verdict considering the extensive testimony establishing that Appellant killed Raymond. Thus, this claim fails. Appellant also asserts that trial counsel did not exploit a discrepancy in the testimony concerning which hand the shooter used during the killing. At trial, there was some evidence which showed that the killer wielded the weapon in his right hand during the murder, but Violeta testified that Appellant had used his left hand to shoot the gun. Appellant states that this was significant as there was evidence establishing that Appellant is strongly right-handed, and thus could not have been the left-handed killer whom Violeta saw killing her brother. Appellant contends that trial counsel should have fully explored this inconsistency with Violeta. This argument must fail since trial counsel did pursue such a line of questioning at trial when he cross-examined Violeta about which hand the killer used to hold the gun which killed her brother and revisited this issue in his closing statement as well. N.T. 2/26/87 at 821-23; N.T. 3/04/87 at 1567. Appellant next claims that counsel was ineffective for failing to object to the admission of evidence relating to Appellant's drug dealings with Carrasquillo, Violeta's husband. The propriety of admission of such evidence was litigated on direct appeal. Rollins, 580 A.2d at 748. 
Appellant "cannot obtain post-conviction review of claims previously litigated on appeal by alleging ineffective assistance of prior counsel and presenting new theories of relief to support previously litigated claims." Commonwealth v. Beasley, 544 Pa. 554, 678 A.2d 773, 778 (1996). Thus, this claim fails. Next, Appellant raises the cursory argument that victim impact testimony was improperly admitted during the guilt phase of his trial. This argument is so sketchily presented that its contours are difficult to discern. Appellant apparently is reasoning that Violeta's brief comment during the guilt phase of trial that her son, who had witnessed the crime, is now afraid of toy guns, constitutes victim impact testimony. Even assuming arguendo that this comment constituted victim impact testimony, it was so fleeting that it cannot be said that it affected the outcome of this matter; thus, Appellant has failed to establish that he has been prejudiced. Next, Appellant contends that trial counsel was ineffective for failing to pursue the claim that the trial court erred when it precluded certain cross-examination of Dalia, one of Violeta's sisters. He claims that trial counsel should have been allowed to cross-examine her on an inconsistent statement. Specifically, he contends that her statement at trial that she had seen defendant for "ten or fifteen minutes" on the street, N.T. 3/03/87 at 1339, was inconsistent with her pre-trial statement that she had seen the gunman's face for only one or two seconds before he turned his back on her. This argument is specious. Dalia had testified that, from her own home several houses away from Violeta's, she had observed Appellant enter Violeta's residence, depart the residence, and then return approximately five minutes later; the total time which she observed Appellant from her house was thus "ten to fifteen minutes". Yet her testimony made it very clear that her face-to-face observation of Appellant was far more limited. 
Once she heard the shots, Dalia testified at trial, she rushed from her home, where she encountered Appellant and saw him face-to-face for approximately four seconds. N.T. 3/03/87 *448 at 1352-53. This is perfectly consistent with her pre-trial statement that she had seen Appellant's face for only one or two seconds before Appellant turned his back on her and fled. As Dalia's pre-trial and trial statements were not inconsistent, this claim lacks merit. Next, Appellant contends that counsel was ineffective for failing to litigate the claim that the trial court improperly instructed the jury that it must believe Dalia's testimony and consider it as fact. This argument is specious. The portion of the transcript to which Appellant refers contains the judge's instruction pursuant to Commonwealth v. Kloiber, 378 Pa. 412, 106 A.2d 820 (1954)[14] and how it applied to the identification testimony of Violeta, Angel Rivera, and Lida Cintron, three witnesses who had initially failed to pick defendant's photo out of a photo array shown them by police. In giving this charge, the trial judge stated that the Kloiber charge was inapplicable to Dalia because she had ample opportunity to view Appellant and her identification of him had never wavered. N.T. 3/04/87 at 1649-50. The trial judge did not in any fashion instruct the jury that it must credit Dalia's testimony. Thus, this issue is meritless. Appellant also raises several ineffectiveness claims concerning the penalty phase. His first contention is that trial counsel was ineffective because he failed to conduct an investigation which would have uncovered important information relating to Appellant's upbringing as well as physical and psychological traumas, evidence which could have been used at the penalty phase of trial to establish that Appellant had severe mental problems. This claim must fail.
We have stated that we will not find that counsel was ineffective in failing to produce mitigating evidence relative to an alleged mental infirmity where there is no indication that counsel had any reason to know that the defendant might have a mental problem. See Commonwealth v. Howard, 553 Pa. 266, 719 A.2d 233, 238 (1998); Commonwealth v. Uderra, 550 Pa. 389, 706 A.2d 334 (1998). As Appellant has failed to allege, let alone prove, that trial counsel was aware of Appellant's alleged mental condition, we deny him relief on this issue. Appellant's next series of ineffectiveness claims relates to allegedly inappropriate statements made by the Commonwealth during the penalty phase. As stated supra, these ineffectiveness claims will be deemed to have merit only where Appellant can establish that "the unavoidable effect of the contested comments was to prejudice the jury, forming in their minds fixed bias and hostility towards the accused so as to hinder an objective weighing of the evidence and impede the rendering of a true verdict." Travaglia, 661 A.2d at 360-61 (citations omitted). Furthermore, "[a]t the penalty phase, where the presumption of innocence is no longer applicable, the prosecutor is permitted even greater latitude in presenting argument." Id. at 365. Appellant first complains that the prosecutor exhorted the jury to "kill the enemy" and sentence Appellant to death. This is a gross misstatement of the record. The prosecutor's actual statement was that "[s]ervice on a capital case is one of the greatest and heaviest responsibilities of citizenship. I would like you to compare it to something else. There are men old enough to have served in the [W]orld [W]ar, in Korea and in Vietnam. It is an obligation of citizenship when the country is at war to serve in the armed forces and, if called upon, to take human life of the enemy. It is with a heavy heart that men and women who go to war do that." *449 N.T. 3/05/87 at 1831.
The prosecutor was not, as Appellant characterizes this statement, exhorting the jury to sentence Appellant automatically to death as a soldier would be compelled to kill the enemy in wartime. Rather, he was likening sitting on a penalty phase jury in a capital case — surely one of the more weighty responsibilities a citizen could have in this society—to the burden placed on citizen-soldiers in war. We find this to be mere oratorical flair and not improper. Thus, this ineffectiveness claim has no merit. Next, Appellant claims that the prosecutor improperly asked the jury to sentence Appellant to death so as to deter other potential murderers. We have previously rejected a similar claim. In Commonwealth v. Zettlemoyer, 500 Pa. 16, 454 A.2d 937, 958 (1982), we stated that such fleeting references to the deterrent effect of the death penalty did not bias or prejudice the jury as they are "a matter of common public knowledge based on ordinary human experience." We therefore find there is no merit to this claim. Appellant next claims that his counsel was ineffective for failing to pursue the claim that the prosecutor impermissibly offered his personal opinion that the evidence was sufficient to establish the aggravating factor that the killing posed a grave risk of harm to others. This argument is specious. The prosecutor merely cataloged the extensive evidence in support of this aggravating circumstance; furthermore, he again reminded the jury that "you are the finder of fact." N.T. 3/05/87 at 1841. Thus this claim has no merit. Appellant next argues that the prosecutor improperly commented on Appellant's lack of remorse. At penalty phase, Appellant took the stand. In referencing Appellant's testimony in his closing, the prosecutor briefly argued to the jury that Appellant expressed only one emotion on the witness stand and that was anger toward Violeta, the sister of the victim, and did not express remorse for the murder. N.T., 3/05/87 at 1842-43. 
We have stated that such brief comments regarding a defendant's remorse—particularly when it is in response to a defendant's self-centered display of emotion—do not constitute misconduct. Commonwealth v. King, 554 Pa. 331, 721 A.2d 763, 784 (1998); Commonwealth v. Harris, 550 Pa. 92, 703 A.2d 441, 451 (1998). Thus, we find that this claim has no merit. Furthermore, even if we assume arguendo that this claim does have merit, the impact of this isolated comment was minimal and it cannot be said, given the evidence presented at the penalty phase of this trial, that the result would have been different had trial counsel objected to this comment and had that objection been sustained. Thus, as Appellant has failed to show that he has been prejudiced by this comment, this ineffectiveness claim must fail. Next, Appellant claims that the prosecutor inappropriately informed the jury that the fact that drugs were involved in this matter constituted an additional, independent aggravating factor. Appellant grossly mischaracterizes the record. The prosecutor clearly informed the jury that the underlying felony to support the aggravating factor that the murder occurred during the commission of a felony was robbery; he did not posit to the jury that the mere presence of drugs would be another aggravating factor. N.T. 3/05/87 at 1847. Appellant's next claim is that the prosecutor improperly disparaged his mitigating evidence concerning the fact that he provided support for two of his four children; Appellant claims that the prosecutor's statements were tantamount to instructing the jury to disregard this evidence. Appellant's claim has no merit. It is true that the prosecutor disparaged the evidence which Appellant proffered, implying that it was of so little weight that it should not affect the verdict. That, however, is a permissible argument. Commonwealth v. Basemore, 525 Pa.
512, 582 A.2d 861, 869 (1990) (a prosecutor may argue to the jury that it should not attach *450 any substantial weight to mitigating circumstances presented by the defense). We conclude that this claim has no merit. Appellant's final ineffectiveness claim concerning comments made by the prosecutor during the penalty phase is that the prosecutor stated to the jury that they would have to sentence Appellant to death in order to live up to the oaths they took at the outset of the trial. This argument is specious. The prosecutor actually stated that he was asking the jury "to live up to your promise under oath that you follow the law that you'll get from [the trial court judge]." N.T. 3/05/87 at 1849. As the prosecutor did not act improperly in making this comment, there is no merit to Appellant's claim that counsel was ineffective for failing to pursue this claim. Appellant also raises several ineffectiveness claims relating to the instructions given to the jury at penalty phase. First, he contends that counsel was ineffective when he failed to pursue the claim that the trial court's instructions to the jury precluded the jury from considering relevant mitigating evidence. The trial judge did not so fetter the jury in its inquiry; in fact, the trial judge specifically stressed that the jury was to consider "[a]ll the evidence from both sides, including the evidence you heard earlier during the trial in chief as to aggravating or mitigating circumstances...." N.T. 3/05/87 at 1854. Thus, this claim is without merit. Appellant next claims that counsel was ineffective for failing to raise the issue that the jury instructions and the verdict slip indicated that the jury had to find unanimously any mitigating factor before it could give effect to that factor in its sentencing decision, thus violating the dictates of Mills v. Maryland, 486 U.S. 367, 108 S.Ct. 1860, 100 L.Ed.2d 384 (1988). We find this claim to be meritless. 
The trial judge's charge to the jury virtually mirrored 42 Pa.C.S. § 9711(c)(1)(iv). N.T. 3/05/87 at 1855. We have previously stated that where a charge tracks this statutory language, it "does not state or infer a requirement that any given mitigating circumstance must be unanimously recognized before it can be weighed against aggravating circumstances in reaching a verdict." Travaglia, 661 A.2d at 366. Likewise, the verdict form closely tracked the language of the statute. In reviewing a similar verdict slip, this court in Commonwealth v. Hackett, 534 Pa. 210, 627 A.2d 719 (1993) held that the verdict slip form did not infer a need for unanimity with regard to mitigating circumstances. We therefore reject this claim.[15] Appellant's final contention concerning the instructions to the jury during penalty phase is that counsel was ineffective for failing to pursue the claim that the trial court erred when it did not instruct the jury that a life sentence means that Appellant must spend his natural life in prison without the possibility of parole. This "life means life" instruction was made compulsory by the United States Supreme Court in Simmons v. South Carolina, 512 U.S. 154, 114 S.Ct. 2187, 129 L.Ed.2d 133 (1994). We have stated, however, that "Simmons will not be given retroactive effect in a collateral attack upon a petitioner's sentence." Commonwealth v. Laird, 555 Pa. 629, 726 A.2d 346, 360 (1999). In PCRA petitions such as the one in the matter sub judice, the rule to be applied is the one which was applicable at the time of trial. At the time of Appellant's trial, the law of this Commonwealth specifically prohibited an instruction which would inform a jury that life means life without parole. Commonwealth v. Edwards, 521 Pa. 134, 555 A.2d 818, 830-831 (1989). Thus, as the *451 controlling law at the time would not allow a "life means life" instruction, the trial court did not err in failing to give this instruction sua sponte.
Furthermore, we will not deem counsel ineffective for failing to anticipate a change in the law. Commonwealth v. Porter, 556 Pa. 301, 728 A.2d 890, 900 (1999). Next, Appellant contends that counsel was ineffective for failing to raise the claim that his death sentence should be vacated because one of the aggravating factors on which his sentence was based— namely that the murder was perpetrated during the course of a felony—is irrational, arbitrary and capricious. Specifically, Appellant contends that this aggravating factor would allow a jury to sentence a defendant to death where the killing was the result of a mistake, such as where a killing occurs accidentally in a "robbery gone awry" (which Appellant contends that the jury in this matter found) as opposed to "a planned murder...." Appellant's brief at 90. This argument is absurd. The only time a penalty phase is convened is when a defendant stands convicted of first degree murder. With all of those defendants, it has been established beyond a reasonable doubt that the killings were committed with specific intent. See 18 Pa.C.S. § 2502(a) (to find a defendant guilty of first degree murder, the jury must find that the defendant had specific intent to kill).[16] Thus, it would be impossible for a jury to find arbitrarily the aggravating factor of 42 Pa.C.S. § 9711(d)(6) with regard to a defendant who lacked the specific intent to kill as such a defendant would never be the subject of a penalty hearing. Commonwealth v. Bardo, 551 Pa. 140, 709 A.2d 871, 878 (1998) (rejecting claim that 42 Pa.C.S. § 9711(d)(6) would allow a jury to find this aggravating factor in the absence of a finding that the defendant had specific intent to commit the murder). We thus reject this ineffectiveness claim.
Next, Appellant claims that one of the jurors erroneously informed the other members of the jury that Appellant would be available for parole after only thirteen years imprisonment if the jury sentenced him to life.[17] Appellant claims that this information tainted the jury's deliberations, and that counsel was ineffective for failing to pursue this issue on appeal.[18] This claim must be rejected. "The general rule of law is that a juror may not impeach his or her own verdict after the jury has been discharged. An exception to this rule is made for those situations where a jury has been exposed to an ex parte influence, which possesses a reasonable likelihood of prejudice." Laird, 726 A.2d at 356 (citations omitted). In this instance, there was no ex parte influence brought to bear on the jury; thus, the limited exception would have no application. As this claim has no merit, counsel was not ineffective for failing to pursue it. Next, Appellant complains that counsel was ineffective for failing to raise the claim that this court's proportionality review, conducted pursuant to 42 Pa.C.S. § 9711(h)(3)(iii)[19], is inherently flawed. We recently reviewed this same claim in Commonwealth v. Gribble, 550 Pa. 62, 703 *452 A.2d 426 (1997). We noted that in conducting our proportionality review, we examine not only the compiled data from the AOPC, but we also have at our disposal the verdict sheets and the review forms submitted by the President Judges. This allows us to conduct a thorough review of cases similar to the one in question and provides additional screening for any anomalies that may be present in the AOPC database. We have carefully reviewed these procedures and find nothing arbitrary or capricious in this scheme. Instead, we believe that our proportionality review comports with the General Assembly's desire to afford capital defendants an additional check against the arbitrary imposition of the death penalty. Id. at 441. 
As Appellant presents no reason for us to abandon our holding in Gribble, we find no basis on which to find that counsel was ineffective for failing to raise this claim. Finally, Appellant claims that the cumulative effect of the errors made his trial fundamentally unfair. We disagree. We have determined that none of Appellant's claims entitles him to relief and it is axiomatic that "no quantity of meritless issues can aggregate to form a denial of due process." Commonwealth v. Travaglia, 541 Pa. 108, 661 A.2d 352, 367 (1995). For the foregoing reasons, we affirm the order of the PCRA court.[20] NOTES [1] Where post-conviction relief has been denied in a death-penalty case, the matter is directly reviewable by this court. 42 Pa.C.S. § 9546(d). [2] 42 Pa.C.S. § 9711(d)(6). [3] 42 Pa.C.S. § 9711(d)(7). [4] 42 Pa.C.S. § 9711(e)(1). [5] Pursuant to the newly amended PCRA, in order to be considered timely, PCRA petitions must be filed within one year of the date that the judgment becomes final. 42 Pa.C.S. § 9545(b)(1) (amendments effective January 16, 1996). There is an exception to that rule, however, which states that even if it is not filed within one year of the date that the judgment becomes final, a first PCRA petition is still timely so long as it is filed within one year of the PCRA amendments becoming final—to wit, within one year of January 17, 1996. Commonwealth v. Peterkin, 554 Pa. 547, 722 A.2d 638, 641 (1998). As this is Appellant's first PCRA petition, and it was filed within one year of January 17, 1996, it is timely. [6] Effective August 11, 1997, Pa.R.Crim.P. 1509 governs procedures for PCRA petitions in death penalty cases. [7] We note that Appellant has properly "layered" all of his ineffectiveness claims, alleging that all prior counsel were ineffective for failing to assert claims of trial counsel's ineffectiveness. See Commonwealth v. Chmiel, 536 Pa. 244, 639 A.2d 9, 12 (1994) (requiring the proper layering of ineffectiveness claims). 
[8] "Life-qualifying" a juror refers to the process by which counsel identifies and excludes prospective jurors who have a fixed opinion that a sentence of life imprisonment should not be imposed for a conviction of first degree murder. Morgan v. Illinois, 504 U.S. 719, 735-36, 112 S.Ct. 2222, 2227, 119 L.Ed.2d 492, 507 (1992). [9] Although Pa.R.Crim.P 1126(a)(3) limits the defendant in a death penalty case to twenty peremptory strikes, Appellant was allowed twenty-one peremptory strikes. [10] Appellant also claims that the existence of a training tape created by then-Assistant District Attorney Jack McMahon (who did not prosecute Appellant) in 1987, which instructed prosecutors to manipulate the jury selection process in order to minimize the seating of African-American jurors, supports his Batson claim. Appellant is incorrect as the existence of the tape does not demonstrate that there was discrimination in his case. The existence of this tape in no fashion can be seen to relieve the burden Appellant carries pursuant to Batson—a burden which he has failed to meet. [11] A "show-up" is a one-on-one identification procedure between the witness and the suspect. Black's Law Dictionary 962 (6th ed.1991). [12] Appellant cites several statements by the prosecutor for this claim, referencing the following pages from the record: N.T. 3/04/87 at 1601, 1603-05, 1607, 1625, and 1628. [13] In a related claim, Appellant asserts that the Commonwealth violated Brady v. Maryland, 373 U.S. 83, 83 S.Ct. 1194, 10 L.Ed.2d 215 (1963) when it failed to inform Appellant pre-trial of Appellant's own blood type. This is specious as "[t]he Commonwealth does not violate the Brady rule when it fails to turn over evidence readily obtainable by, and known to, the defendant." Commonwealth v. Pursell, 555 Pa. 233, 724 A.2d 293, 305 (1998). 
[14] A Kloiber charge instructs the jury that an eyewitness's identification should be viewed with caution where the eyewitness: (1) did not have an opportunity to clearly view the defendant; (2) equivocated on the identification of the defendant; or (3) had a problem making an identification in the past. Kloiber, 106 A.2d at 826-27. [15] Appellant contends that the decision of the Court of Appeals for the Third Circuit in Frey v. Fulcomer, 132 F.3d 916 (3d Cir.1997) militates in favor of a different result as the Fulcomer court found a virtually identical charge to be improper. We have recently rejected the claim that we must follow Fulcomer as decisions from intermediate federal courts are not binding on this court. Commonwealth v. Chester, 733 A.2d 1242, 1248 (Pa.1999) (refusing to adopt the Third Circuit's rationale in Fulcomer). [16] We note that Appellant's claim that the jury found that he was guilty of something less than an intentional killing is spurious in light of their verdict at guilt phase. [17] Appellant supports this claim with purported affidavits obtained by his PCRA counsel from two jurors. [18] In analyzing this claim, we will assume, arguendo, that counsel would have been able to discover this information prior to taking the appeal in this matter. [19] Recently enacted legislation struck the statutory provisions requiring this court to conduct a proportionality review of death sentences. Act of June 25, 1997, No. 28, § 1 ("Act 28"), effective immediately. This court continues to undertake a proportionality review in cases where the death sentence was imposed prior to the effective date of Act 28. Commonwealth v. Gribble, 550 Pa. 62, 703 A.2d 426 (1997). [20] The Prothonotary of the Supreme Court is directed to transmit the complete record of this case to the Governor.
Former NBA basketball player Dennis Rodman arrives at Beijing Capital International Airport as he leaves for North Korea's Pyongyang, in Beijing, China, June 13, 2017. Former NBA player Dennis Rodman has arrived in North Korea on his first visit since President Donald Trump took office. He told reporters before departing Beijing airport on Tuesday that he is "just trying to open a door" with North Korea. Rodman has received the red-carpet treatment on four past trips since 2013. He also has been roundly criticized for visiting during times of high tensions between the U.S. and North Korea over its weapons programs. He said he believes Trump would be happy with his trip. Rodman was a cast member on two seasons of Trump's "Celebrity Apprentice."
Our Residential Accommodation, Care & Support Properties & Locations Hartshill Nuneaton 2 Laurel Drive, Hartshill, Nuneaton, CV10 0XP Management Bio Vicki Williams, Manager Vicki joined ICS in 2016 as the registered manager of our Hartshill home before becoming operations manager in 2017. She has overall responsibility for overseeing the running of our registered homes and is also the registered manager of our Hartshill and Wembrook homes. Vicki has gained a wealth of experience working in a variety of clinical health and social care settings. Bev Oddy, Deputy Manager Bev Oddy is the deputy manager of our Hartshill and Wembrook homes. She joined ICS as a support worker in 2014 before being promoted to deputy manager in 2017. The home is registered to provide residential care for five adults with a learning disability and additional needs, which include physical disabilities (the home does not provide nursing care). The property is a purpose-built bungalow, in keeping with the modern housing estate. It is located on a main road leading to Nuneaton and is close to local bus services, shops and pubs. The home is also within easy reach of doctors, a dentist and a library, and has a specially adapted vehicle, which is always available for residents to use for shopping and outings. There is a car park for visitors and staff, with gardens to the front, side and rear of the building. The home has a large entrance hall leading to the main living areas, which include a large comfortable lounge (22.8 m2), a separate dining room (17.4 m2), a large domestic-style kitchen (13.9 m2) and a large modern utility room. A corridor leads to five single bedrooms (13.7 m2); there are two bathrooms adapted for people with a physical disability, and the home is fully adapted for use by individuals in wheelchairs. Residents' rooms are personalised to reflect the likes and individuality of the person occupying each room.
Organisational Structure of the Home The person responsible for the day-to-day running of the home is the home manager, who has relevant qualifications and experience. A deputy manager aids the home manager in the running of the home; in addition, there is a team of support workers with a wealth of specific experience and qualifications. Quality Assurance Feedback "ICS is a professional organisation with a management team who are competent and knowledgeable" - Margaret, family member "Staff work with some very complex people with learning disabilities, enabling them to become independent" - A family member "It's the best thing that has ever happened to our son. Staff encourage and promote his independence." - James, a family member "All of the service users appeared to be very happy with the standard of service" - Anonymous "They understand the needs of their clients, and this shows in the quality of the care and support they provide" - Learning disability nurse "We are very happy with the care and support ICS has provided to our daughter over the years; we would definitely recommend them to others." - A family member Latest News July 12, 2018 ICS Olympics Event ICS held its first Olympics sports day.
Based on Cartoon Network's top-rated animated series, the Generator Rex: Agent of Providence video game lets players take control of Rex, a teenager who has harnessed nanites within his body to become the ultimate weapon. In the game, Rex is in a race against time to prevent Van Kleiss and his minions from gaining unprecedented powers that could destroy the earth! Amazon.com Being a teenager is never easy — especially when you're a teenager with super powers and the fate of the world resting in your hands. Instead of ordinary responsibilities like homework and chores, you've got missions from Agent Six to accept and complete, collectibles to uncover for Doctor Holiday, and of course, the nefarious Bobo Haha to take down. No one ever said growing up was easy.
- Play as the super-powered Rex
- Use Rex's nine builds
- Battle EVOs
Synopsis Mutate into Rex's nine different builds, including the Slam Cannon, Rex Ride and Boogie Pack, to complete missions and fight enemies in Generator Rex: Agent of Providence. Venture through 10 diverse environments, including jungles, deserts, metropolitan cities and hidden areas, as you evolve your powers and conquer enemies. The world is depending on you. Are you up for the challenge? Key Features:
- Step into the role of Rex, a super-powered teenager who must save the world from impending doom
- Use Rex's nine different builds, including the Slam Cannon, Rex Ride and Boogie Pack, to take on enemies and accomplish missions from Agent Six
- Battle EVOs and take on the despicable Van Kleiss
- Take down Bobo Haha and uncover collectibles for Doctor Holiday
- Fight your way through 10 environments, from the depths of the jungle and scorching deserts to bustling metropolitan cities
The Rebbe of Sinn Féin Last December, 300 Israeli rabbis, many of them employees of the state, signed a declaration forbidding Jews to sell or rent property to non-Jews. For some of them, the move was inspired by the philosophy of Isaac Halevi Herzog (1888–1959), one of the main modern proponents of a “halachic state” run according to the tenets of Jewish law. Herzog was the first chief rabbi of the State of Israel, as well as the father of the sixth Israeli president, Chaim Herzog, and the grandfather of the current Labor Knesset member, Isaac Herzog. But during his lifetime he was also known by a different title, albeit a less formal one: the Sinn Féin Rebbe, in honor of the Irish nationalist movement he supported. Like many of the Irish leaders, Herzog believed in political clericalism. But also like them, and contrary to his would-be spiritual heirs, he condemned discrimination against members of other faiths. Herzog was born in Lomza, a town in the northeastern region of Poland that belonged culturally and linguistically to the Lithuanian Jewish milieu. In 1898, Herzog’s family immigrated to England and later to France, where he studied at the Sorbonne University and received rabbinical ordination from his father, Yoel Leib Herzog. In 1915, Herzog moved to Belfast and served for three years as a rabbi before transferring in 1919 to Dublin, where he became the de facto leader of the entire Jewish community of Ireland. At the same time, Herzog’s economically progressive politics and scholarly interests brought him close to Éamon de Valera, who was then the leader of Sinn Féin and who eventually became the Irish prime minister from 1937 to 1948, and the president of Ireland from 1959 to 1973. 
Meaning “We Ourselves” in Irish Gaelic, Sinn Féin was founded in 1905 by the anti-Semitic politician Arthur Griffith, who once wrote that “the Three Evil Influences of the century were the Pirate, the Freemason, and the Jew.” Yet Griffith’s xenophobic views didn’t have much influence on Irish nationalism and were generally considered odd and marginal. De Valera used to visit Herzog’s house to talk about everything from mathematics to linguistics, and at De Valera’s urging, Herzog began studying Irish and learned to speak it fluently. Later, to preserve Irish neutrality, De Valera refused to accept Jewish refugees during the Holocaust, earning the anger of the Jewish world. Herzog, however, maintained his friendship out of respect for De Valera’s defense of Irish Jews. In fact, attitudes toward Jews in Ireland were far better than in other European countries. Many Irish rebels of the 19th century developed a warm and friendly relationship with the Jews, whom they considered brothers in their struggle. On their part, many Jews participated in Irish nationalist — and even militant — groups. One of the most prominent of these was Robert Briscoe, an Orthodox Dubliner who was the chief IRA agent for procuring German and American arms during the Irish War of Independence. Briscoe was also a two-time mayor of Dublin and a longtime Member of Irish Parliament for Fianna Fáil, the Republican Party. Herzog was appointed chief rabbi of Ireland by the rebels in 1919, even before formal independence from Britain. He retained this position until 1936, when he was invited to become the Ashkenazic chief rabbi of Palestine upon the death of his predecessor, Abraham Isaac Kook, in 1935. The British authorities were concerned about Herzog, not primarily because of his Zionism, but because of his Irish nationalism and Sinn Féin connections. Indeed, Herzog’s experiences in Ireland deeply affected his role as chief rabbi. 
He was the first thinker who seriously and systematically developed the idea of turning Israel into a “halachic state” — an idea, as historians such as Shulamit Eliash of Bar-Ilan University have noted, that sounded like a Jewish version of De Valera’s Catholic policy. For example, De Valera’s Irish constitution (which, incidentally, he sent to Herzog for his approval) included a paragraph about “the special position” of the Catholic Church, a subparagraph about the prohibition of divorce and a clause stating that “the State pledges itself to guard with special care the institution of marriage.” At the same time, his constitution explicitly guaranteed religious freedom and equal rights for Jews, Protestants and other religious minorities. Similarly, in his seminal book, “Tekhukah le-Yisrael al pi ha-Torah,” (“Jewish Legislation According to the Torah”) Herzog declared that a “halachic state” must provide equal rights to Christians, Muslims and followers of other faiths. While he likely wouldn’t have allowed secular marriage or abortion (though he didn’t address such practical questions explicitly), he also wrote sharply against religious, ethnic and gender discrimination. For example, he strongly defended the right of women and non-Jews to lead Jewish organizations and Israeli governmental bodies. Herzog knew, however, that some proponents of religious power would support discrimination. Yet he believed that “no rabbi with a brain in his head and with a modicum of common sense” would deny the right of Arabs and other non-Jews to buy houses and land. While some of the rabbis who recently attempted to do exactly that consider themselves — mistakenly — spiritual heirs to Herzog, few of them likely realize that Herzog’s idea of a halachic state was influenced by Irish Catholic nationalist political tradition that protected the rights of religious and ethnic minorities. 
As the first chief rabbi of Israel, and Sinn Féin Rebbe, warned more than 50 years ago, ethno-religious discrimination should have no part in politics. Yoel Matveev is a staff writer at the Forverts, a Yiddish poet, and a researcher of anti-authoritarian and socialist ideas within the Jewish tradition.
779 F.Supp. 801 (1991) ACCU-WEATHER, INC., Plaintiff, v. REUTERS LIMITED, and Reuters Information Services, Inc., Defendants. No. 1:CV-91-0908. United States District Court, M.D. Pennsylvania. December 3, 1991. *802 Thomas A. Beckley, John G. Milakovic, Beckley & Madden, Harrisburg, Pa., for plaintiff. Charles C. Hileman, Robert L. Kendall, Jr., Schnader, Harris, Segal & Lewis, Philadelphia, Pa., for defendants. MEMORANDUM McCLURE, District Judge. BACKGROUND Plaintiff Accu-Weather, Inc. ("Accu-Weather") entered a default against defendants Reuters Limited and Reuters Information Services, Inc. (hereafter jointly "Reuters") on September 6, 1991 pursuant to Fed.R.Civ.P. 55(a).[1] Four days after the default was entered, defendants filed a motion (record document no. 22) to have it set aside. Fed.R.Civ.P. 55(c).[2] DISCUSSION In deciding whether to set aside an entry of default, the district court must consider four factors and make explicit findings as to each. The factors are: (1) whether lifting the default will prejudice the plaintiff; (2) whether the defendant has a prima facie meritorious defense; (3) whether the defaulting defendant's conduct is excusable or culpable; and (4) the effectiveness of alternative sanctions. Emcasco Insurance Co. v. Sambrick, 834 F.2d 71, 73-74 (3d Cir.1987). The Third Circuit does not favor defaults. If there is any doubt as to whether the default should be set aside, the court should err on the side of setting aside the default and reaching the merits of the case. Zawadski de Bueno v. Bueno Castro, 822 F.2d 416, 420 (3d Cir. 1987). Prejudice to the plaintiff Prejudice exists if circumstances have changed since entry of the default such that plaintiff's ability to litigate its claim is now impaired in some material way or if relevant evidence has become lost or unavailable. International Brotherhood of Electrical Workers v. Skaggs, 130 F.R.D. 526, 529 (D.Del.1990), citing Emcasco, supra, 834 F.2d at 73. 
Detriment in the sense that plaintiff will be required to establish the merit of its claims does not constitute prejudice in this context. Nash v. Signore, 90 F.R.D. 93, 95 (E.D.Pa.1981). In this case, Accu-Weather will not be prejudiced if the default is set aside. There is no contention on plaintiff's part that evidence has been lost or has become unavailable, or that something has occurred since entry of the default which will hinder plaintiff's ability to litigate this case. Plaintiff's contention that granting the motion will "severely prejudice Accu-Weather's right to prepare for trial, by requiring Accu-Weather either to engage immediately in `shot gun' discovery (by guessing at what issues Defendants might raise in an actual answer)"[3] is without merit. The type of harm which plaintiff seeks to call "prejudice" is not the sort of harm which the courts consider prejudicial in this context. Moreover, plaintiff's contention is without merit. The discovery deadline has been extended. (See: record document no. 29, filed October 9, 1991). Additionally, although the August hearing on plaintiff's motion for a preliminary injunction did not explore fully the merits of the case, it certainly gave plaintiff some idea as to the defenses defendants intend to raise, such that it is not completely "in the dark" on this issue, as it would have *803 the court believe. Plaintiff also knew from the pleadings, briefs, etc. in the declaratory judgment action filed by Reuters in the United States District Court for the Eastern District of Pennsylvania[4] what defendants' position is on the issues of this case. Further, defendants filed a proposed answer with their reply brief (record document no. 28, filed October 3, 1991). 
Finally, plaintiff's argument on this issue is directly contrary to its assertion on page four of its brief that it has used the default mechanicism [sic] to squarely frame the positions of the parties and to effect the judicial economy desirable in cases of this nature. All the contract documents are before the Court. No parol evidence is needed. What more can Defendants say? If defendants cannot possibly have any defense to raise with which plaintiff is not already familiar, how then can plaintiff's conduct of discovery be hindered by the absence of an answer. We find such blatantly inconsistent and unfounded arguments insulting to the court. Moreover, plaintiff appears to labor under the misapprehension that the court should allow the default to stand because it will not put plaintiff to the "unnecessary bother" of trying this case. Meritorious defense A meritorious defense is one which, if proven at trial, will bar plaintiff's recovery. Emcasco, supra, 834 F.2d at 74. The defendant is not required to prove beyond the shadow of a doubt that it will win at trial, but merely to show that it has a defense to the action which at least has merit on its face. Emcasco, supra, 834 F.2d at 74. Accu-Weather seeks to enforce a contract to supply Reuters with weather information for its global news service and data base. Accu-Weather contends that a contract signed on July 5, 1983 has been extended by various addenda and is in effect until "at least" July 1, 1998. Reuters takes the position that there are no currently enforceable extensions of the 1983 agreement in effect and that it legally terminated its contractual relationship with Accu-Weather effective July 1, 1991. Accu-Weather seeks a court order compelling Reuters to continue purchasing weather information and services from it through the 1998 contract termination date, as well as monetary damages allegedly caused by Reuters' refusal to utilize its services on or after approximately July 1, 1991. 
In addition to denying plaintiff's claims that there is a contract currently in effect, defendants raise three affirmative defenses: (1) failure to state a claim upon which relief can be granted; (2) laches; and (3) the existence of an adequate remedy at law. (These defenses are pled in defendants' proposed answer, which is attached to record document no. 28, filed October 3, 1991, as exhibit "A"). This court has more than passing familiarity with the merits of this case and with Reuters' defenses, having conducted an evidentiary hearing in August of this year on plaintiff's motion for a preliminary injunction. At that time the court heard limited testimony on the merits of plaintiff's claim and Reuters' asserted defenses in the context of deciding whether plaintiff had shown a likelihood of success on the merits. Based on the evidence taken at that hearing as well as the briefs filed in support of defendants' motion to set aside the judgment, we find that Reuters has demonstrated meritorious defenses to plaintiff's claim. If defendants prevail on their contention that the addenda which purportedly extend the contract termination date have no legal effect, such that there is no contract currently in effect, and that defendants had a legal right to cancel the contract effective July 1, 1991, plaintiff has no claim. Additionally, plaintiff has no right to injunctive relief if, as defendants contend, it has an adequate remedy at law in the form of monetary damages. *804 Culpable conduct of defendant A defendant's motion to set aside a default should not be granted if the defendant exhibited some degree of culpable conduct in failing to respond to pleadings. In this context, conduct is considered culpable, "if it is `willful' or `in bad faith' ... [Citation omitted.] ... or if it is part of a deliberate trial strategy." Skaggs, supra, 130 F.R.D. at 529. 
Plaintiff argues strenuously that defendants' failure to file an answer was a calculated strategy intended to delay proceedings in this matter. Plaintiff accuses defense counsel of engaging in repeated "foot-dragging" throughout these proceedings. We find no basis for these accusations and disagree strongly with plaintiff's characterization of various acts by defendants. Plaintiff's characterization is a distortion of defendants' conduct and we refuse to attribute to defendants the motives plaintiff ascribes to them. We find nothing dilatory or improper about the way in which defendants have proceeded before this court and take exception to plaintiff's mischaracterization of such conduct as dilatory or improper. There is, we find, no evidence of willful misconduct or purposeful delay on the part of defendants. To the contrary, we find, as stated in the affidavit of defendants' counsel, that the failure to file an answer in timely fashion was merely an oversight on the part of counsel, attributable in large part to the flurry of activity in this case surrounding the August 8 and 9, 1991 hearing on plaintiff's motion for a preliminary injunction. Plaintiff's amended complaint was filed shortly before commencement of the hearing[5] and we can certainly understand how counsel overlooked the fact that no answer had been filed. See, e.g., Emcasco, supra, 834 F.2d at 75 and de Bueno, supra, 822 F.2d at 420-21. * * * Since all four factors weigh in defendants' favor, we are led to the "inescapable conclusion" that the default should be set aside and that defendants should be granted leave to file an answer to the amended complaint. See: Emcasco, supra, 834 F.2d at 75. NOTES [1] Rule 55(a) provides: When a party against whom a judgment for affirmative relief is sought has failed to plead or otherwise defend as provided by these rules and that fact is made to appear by affidavit or otherwise, the clerk shall enter the party's default. 
[2] Rule 55(c) provides: For good cause shown the court may set aside an entry of default and, if a judgment by default has been entered, may likewise set it aside in accordance with Rule 60(b). [3] Plaintiff's opposing brief, record document no. 27, filed September 25, 1991, p. 6. [4] That action, Reuters v. Accu-Weather, Civil No. 3-91-1176 was transferred to this court from the United States District Court for the Eastern District of Pennsylvania and has since been consolidated with this case. (See: record document no. 24.) [5] Plaintiff's amended complaint (record document no. 8) was filed July 30, 1991.
Molecular characterization of the dopamine transporter. Neurotransmission, which represents chemical signalling between neurons, usually takes place at highly differentiated anatomical structures called synapses. To fulfill both the time and space confinements required for optimal neurotransmission, highly specialized proteins, known as transporters or uptake sites, occur and operate at the presynaptic plasma membrane. Using the energy provided by the Na+ gradient generated by the Na+/K(+)-transporting ATPase, these transporters reuptake the neurotransmitters soon after their release, thereby regulating their effective concentrations at the synaptic cleft and the availability of neurotransmitters for a time-dependent activation of both pre- and postsynaptic receptors. The key role these proteins play in normal neurotransmission is further emphasized when the physiological and social consequences of drugs that interfere with the function of these transporters, such as the psychostimulants (e.g. amphetamine and cocaine) or the widely prescribed antidepressant drugs, are considered. In this review, Bruno Giros and Marc Caron elaborate on the potential consequences of the recent molecular cloning of the dopamine and related transporters and summarize some of the interesting properties that are emerging from this growing family of Na(+)- and Cl(-)-dependent transporters.
Scope of Treasury Department Purchase Rights with Respect to Financing Initiatives of the U.S. Postal Service If the Treasury Department has declared its election to purchase a proposed U.S. Postal Service bond issue pursuant to 39 U.S.C. § 2006(a) prior to the proposed date of issuance and is pursuing good-faith negotiations towards such purchase as of such date, the USPS is not free to proceed with issuance of the bonds to other purchasers solely because Treasury has not completed purchase of the bonds within a 15-day period following USPS' initial notice of the proposed issue. If, in the above circumstances, Treasury and the USPS are unable to negotiate mutually agreeable terms for purchase by Treasury within a commercially reasonable period of time following USPS' proposed date for the issuance of its bonds, then the USPS may proceed with the issuance of such bonds to other purchasers. Treasury is not authorized to dictate or control the terms of the USPS offering, but it must be afforded a reasonable opportunity to reach mutually agreeable terms with the USPS when the original terms proposed by the USPS are unacceptable. That reasonable opportunity is not rigidly limited by the 15-day period for declaring an election to purchase. October 10, 1995 Memorandum Opinion for the Vice President and General Counsel, United States Postal Service, and the General Counsel, Department of the Treasury I. Background and Summary This memorandum responds to the U.S. Postal Service's ("USPS") request that this Office reconsider and rescind an opinion issued on January 19, 1993,[1] in which we responded to the Department of the Treasury's ("Treasury") request for an opinion regarding the statutory relationship between the USPS and Treasury with respect to USPS financing initiatives. In the 1993 opinion, we concluded that (1) under 39 U.S.C. 
§ 2006(a), Treasury's failure to purchase a USPS bond issue prior to the scheduled date of sale on the market proposed by USPS does not relieve USPS of further obligation to negotiate with the Treasury towards agreeable terms of sale, or permit USPS to proceed with the market sale as originally scheduled, as long as Treasury has duly declared its "election" to purchase and continues to negotiate in good faith towards the purchase; and (2) the transfer of the proceeds of a bond offering by the USPS to a trustee for the purpose of having the trustee employ those proceeds to make and use investments to discharge outstanding USPS debt would require the prior approval of the Treasury under the provisions of 39 U.S.C. § 2003. [Footnote 1: Authority of the Secretary of the Treasury Regarding Postal Service Bond Offering, 17 Op. O.L.C. 6 (1993) ("1993 opinion").] In response to arguments and representations made by USPS, and after giving written notice to Treasury, this Office has undertaken a reconsideration of its 1993 opinion.[2] We now reaffirm the conclusions reached in that opinion, with the following clarification. We conclude that, although Treasury's declared election to purchase a USPS offering may require USPS to negotiate with Treasury towards agreeable terms of sale even beyond the originally scheduled market offering date, USPS is not required to postpone the market sale indefinitely if Treasury has not purchased the offering after that date has passed. Rather, USPS is only obligated to negotiate with Treasury in good faith for a commercially reasonable period of time, under the circumstances presented by the proposed transaction, before proceeding with the sale. II. Analysis The original opinion addressed two distinct issues, and both were resolved in favor of the position advocated by Treasury. 
On the first issue, we conclude that this Office was correct in opining that, under 39 U.S.C. § 2003(c)-(d), Treasury's approval was required as a precondition to USPS placing the proceeds of its proposed bond offering with a trustee who would invest the proceeds in securities and use the investment return to discharge outstanding USPS debt. We find no basis for changing or revising the original opinion's analysis of this issue, and we hereby readopt and reaffirm that analysis. However, there appears to be some basis for clarifying one particular aspect of that portion of the opinion interpreting Treasury's purchase rights under 39 U.S.C. § 2006(a). Specifically, our 1993 opinion suggested that the negotiations that could be invoked by Treasury's "election" to purchase the USPS bond offering were not subject to any time limitation, even when Treasury has not effected a purchase of the offering by the date originally scheduled by USPS for sale on the market. We now conclude that if such negotiations are conducted in good faith by USPS, yet are not concluded within a commercially reasonable period of time following the initially proposed offering date, USPS may proceed with the proposed offering notwithstanding Treasury's unconsummated election to purchase. [Footnote 2: Our reconsideration of the original opinion in this matter was initiated by a request from the USPS. The request was originally set forth in a letter dated May 4, 1993, from Mary S. Elcano, Vice President and General Counsel of the U.S. Postal Service, to Daniel Koffsky, Acting Assistant Attorney General, Office of Legal Counsel. By letter dated March 17, 1995, the USPS has consented to be bound by the final opinion to be issued by this Office in this matter. On the basis of that consent, we are proceeding with our reconsideration of the 1993 opinion.] 
A. Treasury Restraints on USPS Authority to Invest or Deposit Funds The first and easier issue concerns the restraints on the authority of USPS to invest or deposit moneys of the Postal Service Fund ("Fund") set forth in 39 U.S.C. § 2003. Section 2003(c) provides that if USPS determines there are Fund moneys "in excess of current needs," such funds may be invested in Government securities by and through the Secretary of the Treasury and, subject to the Secretary's prior approval, such excess funds may also be invested in non-Government securities. Section 2003(d) separately provides that the Secretary of the Treasury must pre-approve any "deposits" of Fund moneys in a Federal Reserve bank, a depository for public funds, or in "such other places" as the USPS and the Secretary "may mutually agree." USPS proposed to place the proceeds of its bond refinancing with a trustee, who would then invest the funds in government securities (it is not disputed that the refinancing proceeds would constitute part of the Fund). The trustee would then use the principal and interest of those government securities to redeem approximately $2.6 billion in outstanding USPS debt (i.e., the debt being refinanced). Treasury contended that USPS could not place the bond proceeds with the trustee without Treasury's prior approval — which apparently would not be forthcoming — under the above-quoted provisions of 39 U.S.C. § 2003. USPS contends that neither § 2003(c) nor (d) applied to this proposed "in substance defeasance," on the theory that the investments made under the trustee arrangement would not constitute true investments because they were only an alternative mechanism for the repayment of debt; and on the further theory that placement of the funds with the trustee did not constitute a "deposit" within the meaning of 39 U.S.C. § 2003(d). 
USPS' renewed argument that the statutory requirement for Treasury's handling or approval of Fund investments is inapplicable to these arrangements is unpersuasive and is adequately addressed in the original OLC opinion. The trustee's investment of the Fund moneys in government securities is clearly an investment for purposes of § 2003(c)'s restrictions, notwithstanding the participation of the trustee as an intermediary. See Postal Reorganization Act — Investment of Excess Funds of the Postal Service, 43 Op. Att'y Gen. 45, 47 (1977).[3] This investment arrangement is therefore subject to the requirements of 39 U.S.C. § 2003(c). The additional argument that the initial "placement" of funds with a trustee is not a "deposit" within the meaning of § 2003(d), and therefore not subject to approval by Treasury, is also unconvincing. In this regard, we reject the USPS contention that the placement of funds with the trustee was not a deposit within the meaning of that section because the funds were not subject to free withdrawal by USPS as depositor. [Footnote 3: In his 1977 opinion construing the investment and deposit restrictions of 39 U.S.C. § 2003, the Attorney General emphasized that "the limitations in § 2003 are limitations on any general powers [of the USPS] insofar as they apply to the Fund." 43 Op. Att'y Gen. at 47. He further stated that "the authority to purchase Government Obligations, carefully described and carefully circumscribed in § 2003(c), is to the exclusion of any other authority in this regard." Id.] 
There is nothing in § 2003(d) that requires or indicates such a narrow interpretation of the term "deposit." Rather, that subsection broadly encompasses the deposit of Fund moneys not only in Federal Reserve banks or in "depositories for public funds," but also "in such other places and in such manner as the Postal Service and the Secretary may mutually agree." That expansive description demonstrates that § 2003(d) was intended to apply to virtually any disposition of Fund moneys, not merely to conventional bank-type deposits. This conclusion is consistent with the broad interpretation of Treasury's authority under § 2003(d) reflected in the Attorney General's opinion in 1977, resolving a comparable dispute between Treasury and the Postal Service. In confirming that § 2003(d) imposed limitations on the general powers of the USPS in making dispositions of the Fund, the Attorney General stated: Thus, for example, § 2003(d), which authorizes the Postal Service to deposit moneys in the Fund in bank accounts with the approval of the Secretary, restricts any implicit authority to open accounts which the Service might otherwise have under the general provisions of the Postal Reorganization Act; and it could not reasonably be argued that in addition to deposits made under this authority the Service might make Fund deposits anywhere else, without the Secretary's approval. 43 Op. Att'y Gen. at 47 (emphasis added). The Attorney General's 1977 opinion reinforces our view that the purposely restrictive provisions of § 2003(d) should not be circumvented by an unduly narrow interpretation of the verb "deposit." B. Treasury Restraints on USPS Right to Sell Bonds The second issue is whether the USPS was entitled to proceed with a proposed market offering of USPS bonds when Treasury, within the 15-day pre-offering notice period required by 39 U.S.C. § 2006(a), had invoked (but not actually exercised) its right to purchase that offering. 
That section provides (emphasis added):

At least 15 days before selling any issue of obligations under section 2005 of this title, the Postal Service shall advise the Secretary of the Treasury of the amount, proposed date of sale, maturities, terms and conditions, and expected maximum rates of interest of the proposed issue in appropriate detail and shall consult with him or his designee thereon. The Secretary may elect to purchase such obligations under such terms, including rates of interest, as he and the Postal Service may agree, but at a rate of yield no less than the prevailing yield on outstanding marketable Treasury securities of comparable maturity, as determined by the Secretary. If the Secretary does not purchase such obligations, the Postal Service may proceed to issue and sell them to a party or parties other than the Secretary upon notice to the Secretary and upon consultation as to the date of issuance, maximum rates of interest, and other terms and conditions.

Opinions of the Office of Legal Counsel in Volume 19

USPS argues that even if Treasury has declared its “election” to purchase the offering, and negotiations to reach an agreement on terms have been undertaken, the only way Treasury can prevent USPS from proceeding on schedule with the proposed market offering is by completing the actual purchase of the offering (and not through mere continuation of good-faith negotiations) before expiration of the 15-day advance notice period. In contrast, Treasury has contended that it has an “absolute right of first refusal” with respect to the proposed offering and, once it has given notice of its “election” to purchase within the 15-day period, USPS is barred from proceeding with a market sale of the bonds and must continue to pursue negotiations with Treasury — if necessary, beyond the initially proposed offering date.
Our original opinion (1) rejected the USPS argument that nothing less than actual purchase of the offering by Treasury could prevent USPS from proceeding with the scheduled offering; (2) determined that the statute does not require Treasury to have agreed on terms with USPS before exercising its election to purchase and enables Treasury to require USPS to bargain exclusively with Treasury even beyond the date originally proposed for the offering; and (3) indicated that “[t]here is no limit on the negotiation period” that is implicitly required by the statute once Treasury has stated that it “elect[s] to purchase” the offering. 17 Op. O.L.C. at 7, 9-11. We reaffirm conclusions (1) and (2) and again reject the USPS argument that Treasury’s priority purchase right under 39 U.S.C. § 2006(a) automatically expires if Treasury’s election to purchase does not result in the completion of negotiations and consummation of the purchase by the proposed sale date.

On this point, we have again considered USPS’ contentions that certain legislative history underlying the Federal Financing Bank Act of 1973, Pub. L. No. 93-224, 87 Stat. 937 (1973) (“FFBA”), confirms the USPS argument that Treasury must complete (as opposed to merely initiating) the purchase of a proposed USPS debt offering within 15 days after receiving first notice of the offering, or waive all purchase rights. In particular, USPS cites language from committee reports on the FFBA stating that “the Secretary of the Treasury may purchase all Postal Service obligations if he does so within the time period prescribed in 39 U.S.C. 2006(a).” H.R. Rep. No. 93-299, at 5 (1973), reprinted in 1973 U.S.C.C.A.N. 3153, 3157 (“PRA House Report”). Although this argument is not without force, we do not find it sufficiently persuasive to alter our conclusion on this point.
First, observations in committee reports concerning the FFBA simply do not provide persuasive legislative history for purposes of the Postal Reorganization Act, Pub. L. No. 91-375, 84 Stat. 719 (1970) (“PRA”). The FFBA was enacted three years after the PRA and concerned a more comprehensive range of federal agency financing issues. It is well-settled that the pronouncements of a subsequent Congress do not constitute reliable evidence of the intent or understandings of a prior Congress. Consumer Product Safety Comm’n v. GTE Sylvania, Inc., 447 U.S. 102, 117 (1980); United States v. Price, 361 U.S. 304, 313 (1960). We are not persuaded that this sound general principle should be disregarded in interpreting the PRA.

Second, the committee report on the FFBA was not concerned with the discrete issue raised here — whether Treasury must merely initiate, or actually effectuate, the purchase of a USPS offering within the 15-day initial notice period — as opposed to the broader question of whether the USPS would retain “independent financing authority” after enactment of the FFBA. The segments of the FFBA committee reports in question were intended to provide broad assurance that the FFBA would not unduly impair USPS’ existing financing authority under the PRA, and thus presented a broad interpretation of that authority. These post-enactment descriptions of § 2006(a) simply cannot be equated with a contemporaneous and authoritative explication of the section’s provisions by the Congress that enacted it.

Third, the excerpts from the FFBA legislative history themselves contain an element of ambiguity on the matter in dispute. Although the segment quoted above would support the USPS contentions, it is followed by the following additional statement:

However, if the [Federal Financing] Bank or the Secretary [of the Treasury] did not act to take up a proposed Postal borrowing within the prescribed time limit [provided in 39 U.S.C.
2006(a)], the Postal Service could, on its own initiative, borrow in the private market under its independent Postal Reorganization Act authority.

PRA House Report at 5, reprinted in 1973 U.S.C.C.A.N. at 3157-58 (emphasis added). Treasury’s declaration of an election to purchase can certainly be viewed as “act[ing] to take up” the proposed obligations, even when the purchase has not been fully consummated. That phrase connotes the initiation of the purchasing process. Consequently, even the FFBA committee report highlighted by USPS does not unambiguously support its interpretation of § 2006(a).

Finally, and most importantly, the text of § 2006(a) strongly indicates that its drafters contemplated that Treasury would sometimes find it necessary to negotiate modified terms to govern the proposed issue after Treasury has declared its election to purchase. Thus, the section specifically provides:

The Secretary may elect to purchase such obligations under such terms, including rates of interest, as he and the Postal Service may agree, but at a rate of yield no less than the prevailing yield on outstanding marketable Treasury securities of comparable maturity, as determined by the Secretary.

39 U.S.C. § 2006(a) (emphasis added). If the section required Treasury to consummate its purchase option within 15 days after initial notice, USPS could easily frustrate the statute’s provision for purchase by Treasury at negotiated terms that may sometimes differ from those originally proposed by USPS. It could do so by simply refusing to budge in any respect from its original terms during the 15-day period following initial notice. We do not believe that Congress intended to circumscribe Treasury’s purchase options to that degree in enacting § 2006(a).
However, we modify our prior opinion insofar as it indicates that Treasury may delay the USPS offering indefinitely with unlimited negotiations once it has stated its election to purchase. We conclude that such negotiations cannot be prolonged beyond USPS’ scheduled market offering date to such an extent as to impair substantially USPS’ capacity to consummate the proposed offering in a timely fashion; rather, Treasury’s option to purchase must be consummated within a commercially reasonable period of time.

Enabling Treasury to force an indefinite delay in a proposed USPS bond offering — even when it has not bound itself to purchase the offering on the terms proposed by USPS or on any other specified terms by the scheduled date of sale — appears inconsistent with the statute’s intent to provide USPS with a significant degree of business freedom and to prevent Treasury from exercising a blanket veto over USPS financial offering proposals. Thus, the House Report on the PRA emphasized that Treasury’s authority under § 2006(a) did not extend to preventing USPS from borrowing and did not include “any inappropriate power . . . to control the scale of Postal Service operations.” H.R. Rep. No. 91-1104, at 21 (1970), reprinted in 1970 U.S.C.C.A.N. 3649, 3669 (“House Report”).

In other commercial contexts, courts have established that a purchase option or right of first refusal4 (which was Treasury’s description of the rights granted to it under the statute) must be exercised within a “commercially reasonable” period of time. E.g., Barco Urban Renewal Corp. v. Housing Auth. of Atlantic City, 674 F.2d 1001, 1007 (3d Cir. 1982) (“in these circumstances the right of first refusal lasts for a commercially reasonable time”); West Texas Transmission, L.P. v. Enron Corp., 907 F.2d 1554, 1562 (5th Cir. 1990), cert. denied, 499 U.S. 906 (1991); Brauer v. Hobbs, 391 N.W.2d 482, 486 (Mich. Ct. App. 1986). In the absence of contrary language in the statute, this well-established commercial principle provides a relevant consideration in construing the purchase rights incorporated in § 2006(a).

We do not think the statute was intended to enable Treasury to prevent USPS from proceeding with a proposed bond offering by requiring USPS to submit to an indefinite and unlimited period of negotiations. Such power would constitute the kind of “inappropriate power in the Treasury to control the scale of Postal Service operations” that was foresworn in the legislative history. See House Report at 21, reprinted in 1970 U.S.C.C.A.N. at 3669. If Treasury’s ability to delay the proposed USPS offering date for purposes of negotiating agreeable purchase terms is limited to a commercially reasonable period, however, it would not be inordinate or inappropriate.5 Although we cannot project a specific period or time-range that would be “reasonable” for the varying circumstances that USPS might confront in the future, we believe that a delay of such length as to substantially alter the circumstances which established the premises of the originally proposed offering would generally be considered unreasonable. Generally accepted custom and practice in the government securities sector would provide an appropriate point of reference for determining commercially reasonable timeframes in this context.

4 In a portion of our 1993 opinion, we used the term “right of first refusal” as a shorthand label for one of the three possible readings that might be applied to Treasury’s purchase rights under § 2006(a). 17 Op. O.L.C. at 9. Here, we use the term in its general commercial sense rather than in the narrower sense in which we employed it in the 1993 opinion to describe a particular interpretation of § 2006(a).

In summary, our conclusions regarding Treasury’s purchase rights under 39 U.S.C. § 2006(a) are as follows:

1.
If Treasury has declared its election to purchase the proposed issue before the proposed sale date, and Treasury is still pursuing good-faith negotiations towards mutually agreeable terms, the USPS is not free to proceed with a sale on the market merely because Treasury has not completed the purchase within the 15-day period following initial notice of the proposed sale.

2. Treasury may not frustrate USPS’ right to sell the obligations elsewhere for an indefinite period by declining to purchase at the originally proposed terms when good-faith negotiations have failed to produce mutually agreeable alternative terms. If Treasury and USPS are unable to negotiate mutually agreeable terms within a commercially reasonable period of time following the originally proposed sale date, USPS may proceed to sell to another purchaser.

3. Treasury is not authorized to dictate or control the terms of the USPS offering, but it must be afforded a reasonable opportunity to reach mutually agreeable terms with USPS when the original terms proposed by USPS are unacceptable. That reasonable opportunity is not rigidly limited by the 15-day period for declaring an election to purchase.

5 This interpretation is consistent with then Under Secretary of the Treasury Paul Volcker’s testimony during the Senate hearings on the PRA, where he stated that the provisions in question would give the Secretary the authority “to supervise the timing of the financing and the terms of any financing by the postal authority, but he can never put himself in a position where he is preventing the postal authority from obtaining what financing they think is necessary.” Postal Modernization: Hearings Before the Senate Comm. on Post Office and Civil Service, 91st Cong. 311 (1969).

WALTER DELLINGER
Assistant Attorney General
Office of Legal Counsel
Create Beautiful Bridal Earrings for Your Wedding Day

Searching for your bridal jewellery? Overwhelmed by all the options? Why not create your own bridal earrings? You'll add a touch of handmade to your bridal ensemble, and it will be a beautiful piece to hand down to future generations. In this tutorial you'll learn how to make a gorgeous pair of chandelier-drop earrings, to add a touch of sparkle to your wedding day look.

Step 1: Create the First Beaded Headpin
Take one headpin, and slide the largest crystal drop bead over the top, down to the base.

Step 2: Add the Gold and Crystal Bead
Take the gold and crystal round bead, and slide it over the headpin. This will be your second bead. It will rest on top of the teardrop bead.

Step 3: Add the Crystal Bicone Beads
Next, slide the small crystal bicone beads over the headpin. You will be adding four of the bicones, which gives length to our first headpin. You will eventually make three headpins. This one will be placed in the center, and it will be the longest in length.

Step 4: Create the Second Beaded Headpin
Slide the crystal teardrop bead to the bottom of the headpin. This teardrop bead should be smaller than the teardrop we used for the first headpin sequence.

Step 5: Add the Silver Crystal Glass Bead
Slide the silver crystal glass bead all the way down the headpin. It should rest on your teardrop bead.

Step 6: Slide on the Crystal Bicone Beads
Add three of the small bicone beads. They will sit on top of the silver bead. Now you have finished your second headpin sequence.

Step 7: Make the Third Headpin Sequence
Now we will make the third headpin sequence in the same way as we created the second. Slide on the crystal teardrop bead, which should be the same size as the one we used in the second headpin sequence.

Step 8: Add the Silver Glass Bead
Slide the silver glass bead onto the headpin, so it rests upon the teardrop bead.
Step 9: Slide on the Crystal Bicone Beads
As demonstrated in the previous steps, add the three small bicone beads to rest on top of the silver bead. You have now completed the third headpin sequence.

Step 10: Cut and Loop the First Headpin
Take your wire cutters and snip the end of the first headpin, as pictured below. Remember to leave some space, as you will need to make a loop out of the wire, and you do not want it too short. Make sure that your first headpin is your longest in length. The other two will need to be cut slightly shorter. (We don't give you a length because you should cut the wire to the length that you're comfortable wearing.) Once the headpin is cut, place your pliers tightly around the end of the headpin. Turn the wire over with the pliers in order to create a loop. Make sure the loop closes entirely. There should not be any gap in the wire loop.

Step 11: Cut the Second and Third Headpins
Cut the second and third pins as demonstrated in the previous step. These headpins should be the same length, but they should also be slightly shorter than the first headpin. We want the first headpin to remain as the central - and longest - part of our earring.

Step 12: Loop the Wire of the Second Headpin
Once your headpins are cut, loop the wire once again, as demonstrated above. Take the pliers and squeeze firmly around the end of the headpin. Turn the pliers to form a loop, and make sure the loop is tightly closed.

Step 13: Loop the Third Headpin
Loop the wire of the third headpin as demonstrated in previous steps. All three headpins should now have their loops. Set them down to make sure the length looks right. The first headpin should be in the centre, as it's the longest. The second and third headpins should be on either side.

Step 14: Add the Jump Ring
Hold the ends of the jump ring with both sets of pliers. Twist the pliers away from each other in opposite directions to open the jump ring.
Step 15: Add the Headpins
With the jump ring open, add the headpins. Add a short headpin, then the centre longest headpin, and then the last short headpin. Slide the loop of each headpin over the jump ring and they will all fit together.

Step 16: Place the Fishhook
Now slide the fishhook over the jump ring. There is a small hole in the base of the fishhook, which should fit perfectly over the ring.

Step 17: Close the Jump Ring
Place both sets of pliers on opposite ends of the jump ring. Now twist the pliers towards each other to close the jump ring. Make sure the ring is perfectly closed. You don't want the headpins to slide out.

Step 18: Slide On the Plastic Earring Back
The earring back is a tiny plastic piece that slides onto the fishhook and secures the earring to your ear. Take this small plastic piece and slide it up to the centre of the fishhook. It has a hole, so it should be simple to do. Now your first earring is complete!

Step 19: Make Your Second Earring
Follow all of the above steps carefully to create your second earring. Your beautiful bridal earrings are finished and ready to wear! Enjoy! Tell us how you went in the comments section below.
Q: Keras Warning on_batch_begin

When training an RNN model using the ResetStatesCallback below, I get the following warning message:

    /var/venv/DSTL/lib/python3.4/site-packages/keras/callbacks.py:97: UserWarning: Method on_batch_begin() is slow compared to the batch update (0.791834). Check your callbacks. % delta_t_median)

    from keras.callbacks import Callback

    # Reset states every RESET_STATES_LENGTH batches
    RESET_STATES_LENGTH = 8

    class ResetStatesCallback(Callback):
        def __init__(self):
            self.counter = 0

        def on_batch_begin(self, batch, logs={}):
            if self.counter % RESET_STATES_LENGTH == 0:
                self.model.reset_states()
            self.counter += 1

Why do I get this message? Should I try something to fix it? Does it really slow down my training that much?

A: See https://github.com/fchollet/keras/issues/5008 for an explanation. It is stated there that

    You are running something like saving the model or rendering images after each batch and it is taking longer than the batches themselves.

So it would seem that at runtime Keras has determined that your callback is slower than the batch itself.
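For what it's worth, the counter bookkeeping in the callback is trivially cheap; if the warning persists, the time is being spent in reset_states() on the actual model (or elsewhere in the batch loop), not in the Python logic. A dependency-free sketch, with a hypothetical FakeModel stub standing in for a stateful Keras model, shows the counter triggers a reset exactly once every RESET_STATES_LENGTH batches:

```python
# Standalone sketch of the callback's counter logic (no Keras needed).
# FakeModel is a hypothetical stub, not a real Keras class.
RESET_STATES_LENGTH = 8

class FakeModel:
    def __init__(self):
        self.resets = 0

    def reset_states(self):
        self.resets += 1

class ResetCounter:
    """Mirrors ResetStatesCallback.on_batch_begin without Keras."""
    def __init__(self, model):
        self.model = model
        self.counter = 0

    def on_batch_begin(self, batch):
        if self.counter % RESET_STATES_LENGTH == 0:
            self.model.reset_states()
        self.counter += 1

model = FakeModel()
cb = ResetCounter(model)
for batch in range(24):
    cb.on_batch_begin(batch)

# States were reset at batches 0, 8 and 16
print(model.resets)  # 3
```

Timing this stand-in against the real callback would tell you whether the cost is in the reset itself.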
General News

Annie gets pre-Christmas boost

By: WENN.com Sep 24, 2013 | 7:46pm EDT

The movie adaptation of stage musical Annie has been given a release boost - it will open in America five days before Christmas Day, 2014. The film, starring Jamie Foxx, Quvenzhane Wallis, Rose Byrne, Bobby Cannavale and Cameron Diaz, was scheduled to hit the big screen on 25 December, 2014, but studio executives have moved the release date up to avoid clashing with other 2015 Oscar hopefuls, like Disney musical Into the Woods and Night at the Museum 3. Instead, Annie will be up against The Hobbit: There and Back Again.
Feeding ecology, food availability and ranging patterns of wild hamadryas baboons at Filoha. Most hamadryas baboons rely on Acacia species for subsistence in their semidesert habitats. Unlike other hamadryas sites, palm forests at Filoha in Awash National Park, Ethiopia, provide the baboons with a preferred food resource close to a commonly used sleeping site. The baboons are expected to feed on doum palm trees when fruit is available, and this resource use should play a role in ranging patterns. This paper describes the feeding ecology, food availability and ranging patterns of a band of wild hamadryas baboons at Filoha from March 2005 to February 2006. Data on feeding and ranging behavior derive from band scans during all-day follows of baboons, and data on food availability derive from monthly phenological monitoring of frequently consumed food species. The baboons fed predominantly on palms when fruit was available, and preferred the flowers of Acacia senegal to its leaves. There was no relationship between daily path length and the proportion of palm fruit in the baboons' diet, but changes in the availability of fruit across the Filoha region appear to mirror the baboons' shifting use of its home range. The large band sizes at Filoha may obscure the effects doum palm fruit might have on ranging patterns.
Hardin County clerk fends off challenger

Danny Shapiro | on March 6, 2018

Photo 1 of 2 (Jake Daniels/The Enterprise, taken Wednesday 7/1/15): Hardin County Clerk Glenda Alston considers a question from the media during a short press conference Wednesday. Alston announced Wednesday that her office would begin issuing same-sex marriage licenses starting Thursday morning.

Photo 2 of 2: Jerry Jordan

Incumbent Glenda Alston is set to retain her position as Hardin County Clerk after defeating challenger Jerry Jordan in Tuesday's Republican primary election. Alston finished with 2,733 votes to Jordan's 911.

"I ran a clean, fair race and he bashed me on Facebook," Alston said. "He told me he's going to run again in four years, and I will too."

Alston will not have a Democratic challenger in the Nov. 6 general election. Alston, 70, has been Hardin County Clerk since 2002.

"I'm excited and looking forward to serving Hardin County for four more years," Alston said.
Recently, the field of spoken-word production has seen an increasing interest in the use of the electroencephalogram (EEG), mainly for event-related potentials (ERPs). These are exciting times to be a language production researcher. However, no matter how much we would like our results to speak to our theories, they can only do so if our methods are formally correct and valid, and reported in ways that allow replicability. Inappropriate practices in signal processing and statistical testing, when applied to our investigations, may render our conclusions invalid or non-generalizable. Here, we first present some issues in signal processing and statistical testing that we think deserve more attention when analysing data, reporting results, and making inferences. These issues are not new to electrophysiology, so our sole contribution is to reiterate them in order to provide pointers to literature where they have been discussed in more detail and solutions have been proposed. We then discuss other issues pertinent to our investigations of overt word-production because of the effects (and potential confounds) that speaking will have on the signal. Although we cannot provide answers to some of the issues raised, we invite researchers in the field to jointly work on solutions so that the topic of the electrophysiology of word production can thrive on solid grounds. Improving pre-processing ======================== A common step in ERP analysis is filtering. In many studies, all we can find regarding the filtering procedure are cut-off values. However, this is incomplete information since a filter has other important parameters that affect the outcome of the filtering procedure. Different software will vary in their default values for these parameters. 
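As a concrete illustration (not from the original article; the parameter values are arbitrary and SciPy is only one of several toolboxes), the following sketch makes explicit how many choices beyond the cut-off go into a single filtering step:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hypothetical parameter values; the point is that every one of them
# (software, filter type, order, direction), not just the cut-off,
# shapes the result and should be reported.
sfreq, cutoff, order = 500.0, 30.0, 4          # sampling rate (Hz), cut-off (Hz), order
b, a = butter(order, cutoff / (sfreq / 2), btype="low")

rng = np.random.default_rng(0)
signal = rng.standard_normal(2000)             # white-noise stand-in for EEG

# filtfilt runs the filter forward and backward (zero-phase, acausal);
# scipy.signal.lfilter would run it forward only (causal).
filtered = filtfilt(b, a, signal)

# Low-pass filtering removes high-frequency power, so variance drops
print(filtered.var() < signal.var())
```

Reporting this step fully would mean naming the software version, the Butterworth type, the order, the cut-off, and the forward-backward direction.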
Researchers should not only try to understand how different filter parameters affect the signal studied (e.g., Widmann et al., [@B20]), but also at a minimum report the following in addition to the cut-off value (Picton et al., [@B13]; Gross et al., [@B7]): software used for filtering, filter type, order, and direction of the filter (forward, backward, or both), and whether any changes were made to default parameters. Another common step is to define a pre-stimulus baseline period, which can then be used to normalize the rest of the signal. This pre-stimulus baseline provides a good indication of the signal-to-noise ratio (SNR) in the data. If an ERP difference post-stimulus is similar in magnitude to pre-stimulus differences, the post-stimulus difference is likely noise, not an effect induced by our manipulation (e.g., Woodman, [@B21]). Therefore, even if one does not apply baseline correction, the signal should always be displayed including a pre-stimulus interval so that the SNR can be evaluated.

Improving statistical analysis
==============================

Results that cannot be explained by mere chance are highly informative for our theories. However, by using statistical tests inappropriately, we may make incorrect inferences regarding the probability of our results. An example is the well-known increased family-wise error rate (FWER, the probability of false positives amongst all multiple tests performed at some alpha-level) associated with the common practice of testing multiple time windows for significance (see Supplementary Material for an example). Alternatively, certain time points/windows may be selected for statistical testing on the basis of some criterion. However, a biased selection of this criterion also results in an inflation of false positives (Kilner, [@B8a]).
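The inflation is easy to quantify under the simplifying assumption of independent tests (the article's Supplementary Material gives its own example; this back-of-the-envelope version is merely illustrative):

```python
# Family-wise error rate when testing m independent time windows,
# each at alpha = .05: P(at least one false positive) = 1 - (1 - alpha)^m
alpha = 0.05
for m in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** m
    print(m, round(fwer, 3))
```

With 20 windows the chance of at least one spurious "effect" is already around 64%, even though each individual test is run at the 5% level.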
Under a different approach, successive univariate tests are conducted and effects are considered significant if the number of adjacent significant time points exceeds a pre-determined threshold (Guthrie and Buchwald, [@B7a]). Piai et al. ([@B12]) showed the problems associated with the incorrect determination of this threshold, leading to increased FWER. When possible, we should opt for statistical tests that provide nominal FWER control while maintaining statistical power, such as cluster-based statistics for example (Maris and Oostenveld, [@B10]; Pernet et al., [@B11]). Other valuable recommendations are provided in Allen et al. ([@B1a]) and Rousselet and Pernet ([@B16a]).

Unsolved issues
===============

For many years the dominant notion was that muscle activity associated with overt production would contaminate the EEG signal. Recently, that view has changed, and the increasing number of ERP studies employing overt production is claimed to support the feasibility of combining EEG with overt speech. However, an increasing number of ERP studies employing overt production does not confirm that measuring ERPs with overt production is unproblematic. Moreover, even though it has been argued that "artifact-free brain responses can be measured up to at least 400 ms post-stimulus presentation" (Ganushchak et al., [@B6], p. 5), the question we should ask is what constitutes an artifact in the context of overt production. Myogenic speech-related artifacts may precede speech onset by up to 500 ms, compromising the potentials recorded on the scalp (e.g., Brooker and Donald, [@B2]). If RTs differ consistently between conditions, the speech-related artifacts could also contaminate the pre-speech signal with consistent timing differences, resulting in an artifactual ERP effect. Artifacts aside, another problem is physiological in nature.
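Returning to the statistical recommendations above, the logic of a cluster-based permutation test in the spirit of Maris and Oostenveld can be sketched in a few lines. This toy NumPy version (our own simplification, with simulated data) is purely illustrative; real analyses should use established implementations such as FieldTrip:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_times = 20, 50

# Simulated difference waves (condition A minus B), one row per subject,
# with a genuine effect injected at time points 20-29
diffs = rng.standard_normal((n_subj, n_times))
diffs[:, 20:30] += 0.8

def tvals(d):
    """One-sample t-value at every time point."""
    return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(len(d)))

def max_cluster_mass(t, thr=2.09):
    """Largest summed |t| over a run of adjacent suprathreshold points
    (thr is roughly the two-sided .05 critical t for 19 df)."""
    best = run = 0.0
    for v in np.abs(t):
        run = run + v if v > thr else 0.0
        best = max(best, run)
    return best

observed = max_cluster_mass(tvals(diffs))

# Null distribution: randomly flip the sign of each subject's difference wave
null = [max_cluster_mass(tvals(diffs * rng.choice([-1.0, 1.0], (n_subj, 1))))
        for _ in range(1000)]

p = (1 + sum(n >= observed for n in null)) / (1 + len(null))
print(p < 0.05)
```

Because only one statistic (the maximum cluster mass) is compared against its permutation distribution, the FWER is controlled at the nominal level regardless of how many time points are tested.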
We are interested in which (and when) differences emerge in the waveforms time-locked to a stimulus as a function of our experimental manipulation. We know that our manipulation elicits a difference between conditions---an effect---in vocal response times (RTs). However, breathing and articulation are functions controlled by the brain. So RT differences are likely to be accompanied by systematic differences between the conditions in the relative timing of speech-related artifacts and of brain activity related to the control of speech that are independent of linguistic effects. This problem is well-known and has been mentioned, for example, by Luck ([@B8]) in his rules of ERP experimental design and interpretation: "Be cautious when the presence or timing of motor responses differs between conditions" (p. 97). Researchers have measured movement-related cortical potentials preceding mouth opening (Deecke et al., [@B3]; Yoshida et al., [@B22]) and speech-related breathing cortical potentials preceding phonation (Tremoureux et al., [@B16]). Importantly, these cortical potentials may precede mouth opening and phonation by 600 ms or more (Yoshida et al., [@B22]; Galgano and Froud, [@B5]). If conditions differ consistently with respect to when participants prepare to move their mouth, which is likely given any systematic RT differences, ERP differences between conditions could emerge as a function of these potentials. In this case, the effect is truly neural, yet not directly reflecting the cognitive function of interest. Finally, suppression of the auditory system may occur pre-speech onset (Ford et al., [@B4]; Wang et al., [@B18]). This raises the possibility that speech output efference copy affects the ERPs measured for conditions differing in RT since the timing of auditory suppression would systematically differ between conditions. Electromyogenic (EMG) activity recorded from mouth muscles provides valuable information on this issue. 
EMG activity, either directly related to the articulation of the response or merely preparatory, can start as early as 250 ms after stimulus onset (Riès et al., [@B14], [@B15]), as exemplified in Figure [1A1](#F1){ref-type="fig"} for one word-naming trial. The lower panel (Figure [1A2](#F1){ref-type="fig"}) shows how EMG activity is consistently increased (indicated by the warmer colors) prior to speech onset (indicated by the solid black line) on the single-trial level (see Supplementary Material for details). Neural activity to move these muscles must precede the first measurable excitation on the muscle itself, so we cannot know whether potentials preceding speech are reflecting our cognitive manipulation only, or are already overlapping with the neural signals needed to control the mouth muscles. ![**(A1)** Amplitude of the electromyogenic (EMG) activity recorded from the risorius muscle and the corresponding acoustic signal for the pronunciation of the word *parc* in Experiment 1 of Riès et al. ([@B14]). The task was single word naming with the visual word presented at the 0-ms time point. **(A2)** Single-trial EMG activity of 15 participants recorded from the orbicularis oris muscle sorted by picture naming time (solid black line). The task was picture naming with the picture presented at the 0-ms time point. **(B1)** Event-related potentials (ERPs) of 15 participants recorded during a picture-naming task. ERPs from the same condition were split by the participants\' median picture naming time (RT). The picture was presented at the 0-ms time point (black dashed line). For reference, the orange dashed line indicates the 200-ms time point. The ERPs were filtered with a 20-Hz low-pass Butterworth filter of order 4 applied forward and backward using FieldTrip (Oostenveld et al., [@B10a]). **(B2)** Difference wave. Shaded area indicates 95% confidence interval.](fpsyg-05-01560-g0001){#F1} Figure [1B](#F1){ref-type="fig"} provides another example of this issue. 
Participants\' ERPs from the same picture-naming condition were split by their median RT, creating two surrogate conditions (mean longest RTs = 992 ms, mean shortest RTs = 729 ms). As Figure [1B1](#F1){ref-type="fig"} shows, a simple difference in RT between two "conditions" can result in ERP differences of at least 1 μV starting as early as 200 ms. This early ERP difference is significant with various statistical tests (see Supplementary Material for details), with *p*-values between 0.008 and 0.056. Note that dichotomizing a variable, as we do here, is not a recommended practice in statistics (MacCallum et al., [@B9a]), so our median-split approach is meant only to illustrate this point and should not be taken as a valid approach to investigate the relation between RTs and the electrophysiology of language production. Of course, one could argue that RTs are shorter or longer for some cognitive reason, so the differences shown in Figure [1B](#F1){ref-type="fig"} simply reflect cognitive processes associated with this slowing down. Moreover, the timing of this ERP "effect" could be taken as an indication that our manipulation tapped a certain cognitive process. The question is whether we should interpret the ERP effect in Figure [1B](#F1){ref-type="fig"} as reflecting a cognitive function of interest even though the ERPs come from the *same* cognitive manipulation. Rather, given that the observed ERP waveform is a sum of latent components, we should consider the possibility that our observed ERP effects are the net result of latent components reflecting a manipulated cognitive factor and latent components with consistent timing differences reflecting cortical activity related to low-level aspects of speaking, such as breathing and muscle control. In fact, this remark has been made very recently with respect to the breathing potentials preceding phonation: "the findings \[... 
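The surrogate-condition construction is straightforward to reproduce. The sketch below (Python, purely synthetic data; trial counts and RT distribution are illustrative assumptions, not values from the study) shows the mechanics of the median split and the resulting difference wave. With real EEG, it is speech-related potentials whose timing covaries with RT that turn such a difference wave into an apparent "effect":

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-trial "EEG": 200 trials x 600 samples (1 sample = 1 ms).
# Pure noise here: the point is the mechanics of the split, not the effect.
n_trials, n_samples = 200, 600
rts = rng.normal(850.0, 150.0, n_trials)          # vocal RTs (ms)
eeg = rng.normal(0.0, 5.0, (n_trials, n_samples))

# A median split on RT turns one condition into two surrogate "conditions".
fast = rts <= np.median(rts)
erp_fast = eeg[fast].mean(axis=0)
erp_slow = eeg[~fast].mean(axis=0)

# With real data, this difference wave is what risks being (mis)read as a
# condition effect when RTs differ systematically between conditions.
difference_wave = erp_fast - erp_slow
print(difference_wave.shape)  # (600,)
```

Statistical testing of such a difference wave would then proceed exactly as for a genuine two-condition comparison, which is precisely why the artifact is easy to miss.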
\] indicate the need to take the respiratory component of speech and its cortical determinants into account when conducting and interpreting such studies" (Tremoureux et al., [@B16]). The extent to which these issues may be especially pertinent to ERP components closer to articulation onset also deserves attention. Note that, although our example shows more positive-going ERPs for trials with shorter RTs, this relation should not be taken as a rule across studies. This is because different studies are likely to obtain different configurations of latent components reflecting breathing, muscle control, and cognitive factors, and these different configurations are likely to yield different observed ERP waveforms (e.g., Luck, [@B8]). One may ask whether early effects are problem-free in this respect. However, the answer is complicated by the physiological issues described above in combination with technical issues. One of them may be caused by acausal filters (i.e., filters applied forward and then backward). Due to this procedure, later slow components (possibly speech-related potentials and artifacts) can affect earlier parts of the signal (Acunzo et al., [@B1]), artificially creating an "ERP effect" that seems early enough to be the cognitive component of interest, rather than speech-related. The extent to which this factor could affect the signal that language-production researchers study is, to our knowledge, largely unknown. Additionally, if the low-pass filter is not appropriately designed for the data in question, speech-related cortical potentials and artifacts may end up smeared for tens of milliseconds before and after the event of interest (e.g., VanRullen, [@B17]; Widmann and Schröger, [@B19]), artificially creating differences that are early enough to seem related to the cognitive function of interest. Again, the extent to which this factor could affect the signals we study is largely unknown. 
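The backward-smearing property of acausal filtering is easy to demonstrate. The following Python sketch uses SciPy's `filtfilt` (a standard forward-backward, zero-phase filter) with the same 20-Hz, order-4 Butterworth design mentioned in the figure caption; the signal itself is a contrived single impulse, standing in for a "late" speech-related component:

```python
import numpy as np
from scipy.signal import butter, filtfilt, lfilter

# 1-s "signal" at 1000 Hz containing a single late event: an impulse at 800 ms.
fs = 1000
x = np.zeros(fs)
x[800] = 1.0

# 20-Hz low-pass Butterworth of order 4, as in the figure caption.
b, a = butter(4, 20 / (fs / 2))

causal = lfilter(b, a, x)    # forward only
acausal = filtfilt(b, a, x)  # forward and backward (zero phase, acausal)

# Before the event, the causal output is exactly flat, but the acausal output
# already deviates: the late component has been smeared backward in time
# into the "early" window where cognitive effects would be sought.
print(np.abs(causal[:790]).max())   # 0.0
print(np.abs(acausal[:790]).max())  # clearly greater than zero
```

The same mechanism applies to any slow, late deflection in real data: the forward-backward pass buys zero phase distortion at the cost of leaking late activity into earlier time points.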
In fact, the issues raised with respect to RT and breathing-pattern differences are not exclusive to spoken-word production. Timing differences in manual responding, or in breathing, heart rate, and skin conductance (e.g., in studies on emotion), are likely to raise some of the problems discussed here. In conclusion, we need to consider that the ERP differences observed in conditions differing in RT partly reflect the relative difference in the timing of brain activity related to speaking (breathing and mouth movements) and other speech-related artifacts, in addition to our cognitive manipulation. We should also consider the possibility that our cognitive manipulation is *not necessarily* reflected in a stimulus-locked ERP component and that the ERP differences we observe reflect speech-related potentials only. Researchers could record mouth EMG activity and breathing rate in addition to scalp EEG to assess whether these pre-speech potentials are overlapping with the ERP effects observed over the scalp. These issues need to be addressed so that our field can move forward on a solid foundation. Conflict of interest statement ------------------------------ The Associate Editor, Dr. F-Xavier Alario, declares that despite having collaborated with author Dr. Stéphanie K. Riès, the review process was handled objectively. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The authors are truly indebted to Kristoffer Dahlslätt, Natalia Shitova, and Andreas Widmann for helpful discussions. The authors are funded by grants from the Netherlands Organization for Scientific Research (\#446-13-009, to Vitória Piai), the NIDCD (F32DC013245, to Stéphanie K. Riès), and the NINDS (R37 NS21135, to Robert T. Knight). 
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Supplementary material {#s1} ====================== The Supplementary Material for this article can be found online at: <http://www.frontiersin.org/journal/10.3389/fpsyg.2014.01560/full> ###### Click here for additional data file. [^1]: This article was submitted to Language Sciences, a section of the journal Frontiers in Psychology. [^2]: Edited by: F-Xavier Alario, Centre National de la Recherche Scientifique and Aix-Marseille Université, France [^3]: Reviewed by: Lesya Ganushchak, Max Planck Institute, Netherlands; Jan Kujala, Aalto University, Finland; Guillaume A. Rousselet, University of Glasgow, UK
Efficacy of thrombolysis in infrainguinal bypass grafts. The initial outcome of a consecutive series of 43 intra-arterial urokinase infusions for thrombosed infrainguinal grafts in 37 patients was analyzed. There was an 88% (38/43) technical success rate (complete clot lysis) and a 74% (32/43) clinical success rate. Complications occurred in 10 patients (23%) and were related to bleeding in four patients (9%). Patient age, graft age, location, material, and the duration of occlusion did not significantly influence the initial outcome, although there was a trend toward a higher bleeding complication rate among grafts less than or equal to 1 month of age at the time of thrombolysis. A second group of 43 infrainguinal grafts successfully recanalized using regional infusions of thrombolytic agents was followed for long-term patency. This group included 32 grafts successfully treated with urokinase and 11 grafts recanalized with streptokinase. By life-table analysis there was a 55.6% 1-year patency, which fell to 42.4% at 4 years. Vein grafts had significantly (p = 0.01) better long-term patency than prosthetic grafts (69.3% versus 28.6% at 30 months). Grafts with flow-limiting lesions identified and corrected by angioplasty or surgery also had significantly (p = 0.01) better long-term patency than those without such lesions (79.0% versus 9.8% at 2 years). Based on the results of our study compared with a survey of long-term results following secondary surgical procedures for thrombosed infrainguinal grafts, thrombolysis can be recommended in several circumstances. Thrombolysis is indicated for thrombosed vein grafts or when thrombus is present in distal runoff vessels. Thrombosed prosthetic grafts should be replaced by autogenous vein grafts whenever possible.
The Bluewater school district near Toronto, Ontario has a decades-old tradition of handing out Gideon Bibles to 5th graders. Why this was allowed for so long, I don’t know, but the school board recently decided to put a stop to it, banning “distribution of all non-instructional religious materials.” As you might expect, many of the Christians in the community were very understanding: “When are you ‘politically correct’ idiots, with your heads buried in the sand, going to realize that every action you take to destroy Canadian heritage…?” one email began. “Allowing newcomers to Canada the ability to walk all over our heritage has got to stop before they carry us into the realm of a warring nation like the one they often left behind,” another writer said. … Trustee Fran Morgan called the “onslaught” of messages “really disturbing,” and said it has made her uneasy about driving the 30 kilometres to board meetings at night by herself. “I really do feel threatened by it,” Morgan said from Griersville, Ont. “It’s been very unpleasant.” … Board chairwoman, Jan Johnstone, admits the vitriolic responses — some urging trustees to “watch your back” — are unnerving. “People do crazy things,” Johnstone said. “They see Christianity as a fundamental part of their Canadian identity.” Another wrote one trustee: “How is that you agree with God’s 10 Commandments and yet you have broken them countless times, you hypocrite!” … Although one trustee received a phone call he thought was tantamount to a death threat, the board has so far not referred the matter to police, but a spokesman said the situation was being monitored. The alternative is to allow *all* non-education religious material, but you know these same Christians would complain about that, too, the moment a Koran or Wiccan text was handed out to students. The ban is set to become finalized on April 17th. (Thanks to Dorothy for the link!)
Apex court blocks lower court ruling The Supreme Court allowed the Trump administration to maintain its restrictive policy on refugees. The justices on Tuesday agreed to an administration request to block a lower court ruling that would have eased the refugee ban and allowed up to 24,000 refugees to enter the country before the end of October. The order was not the court’s last word on the travel policy that President Donald Trump first rolled out in January. The justices are scheduled to hear arguments on Oct. 10 on the legality of the bans on travellers from six mostly Muslim countries and refugees anywhere in the world. It’s unclear, though, what will be left for the court to decide. The 90-day travel ban lapses in late September and the 120-day refugee ban will expire a month later. White House spokeswoman Sarah Huckabee Sanders said on Tuesday night: “We are pleased that the Supreme Court has allowed key components of the order to remain in effect. We will continue to vigorously defend the order leading up to next month’s oral argument in the Supreme Court.” Lower courts have ruled that the bans violate the Constitution and federal immigration law. The high court has agreed to review those rulings. Its intervention so far has been to evaluate what parts of the policy can take effect in the meantime.
This invention relates to polychloroprene rubber compositions having improved flex crack growth resistance. Polychloroprene rubber, a homopolymer or copolymer of 2-chlorobutadiene, has traditionally been used to produce articles requiring a combination of heat and ozone resistance coupled with excellent dynamic properties, for example, flex crack growth resistance. In particular, sulfur-modified polychloroprene has long been considered the rubber of choice for articles subjected to repeated flexing, such as automotive belts, automotive boots, air-springs, and motor-mounts. There is, however, a need in the art to provide automotive parts for under-the-hood uses with an even greater reliability than that which is provided by sulfur-modified polychloroprene itself. A typical failure mode under dynamic load is fatigue, which propagates on an initial flaw. Consequently, it is the rate of cut growth which must be improved. There are two general approaches to upgrading flex-fatigue resistance, by which term is meant prolonging the time to failure after an initial crack has been formed. The first involves modification of the polymer backbone by minimizing failure-initiating polymer defects, or by introducing heat- and/or fatigue-resistant segments. Such an approach is disclosed in U.S. Pat. No. 4,605,705. In the second approach the polymer backbone remains intact and flex-additives are blended with the polymer to produce crack growth-resistant compositions in much the same way as antioxidants and other stabilizers are used to produce polymer compositions having improved resistance to the effects of oxygen, ozone, light, etc. For example, mercaptotolylimidazole has been reported to be an effective flex-growth additive, see Elastomerics, Vol. 117, February 1985. In either case, however, it is to be understood that any attempt to improve flex-fatigue resistance must not result in degradation of other polymer physical properties. 
Most importantly there must be no decrease in modulus or increase in elongation, and, in addition, Mooney scorch must not be adversely affected. Blends of polychloroprene and certain ethylene/(meth)acrylate copolymers have been disclosed in the prior art for the purpose of producing fiber-forming resins (see, e.g., U.S. Pat. No. 3,701,702, which exemplifies compositions having very low polychloroprene content), laminating adhesives (see, e.g., U.S. Pat. No. 3,770,572 directed to latex adhesives comprising polychloroprene and ethylene copolymers), cured foamed compositions (see, e.g., U.S. Pat. No. 4,307,204), and elastomeric extrudable/moldable compositions (see U.S. Pat. No. 4,235,980 directed to terionomer blends).
Investigative reporting costs money, for open records requests, copying, web hosting, gasoline, and cameras, and with sufficient funds we can pay students to do further research. You can donate to LAKE today!
Q: Expanding and collapsing menu items

I have a menu made of div containers. When I mouse over a menu item, I want to expand the next element that has the class "container". When several containers are expanded and I move back to an earlier one (say, from the third back to the first), I want the containers in front of it to collapse their widths to 0, except the next container, because it displays the next set of menu items. How do I do this? Any examples would be appreciated.

<div class="container">
    <div class="menu">
        <div class="menu-item">Menu Item 1</div>
        <div class="menu-item">Menu Item 2</div>
        <div class="menu-item">Menu Item 3</div>
        <div class="menu-item">Menu Item 4</div>
    </div>
</div>
<div class="container">
    <div class="menu">
        <div class="menu-item">Menu Item 5</div>
        <div class="menu-item">Menu Item 6</div>
        <div class="menu-item">Menu Item 7</div>
        <div class="menu-item">Menu Item 8</div>
    </div>
</div>
<div class="container">
    <div class="menu">
        <div class="menu-item">Menu Item 9</div>
        <div class="menu-item">Menu Item 10</div>
        <div class="menu-item">Menu Item 11</div>
        <div class="menu-item">Menu Item 12</div>
    </div>
</div>

$('.container .menu .menu-item').mouseenter(function() {
    $(this).closest('.container').next('.container').css('width', 200);
});

A: In answer to your question, you could set all the containers' widths to 0 before running the next line of your code, like this:

$('.container').css('width', 0);
$(this).closest('.container').next('.container').css('width', 200);

If you use this method you should probably also save $('.container') in a variable for better performance. Is that what you wanted to do? If not, please explain your question a little more clearly.
SEATTLE -- Seahawks fans know all about running back Marshawn Lynch's love for Skittles, and it seems the candy brand has struck a deal with him. Wrigley, which owns the Skittles brand, announced details of that partnership on Tuesday. In a statement, the company said it would create a limited-edition "Seattle Mix," consisting of only green and blue candies. The special packs would not be sold in stores, however, and fans would get the opportunity to bid on them in an online auction at www.SkittlesSeattleMix.com, starting Wednesday. Wrigley said money collected from the auction would be given to Lynch's Fam 1st Family Foundation. Additionally, $10,000 would be donated for every touchdown Lynch scores during the Super Bowl. Plus, the auction would include "one-of-a-kind football-inspired Skittles-covered items," like a football and helmet. Lynch's love for Skittles dates back to his childhood as a Pop Warner player, when his mother gave him what she called "power pellets." According to Ad Age, no television ads are planned in the deal, and Wrigley did not comment on the financial terms. A report by ESPN.com had cited unnamed sources who claimed Lynch would receive financial compensation.
Expression of connexin 31 in the developing mouse cochlea. Connexin 31 (Cx31) mutations cause an autosomal dominant form of high-frequency hearing loss. The immunohistochemical localization of Cx31 in the mouse cochlea was studied at different ages between 0 and 60 days after birth (DAB). Cx31-like immunoreactivity was detected in fibrocytes of the spiral ligament and spiral limbus at 12 DAB, increased gradually with age, and reached the adult pattern at 60 DAB. Immunoreactivity decreased gradually from the basal to the apical turn at all developmental stages. The mRNA of Cx31 was also identified by RT-PCR. The distributions of Cx31 and connexin 26 were clearly different in the developing mouse cochlea. The expression and distribution of Cx31 during development may explain the progressive hearing loss seen with human Cx31 mutations.
Q: Forming an array format for multi-row insertion nodejs-mysql

I have an object like below:

var item = {"9":"9","22":"22","23":"23","24":"24"};

and variables var cart = 23; and var group = 40;. Now I want to transform those items into a format that lets me perform a multi-row MySQL insert, like below:

var sql = "INSERT INTO test (item, cart, `group`) VALUES ?";
var values = [
    [9, 23, 40],
    [22, 23, 40],
    [23, 23, 40],
    [24, 23, 40]
];

How do I build this array format for multi-row inserts in Node.js?

A: You could map the keys of the object and return arrays of numbers.

var item = { 9: "9", 22: "22", 23: "23", 24: "24" },
    cart = 23,
    group = 40,
    array = Object.keys(item).map(function (k) {
        return [+item[k], cart, group];
    });

console.log(array);
Q: How should I penalize the model proportionally to the error? I am making an MNIST classifier. I am using categorical cross-entropy as my loss function. I want to make it so that if the correct label is 3, then it will penalize the model less heavily if it classifies a 4 than a 7 because 4 is closer numerically to 3 than 7 is. How do I do this? A: I want to make it so that if the correct label is 3, then it will penalize the model less heavily if it classifies a 4 than a 7 because 4 is closer numerically to 3 than 7 is. How do I do this? Really you should not, because the symbols used (Arabic numerals) do not have a direct relation to quantity in the same way e.g. tally counts or dots do. They are good candidates for classification, and despite the conventional mapping to quantity when you read them, the symbols themselves are poor candidates for regression, because for instance the symbols $3$ and $4$ do not differ in a way that captures quantity in any intuitive manner. However, if you are keen to do this, it is relatively simple to construct a suitable loss function in most auto-differentiating frameworks. You will need to read up on how to do so. For instance, here is a Stack Overflow answer explaining where to start with writing a custom loss function in Keras. In order for your loss function to work, it will need to be differentiable and smoothly changing as predictions get better. That rules out using any form of argmax for the current prediction. If you want to stick with softmax for the final layer, then I suggest using a mean squared error against the expected prediction, e.g. if $d_i$ is the numerical digit for example $i$ and $y_{i,j}$ is the ground truth expressed as a one-hot vector, where $i$ is the example and $j$ is the digit class, and $\hat{y}_{i,j}$ is the probability predicted by your model. 
You could use $\hat{d}_i = \sum_{j=0}^9 j\hat{y}_{i,j}$ for the expected value, with an MSE loss of $\mathcal{L}(d_i,\hat{d}_i) = \frac{1}{2}(\hat{d}_i - d_i)^2$. You can also use a weighted sum of the MSE loss and cross-entropy loss as your final loss, with the balance between the two losses being a new hyperparameter of your model. Note this solution makes $0$ close to $1$ but far away from $9$. If you want the digits to be considered close on a cycle (e.g. $8$ is closer to $1$ than it is to $4$) then you will need something more creative. Whilst I don't think this will help you discover any improvements to MNIST classification, combining two or more loss functions to achieve a more complex goal can be really useful sometimes, so it is a skill worth practicing.
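A minimal NumPy sketch of the expected-digit idea (the logits and the two toy predictions are hypothetical, purely for illustration):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

digits = np.arange(10)
d_true = 3  # ground-truth digit for this example

# Hypothetical logits leaning toward the correct class.
logits = np.array([0.0, 0.1, 0.5, 2.0, 1.5, 0.2, 0.1, 0.0, 0.0, 0.1])
y_hat = softmax(logits)

# Expected digit under the model: d_hat = sum_j j * y_hat_j.
d_hat = (digits * y_hat).sum()

def mse_loss(p):
    return 0.5 * ((digits * p).sum() - d_true) ** 2

# A prediction concentrated on 4 is penalized less than one concentrated
# on 7, because its expected value lands closer to 3.
print(mse_loss(np.eye(10)[4]))  # 0.5
print(mse_loss(np.eye(10)[7]))  # 8.0
```

Because `d_hat` is a smooth function of the softmax probabilities, this loss is differentiable and can be combined with cross-entropy as the answer suggests.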
Hidden in plain sight: The hunt for banking capital

The search begins at home.

January 2010 | by Erik Lüders, Max Neukirchen, and Sebastian Schneider

Banks are trying madly to raise it, investors are wary of giving it, and lenders seem keener than ever to hang on to it. Strange, then, that we hear so little about how to manage the scarce resource of the day: capital. One reason for the lack of discussion is the sorry current state of the art of capital management. In the run-up to Basel II, banks treated the subject with appropriate energy. But as the global expansion after 2001 turned into a full-fledged boom, capital was plentiful and cheap, and a strategy to manage it was not only unnecessary but even, arguably, counterproductive. According to some, banks that worried about improving their capital usage were thinking too small and missing much bigger opportunities in rapidly expanding businesses. Whatever their view, most banks paid only scant attention to the husbanding of their capital. Now, at a time when regulators, investors, and rating agencies, in various ways, are forcing banks to deleverage and increase capital ratios, the focus is on finding more of it—that is, on recapitalization. Of late, however, the search has yielded very little. Capital, when it can be found, is extremely expensive. Effective capital management is no longer just a nice, if somewhat obscure, skill to have; for some banks, it is a question of survival. Out of necessity, then, banks are rediscovering the lost art of capital management. Done well, capital management not only answers the immediate need of improving capital adequacy; it also protects banks from risks and even enables growth. Should the industry consolidate, as seems likely, superior capital management will almost certainly be one of the chief distinctions between buyers and sellers. 
While a comprehensive capital-management program includes seven elements, two of these—reducing capital “wastage” and developing “capital light” business models—are essential for boosting capital adequacy ratios. Many institutions squander their capital by allocating more of it to a business than is required. This can happen through inefficiencies in business and credit processes and poor data, and through poor choices in the risk-modeling approach. Banks can draw on a catalog of proven ideas to reduce this capital wastage. They can achieve even more if they also successfully implement capital-light business models—that is, if they adopt smart credit-management principles in their day-to-day business and help frontline lenders and sellers internalize these principles. Typically, banks that both reduce waste and put in place capital-efficient business models can achieve a reduction of 15 percent to 25 percent in risk-weighted assets (RWAs). Further, some banks also see revenue increases of 8 percent to 12 percent. These improvements result in additional economic value—for one global bank, about 25 percent in two years. Seventy percent of impact is typically achieved within a year of launch. Reducing capital wastage and implementing a capital-light business model are not only the cheapest ways to improve capital ratios—much cheaper than appealing to once-bitten, twice-shy investors—they can also inform the bank if it has any true additional capital needs. And if banks do instead resort to capital injections, they will not only pay dearly in the short term but also lock in their capital inefficiencies for the foreseeable future. At a time when the need to rebuild return on equity is paramount, capital inefficiency is like a weight around the neck—a burden that will keep the bank uncompetitive but that when removed will result in a powerful uplift to performance. 
A new imperative

With the onset of Basel II, in 2004, banks were forced to take a more active approach to capital management. Capital requirements under the Basel II internal-ratings-based (IRB) approaches—especially the advanced, or A-IRB, approach—depend heavily on internal models to estimate risk. The quality of the data entered into these models is critical, and so banks began to pay more attention to how they used their capital. Thus at most banks, a bare-bones capital-management approach took hold. The global financial crisis has rendered that basic approach untenable. Capital proved to be far less available than banks had assumed, and mark-to-market losses ($934 billion as of May 2009) from credit investments and subprime-related assets have eviscerated capital ratios at nearly every institution of any size. The rise in market volatility and the need for more capital to cover it only make the problem worse. The lack of adequate capital, as is well known, forced many banks into bankruptcy or into the safe haven of government rescue plans. To date, governments around the world have provided more than $1 trillion to recapitalize financial institutions. Already severe, the capital shortage will almost certainly get worse with the coming of new banking regulations. Ironically, the green shoots of economic recovery that many have noted may provide regulators with the faith they need to push through three changes that will particularly affect corporate and investment banks. First, most countries seem sure to impose a cap on banks’ leverage and to introduce “procyclical” adjustments such that, in good times, the cap grows tighter. Second, many believe that changes to the calculations used to measure risk in the trading book will increase RWAs by at least a factor of three (Exhibit 1). Off-balance-sheet items and securitized assets will also likely be saddled with much higher capital requirements. 
Finally, the focus will shift more toward tier-1 and common tier-1 capital (consisting mainly of common shares, cash reserves, and retained earnings), thus limiting the usefulness of the hybrid structures that many banks now employ to ensure capital adequacy. Exhibit 1 All these regulatory changes are in different stages of development and implementation but will likely come to pass over the next few years. While many banks are still struggling to come to grips with capital management, leading banks have recognized the trend and are establishing or recommitting to a comprehensive approach to capital management. Exhibit 2 lays out the seven building blocks common to the best of these approaches. Exhibit 2 In our experience, two of these elements are essential for a successful hunt for capital, regardless of the bank’s starting position: reducing capital wastage and instituting capital-light business models. All the elements are important, of course, but for expediency we will concentrate on these two. The former is by far the more powerful and faster to produce results, and banks should begin with this. Reducing capital wastage As noted, the primary source of waste is a too-generous allocation of capital to individual businesses to reserve for the credit and market risks they run. We see three common mistakes that afflict most banks. First, many banks have optimized their credit risk-mitigation activities (underwriting, collateral management, and workout) for cost efficiency—but in so doing have cut a lot of corners that considerably undermine their capital efficiency. Second, banks routinely struggle with poor data. Although data issues exist throughout banks—some consider the modern bank more a technology business than a financial one—the problem is particularly acute when it comes to capital efficiency. Information is insufficient, incorrect, or incomplete (for example, gaps and noncredible outliers in loss databases used for risk-parameter estimation). 
Nor do banks always treat their data properly. Assets are often misclassified; for example, small-business loans are sometimes treated as corporate rather than retail exposures, when a retail classification would reduce the risk weight. Finally, although hard to believe given some of the risks banks took that culminated in the recent crisis, bankers’ innate conservatism actually leads them to overestimate their risks at times. The tendency is understandable: if estimates are too optimistic, regulators will likely question them and then impose additional capital reserves. Another problem is that banks choose and develop risk models from a purely mathematical perspective, giving little or no consideration to the realities of the business. For example, estimates of loss given defaults (LGDs) are often set too high. Finally, many banks use the same internal model for all their businesses, when a variety of models, including some less conservative ones, might be more useful. For example, the Expected Positive Exposure (EPE) model for counterparty risk in trading book assets is not used as extensively as it might be.

Capitalizing on the opportunity

All these practices eat up more capital than the business really requires, presenting banks with an opportunity. Banks should begin by thoroughly checking the data quality and risk-parameter estimates in their banking and trading books. They should then benchmark the main risk parameters, especially:
- LGDs by asset class and, more specifically, the level of collateralization by portfolio segment and the recovery rate by collateral type
- Credit conversion factors (CCFs) by product category; CCFs are used to calculate exposures at default (EADs), which equal drawn exposure plus the CCF times undrawn exposure

Further analysis can identify whether any gap between current performance and best practice is due to inadequate processes or poor data quality, or perhaps overly conservative modeling. 
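As a toy illustration of the arithmetic involved (all figures below are hypothetical; only the EAD formula and the article's roughly 50-cents-per-unit collateral rule of thumb come from the text):

```python
# Hypothetical exposure figures, in millions.
drawn = 80.0      # drawn exposure
undrawn = 40.0    # undrawn commitment
ccf = 0.75        # credit conversion factor (assumed value)

# EAD = drawn exposure + CCF x undrawn exposure, per the benchmark above.
ead = drawn + ccf * undrawn
print(ead)  # 110.0

# Rule of thumb from the article: each additional unit of Basel II-eligible
# collateral reduces RWAs by about 0.5 units, without changing the EAD.
rwa_before = 100.0       # starting risk-weighted assets (assumed value)
extra_collateral = 20.0  # newly recognized eligible collateral (assumed)
rwa_after = rwa_before - 0.5 * extra_collateral
print(rwa_after)  # 90.0
```

Small shifts in the CCF or in recognized collateral therefore translate directly into capital released or consumed, which is why benchmarking these parameters is worthwhile.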
For example, high risk weights for a given asset class can be a sign of too little Basel II–eligible collateral. Digging deeper, the bank might find that its model does not sufficiently distinguish between different types of collateral or does not manage the collateral provided for maximum value. Knowing the source of the problem, the bank can then design a highly specific program of capital optimization. Our experience suggests that there are more than 100 levers available to reduce capital wastage. For further details on some of these levers, and for an example of a large European bank’s use of them in developing a capital-optimization program, see Exhibits 3 and 4 in this article’s downloadable PDF. Note that there are no “silver bullets” in this work; rather, it is the sum of 20 to 30 small optimizations that generates the significant amount of capital savings that these programs can produce. Although banks will not often choose exactly the same set of capital-wastage reduction levers, we have found the following to be especially powerful for most banks:

Improve collateral management in the banking book. As a rough rule of thumb, each additional euro or dollar of Basel II–compliant collateral reduces RWAs by 50 cents without changing the exposure at default. This makes collateral management a significant source of additional capital. Two ideas often work well to improve this activity. First, a comparison of the front-office systems with the back-office systems that are used in the RWA-calculation model will often reveal that there is much more Basel II–eligible collateral collected than is entered in the model; data quality is often the underlying reason. Banks can correct the data and make sure that all the available collateral is used to mitigate RWAs. Second, the ways that collateral is classified and valued are often suspect; improving this can also significantly reduce RWAs.

Improve netting and collateral processes in the trading book. 
Collateral management is also an issue in the trading book. As financial markets recover, demand is rising for capital markets and risk-management products. At most banks, the 20 or so largest corporate and institutional customers are again pushing up the capital requirements for counterparty risk in the trading book. More frequent settlement with corporate customers and faster collateral processes can significantly slow this RWA increase. Surprisingly, netting processes are far from perfect. In the quest for cost efficiency, many banks have not put in place netting agreements with their second- and third-tier counterparties, even though in aggregate these customers create significant RWA and capital requirements. Similarly, banks often do not have collateral agreements with these smaller customers; as a result, RWAs increase with every profitable trade.

Adjust Basel II risk parameters. Regulators usually require banks to include unresolved cases of default or workout in their estimation of recovery rates and LGDs. Banks must make some assumptions about the extent of eventual repayment. They tend to take the most conservative approach and assume very little or no further payment; after all, this minimizes time and money spent on model development and allows banks to avoid intense discussions with regulators. A more realistic and still inexpensive approach is to estimate repayment based on the current status of the workout or restructuring. This not only significantly lowers the bank's LGDs and the loans' risk weights but is also much more representative of business reality.

Reconfigure credit processes and install early-warning systems. Since estimated risk parameters under A-IRB are based on a bank's historical losses, they reflect not only the bank's assumptions about the future but also the past performance of its credit processes. Improving credit processes is its own reward, of course, as it reduces credit losses.
But it also improves risk parameters, allowing the bank to further lower its CCFs and EADs, LGDs, and risk weights. In the past year, many banks have been caught off guard as financially weak customers drew down lines of credit and then defaulted. Banks should make timely and full use of all available information, such as credit-line usage, incoming payments, and reports from credit bureaus, to develop insights into customer behavior. These leading indicators, coupled with better credit-monitoring processes, allow a bank to identify high-risk customers early, typically six to nine months before loans become past due. Banks can then reduce their lines with some at-risk customers and ensure that no additional credit is extended to them. This will lower CCFs and EADs, and can also help lower LGDs as uncollateralized exposures are reduced and banks get faster at seizing and liquidating collateral.

Capital-light models

The second essential element of effective capital management is the longer-term ambition to align a business unit with the principles of efficient capital use, what we call the capital-light business model. The gap between principle and practice is often large. For example, two or three products might serve a customer's needs equally well, with no noticeable differences to the customer, but they may have very different implications for the bank's capital requirements. Similarly, a loan can be collateralized in various ways; the choice makes a big difference to the bank's capital. Cash and high-quality securities such as US government bonds are the most capital-effective; real estate is typically much more capital-effective than inventory; and residential real estate is in most countries more effective than commercial real estate. Some banks have taken steps toward a capital-light model by streamlining their processes, actively managing the credit portfolio, and rethinking their securitization strategies. These are good moves, but banks can do more.
We suggest five actions to establish a capital-light business model.

Embrace risk-adjusted pricing. Banks should introduce risk-adjusted pricing for every client type and every product. The move to risk-adjusted pricing began many years ago but is still not standard operating procedure at some banks, where risk is consistently underpriced. Risk-adjusted pricing is an essential first step to maximize value for the bank; the price of a loan must cover the bank's cost of risk, capital, liquidity, and operations.

Sell the right products and get the right collateral. Banks should provide commercial guidelines and tools to help the front line meet two goals: to sell the product with the highest RWA-adjusted return for a given customer need and to boost the level or quality of collateral wherever possible, without jeopardizing that return (Exhibit 5). To achieve both goals, that is, a product and collateral that are optimized for both economic value added (EVA) and RWA returns, frontline staff must have transparency into RWA under Basel II, and their incentive schemes must be set accordingly. Understanding the return on RWA is essential; it may look very different for short-term lending and factoring, or for investment loans and leasing. To help sellers with this, actual risk costs must be reflected in pricing and performance calculations. Staff must be well trained in Basel II regulations and equipped with the necessary tools: simulation tools and pocket guides that help them quickly assess the implications of their choices. Many banks give sellers incentives based on ROE at the time of origination and then freeze this ROE for purposes of compensation. A better approach is to link compensation to ROE at various points in the life cycle of the loan.
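The risk-adjusted pricing principle above (the price of a loan must cover the cost of risk, capital, liquidity, and operations) can be sketched as follows; all component rates are hypothetical:

```python
def risk_adjusted_rate(funding_cost, expected_loss, capital_charge,
                       liquidity_cost, operating_cost):
    """Minimum loan rate that covers risk, capital, liquidity, and
    operating costs (all expressed as annual rates on the loan amount)."""
    return (funding_cost + expected_loss + capital_charge
            + liquidity_cost + operating_cost)

pd_, lgd = 0.02, 0.40          # hypothetical probability of default and LGD
expected_loss = pd_ * lgd      # ~80 bps of expected credit loss
capital_charge = 0.12 * 0.08   # 12% hurdle rate on 8% capital per unit exposure
rate = risk_adjusted_rate(0.03, expected_loss, capital_charge, 0.002, 0.005)
```

A loan priced below this floor destroys value even if it looks profitable against funding costs alone, which is the sense in which risk is "consistently underpriced."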
That creates a situation where, when the client's credit risk deteriorates and its capital consumption increases, the responsible lending officer will follow up and ask for additional collateral or take other action to reduce exposure.

Advise clients on financial ratings. Banks can provide clients with solutions to improve their financial profile and therefore their rating. If successful, clients will reduce their cost of credit, and banks will be able to lower their reserves for products sold to these clients. Clients will also gain from a better understanding of Basel II logic and opportunities. Additional benefits for the bank include a deeper relationship with the client and strong cross-selling opportunities that will emerge from the client's enhanced knowledge.

Develop a superior capability for market placement. Banks should put in place all the skills and infrastructure needed to participate fully in opportunities to sell assets, such as syndication, club deals, private placements (which have proved resilient through the crisis), and, once the crisis has passed, securitizations. Obviously, securitization has become much more difficult and will change further with Basel II; banks must rethink their approach to this business. But even now, when liquidity in most markets remains scarce, disposing of parts of the loan portfolio, either through securitization or syndication, can be a successful strategy to reduce RWAs. To capture the benefits fully, banks need for the moment to follow Basel II–optimized securitization strategies and be ready to update these when the expected changes come through. One thing that is already clear is that securitization structures will have to become more transparent. More standardized credit contracts for securitized loans and more transparent securitization structures, in other words, instruments comparable to the German Pfandbrief and other covered bonds, will have to be adopted.
Scale back business with EVA-negative clients. Often a very few customers (5 percent or less) account for more than 20 percent of RWAs yet do not contribute significantly to the bank's P&L, even before the cost of capital. Banks should sort through their portfolios, especially in corporate banking, to reduce their business with these clients, and they should put in place strict rules on new business to prevent them from acquiring more. Identifying these clients is difficult, however. Some banks may first need to develop a metric for customer profitability, embed it in their systems, and then systematically review the portfolio. They can then reduce their business with EVA-negative clients. Large customers should be discussed at the highest level and each case considered individually. For small customers, a special unit can work to reduce exposures and change the product mix. To keep the problem from getting worse, banks can provide guidelines to frontline staff, as discussed earlier.

Reducing capital wastage and transitioning to a capital-light model are not straightforward undertakings. Banks must lay the groundwork if the efforts are to succeed. In our experience, there are three prerequisites. First, banks must establish clear governance of capital management at every level. This task must be incorporated into the mandate of a senior leader, usually the CFO or CRO, and the people responsible for the activity in every major business unit must report to that leader. Second, the bank must ensure close collaboration among the businesses and the risk and finance groups. A cross-functional team goes a long way toward meeting this goal. Finally, the bank must focus relentlessly on execution. An excellent way to maintain focus is to continually track the increases in capital and the impact on the bottom line.
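The EVA screen described above can be sketched as a simple portfolio filter; the client data, the 10% hurdle rate, and the 8% capital ratio are hypothetical:

```python
HURDLE = 0.10          # hypothetical cost-of-capital hurdle rate
CAPITAL_RATIO = 0.08   # hypothetical capital held per unit of RWA

def eva(profit, rwa):
    """Economic value added: profit minus the cost of capital on RWAs."""
    return profit - HURDLE * CAPITAL_RATIO * rwa

# Each client: (name, annual profit, RWA consumed), in millions.
clients = [("A", 20.0, 500.0), ("B", 40.0, 300.0),
           ("C", 5.0, 900.0), ("D", 15.0, 100.0)]

# Flag EVA-negative clients as candidates for reduced business.
negative = [name for name, profit, rwa in clients if eva(profit, rwa) < 0]
print(negative)  # ['C']
```

In practice the hard part is not the arithmetic but embedding a reliable customer-profitability metric in the bank's systems so that RWA and P&L can be attributed to individual clients.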
Some leading banks have already institutionalized the hunt for capital by setting up RWA- and capital-management units and launching internal competitions to find capital-savings opportunities.

About the authors

Erik Lüders is a consultant in McKinsey's Frankfurt office, Max Neukirchen is a principal in the New York office, and Sebastian Schneider is a principal in the Munich office.
Introduction {#Sec1}
============

Detection of genetic variants such as SNVs, insertions and deletions (INDELs), and structural variants (SVs) is one of the major objectives of next generation sequencing (NGS) in human genome research. Currently, genetic variant calling is based on alignment of raw sequence reads against a reference genome. This alignment-based approach has many limitations, including incompleteness of the genome assembly^[@CR1]^, structural variation in the genomes of normal individuals^[@CR2]^, sequencing errors in reads, and interference of single nucleotide polymorphisms (SNPs) with read mapping^[@CR3]^. Thus, high false positive rates have been reported for alignment-based variant calling. In bacteria and other organisms with small genomes, read sequences can be assembled into long contigs, and variants can then be identified via an assembly-based approach. Although the *de novo* assembly-based approach has been considered the ideal for genetic variant detection^[@CR4],\ [@CR5]^, it has not been widely applied to large and complex genomes. Recently, several attempts to use this approach for human subjects have been reported^[@CR6]--[@CR8]^. Hundreds of thousands of novel mutations were identified in *de novo* assembled personal genomes^[@CR8]^. However, there has been no direct comparison with alignment-based calling to demonstrate the reliability of assembly-based variant calling. It is of interest to see whether SOAPdenovo, a popular genome assembly method used in previous assembly-based studies^[@CR9]--[@CR11]^, is suitable for SNV calling and, by extension, whether the coverage of reads has an impact on the outcomes of genome assembly and SNV calling. In this study, we assessed the performance of SNV calling at various coverages of short reads with contigs generated by SOAPdenovo2^[@CR12]^, the latest version of SOAPdenovo.
We simulated short reads from the whole human genome to compare the assembly-based and alignment-based calling approaches. We assessed the quality of the assembled contigs and determined that at least 30X coverage of sequencing reads was needed to obtain a reliable contig profile. By comparing SNVs called from alignment of assembled contigs and from alignment of reads to the "ground truth" (SNVs introduced into the template reference for simulation), we directly evaluated the performance of the two variant calling approaches. We repeated this analysis with read sets from whole genome sequencing (WGS) of NA24385, an individual whose genome was fully sequenced and analyzed by the Genome In A Bottle (GIAB) consortium. Similar results were obtained with experimental reads. We concluded that although an assembly-based approach (with SOAPdenovo2 as the assembly tool) might serve as a complementary method for SNV discovery, there were many false SNVs and missed calls due to the sequence difference of the two alleles in a diploid genome such as the human genome.

Results {#Sec2}
=======

Study workflow {#Sec3}
--------------

The overall workflow is described in Fig. [1](#Fig1){ref-type="fig"}. First, \~3.6 million variants were randomly selected from a variant pool and introduced into the human reference genome to generate the template genome for read simulation. The simulated sets of reads were generated at coverages of 2X, 5X, 10X, 15X, 20X, 30X, 50X, and 100X. Then, both alignment-based and assembly-based calling pipelines were applied to those read sets for variant calling. The conventional alignment-based variant calling pipeline was implemented with BWA-MEM and GATK, whereas SOAPdenovo2 was used to generate contigs, which were then mapped back to the reference genome by Nucmer. SNV calls from the alignment of assembled contigs were made with the "show-snps" executable in the MUMmer package.
In addition, we applied FermiKit to call variants based on *de novo* assembled unitigs. Recall rate and precision were then calculated for the three variant calling approaches. Finally, variants from the alignment-based and assembly-based processes were compared to identify variants that were missed by the alignment-based but recovered by the assembly-based approach.

Figure 1. Study workflow. For data preparation, simulated reads were generated by VarSim and ART with a pre-set variant pool. Experimental reads from the GIAB project (NA24385) were used for validation. Both alignment-based and assembly-based variant calling approaches were applied to these two data sets for comparison, and the variants called by the different pipelines were compared to the ground truth: variants introduced into the template reference for simulation, or high-confidence germline variants for individual NA24385.

To validate conclusions derived from the simulated data set, we repeated this evaluation process with a data set of WGS reads for individual NA24385^[@CR13]^. We used a "downsampling" approach to create eight subsets of reads at the 2X to 100X coverages used in the simulated sets, by randomly extracting reads from the original set of WGS reads (300X total coverage). We used the high-confidence variant calls on this individual provided by the consortium as ground truth to evaluate the precision and recall rate of the alignment-based and assembly-based approaches.

Quality metrics for de novo assembled contigs {#Sec4}
---------------------------------------------

From the basic statistics of contigs assembled by SOAPdenovo2 with simulated reads, we observed a dramatic increase in the number of contigs and total assembly length between coverages of 2X and 30X. These values stabilized from 30X up to 100X (Table [1](#Tab1){ref-type="table"}).
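The "downsampling" step described above can be sketched as a random draw of reads at the ratio of target to original coverage; this toy version holds reads in memory (production pipelines typically subsample BAM/FASTQ files with dedicated tools), and the read names are fabricated:

```python
import random

def downsample(reads, original_cov, target_cov, seed=0):
    """Keep each read with probability target_cov / original_cov,
    approximating a lower-coverage subset of a WGS read set."""
    frac = target_cov / original_cov
    rng = random.Random(seed)
    return [r for r in reads if rng.random() < frac]

# From a notional 300X read set, keep ~10% of reads to emulate 30X.
reads = [f"read_{i}" for i in range(100_000)]
subset = downsample(reads, original_cov=300, target_cov=30)
```

Because each read is kept independently, the subset preserves the original coverage distribution in expectation, which is what makes the downsampled sets comparable to the simulated ones.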
Even though we did not observe further improvement in the total number and length of contigs between 30X and 100X, the maximum contig length at 100X coverage was double that at 30X, suggesting the benefit of higher read coverage for further extension of assembled contigs. When we repeated this process with the read set from NA24385, however, we observed a continuous modest increase in the number of contigs and total assembly length.

Table 1. Statistics of *de novo* assembly results at different read coverages.

| Coverage | Simulated: # Contigs | Simulated: Total length | Simulated: Max. length | Simulated: N50 | NA24385: # Contigs | NA24385: Total length | NA24385: Max. length | NA24385: N50 |
|---|---|---|---|---|---|---|---|---|
| 2X | 140,788 | 14,283,137 | 2,692 | 95 | 369,652 | 41,624,188 | 6,467 | 127 |
| 5X | 1,162,253 | 18,299,215 | 6,908 | 210 | 2,402,265 | 433,500,650 | 42,569 | 222 |
| 10X | 6,347,989 | 1,367,683,599 | 9,424 | 252 | 8,074,740 | 1,978,632,975 | 42,639 | 307 |
| 15X | 9,550,045 | 2,507,407,976 | 14,375 | 346 | 9,456,270 | 2,778,146,700 | 42,633 | 464 |
| 20X | 9,754,125 | 3,010,504,627 | 29,442 | 556 | 9,394,803 | 3,126,114,481 | 31,095 | 783 |
| 30X | 9,941,090 | 3,322,715,810 | 32,565 | 1351 | 9,335,130 | 3,343,901,059 | 49,256 | 1720 |
| 50X | 9,745,194 | 3,403,686,068 | 66,702 | 3090 | 10,064,305 | 3,467,308,411 | 65,800 | 2450 |
| 100X | 8,279,679 | 3,316,291,539 | 80,998 | 3736 | 11,571,404 | 3,592,236,269 | 72,930 | 2362 |

An *Nx* plot shows the *Nx* values for x ranging from 0 to 100%, where *Nx* is defined as the length of the shortest contig in the smallest set of longest contigs that together cover at least x% of the total assembly length. We used an *Nx* plot to present a better picture of contig continuity as a function of read coverage. We observed a continuous benefit in contig length when increasing read coverage between 2X and 50X (Fig. [2a,d](#Fig2){ref-type="fig"}). There was a large improvement in contig continuity from 30X to 50X, a difference that is not as obvious in the statistics presented in Table [1](#Tab1){ref-type="table"}.
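Using the Nx definition above (N50 is the x = 50 case), the statistic can be computed from a list of contig lengths; the toy contig set is illustrative:

```python
def nx(contig_lengths, x):
    """Nx: length of the shortest contig in the smallest set of longest
    contigs that together cover at least x% of the assembly length."""
    lengths = sorted(contig_lengths, reverse=True)
    threshold = sum(lengths) * x / 100.0
    running = 0
    for length in lengths:
        running += length
        if running >= threshold:
            return length
    return 0

# Toy assembly with total length 2,100 bp.
contigs = [1000, 400, 400, 200, 100]
print(nx(contigs, 50))  # 400
```

Plotting `nx` for x from 0 to 100 yields exactly the Nx curves compared across coverages in Fig. 2a,d.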
In addition, we noticed that contig continuity did not gain much when 100X reads were used.

Figure 2. Contig continuity and genome and gene coverage for de novo assembly with SOAPdenovo2. (**a**,**d**) N statistics for different coverages. (**b**,**e**) Coverage of genome, gene, and exon regions by contigs. (**c**,**f**) Number of genes covered by assembled contigs. Fully covered gene: all regions in the gene were covered by mapped contigs. Partially covered gene: only part of the regions in the gene were covered by mapped contigs. (**a**--**c**) Statistics for simulated reads. (**d**--**f**) Statistics for experimental reads.

We also investigated the coverage of the genome, genes, and exons by the assembled contigs against the coverage of reads used in the *de novo* assembly, by aligning contigs to the reference genome. When we combined fully and partially covered genes or exons, we observed a similar pattern of increasing coverage by assembled contigs of the genome, genes, and exons with increasing read coverage (Fig. [2b,e](#Fig2){ref-type="fig"}). Again, all three curves stabilized at around 80% to 90% when read coverage reached 30X and beyond. By extending the length of the assembled contigs, the increasing coverage also improved the number of fully covered genes and exons. However, for both the simulated and experimental read sets, further increasing read coverage from 50X to 100X did not yield much benefit for the coverage of genes and exons (Fig. [2c,f](#Fig2){ref-type="fig"}). These results were consistent with the results for the continuity of assembled contigs (Fig. [2a,d](#Fig2){ref-type="fig"}). From the quality metrics, we demonstrated that larger read coverage always resulted in better assembly outcomes, whereas the contig profile generated with low coverage (\<=10X) was largely incomplete in gene coverage. For 100 bp paired-end read sets with 50X coverage, almost all genes could be fully or partially covered by *de novo* assembled contigs.
This statistic did not improve much at 100X read coverage. Therefore, for the comparison of variant calling between the alignment-based and assembly-based approaches, we only investigated their performance at read coverages between 10X and 50X.

Performance of variant calling {#Sec5}
------------------------------

We compared the "ground truth" with the variants called at all coverages of simulated reads by both the alignment-based and assembly-based processes to calculate recall rates and precisions, as described in the Methods section. With alignment-based variant calling, the number of true SNVs called increased continuously until 30X read coverage (Fig. [3a](#Fig3){ref-type="fig"}). Interestingly, the number of false SNV calls also increased with coverage over this span. We did not observe a further increase in either true or false SNVs above 30X read coverage, suggesting that 30X coverage is sufficient for the alignment-based approach. More than 99% of SNVs were successfully recalled with 30X read coverage (Table [2](#Tab2){ref-type="table"}). For the assembly-based approach, we also observed a continuous increase in true SNVs with increasing read coverage until it reached a plateau at 30X (Fig. [3b](#Fig3){ref-type="fig"}). It is worth noting that the numbers of true SNVs and total called SNVs from the assembly-based approach were significantly lower than those from the alignment-based approach. With 50X read coverage, the recall rate for the assembly-based approach was only 56% (Table [2](#Tab2){ref-type="table"}). The high false negative rate might be due to low contig coverage of the whole human genome. As shown in Fig. [3c](#Fig3){ref-type="fig"}, the recall rate for the alignment-based approach reached a plateau at 30X read coverage, while the precision curve was fairly steady at around 90% throughout the tested coverage range.
In contrast, we observed two parallel curves of recall rate and precision for the assembly-based approach. Moreover, both the recall rate and precision of assembly-based calling were significantly lower than those of the alignment-based approach.

Figure 3. Performance of variant calling. (**a**) Alignment-based approach. (**b**) Assembly-based approach. Green and red bars represent true SNVs and false positives, respectively. (**c**) Recall and precision of both alignment-based and assembly-based variant calling. Solid red and green lines are the recall rate and precision for alignment-based variant calling, respectively. Dashed red and green lines are the recall rate and precision for assembly-based variant calling, respectively, and the dashed blue line is the recovery rate of variants missed by alignment-based variant calling.

Table 2. Variant calling results from the alignment-based and assembly-based approaches.

| | 10X | 15X | 20X | 30X | 50X |
|---|---|---|---|---|---|
| **Alignment-based variant calling** | | | | | |
| Recall | 0.89 | 0.96 | 0.98 | 0.99 | 0.99 |
| Precision | 0.90 | 0.90 | 0.89 | 0.89 | 0.88 |
| Missed variants | 341,558 | 120,804 | 54,591 | 25,788 | 17,850 |
| **Assembly-based variant calling** | | | | | |
| Recall | 0.24 | 0.43 | 0.50 | 0.54 | 0.56 |
| Precision | 0.67 | 0.81 | 0.87 | 0.92 | 0.93 |
| Recovered variants | 10,843 | 9,060 | 5,756 | 2,892 | 1,797 |
| Recovery rate | 3% | 7% | 11% | 11% | 10% |

We further investigated variants that were missed by the alignment-based variant calling process and checked how many of them were recovered by the assembly-based approach. As shown in Table [2](#Tab2){ref-type="table"}, even with 50X read coverage, the alignment-based approach missed 17,850 SNVs, of which 10% were recovered by the assembly-based approach. These results suggest that the assembly-based approach could be used as a complement to the alignment-based approach. However, at lower coverages, the assembly-based variant calling could not recover a significant number of SNVs due to the low quality of the assembled contigs.
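The recall and precision figures in Table 2 follow the standard definitions (recall = TP/(TP+FN), precision = TP/(TP+FP)); comparing a call set against the ground truth can be sketched as follows, with toy variants:

```python
def evaluate(calls, truth):
    """Return (recall, precision) for a call set against ground truth,
    where both are sets of (chrom, pos, ref, alt) tuples."""
    tp = len(calls & truth)
    recall = tp / len(truth) if truth else 0.0
    precision = tp / len(calls) if calls else 0.0
    return recall, precision

truth = {("1", 100, "A", "G"), ("1", 200, "C", "T"), ("2", 50, "G", "A")}
calls = {("1", 100, "A", "G"), ("2", 50, "G", "A"), ("2", 99, "T", "C")}
recall, precision = evaluate(calls, truth)  # 2/3 each
```

The "recovered variants" row is then the intersection of the assembly-based call set with the variants in `truth - alignment_calls` (a hypothetical set name for the alignment-based approach's misses).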
Finally, we explored possible reasons why so many SNVs failed to be called by the assembly-based approach by examining the allele types of the SNVs introduced into the reads at all coverages. As shown in Table [3](#Tab3){ref-type="table"}, SNVs recalled by the assembly-based approach were predominantly homozygous, whereas SNVs that failed to be recalled were predominantly heterozygous. This finding indicates a systematic error in the assembly-based approach that biases recall toward homozygous SNVs.

Table 3. Ratio of the two allele types within SNVs recalled and missed by the assembly-based approach.

| | 10X | 15X | 20X | 30X | 50X |
|---|---|---|---|---|---|
| Homo/hetero ratio: recalled SNVs | 2.04 | 1.79 | 1.72 | 1.65 | 1.55 |
| Homo/hetero ratio: missed SNVs | 0.47 | 0.29 | 0.21 | 0.17 | 0.17 |

We also investigated the performance of INDEL discovery by both approaches. For the alignment-based approach, precision was constantly maintained at around 80%, and recall rates gradually improved from 52% to 63% at coverages of 10X and 30X, respectively. As with SNV calling, the performance of INDEL calling was indistinguishable between read coverages of 30X and 50X (Suppl Table [s1](#MOESM1){ref-type="media"}). In contrast, the precision and recall rate of INDEL calling by the assembly-based approach dropped to 13% and 11%, respectively, indicating that it is unsuitable for INDEL calling. We repeated the evaluation process with real experimental reads for NA24385 at coverages of 30X, 50X, and 100X (Table [s3](#MOESM1){ref-type="media"}). The variant calling performance of the assembly-based approach was indistinguishable across the three coverages. This result provided further evidence that increasing short-read coverage to 100X is unnecessary for genome assembly with SOAPdenovo2. For experimental reads at 50X coverage (Table [4](#Tab4){ref-type="table"}), the precision and recall rate for both the alignment-based and assembly-based approaches were very comparable to the results from simulated reads (Table [3](#Tab3){ref-type="table"}).
Further assembly of contigs into scaffolds did not appear to improve either the recall rate or the precision of SNV calling. Besides MUMmer, we also tried asmVAR^[@CR14]^, which uses LAST^[@CR15]^ as the alignment tool for contig-based variant calling. We observed a higher recall rate (0.74 vs. 0.51) but a lower precision (0.74 vs. 0.94) compared with the results from MUMmer (Table [3](#Tab3){ref-type="table"}). However, when we used FermiKit, which resolves the haplotype of contigs to uncover SNVs, its precision and recall rate were very comparable to those yielded by the alignment-based approach (Table [4](#Tab4){ref-type="table"}). In addition, FermiKit recovered 3,045 SNVs, accounting for 44% of the SNVs missed by the BWA-GATK germline SNV calling process (Fig. [4](#Fig4){ref-type="fig"}). FermiKit's ability to call INDELs also increased dramatically (Table [s2](#MOESM1){ref-type="media"}).

Table 4. Variant calling performance of different approaches with 50X coverage of experimental reads.

| SNV | Alignment-based approach | Assembly-based approach (SOAPdenovo) | Assembly-based approach (SOAPdenovo) | Assembly-based approach (SOAPdenovo) | Unitig-based approach |
|---|---|---|---|---|---|
| Algorithm | BWA-GATK | MUMmer | MUMmer | asmVAR | FermiKit |
| Input type | Reads | Contigs | Scaffolds | Contigs | Unitigs |
| TP | 3,503,529 | 1,803,891 | 1,810,100 | 2,613,576 | 3,434,979 |
| FP | 528,789 | 115,787 | 189,185 | 927,616 | 376,945 |
| FN | 6,813 | 1,706,451 | 1,700,242 | 896,766 | 75,363 |
| Recall | 0.99 | 0.51 | 0.52 | 0.74 | 0.98 |
| Precision | 0.87 | 0.94 | 0.91 | 0.74 | 0.88 |

Figure 4. Venn diagram comparing the performance of three variant calling algorithms against the ground truth set retrieved from the GIAB project. Alignment: variant calling with the BWA-GATK pipeline; Assembly: variant calling with SOAPdenovo-MUMmer; Unitig: variant calling with FermiKit; Ground truth: high-confidence calls provided by GIAB.

Moreover, we investigated the genomic regions of the variants called by each approach, annotated by Annovar based on refGene (<http://refgene.com>). As a result, we observed that variant calls in all genomic regions (coding sequences (CDS), intronic, untranslated regions (UTRs), etc.)
were evenly distributed, showing no significant difference in recall (Fig. [5a](#Fig5){ref-type="fig"}). We did, however, see lower precision in CDS and intergenic regions compared with other genomic regions for the alignment-based approach (Fig. [5a](#Fig5){ref-type="fig"}). Because this bias occurred only with the alignment-based approach and not with the other two (assembly-based and unitig-based), it suggests that the assembly-based approach can partially correct such bias. Since the genome of NA24385 has been marked with high- and low-confidence regions based on sequence coverage by multiple sequencing platforms^[@CR16]^, we examined the possible link between confidence regions and SNV calling performance. As expected, regardless of which approach was used, most true positive calls were located in the high-confidence regions, whereas false positive calls were most likely located in the low-confidence regions (p \< 0.001, Fig. [5b](#Fig5){ref-type="fig"}).

Figure 5. Genomic locations of variants called by the three approaches. (**a**) Recall and precision for each genomic region. Red, blue, and green bars represent the alignment-based, assembly-based, and unitig-based approaches, respectively. (**b**) Distribution of SNV calls in high- and low-confidence regions. For each bar, the dark color represents SNVs called in high-confidence regions and the light color represents SNVs in low-confidence regions. The difference in the high-confidence SNV ratio between true positives and false positives is significant (p \< 0.001).

Discussion {#Sec6}
==========

Although alignment-based variant calling is commonly used to identify genetic variants in human genomes, a high level of false positive variant calls remains an issue due to multiple factors, such as incompleteness of the reference genome and the large number of SNPs and structural variants among individuals that lead to mapping bias.
Another approach is to use long contigs assembled from short reads to detect variants by comparison with a reference genome. The assembly-based approach has been widely used in the analysis of genomes of monoploid organisms such as bacteria^[@CR17]--[@CR19]^. Recent studies have tried the assembly-based approach on human genomes and reported hundreds of thousands of variants that lacked ground truth or supporting validation. The validity of assembly-based calling hence remains questionable. In this study, we used a random selection of \~3.6 million variants from a pool of 505 million variants to simulate short reads at various coverages. We used those sets of simulated short reads to address the following questions: (1) What is the minimum read coverage that yields a high recall rate and precision for the alignment-based approach? (2) What is the minimum read coverage needed to obtain good assembled contigs? (3) Can the assembly-based approach provide reliable variant calls and thus serve a complementary role in variant detection for human genome research? With SOAPdenovo2, the latest version of the assembler used in previous assembly-based studies^[@CR20]^, we assembled the simulated short reads into long contigs. After examining several quality metrics for the assembly outcome, such as continuity and contig coverage of the reference genome, genes, and exons, we observed a continuous benefit from increasing read coverage until it reached 50X, at which point almost all genes could be fully or partially covered by the *de novo* assembled contigs. Further increasing read coverage to 100X did not seem to increase contig continuity or gene coverage. However, in order to achieve higher genome coverage and more fully covered genes from the assembled contigs, we recommend using more than 30X coverage of short reads. We also examined the effect of read coverage on the recall rate and precision of the alignment-based approach.
When read coverage was lower than 10X, a large number of SNVs were missed. However, with 15X read coverage, more than 96% of SNVs were successfully recalled, and when read coverage reached 30X, 99% of the SNVs were recalled. On the other hand, the precision of SNV calling remained constant at around 90% throughout the tested range, suggesting that the false positive problem of the alignment-based approach cannot be resolved simply by increasing read coverage. We used MUMmer to align the assembled contigs against the reference genome and called SNVs with a module in the MUMmer package. MUMmer is a well-tested alignment tool for long sequence queries against a large reference genome because of its speed, accuracy, and scalability^[@CR21]^. We therefore developed a framework around MUMmer that not only performs quality assessment of genome assembly outcomes but also carries out assembly-versus-reference alignment as the underlying driving engine and makes variant calls directly. With our framework, we were able to examine the recall rate and precision of the assembly-based variant calling process. We observed that the numbers of true SNVs and total called SNVs from the assembly-based approach were significantly lower than those from the alignment-based approach. With 50X read coverage, the recall rate for the assembly-based approach was only 56%. The curves of recall rate and precision versus read coverage were parallel, suggesting that, unlike for the alignment-based approach, increasing read coverage for the assembly-based approach affects both false positive and false negative SNV calls (Fig. [3c](#Fig3){ref-type="fig"}). These results were confirmed when the process was repeated with experimental reads.
At each read coverage, we examined the SNVs missed by the alignment-based approach to calculate the percentage of those SNVs recovered by the assembly-based process. To our surprise, even at high coverage of short reads, only \~11% of the SNVs missed by the alignment-based approach were recovered by the assembly-based approach, suggesting that the complementary effect of the assembly-based approach is small. Finally, we investigated possible reasons for the low recall rate of the assembly-based approach by examining the allele types of the recalled and missed SNVs. We observed that SNVs recalled by the assembly-based approach were predominantly homozygous, whereas SNVs that failed to be recalled were predominantly heterozygous, indicating a systematic error in the assembly-based approach that biases recall toward homozygous SNVs. The underlying algorithm in SOAPdenovo2 is a *de Bruijn* graph that requires generation of graph nodes from k-mer seed sequences. All possible combinations of the graph nodes are searched within the entire input of reads. When reads differ from each other only because of allele differences or sequencing errors, the consensus sequence for that group of reads is used. This error-correction process thus collapses reads from the two alleles into a single haplotype. As a result, homozygous SNVs are called correctly with the assembly-based approach, whereas heterozygous SNVs have no more than a 50% chance of being called correctly. Our results demonstrate that contigs generated by SOAPdenovo2 do not perform well for SNV calling on the human genome, primarily because of the loss of information on read coverage and diploidy. Improvements in assembly or variant calling that overcome these limitations might lead to better performance and make a contig-based approach more useful.
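The collapse described above can be mimicked with a toy simulation (hypothetical code, not the authors'): reads sampled evenly from two alleles at one site are reduced to a single consensus base, so the ALT allele of a heterozygous site survives only about half the time, no matter how deep the coverage is.

```python
import random
random.seed(0)

def consensus_base(reads):
    # Collapse a pile of read bases at one site into a single consensus,
    # mimicking the error-correction step that merges the two alleles.
    counts = {}
    for b in reads:
        counts[b] = counts.get(b, 0) + 1
    return max(sorted(counts), key=counts.get)

def het_site_recalled(depth, ref="A", alt="G", trials=10000):
    # A heterozygous SNV is "recalled" only if the consensus happens to
    # keep the ALT allele; otherwise the contig carries REF and the SNV
    # is lost from the assembly.
    hits = 0
    for _ in range(trials):
        reads = [random.choice((ref, alt)) for _ in range(depth)]
        if consensus_base(reads) == alt:
            hits += 1
    return hits / trials

print(het_site_recalled(31))  # stays near (or below) 0.5 regardless of depth
```

Raising the depth sharpens the consensus but never breaks the 50/50 symmetry between alleles, which matches the recall ceiling for heterozygous SNVs argued in the text.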
In fact, when we used FermiKit, an assembly tool that preserves haplotype information in the assembled contigs/unitigs, we observed precision and recall rates at levels very similar to those of the alignment-based approach. With the simulated and experimental data we evaluated the effect of read coverage on *de novo* genome assembly with SOAPdenovo2, and on SNV calling with the alignment-based and assembly-based approaches. We conclude that the higher the coverage of short reads, the better the assembly outcomes. At least 50X read coverage was required to obtain good assembled contigs covering 80% of the human genome. For alignment-based SNV calling, more than 99% of SNVs could be accurately recalled at 30X read coverage, whereas only 56% of SNVs could be recalled by the assembly-based process at 50X read coverage. Moreover, the assembly-based process recovered merely 11% of the SNVs missed by the alignment-based approach. The low recovery rate of SNVs by the assembly-based approach was due to the inability of SOAPdenovo2 to produce haplotype-resolved assembled contigs.

Methods {#Sec7}
=======

Data simulation {#Sec8}
---------------

We used VarSim^[@CR22]^ and ART^[@CR23]^ to simulate short reads with a fixed variant pool. The variants were obtained from the VarSim website as described in the quick start demo (<http://web.stanford.edu/group/wonglab/varsim/>), including SNVs, INDELs and SVs, primarily from dbSNP (build 144). In addition, we added an extra 400,000 SNVs from sample NA12878, reported by the Genome in a Bottle (GIAB) project (<https://sites.stanford.edu/abms/giab>), thus creating a pool of 505 million variants. We randomly selected \~3.6 million variants from this pool and introduced them into the human reference genome (hs37d5) with VarSim to create a template genome.
We then applied this template genome to ART and simulated 100 bp paired-end reads at coverages of 2X, 5X, 10X, 15X, 20X, 30X, and 50X while introducing random errors. The fragment size of the paired-end reads was set to a mean of 350 bp with a standard deviation (SD) of 50 bp.

Whole genome sequencing reads for NA24385 {#Sec9}
-----------------------------------------

Raw sequence reads for an individual, NA24385 (ftp://ftp-trace.ncbi.nlm.nih.gov/giab/ftp/data/AshkenazimTrio/HG002_NA24385_son/NIST_HiSeq_HG002_Homogeneity-10953946/HG002_HiSeq.300x_fastq/), were downloaded from the GIAB official website. The genomic DNA was sequenced on Illumina HiSeq with 148 bp paired-end reads at 300X coverage. Of a total of 935 paired-end fastq files, each containing 4 million reads, we pooled the first 102, 167, and 327 files to create data sets with coverages of 30X, 50X, and 100X, respectively. In addition, germline variants in this individual have been well characterized by the GIAB with various technology platforms and different bioinformatics discovery tools, and high-confidence variant calls have been released by the consortium as references. We downloaded the recent release of VCF files (v3.2.2) for NA24385 (ftp://ftp-trace.ncbi.nlm.nih.gov/giab/ftp/release/AshkenazimTrio/HG002_NA24385_son/NISTv3.2.2/) and the associated BED files for high-confidence genomic regions from NCBI for this study, where high-confidence genomic regions were defined based on coverage by sequencing reads from various NGS platforms, consistency of genotype calling, and sequence homology (<https://github.com/genome-in-a-bottle/giab_FAQ>).

De novo genome assembly {#Sec10}
-----------------------

We used SOAPdenovo2 to perform *de novo* assembly with the simulated reads.
Since we did not include jumping reads (paired-end reads with a large insert size, used for building scaffolds), we ran only the first two steps of SOAPdenovo2, pregraph and contig, with a seed size of 63-mers. We applied the same assembly parameters for all coverages of the simulated reads. We used 48 CPUs on a single node of the HPC cluster, with 2000 GB of memory for each assembly run.

Contig quality assessment {#Sec11}
-------------------------

An in-house software package was used to assess the contig quality metrics and the coverage of the reference genome. This in-house tool, developed in Java, maximized the usage of available computational resources by performing contig alignment and post-processing in parallel. Its flexible design allowed split jobs to run either on a high performance computing (HPC) cluster or a multi-core workstation. Based on carefully filtered alignments, it generated statistics such as total genome coverage, gene and exon coverage, and contig duplication, as well as SNVs embedded in the assembly. The framework also provided stand-alone quality statistics such as the contig size distribution, N*x* statistics, etc.

Alignment-based and assembly-based variant calling {#Sec12}
--------------------------------------------------

For the alignment-based variant calling process we first mapped the simulated reads against the human reference genome (hs37d5, the same version used for the simulation) with BWA-MEM^[@CR24]^. We then used Picard (<http://broadinstitute.github.io/picard/>, Version 1.110) to mark and remove duplicate reads, and to sort and index the alignment BAM files, before applying the HaplotypeCaller in the GATK package (Version 3.1.1)^[@CR25]^ for the final variant calling. We used MUMmer^[@CR21]^ to map contigs onto the human reference genome and then called variants via the Nucmer program.
Nucmer (NUCleotide MUMmer) was designed for standard DNA sequence alignment and can handle multiple reference and multiple query sequences^[@CR26]^. Since Nucmer could not use the whole human genome as a single reference, we ran each chromosome separately and then selected the best match of each contig across all chromosomes. For instance, if one contig matched multiple chromosomes, only the chromosome with the best matching score was selected. We also tried the read variant calling pipeline (BWA-MEM & GATK) to call variants from contigs; however, its performance was not as good as that of Nucmer. We also used AsmVar^[@CR14]^ to map contigs onto the human reference genome and derive variants. To speed up the analysis, we divided the input contig file into 40 partitions and aligned each file separately to the reference using the lastal and last-split programs of the LAST package. More specifically, the minimum score for gapped alignments was set to 25, and the mismatch cost was set to 3 for the lastal program. For the last-split program the minimum alignment score was set to 35. Default values were used for the remaining parameters of both programs. After the alignments were computed for each contig in separate files, the results were output in multiple alignment format (MAF) by the alignment tool, and these MAF files were then merged into a single file. We then used the ASV_VariantDetector in the AsmVar package to call the SNVs for each chromosome separately with the default parameters. FermiKit was used to call variants via a *de novo* assembly-based method^[@CR27],\ [@CR28]^. Unlike other *de novo* assembly approaches, FermiKit assembles unitigs instead of contigs, as a lossless representation of the reads^[@CR27],\ [@CR28]^. The FermiKit used in this study was downloaded from GitHub (Sept. 2016) and applied to the experimental reads dataset, which contained 50X coverage of 150 bp paired-end reads.
In detail, the genome size was set to 3g (human) and 16 CPU cores were used in the process. After obtaining results from the alignment-based and assembly-based variant calling processes, we calculated recall rates and precision for both approaches. Specifically, the recall rate is the fraction of true SNVs that were called, i.e. TP/(TP + FN), and precision is the fraction of true SNVs among all called SNVs (TP/P), where TP (True Positive) is the number of real positive SNVs that were correctly called, FN (False Negative) is the number of real positive SNVs wrongly called as negative, and P represents all called SNVs. In addition, we closely examined the variants called by both approaches, as well as the number of variants that were missed by the alignment-based approach but recovered by the assembly-based approach. Annovar^[@CR29]^ was applied for genomic region analysis, annotating the genomic region of all variants called by the three approaches based on the refGene database. The variants were also annotated with a confidence tag retrieved from the GIAB project.

Disclaimer {#Sec13}
----------

The views presented in this article do not necessarily reflect current or future opinion or policy of the US Food and Drug Administration. Any mention of commercial products is for clarification and not intended as endorsement.

Electronic supplementary material
=================================

{#Sec14}

Supplementary tables

**Electronic supplementary material**

**Supplementary information** accompanies this paper at doi:10.1038/s41598-017-10826-9

**Publisher's note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

LW is grateful to the National Center for Toxicological Research (NCTR) of the U.S. Food and Drug Administration (FDA) for postdoctoral support through the Oak Ridge Institute for Science and Education (ORISE).
This work utilized the computational resources of the NIH HPC Biowulf cluster (<http://hpc.nih.gov>). W.X. conceived and designed this study. L.W. and G.Y. performed data analysis. W.X., L.W., W.T., and H.H. wrote the manuscript.

Competing Interests {#FPar1}
===================

The authors declare that they have no competing interests.
Europe can no longer count on the US for defense and must take matters into its own hands, German Chancellor Angela Merkel stated during a meeting with French President Emmanuel Macron, who also said that “something should be done.” “It’s no longer the case that the United States will simply just protect us,” Merkel said in a speech honoring President Macron, who came to Aachen to receive the prestigious Charlemagne Prize. Receiving a round of applause, Merkel stated: “Rather, Europe needs to take its fate into its own hands. That’s the task for the future.” Europe has to “act together and speak with one voice,” she said, as cited by Germany’s Die Welt newspaper. “But let's be honest: Europe is still in its infancy with regard to the common foreign policy.” Speaking after Merkel, Macron said: “We should not be waiting, we must do something right now. Let us not be weak.” Last year, Merkel made a similar statement, urging Europe to become less dependent on its transatlantic ally. “The times in which we could completely depend on others are on the way out. I've experienced that in the last few days,” she told a crowd a day after attending the G7 summit in Italy. The German chancellor, who secured her fourth term earlier this year, reiterated that Europeans “must really take our destiny into our own hands, of course in friendship with the United States, in friendship with Great Britain, with good neighborly relations wherever possible, also with Russia and other countries.” Nevertheless, countries within the EU “have to know that we have to fight for our future and our fate ourselves as Europeans.” Merkel’s statement comes shortly after US President Donald Trump announced his decision to withdraw from the 2015 Iran nuclear deal, prompting a backlash from members of the accord, including Germany.
On Wednesday, the German chancellor said: “We will remain committed to this agreement and will do everything to ensure that Iran complies with its obligations.” While Merkel avoided openly criticizing Trump, German Foreign Minister Heiko Maas earlier accused the US leader of “not insignificantly throwing back the efforts to bring stability to the region.” Describing Trump's decision as “incomprehensible,” Maas said the move would undermine confidence in international treaties. On Thursday, he reiterated that it is crucial for Iran to stick to its obligations under the international nuclear deal, and that Moscow should use its influence on Tehran in this regard. Speaking after talks with Russian Foreign Minister Sergei Lavrov, Maas also said that Berlin and Moscow agreed that the Iran nuclear agreement should be upheld. Moscow recently said it believes there are ways to guarantee continued cooperation between Iran and the other parties to the deal despite Washington’s attempts to disrupt it. “There are means to guarantee that this cooperation would continue despite the attempts to deter it,” Russian Deputy Foreign Minister Sergey Ryabkov said.
Q: How to send memory stream using C# sockets?

I'm trying to write code for a server socket which sends the content of a MemoryStream over the network using System.Net.Sockets. I've tried the code below to send the content of the memory stream, which isn't null, but the program didn't work. What is the problem with the code and how can I solve it? If there's another way, can you help me?

C# code for the client:

    using (var ms = new MemoryStream())
    {
        byte[] buffer = new byte[1024];
        int read = 0;
        while ((read = NetStream.Read(buffer, 0, 1024)) != 0)
        {
            ms.Write(buffer, 0, read);
        }
        ms.Position = 0;
        stream.Close();
        client.Close();
        return ms;
    }

C# code for the server:

    byte[] buffer = new byte[1024];
    MemoryStream ms = new MemoryStream();
    ms = response; // response is also NOT null
    ms.Position = 0;
    int read = 0;
    while ((read = ms.Read(buffer, 0, 1024)) != 0)
    {
        stream.Write(buffer, 0, read);
    }
    ms.Close();
    response.Close();
    stream.Flush();
    Console.WriteLine("DONE.");

The results from the debugger: the server continues in the program immediately, while the client gets stuck.

NOTE: the code is from this answer: DataSet & NetworkStream in C#

A: I've figured this out with the help of the answers here. First, wait for data from the sender (applies to both loops), and second, just after reading, check whether more data is available; this way it doesn't wait for nothing.

    while (true)
    {
        if (clientStream.DataAvailable)
        {
            while ((i = clientStream.Read(bytesBuffer, 0, bytesBuffer.Length)) != 0)
            {
                // Write only the `i` bytes actually read, not the whole buffer.
                memoryStream.Write(bytesBuffer, 0, i);
                if (clientStream.DataAvailable)
                    continue;
                else
                    break;
            }
            Console.WriteLine("Received from server {0}",
                Encoding.ASCII.GetString(memoryStream.ToArray()));
            break;
        }
        else
        {
            continue;
        }
    }

Thanks!
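The underlying issue in the question is that a blocking `Read` only returns 0 once the peer closes the connection, and polling `DataAvailable` is racy (data still in flight looks like end-of-stream). A more robust pattern is to length-prefix each message so the receiver knows exactly how many bytes to expect. A sketch of that idea, in Python for brevity (the function names are illustrative; the same framing ports to C# with `BitConverter` and `NetworkStream`):

```python
import socket
import struct

def send_msg(sock, payload: bytes) -> None:
    # 4-byte big-endian length prefix, then the payload itself.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n: int) -> bytes:
    # Loop until exactly n bytes arrive; a single recv() may return less.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (size,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, size)

if __name__ == "__main__":
    a, b = socket.socketpair()  # stand-in for a real client/server pair
    send_msg(a, b"memory stream contents")
    print(recv_msg(b))          # b'memory stream contents'
```

With framing in place, neither side needs `DataAvailable` at all: the receiver reads the 4-byte header, then exactly that many bytes, and blocks only while real data is pending.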
Francisella tularensis, the causative agent of tularemia, is a Gram-negative facultative intracellular bacterium. Because of its extreme infectivity, high virulence, and ease of dissemination, F. tularensis is considered a Category A potential agent of bioterrorism. The virulence mechanisms of F. tularensis are poorly understood, and there are no licensed vaccines against tularemia. Our long-term goals are: (i) to understand at the molecular level the pathogenic mechanisms of F. tularensis; and (ii) to identify molecular targets for effective vaccines and other therapeutics against F. tularensis infections in humans. This application is based on our recent genome-wide search for virulence determinants in F. tularensis strain LVS by signature-tagged mutagenesis (STM). This study identified 95 F. tularensis genes that are required for the in vivo survival of this pathogen. Among these are three adjacent genes (capB, capC, and capA) in a single gene locus. The capBCA locus closely resembles the biosynthetic gene locus of the poly-γ-glutamic acid (PGA) capsule in Bacillus anthracis, the anthrax agent. The B. anthracis capsule is a major virulence factor. However, PGA has not been described in any Gram-negative bacteria or any intracellular bacterial pathogen. Based on our exciting preliminary findings, we hypothesize that the F. tularensis capBCA locus promotes the virulence of F. tularensis by producing PGA. The Specific Aims are: (i) to characterize the F. tularensis capBCA locus, (ii) to determine the PGA polymerase activity of the capBCA locus, and (iii) to determine how capB enhances F. tularensis intracellular growth and virulence. Our expertise and existing tools give us a unique capability to accomplish the goal of this application. We will be the first to study a potential PGA biosynthetic system in Gram-negative bacteria.
The data from the proposed studies will provide valuable information on the significance of PGA in the biology and pathogenesis of Gram-negative bacteria, including F. tularensis. PGA may be an attractive vaccine target for tularemia if it proves to be surface-exposed like its counterparts in the Gram-positive pathogens. PUBLIC HEALTH RELEVANCE: The bacterium Francisella tularensis is the causative agent of tularemia and also a bioweapon agent due to its extreme infectivity and ability to cause mortality. There are no licensed vaccines against tularemia. F. tularensis can infect a wide range of hosts, from ticks to humans; however, the mechanisms behind this extreme versatility are unknown. This project will study three bacterial genes that are potentially responsible for synthesis of an unconventional amino acid polymer called poly-glutamic acid (PGA). As the sole constituent of the capsule (coating structure) of Bacillus anthracis, the causative agent of anthrax, PGA protects that bacterium from host clearance during infection of mammalian hosts. We will pursue our hypothesis that F. tularensis also produces PGA to enhance its adaptation when the bacterium is located in different microenvironments. This information will significantly enhance our knowledge of the disease pathogenesis of F. tularensis. PGA could be used to develop vaccines against F. tularensis infection if it is displayed on the surface of F. tularensis, as is its counterpart in B. anthracis.
# Copyright (c) 2010-2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. import os import time import random import shutil import tempfile import unittest from mock import patch, call, DEFAULT import eventlet from swift.account import reaper from swift.account.backend import DATADIR from swift.common.exceptions import ClientException from swift.common.utils import normalize_timestamp, Timestamp from test import unit from swift.common.storage_policy import StoragePolicy, POLICIES class FakeBroker(object): def __init__(self): self.info = {} def get_info(self): return self.info class FakeAccountBroker(object): def __init__(self, containers, logger): self.containers = containers self.containers_yielded = [] def get_info(self): info = {'account': 'a', 'delete_timestamp': time.time() - 10} return info def list_containers_iter(self, limit, marker, *args, **kwargs): if not kwargs.pop('allow_reserved'): raise RuntimeError('Expected allow_reserved to be True!') if kwargs: raise RuntimeError('Got unexpected keyword arguments: %r' % ( kwargs, )) for cont in self.containers: if cont > marker: yield cont, None, None, None, None limit -= 1 if limit <= 0: break def is_status_deleted(self): return True def empty(self): return False class FakeRing(object): def __init__(self): self.nodes = [{'id': '1', 'ip': '10.10.10.1', 'port': 6202, 'device': 'sda1'}, {'id': '2', 'ip': '10.10.10.2', 'port': 6202, 'device': 'sda1'}, {'id': '3', 'ip': '10.10.10.3', 'port': 6202, 'device': 
None}, {'id': '4', 'ip': '10.10.10.1', 'port': 6202, 'device': 'sda2'}, {'id': '5', 'ip': '10.10.10.1', 'port': 6202, 'device': 'sda3'}, ] def get_nodes(self, *args, **kwargs): return ('partition', self.nodes) def get_part_nodes(self, *args, **kwargs): return self.nodes acc_nodes = [{'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}] cont_nodes = [{'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}, {'device': 'sda1', 'ip': '', 'port': ''}] @unit.patch_policies([StoragePolicy(0, 'zero', False, object_ring=unit.FakeRing()), StoragePolicy(1, 'one', True, object_ring=unit.FakeRing(replicas=4))]) class TestReaper(unittest.TestCase): def setUp(self): self.to_delete = [] self.myexp = ClientException("", http_host=None, http_port=None, http_device=None, http_status=404, http_reason=None ) def tearDown(self): for todel in self.to_delete: shutil.rmtree(todel) def fake_direct_delete_object(self, *args, **kwargs): if self.amount_fail < self.max_fail: self.amount_fail += 1 raise self.myexp if self.reap_obj_timeout: raise eventlet.Timeout() def fake_direct_delete_container(self, *args, **kwargs): if self.amount_delete_fail < self.max_delete_fail: self.amount_delete_fail += 1 raise self.myexp def fake_direct_get_container(self, *args, **kwargs): if self.get_fail: raise self.myexp if self.timeout: raise eventlet.Timeout() objects = [{'name': u'o1'}, {'name': u'o2'}, {'name': u'o3'}, {'name': u'o4'}] return None, [o for o in objects if o['name'] > kwargs['marker']] def fake_container_ring(self): return FakeRing() def fake_reap_object(self, *args, **kwargs): if self.reap_obj_fail: raise Exception def prepare_data_dir(self, ts=False, device='sda1'): devices_path = tempfile.mkdtemp() # will be deleted by teardown 
self.to_delete.append(devices_path) path = os.path.join(devices_path, device, DATADIR) os.makedirs(path) path = os.path.join(path, '100', 'a86', 'a8c682d2472e1720f2d81ff8993aba6') os.makedirs(path) suffix = 'db' if ts: suffix = 'ts' with open(os.path.join(path, 'a8c682203aba6.%s' % suffix), 'w') as fd: fd.write('') return devices_path def init_reaper(self, conf=None, myips=None, fakelogger=False): if conf is None: conf = {} if myips is None: myips = ['10.10.10.1'] r = reaper.AccountReaper(conf) r.myips = myips if fakelogger: r.logger = unit.debug_logger('test-reaper') return r def fake_reap_account(self, *args, **kwargs): self.called_amount += 1 def fake_account_ring(self): return FakeRing() def test_creation(self): # later config should be extended to assert more config options r = reaper.AccountReaper({'node_timeout': '3.5'}) self.assertEqual(r.node_timeout, 3.5) def test_delay_reaping_conf_default(self): r = reaper.AccountReaper({}) self.assertEqual(r.delay_reaping, 0) r = reaper.AccountReaper({'delay_reaping': ''}) self.assertEqual(r.delay_reaping, 0) def test_delay_reaping_conf_set(self): r = reaper.AccountReaper({'delay_reaping': '123'}) self.assertEqual(r.delay_reaping, 123) def test_delay_reaping_conf_bad_value(self): self.assertRaises(ValueError, reaper.AccountReaper, {'delay_reaping': 'abc'}) def test_reap_warn_after_conf_set(self): conf = {'delay_reaping': '2', 'reap_warn_after': '3'} r = reaper.AccountReaper(conf) self.assertEqual(r.reap_not_done_after, 5) def test_reap_warn_after_conf_bad_value(self): self.assertRaises(ValueError, reaper.AccountReaper, {'reap_warn_after': 'abc'}) def test_reap_delay(self): time_value = [100] def _time(): return time_value[0] time_orig = reaper.time try: reaper.time = _time r = reaper.AccountReaper({'delay_reaping': '10'}) b = FakeBroker() b.info['delete_timestamp'] = normalize_timestamp(110) self.assertFalse(r.reap_account(b, 0, None)) b.info['delete_timestamp'] = normalize_timestamp(100) 
self.assertFalse(r.reap_account(b, 0, None)) b.info['delete_timestamp'] = normalize_timestamp(90) self.assertFalse(r.reap_account(b, 0, None)) # KeyError raised immediately as reap_account tries to get the # account's name to do the reaping. b.info['delete_timestamp'] = normalize_timestamp(89) self.assertRaises(KeyError, r.reap_account, b, 0, None) b.info['delete_timestamp'] = normalize_timestamp(1) self.assertRaises(KeyError, r.reap_account, b, 0, None) finally: reaper.time = time_orig def test_reset_stats(self): conf = {} r = reaper.AccountReaper(conf) self.assertDictEqual(r.stats_return_codes, {}) self.assertEqual(r.stats_containers_deleted, 0) self.assertEqual(r.stats_containers_remaining, 0) self.assertEqual(r.stats_containers_possibly_remaining, 0) self.assertEqual(r.stats_objects_deleted, 0) self.assertEqual(r.stats_objects_remaining, 0) self.assertEqual(r.stats_objects_possibly_remaining, 0) # also make sure reset actually resets values r.stats_return_codes = {"hello": "swift"} r.stats_containers_deleted = random.randint(1, 100) r.stats_containers_remaining = random.randint(1, 100) r.stats_containers_possibly_remaining = random.randint(1, 100) r.stats_objects_deleted = random.randint(1, 100) r.stats_objects_remaining = random.randint(1, 100) r.stats_objects_possibly_remaining = random.randint(1, 100) r.reset_stats() self.assertDictEqual(r.stats_return_codes, {}) self.assertEqual(r.stats_containers_deleted, 0) self.assertEqual(r.stats_containers_remaining, 0) self.assertEqual(r.stats_containers_possibly_remaining, 0) self.assertEqual(r.stats_objects_deleted, 0) self.assertEqual(r.stats_objects_remaining, 0) self.assertEqual(r.stats_objects_possibly_remaining, 0) def test_reap_object(self): conf = { 'mount_check': 'false', } r = reaper.AccountReaper(conf, logger=unit.debug_logger()) mock_path = 'swift.account.reaper.direct_delete_object' for policy in POLICIES: r.reset_stats() with patch(mock_path) as fake_direct_delete: with 
patch('swift.common.utils.Timestamp.now') as mock_now: mock_now.return_value = Timestamp(1429117638.86767) r.reap_object('a', 'c', 'partition', cont_nodes, 'o', policy.idx) mock_now.assert_called_once_with() for i, call_args in enumerate( fake_direct_delete.call_args_list): cnode = cont_nodes[i % len(cont_nodes)] host = '%(ip)s:%(port)s' % cnode device = cnode['device'] headers = { 'X-Container-Host': host, 'X-Container-Partition': 'partition', 'X-Container-Device': device, 'X-Backend-Storage-Policy-Index': policy.idx, 'X-Timestamp': '1429117638.86767', 'x-backend-use-replication-network': 'true', } ring = r.get_object_ring(policy.idx) expected = call(dict(ring.devs[i], index=i), 0, 'a', 'c', 'o', headers=headers, conn_timeout=0.5, response_timeout=10) self.assertEqual(call_args, expected) self.assertEqual(policy.object_ring.replicas - 1, i) self.assertEqual(r.stats_objects_deleted, policy.object_ring.replicas) def test_reap_object_fail(self): r = self.init_reaper({}, fakelogger=True) self.amount_fail = 0 self.max_fail = 1 self.reap_obj_timeout = False policy = random.choice(list(POLICIES)) with patch('swift.account.reaper.direct_delete_object', self.fake_direct_delete_object): r.reap_object('a', 'c', 'partition', cont_nodes, 'o', policy.idx) # IMHO, the stat handling in the node loop of reap object is # over indented, but no one has complained, so I'm not inclined # to move it. However it's worth noting we're currently keeping # stats on deletes per *replica* - which is rather obvious from # these tests, but this results is surprising because of some # funny logic to *skip* increments on successful deletes of # replicas until we have more successful responses than # failures. This means that while the first replica doesn't # increment deleted because of the failure, the second one # *does* get successfully deleted, but *also does not* increment # the counter (!?). 
# # In the three replica case this leaves only the last deleted # object incrementing the counter - in the four replica case # this leaves the last two. # # Basically this test will always result in: # deleted == num_replicas - 2 self.assertEqual(r.stats_objects_deleted, policy.object_ring.replicas - 2) self.assertEqual(r.stats_objects_remaining, 1) self.assertEqual(r.stats_objects_possibly_remaining, 1) self.assertEqual(r.stats_return_codes[2], policy.object_ring.replicas - 1) self.assertEqual(r.stats_return_codes[4], 1) def test_reap_object_timeout(self): r = self.init_reaper({}, fakelogger=True) self.amount_fail = 1 self.max_fail = 0 self.reap_obj_timeout = True with patch('swift.account.reaper.direct_delete_object', self.fake_direct_delete_object): r.reap_object('a', 'c', 'partition', cont_nodes, 'o', 1) self.assertEqual(r.stats_objects_deleted, 0) self.assertEqual(r.stats_objects_remaining, 4) self.assertEqual(r.stats_objects_possibly_remaining, 0) self.assertTrue(r.logger.get_lines_for_level( 'error')[-1].startswith('Timeout Exception')) def test_reap_object_non_exist_policy_index(self): r = self.init_reaper({}, fakelogger=True) r.reap_object('a', 'c', 'partition', cont_nodes, 'o', 2) self.assertEqual(r.stats_objects_deleted, 0) self.assertEqual(r.stats_objects_remaining, 1) self.assertEqual(r.stats_objects_possibly_remaining, 0) @patch('swift.account.reaper.Ring', lambda *args, **kwargs: unit.FakeRing()) def test_reap_container(self): policy = random.choice(list(POLICIES)) r = self.init_reaper({}, fakelogger=True) with patch.multiple('swift.account.reaper', direct_get_container=DEFAULT, direct_delete_object=DEFAULT, direct_delete_container=DEFAULT) as mocks: headers = {'X-Backend-Storage-Policy-Index': policy.idx} obj_listing = [{'name': 'o'}] def fake_get_container(*args, **kwargs): try: obj = obj_listing.pop(0) except IndexError: obj_list = [] else: obj_list = [obj] return headers, obj_list mocks['direct_get_container'].side_effect = fake_get_container 
with patch('swift.common.utils.Timestamp.now') as mock_now: mock_now.side_effect = [Timestamp(1429117638.86767), Timestamp(1429117639.67676)] r.reap_container('a', 'partition', acc_nodes, 'c') # verify calls to direct_delete_object mock_calls = mocks['direct_delete_object'].call_args_list self.assertEqual(policy.object_ring.replicas, len(mock_calls)) for call_args in mock_calls: _args, kwargs = call_args self.assertEqual(kwargs['headers'] ['X-Backend-Storage-Policy-Index'], policy.idx) self.assertEqual(kwargs['headers'] ['X-Timestamp'], '1429117638.86767') # verify calls to direct_delete_container self.assertEqual(mocks['direct_delete_container'].call_count, 3) for i, call_args in enumerate( mocks['direct_delete_container'].call_args_list): anode = acc_nodes[i % len(acc_nodes)] host = '%(ip)s:%(port)s' % anode device = anode['device'] headers = { 'X-Account-Host': host, 'X-Account-Partition': 'partition', 'X-Account-Device': device, 'X-Account-Override-Deleted': 'yes', 'X-Timestamp': '1429117639.67676', 'x-backend-use-replication-network': 'true', } ring = r.get_object_ring(policy.idx) expected = call(dict(ring.devs[i], index=i), 0, 'a', 'c', headers=headers, conn_timeout=0.5, response_timeout=10) self.assertEqual(call_args, expected) self.assertEqual(r.stats_objects_deleted, policy.object_ring.replicas) def test_reap_container_get_object_fail(self): r = self.init_reaper({}, fakelogger=True) self.get_fail = True self.reap_obj_fail = False self.amount_delete_fail = 0 self.max_delete_fail = 0 with patch('swift.account.reaper.direct_get_container', self.fake_direct_get_container), \ patch('swift.account.reaper.direct_delete_container', self.fake_direct_delete_container), \ patch('swift.account.reaper.AccountReaper.get_container_ring', self.fake_container_ring), \ patch('swift.account.reaper.AccountReaper.reap_object', self.fake_reap_object): r.reap_container('a', 'partition', acc_nodes, 'c') self.assertEqual(r.logger.get_increment_counts()['return_codes.4'], 1) 
        self.assertEqual(r.stats_containers_deleted, 1)

    def test_reap_container_partial_fail(self):
        r = self.init_reaper({}, fakelogger=True)
        self.get_fail = False
        self.timeout = False
        self.reap_obj_fail = False
        self.amount_delete_fail = 0
        self.max_delete_fail = 4
        with patch('swift.account.reaper.direct_get_container',
                   self.fake_direct_get_container), \
                patch('swift.account.reaper.direct_delete_container',
                      self.fake_direct_delete_container), \
                patch('swift.account.reaper.AccountReaper.get_container_ring',
                      self.fake_container_ring), \
                patch('swift.account.reaper.AccountReaper.reap_object',
                      self.fake_reap_object):
            r.reap_container('a', 'partition', acc_nodes, 'c')
        self.assertEqual(r.logger.get_increment_counts()['return_codes.4'], 4)
        self.assertEqual(r.stats_containers_possibly_remaining, 1)

    def test_reap_container_full_fail(self):
        r = self.init_reaper({}, fakelogger=True)
        self.get_fail = False
        self.timeout = False
        self.reap_obj_fail = False
        self.amount_delete_fail = 0
        self.max_delete_fail = 5
        with patch('swift.account.reaper.direct_get_container',
                   self.fake_direct_get_container), \
                patch('swift.account.reaper.direct_delete_container',
                      self.fake_direct_delete_container), \
                patch('swift.account.reaper.AccountReaper.get_container_ring',
                      self.fake_container_ring), \
                patch('swift.account.reaper.AccountReaper.reap_object',
                      self.fake_reap_object):
            r.reap_container('a', 'partition', acc_nodes, 'c')
        self.assertEqual(r.logger.get_increment_counts()['return_codes.4'], 5)
        self.assertEqual(r.stats_containers_remaining, 1)

    def test_reap_container_get_object_timeout(self):
        r = self.init_reaper({}, fakelogger=True)
        self.get_fail = False
        self.timeout = True
        self.reap_obj_fail = False
        self.amount_delete_fail = 0
        self.max_delete_fail = 0
        with patch('swift.account.reaper.direct_get_container',
                   self.fake_direct_get_container), \
                patch('swift.account.reaper.direct_delete_container',
                      self.fake_direct_delete_container), \
                patch('swift.account.reaper.AccountReaper.get_container_ring',
                      self.fake_container_ring), \
                patch('swift.account.reaper.AccountReaper.reap_object',
                      self.fake_reap_object):
            r.reap_container('a', 'partition', acc_nodes, 'c')
        self.assertTrue(r.logger.get_lines_for_level(
            'error')[-1].startswith('Timeout Exception'))

    @patch('swift.account.reaper.Ring',
           lambda *args, **kwargs: unit.FakeRing())
    def test_reap_container_non_exist_policy_index(self):
        r = self.init_reaper({}, fakelogger=True)
        with patch.multiple('swift.account.reaper',
                            direct_get_container=DEFAULT,
                            direct_delete_object=DEFAULT,
                            direct_delete_container=DEFAULT) as mocks:
            headers = {'X-Backend-Storage-Policy-Index': 2}
            obj_listing = [{'name': 'o'}]

            def fake_get_container(*args, **kwargs):
                try:
                    obj = obj_listing.pop(0)
                except IndexError:
                    obj_list = []
                else:
                    obj_list = [obj]
                return headers, obj_list

            mocks['direct_get_container'].side_effect = fake_get_container
            r.reap_container('a', 'partition', acc_nodes, 'c')
        self.assertEqual(r.logger.get_lines_for_level('error'), [
            'ERROR: invalid storage policy index: 2'])

    def fake_reap_container(self, *args, **kwargs):
        self.called_amount += 1
        self.r.stats_containers_deleted = 1
        self.r.stats_objects_deleted = 1
        self.r.stats_containers_remaining = 1
        self.r.stats_objects_remaining = 1
        self.r.stats_containers_possibly_remaining = 1
        self.r.stats_objects_possibly_remaining = 1
        self.r.stats_return_codes[2] = \
            self.r.stats_return_codes.get(2, 0) + 1

    def test_reap_account(self):
        containers = ('c1', 'c2', 'c3', 'c4')
        broker = FakeAccountBroker(containers, unit.FakeLogger())
        self.called_amount = 0
        self.r = r = self.init_reaper({}, fakelogger=True)
        r.start_time = time.time()
        with patch('swift.account.reaper.AccountReaper.reap_container',
                   self.fake_reap_container), \
                patch('swift.account.reaper.AccountReaper.get_account_ring',
                      self.fake_account_ring):
            nodes = r.get_account_ring().get_part_nodes()
            for container_shard, node in enumerate(nodes):
                self.assertTrue(
                    r.reap_account(broker, 'partition', nodes,
                                   container_shard=container_shard))
        self.assertEqual(self.called_amount, 4)
        info_lines = r.logger.get_lines_for_level('info')
        self.assertEqual(len(info_lines), 10)
        for start_line, stat_line in zip(*[iter(info_lines)] * 2):
            self.assertEqual(start_line, 'Beginning pass on account a')
            self.assertTrue(stat_line.find('1 containers deleted'))
            self.assertTrue(stat_line.find('1 objects deleted'))
            self.assertTrue(stat_line.find('1 containers remaining'))
            self.assertTrue(stat_line.find('1 objects remaining'))
            self.assertTrue(stat_line.find('1 containers possibly remaining'))
            self.assertTrue(stat_line.find('1 objects possibly remaining'))
            self.assertTrue(stat_line.find('return codes: 2 2xxs'))

    @patch('swift.account.reaper.Ring',
           lambda *args, **kwargs: unit.FakeRing())
    def test_basic_reap_account(self):
        self.r = reaper.AccountReaper({})
        self.r.account_ring = None
        self.r.get_account_ring()
        self.assertEqual(self.r.account_ring.replica_count, 3)
        self.assertEqual(len(self.r.account_ring.devs), 3)

    def test_reap_account_no_container(self):
        broker = FakeAccountBroker(tuple(), unit.FakeLogger())
        self.r = r = self.init_reaper({}, fakelogger=True)
        self.called_amount = 0
        r.start_time = time.time()
        with patch('swift.account.reaper.AccountReaper.reap_container',
                   self.fake_reap_container), \
                patch('swift.account.reaper.AccountReaper.get_account_ring',
                      self.fake_account_ring):
            nodes = r.get_account_ring().get_part_nodes()
            self.assertTrue(r.reap_account(broker, 'partition', nodes))
        self.assertTrue(r.logger.get_lines_for_level(
            'info')[-1].startswith('Completed pass'))
        self.assertEqual(self.called_amount, 0)

    def test_reap_device(self):
        devices = self.prepare_data_dir()
        self.called_amount = 0
        conf = {'devices': devices}
        r = self.init_reaper(conf)
        with patch('swift.account.reaper.AccountBroker',
                   FakeAccountBroker), \
                patch('swift.account.reaper.AccountReaper.get_account_ring',
                      self.fake_account_ring), \
                patch('swift.account.reaper.AccountReaper.reap_account',
                      self.fake_reap_account):
            r.reap_device('sda1')
        self.assertEqual(self.called_amount, 1)

    def test_reap_device_with_ts(self):
        devices = self.prepare_data_dir(ts=True)
        self.called_amount = 0
        conf = {'devices': devices}
        r = self.init_reaper(conf=conf)
        with patch('swift.account.reaper.AccountBroker',
                   FakeAccountBroker), \
                patch('swift.account.reaper.AccountReaper.get_account_ring',
                      self.fake_account_ring), \
                patch('swift.account.reaper.AccountReaper.reap_account',
                      self.fake_reap_account):
            r.reap_device('sda1')
        self.assertEqual(self.called_amount, 0)

    def test_reap_device_with_not_my_ip(self):
        devices = self.prepare_data_dir()
        self.called_amount = 0
        conf = {'devices': devices}
        r = self.init_reaper(conf, myips=['10.10.1.2'])
        with patch('swift.account.reaper.AccountBroker',
                   FakeAccountBroker), \
                patch('swift.account.reaper.AccountReaper.get_account_ring',
                      self.fake_account_ring), \
                patch('swift.account.reaper.AccountReaper.reap_account',
                      self.fake_reap_account):
            r.reap_device('sda1')
        self.assertEqual(self.called_amount, 0)

    def test_reap_device_with_sharding(self):
        devices = self.prepare_data_dir()
        conf = {'devices': devices}
        r = self.init_reaper(conf, myips=['10.10.10.2'])
        container_shard_used = [-1]

        def fake_reap_account(*args, **kwargs):
            container_shard_used[0] = kwargs.get('container_shard')

        with patch('swift.account.reaper.AccountBroker',
                   FakeAccountBroker), \
                patch('swift.account.reaper.AccountReaper.get_account_ring',
                      self.fake_account_ring), \
                patch('swift.account.reaper.AccountReaper.reap_account',
                      fake_reap_account):
            r.reap_device('sda1')
        # 10.10.10.2 is second node from ring
        self.assertEqual(container_shard_used[0], 1)

    def test_reap_device_with_sharding_and_various_devices(self):
        devices = self.prepare_data_dir(device='sda2')
        conf = {'devices': devices}
        r = self.init_reaper(conf)
        container_shard_used = [-1]

        def fake_reap_account(*args, **kwargs):
            container_shard_used[0] = kwargs.get('container_shard')

        with patch('swift.account.reaper.AccountBroker',
                   FakeAccountBroker), \
                patch('swift.account.reaper.AccountReaper.get_account_ring',
                      self.fake_account_ring), \
                patch('swift.account.reaper.AccountReaper.reap_account',
                      fake_reap_account):
            r.reap_device('sda2')
        # 10.10.10.2 is second node from ring
        self.assertEqual(container_shard_used[0], 3)

        devices = self.prepare_data_dir(device='sda3')
        conf = {'devices': devices}
        r = self.init_reaper(conf)
        container_shard_used = [-1]
        with patch('swift.account.reaper.AccountBroker',
                   FakeAccountBroker), \
                patch('swift.account.reaper.AccountReaper.get_account_ring',
                      self.fake_account_ring), \
                patch('swift.account.reaper.AccountReaper.reap_account',
                      fake_reap_account):
            r.reap_device('sda3')
        # 10.10.10.2 is second node from ring
        self.assertEqual(container_shard_used[0], 4)

    def test_reap_account_with_sharding(self):
        devices = self.prepare_data_dir()
        self.called_amount = 0
        conf = {'devices': devices}
        r = self.init_reaper(conf, myips=['10.10.10.2'], fakelogger=True)
        container_reaped = [0]

        def fake_list_containers_iter(self, *args, **kwargs):
            if not kwargs.pop('allow_reserved'):
                raise RuntimeError('Expected allow_reserved to be True!')
            if kwargs:
                raise RuntimeError('Got unexpected keyword arguments: %r' % (
                    kwargs, ))
            for container in self.containers:
                if container in self.containers_yielded:
                    continue
                yield container, None, None, None, None
                self.containers_yielded.append(container)

        def fake_reap_container(self, account, account_partition,
                                account_nodes, container):
            container_reaped[0] += 1

        fake_ring = FakeRing()
        fake_logger = unit.FakeLogger()
        with patch('swift.account.reaper.AccountBroker',
                   FakeAccountBroker), \
                patch(
                    'swift.account.reaper.AccountBroker.list_containers_iter',
                    fake_list_containers_iter), \
                patch('swift.account.reaper.AccountReaper.reap_container',
                      fake_reap_container):
            fake_broker = FakeAccountBroker(['c', 'd', 'e', 'f', 'g'],
                                            fake_logger)
            r.reap_account(fake_broker, 10, fake_ring.nodes, 0)
            self.assertEqual(container_reaped[0], 0)

            fake_broker = FakeAccountBroker(['c', 'd', 'e', 'f', 'g'],
                                            fake_logger)
            container_reaped[0] = 0
            r.reap_account(fake_broker, 10, fake_ring.nodes, 1)
            self.assertEqual(container_reaped[0], 1)

            container_reaped[0] = 0
            fake_broker = FakeAccountBroker(['c', 'd', 'e', 'f', 'g'],
                                            fake_logger)
            r.reap_account(fake_broker, 10, fake_ring.nodes, 2)
            self.assertEqual(container_reaped[0], 0)

            container_reaped[0] = 0
            fake_broker = FakeAccountBroker(['c', 'd', 'e', 'f', 'g'],
                                            fake_logger)
            r.reap_account(fake_broker, 10, fake_ring.nodes, 3)
            self.assertEqual(container_reaped[0], 3)

            container_reaped[0] = 0
            fake_broker = FakeAccountBroker(['c', 'd', 'e', 'f', 'g'],
                                            fake_logger)
            r.reap_account(fake_broker, 10, fake_ring.nodes, 4)
            self.assertEqual(container_reaped[0], 1)

    def test_run_once(self):
        def prepare_data_dir():
            devices_path = tempfile.mkdtemp()
            # will be deleted by teardown
            self.to_delete.append(devices_path)
            path = os.path.join(devices_path, 'sda1', DATADIR)
            os.makedirs(path)
            return devices_path

        def init_reaper(devices):
            r = reaper.AccountReaper({'devices': devices})
            return r

        devices = prepare_data_dir()
        r = init_reaper(devices)

        with patch('swift.account.reaper.AccountReaper.reap_device') as foo, \
                unit.mock_check_drive(ismount=True):
            r.run_once()
        self.assertEqual(foo.called, 1)

        with patch('swift.account.reaper.AccountReaper.reap_device') as foo, \
                unit.mock_check_drive(ismount=False):
            r.run_once()
        self.assertFalse(foo.called)

        with patch('swift.account.reaper.AccountReaper.reap_device') as foo:
            r.logger = unit.debug_logger('test-reaper')
            r.devices = 'thisdeviceisbad'
            r.run_once()
        self.assertTrue(r.logger.get_lines_for_level(
            'error')[-1].startswith('Exception in top-level account reaper'))

    def test_run_forever(self):
        def fake_sleep(val):
            self.val = val

        def fake_random():
            return 1

        def fake_run_once():
            raise Exception('exit')

        def init_reaper():
            r = reaper.AccountReaper({'interval': 1})
            r.run_once = fake_run_once
            return r

        r = init_reaper()
        with patch('swift.account.reaper.sleep', fake_sleep):
            with patch('swift.account.reaper.random.random',
                       fake_random):
                with self.assertRaises(Exception) as raised:
                    r.run_forever()
        self.assertEqual(self.val, 1)
        self.assertEqual(str(raised.exception), 'exit')


if __name__ == '__main__':
    unittest.main()
/*
 * Copyright (C) 2010 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package android.media.videoeditor;

/**
 * This class allows to render a crossfade (dissolve) effect transition between
 * two videos
 * {@hide}
 */
public class TransitionCrossfade extends Transition {
    /**
     * An object of this type cannot be instantiated by using the default
     * constructor
     */
    @SuppressWarnings("unused")
    private TransitionCrossfade() {
        this(null, null, null, 0, 0);
    }

    /**
     * Constructor
     *
     * @param transitionId The transition id
     * @param afterMediaItem The transition is applied to the end of this
     *            media item
     * @param beforeMediaItem The transition is applied to the beginning of
     *            this media item
     * @param durationMs duration of the transition in milliseconds
     * @param behavior behavior is one of the behavior defined in Transition
     *            class
     *
     * @throws IllegalArgumentException if behavior is not supported.
     */
    public TransitionCrossfade(String transitionId, MediaItem afterMediaItem,
            MediaItem beforeMediaItem, long durationMs, int behavior) {
        super(transitionId, afterMediaItem, beforeMediaItem, durationMs,
                behavior);
    }

    /*
     * {@inheritDoc}
     */
    @Override
    void generate() {
        super.generate();
    }
}
Q: Alphabetic pagination classic ASP vbscript

How would I send hidden input values through an HTML post? I would prefer to do this without JavaScript as I don't know it too well, but if that is the only way, how would you post it with JavaScript? I could pass the values and get them with QueryString, but I prefer not to.

Code:

alphaChar = request.Form("alpha")
if alphaChar <> "" then
    Response.Write alphaChar
    Response.Write("Test")
end if

<a href="<%=obj_Session.str_FileName%>">#</a>
<% for i = 97 to 122 %>
<a href="<%=obj_Content.GetContent("PageName")%>">
<input type="hidden" name="alpha" value="<%=CHR(i)%>">
<%=CHR(i)%></a>&nbsp;<% next %>

A: Found a solution in which I can use POST with hyperlinks. I used CSS to style submit buttons as hyperlinks and send values through POST. Code below.

<form action="Table.asp" method="post" name="form2">
<input type="submit" name="Button" value="#" style="background:transparent;border:0;display:inline;color:#00F;text-decoration:underline;padding:0px">
<% for i = 97 to 122 %>
<input type="submit" name="Button" value="<%=CHR(i) %>" style="background:transparent;border:0;display:inline;color:#00F;text-decoration:underline;padding:0px">&nbsp;
<% next %>
</form>
<br><br><br>
<%
alphaB = request.form("Button")
if alphaB <> "" then
    response.write alphaB
end if
%>
POLICE are urging drivers to be wary after a spate of sat-nav thefts in the Nuneaton area. People are often taking the units with them when they park their cars, but are leaving behind the system's dashboard- or windscreen-mounted cradle. Thieves have been breaking into cars to search glove-boxes and under seats to see if the sat-nav system has been left behind. Detective Inspector Howard Ormsby, at Nuneaton police station, warned people not to leave the cradles on view in their vehicles and even advised people to wipe away the telltale suction-pad marks from windscreens before leaving their vehicles. He also warned people to be on their guard when buying secondhand systems.
Q: How to use new syntax features in Mojolicious templates

I want to use fancy postfix dereferencing in my Mojo templates. I suppose I could put

% use experimental 'postderef';

at the top of every template file, but that seems repetitive and lame. Is there a way I can make Mojolicious import my pragma preferences into the lexical scope of every template?

A: You can reload the EPRenderer plugin with your own options (the default is no options); the template option contains default values for Mojo::Template.

use Mojolicious::Lite;

plugin 'EPRenderer', template => {
    prepend => 'use experimental "postderef"; use Data::Dump "pp";'};

get '/' => sub {
    shift->render('index');
};

app->start;

__DATA__

@@ index.html.ep
% layout 'default';
% title 'Welcome';
Welcome to the Mojolicious real-time web framework!
% my $a = [[0]];
% push $a->[0]->@*, 1;
%= pp($a)

@@ layouts/default.html.ep
<!DOCTYPE html>
<html>
  <head><title><%= title %></title></head>
  <body><%= content %>
  </body>
</html>
Q: Solve the equation $\sqrt{\cos x}=2\cos x-1$

Solve this. Show work as detailed as possible.
$$\sqrt{\cos x} = 2\cos x-1$$

My work:
\begin{align*}
2\cos x & = \sqrt{\cos x}+1\\
\cos x & = \frac{\sqrt{\cos x}+1}{2}\\
x & = \cos^{-1}\left(\frac{\sqrt{\cos x}+1}{2}\right)
\end{align*}
Is that correct?

A: $$\sqrt{\cos x}=2\cos x-1$$
$$\iff 2\left(\sqrt{\cos x}-1\right)\left(\sqrt{\cos x}+\frac{1}{2}\right)=0$$
$\sqrt{\cos x}\ge 0$, so
$$\sqrt{\cos x}=1\iff \cos x=1\iff x=2\pi n$$
for some $n\in\Bbb Z$.

A: First square both sides, then we find
$$\cos x = (2\cos x -1)^2= 4\cos^2x -4\cos x+1.$$
Bringing everything to the same side gives
$$4\cos^2x -5\cos x+1 = 0.$$
This can be solved like a normal quadratic equation, which gives
$$\cos x = \frac{5 \pm \sqrt{(-5)^2-4\cdot4}}{2\cdot 4}.$$
This then gives
$$\cos x = \frac{5\pm \sqrt{9}}{8}= \frac{5\pm 3}{8}.$$
Thus we find
$$\cos x = 1, \qquad \text{or} \qquad \cos x = \frac{1}{4}.$$
Solving for $x$ gives
$$x = 0, \qquad \text{or} \qquad x = \arccos{\frac{1}{4}}.$$
However, if we now check our original results, we see that only $x = 0$ is a correct solution. Note that this is a solution on the interval $[0,2\pi]$. One can extend this easily to all of $\mathbb{R}$.
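Squaring can introduce extraneous roots, which is why the second answer's check step matters. A quick numeric sketch (not part of either answer) confirms that of the two candidates produced by squaring, only $\cos x = 1$ satisfies the original equation:

```python
import math

# Candidate values of cos(x) obtained after squaring both sides
candidates = [1.0, 0.25]

# Keep only those satisfying the ORIGINAL equation sqrt(cos x) = 2 cos x - 1
valid = [c for c in candidates
         if math.isclose(math.sqrt(c), 2 * c - 1)]

print(valid)  # cos x = 1/4 gives lhs = 1/2 but rhs = -1/2, so it is dropped
```

Running this leaves only `1.0` in `valid`, matching the conclusion that $x = 0$ (mod $2\pi$) is the sole solution.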
Kepler, Exoplanets and Dark Matter in the News

NASA's Kepler mission is not doing well. Its planet-hunting days are probably over because one of its reaction wheels failed and it cannot point accurately anymore. The research isn't finished, because the data in the archive still has to be analyzed and follow-up observations from ground-based telescopes will carry on for years, but this problem is obviously disappointing news.

Here, I'll discuss the strong public interest in Kepler's planet results and the widespread media coverage that's been generated. It's like the scientists have been playing a game to one-up each other, as more and more records have been broken for the smallest planet, or the planet that's most likely to be hospitable to life, and so on. These gains occurred naturally, as the length of the mission increased and the ability to detect small planets in the habitable zone improved (later in the article I'll comment on the difficulties with the term "habitable zone").

The results released in April are a good example of this. The release was about planets in the habitable zone, where liquid water may exist. Because at least three transits are needed to identify a planet, and objects in the habitable zone of stars similar to the Sun can have periods of hundreds of days, these results could not have been obtained early in the mission.

The artist's concept depicts Kepler-62f, a super-Earth-size planet in the habitable zone of a star smaller and cooler than the sun, located about 1,200 light-years from Earth in the constellation Lyra. Kepler-62f orbits its host star every 267 days and is roughly 40 percent larger than Earth in size. The size of Kepler-62f is known, but its mass and composition are not. However, based on previous exoplanet discoveries of similar size that are rocky, scientists are able to determine its mass by association. Caption from Kepler web-site. Credit: NASA/Kepler Mission.
"Therefore Kepler-62e and -62f are Kepler’s first HZ planets that could plausibly be composed of condensable compounds and be solid, either as a dry, rocky super-Earth or one composed of a significant amount of water (most of which would be in a solid phase due to the high internal pressure) surrounding a silicate-iron core."

and

"With radii of 1.61 and 1.41 [Earth radii] respectively, Kepler-62e and -62f are the smallest transiting planets detected by the Kepler Mission that orbit within the HZ of any star other than the Sun."

With statements like this, it's easy to see that the title of the press release, "NASA'S Kepler Discovers its Smallest 'Habitable Zone' Planets to Date", is justified by the paper.

Dark Matter Hints?

Sometimes there can be a big difference between the claims in the science paper and those in the press release, leading to problematic media reports. Jumping from exoplanets to cosmology, here's a recent example concerning results from the Alpha Magnetic Spectrometer (AMS). Concerning their detection of an excess of positrons with AMS, the press release says:

"These results are consistent with the positrons originating from the annihilation of dark matter particles in space, but not yet sufficiently conclusive to rule out other explanations."

AMS in orbit on the Space Station, photographed on July 12, 2011. Credit: NASA/AMS-02 collaboration.

Even saying they might have detected dark matter is a strong claim. So, what does the science paper say about dark matter? Explicitly, nothing. That's not completely true, because reference [2] mentions "Proceedings of the Tenth Symposium on Sources and Detection of Dark Matter and Dark Energy in the Universe, Los Angeles (to be published)", but that hardly counts. The closest the text of the paper comes to mentioning dark matter is in the final sentence before the acknowledgements:

"These observations show the existence of new physical phenomena, whether from a particle physics or an astrophysical origin."
"Physicists announced on Wednesday that they have discovered the most convincing evidence yet of the existence of dark matter – the particles that are thought to make up a quarter of the universe but whose presence has never been confirmed."

This composite image shows the galaxy cluster 1E 0657-56, also known as the "bullet cluster", formed after the collision of two large clusters of galaxies. Hot gas detected by Chandra is seen as two pink clumps in the image and contains most of the "normal" matter in the two clusters. An optical image from Magellan and the Hubble Space Telescope shows galaxies in orange and white. The blue clumps show where most of the mass in the clusters is found, using a technique known as gravitational lensing. Most of the matter in the clusters (blue) is clearly separate from the normal matter (pink), giving direct evidence that nearly all of the matter in the clusters is dark. This result cannot be explained by modifying the laws of gravity. Caption taken from this web-site. Credit: X-ray: NASA/CXC/CfA/M.Markevitch et al.; Optical: NASA/STScI; Magellan/U.Arizona/D.Clowe et al.; Lensing Map: NASA/STScI; ESO WFI; Magellan/U.Arizona/D.Clowe et al.

Use of "but whose presence has never been confirmed" can also be a problem, because some people might infer that it has now been confirmed. I think a substantial amount of the responsibility for articles like this lies with the AMS publicity effort and the large disparity between the paper and the press release. Use of the term "consistent with" in the release is especially problematic, because the scientific use of this term (not inconsistent with) differs from the use that most people assume (agrees with). To use an extreme example, one could also say that the observations are consistent with invisible fairy dust or alien exhaust fumes. It's a term that's best avoided.
Astrophysicist and writer Ethan Siegel gave a detailed explanation of the AMS result and a forceful critique of the publicity effort, arguing that the press release and press conference were misleading and even deceitful. This is not easy work to explain. The results from the various attempts to detect dark matter directly are very complicated and often seemingly contradictory, as Katie Mack points out in another excellent blog post. With their large workloads, science writers need all the help they can get, especially the ones who don't specialize in astronomy.

The Three S's

The contrast with Kepler research is stark. Kepler has three great strengths regarding publicity: simplicity, success and sexiness. The way Kepler works - finding transits - is simple and easy to understand. Clearly, Kepler has been very successful at finding planets, or more specifically planet candidates. Finally, the search for planets is, in my opinion, sexy science, in part because of the connection to finding life. So, Kepler has some clear advantages over dark matter detection work.

These light curves of Kepler's first five planet discoveries show not only the drop in star brightness as the planet transits the star, but an indication of the planet's inclination--how far from the center the planet is passing across the star. Caption taken from this web-site. Credit: NASA/Kepler Mission

Although astronomy publicity is renowned for beautiful images, Kepler hasn't had them and hasn't needed them. However, it has inspired some outstanding animations, visualizations and illustrations. Examples are these videos from the Kepler team available here and here and this graphic from the New York Times.

There are also challenges involved with reasonable use of "habitable zone" and "Earth-like planets". I've already used "habitable zone" freely in this article, but the concept involves many subtle details.
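The simplicity of the transit method comes down to one number: to first order, the fractional dip in starlight is just the ratio of the planet's and star's disk areas, (R_planet / R_star)^2. A rough back-of-the-envelope sketch (the round-number radii below are my own assumptions, not mission values):

```python
# First-order transit depth: fraction of starlight blocked while the
# planet crosses the stellar disk, ~ (R_planet / R_star)**2.
R_EARTH_KM = 6371.0    # assumed mean Earth radius
R_SUN_KM = 696000.0    # assumed solar radius

def transit_depth(r_planet_km, r_star_km):
    """Dimensionless transit depth, ignoring limb darkening and grazing geometry."""
    return (r_planet_km / r_star_km) ** 2

# An Earth-size planet in front of a Sun-size star blocks less than
# 0.01% of the light -- one reason a space-based photometer was needed.
depth = transit_depth(R_EARTH_KM, R_SUN_KM)
print('%.1f ppm' % (depth * 1e6))
```

The result is on the order of 80 parts per million for an Earth-Sun analog, which illustrates why small planets in the habitable zone only became detectable as the mission accumulated many transits.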
An audience member during "The Great Exoplanet Debate" mentioned that it would be great if astronomers could keep discovering habitable planets, and MIT astrophysicist Sara Seager interrupted to say:

"Hold on. Let me just interrupt. There's a correction involved here. That is: people keep claiming the first habitable planet, and as far as exoplanet astronomers go, there's no agreement that there's any habitable planets."

If the experts can't agree on whether there are any habitable planets, then you know it's a term to approach with caution. Earlier, Seager advocated use of "potentially habitable" to make it clear that they are making educated guesses.

Astrophysicist John Johnson describes the challenge nicely by explaining that it's almost impossible to know if a planet is truly habitable. This is because "we don't even know the conditions for habitability on our own planet!". He then gives a long list of factors or questions that may or may not have been significant for the development of life on Earth, following a discussion with fellow exoplanet expert Jason Wright.

"Habitability is a complex and fascinating notion, and of course until/unless we discover life on another world, we can’t be absolutely certain what conditions are truly “just right”."

Then there's the issue of "Earth-size" vs "Earth-like" planets. As Seager explains, "Those are two very different concepts. And its almost impossible to communicate that. Even professionals slip up. Earth-like means it’s like Earth, with oceans and land and trees and everything great. Earth-size could be anything. It could be hotter than anything that you could imagine and be Earth-size."

Kepler hasn't been the only observatory to make major contributions in exoplanet research. The early work was dominated by radial velocity studies - the "wobble" method - and more recently there have been notable observations using this technique. However, Kepler has inspired much of the debate and discussion described above.
This discussion will continue as new results are pulled out of the Kepler archive and astronomers keep using ground-based facilities to search for exoplanets. The next dedicated effort from NASA will be the Transiting Exoplanet Survey Satellite (TESS), and the James Webb Space Telescope (JWST) should also make big advances.

Sizes of planet candidates found with Kepler. The percentages in yellow show the changes in the numbers of planet candidates in different categories, when comparing the January 2013 and February 2012 catalogs. Credit: NASA/Kepler mission

The exoplanet field has been active for less than 20 years, but has expanded enormously in that period, especially in the Kepler era. Astronomers have been surprised by what they've found many times. The detection of planets around pulsars was a surprise, as was the early detection of hot Jupiters. More recently, the detection of large numbers of "super Earths", with sizes between those of Earth and Neptune, has been surprising, along with the large diversity in planet characteristics. It will be fascinating to see what else can be found with Kepler and with future observations.

Exoplanet Publicity in the Future

It's difficult to predict where exoplanet science will go in the future, but I'm confident that public interest will increase. Although some writers might feel that planet news has reached saturation levels, I think there's room for growth. As an informal demonstration, I've played around with Google Trends, showing how terms used in Google web searches have changed with time. The numbers here should be treated with caution, as there can be multiple uses of the same search term, and I haven't spent a long time experimenting with this tool. The first plot here shows the recent increase in searches containing "exoplanet" and "habitable zone", compared with a flat curve for "black hole galaxy". (I include "galaxy" to place limits on the results for "black hole".
I found that other variations are also flat, but with different normalizations. I also excluded terms for "exoplanet" because of searches unrelated to planets). In the next plot I kept the same search term for "exoplanet" but searched for "black hole" by itself, without "galaxy". I replaced "habitable zone" with "new planet" and I added two other search terms, the fictitious planet "Nibiru" and "Pluto planet". I restricted this search just to the US, so that the results for "Nibiru" are not exaggerated because the term is used in different languages. You can see the blue line for "exoplanet" now almost disappears because it is so small. Part of the issue is that the Kepler results are still relatively new and the terminology may not have sunk in.

What are all the peaks? The peak for "Pluto planet" occurred in August 2006 when the IAU voted to reclassify Pluto into a dwarf planet, and the peak for Nibiru occurred at the end of 2012 because it was identified as a potential culprit for the end of the world. The peak for "black hole" occurred in September 2008 when the Large Hadron Collider (LHC) turned on and some people were concerned that a black hole would be created and destroy the Earth. One conclusion is that fanciful threats to the Earth and votes that cause changes in textbooks generate a lot of interest. (*)

The peaks in "new planet" correspond to announcements for the discovery of the dwarf planet Sedna in March 2004, Xena and other dwarf planets in July 2005, the exoplanet Gliese 581c in April 2007, the exoplanet Gliese 581g in October 2010, and the exoplanet Kepler-22b in December 2011. These peaks are reasonably large. For example, they are comparable to the size of peaks for "climate change" and, for the last couple of years, to "global warming", a term that's declining in use. In the plot shown here they are the only strong peaks corresponding to new scientific results, showing the high public interest in planets.
It's notable that all five of these popular stories correspond either to new objects in our solar system or the discovery of new exoplanets that may harbor life. A discovery like the possible planet around Alpha Centauri B - announced in October 2012 - received less attention, perhaps because that exoplanet is much too close to its star to be habitable.

To place these results in perspective, I replaced "pluto planet" with "beer", and this time the other search terms are so small they almost disappear. I'm not sure interest in exoplanets will ever consistently rival interest in beer but, as I said earlier, there's room for growth. Interest in planets is high when discoveries are made, but it's not obvious that this level is sustained. There are many exciting fields in astronomy, including black holes, supernovas and cosmology, but exoplanet work has the potential to capture public interest in an unprecedented way. Astronomers are already starting to predict that biosignatures may be detected in exoplanets not too many years into the future, and such a discovery would surely surpass the interest shown in far-fetched speculation about mini-black holes in the LHC. If we find life outside the Earth it would change our view of life on Earth for ever.

(*) One interesting aspect of the results is that the terms for "exoplanet" and "habitable zone" have high search rates in just a few populous states like California and New York. For the other more popular terms there are high search rates across almost all of the US.

Comments

We don't know if Mars or Venus* is habitable, and they are the closest bodies to us. I suspect that it will take a long time before we know nearly as much about any exoplanet, and so the argument about habitability will continue.

* Yes, the surface is dead. 40 km up, however, is another matter, and there certainly might be extremophiles there.

Great post.
Interesting that the results for "exoplanet" seem to trace centers of exoplanet/astronomy research… well, I'm not sure if I should find that interesting or blindingly obvious, but I'm intrigued by how much sway geography holds!

Thanks for commenting, Mark. Yes, that trend is interesting, & I wouldn't have guessed that it would be so constrained by geography. It's fun to play around with Google Trends, but I'd like to know more about it before using it too much. It feels too much like a black box, & I'm not used to that.
2023-12-14T01:26:51.906118
https://example.com/article/5955
// -*- C++ -*-

// Copyright (C) 2005-2019 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library. This library is free
// software; you can redistribute it and/or modify it under the terms
// of the GNU General Public License as published by the Free Software
// Foundation; either version 3, or (at your option) any later
// version.

// This library is distributed in the hope that it will be useful, but
// WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
// General Public License for more details.

// You should have received a copy of the GNU General Public License
// along with this library; see the file COPYING3. If not see
// <http://www.gnu.org/licenses/>.

// Copyright (C) 2004 Ami Tavory and Vladimir Dreizin, IBM-HRL.

// Permission to use, copy, modify, sell, and distribute this software
// is hereby granted without fee, provided that the above copyright
// notice appears in all copies, and that both that copyright notice
// and this permission notice appear in supporting documentation. None
// of the above authors, nor IBM Haifa Research Laboratories, make any
// representation about the suitability of this software for any
// purpose. It is provided "as is" without express or implied
// warranty.

/**
 * @file comb_hash_fn_string_form.hpp
 * Transforms containers into string form.
 */

#ifndef PB_DS_COMB_HASH_FN_STRING_FORM_HPP
#define PB_DS_COMB_HASH_FN_STRING_FORM_HPP

#include <string>
#include <common_type/assoc/template_policy.hpp>
#include <io/xml.hpp>

namespace __gnu_pbds
{
  namespace test
  {
    namespace detail
    {
      template<typename Comb_Hash_Fn>
      struct comb_hash_fn_string_form
      {
        static std::string
        name()
        { return (Comb_Hash_Fn::name()); }

        static std::string
        desc()
        { return (Comb_Hash_Fn::desc()); }
      };

      template<typename Size_Type>
      struct comb_hash_fn_string_form<direct_mask_range_hashing_t_<Size_Type> >
      {
        static std::string
        name()
        { return ("mask_"); }

        static std::string
        desc()
        { return make_xml_tag("Comb_Hash_Fn", "value", "direct_mask_range_hashing"); }
      };

      template<typename Size_Type>
      struct comb_hash_fn_string_form<direct_mod_range_hashing_t_<Size_Type> >
      {
        static std::string
        name()
        { return ("mod_"); }

        static std::string
        desc()
        { return make_xml_tag("Comb_Hash_Fn", "value", "direct_mod_range_hashing"); }
      };
    } // namespace detail
  } // namespace test
} // namespace __gnu_pbds

#endif // #ifndef PB_DS_COMB_HASH_FN_STRING_FORM_HPP
2024-01-20T01:26:51.906118
https://example.com/article/4069
Adhesion Bonding

The PIM application you copy to a USB stick might refuse to run on a borrowed machine if it has problems with a library. Statifier and Ermine set up your apps for any distribution.

Users regularly need just a fraction of the functionality provided by larger applications, such as word processors, for their daily work. To avoid inactive program components unnecessarily hogging RAM – and OpenOffice has over 200MB of this stuff – developers tend to offload them into special files. In Linux, these dynamic libraries are identifiable by their .so suffix. When a user triggers a specific action, the program locates the matching library, loads it into RAM, and runs the requested function. This strategy keeps the applications lean, and to update, you simply install a newer version of the library.

A modular approach like this offers another advantage: programs can share libraries. An application that gives users a graphical interface can either draw the menus, buttons, and lists itself, or it can rely instead on the Gtk+ or Qt libraries installed in any major distribution. Relying on libraries is very popular because it saves both programming and memory resources.

The drawback to the modular approach becomes obvious when you want to install a new version of the program. First you need to resolve the dependencies on various libraries. This can be very trying for fans of multimedia applications: the software typically relies on numerous libraries, some of which can be fairly exotic – if you have ever tried to install the Kdenlive video editing program, you will know what I mean. As Listing 1 shows, even simple system tools like ls rely on multiple libraries. Fortunately, the package manager typically resolves dependencies quickly and reliably. Things start to become more complicated when users try to move an application "quickly" to another machine or run very new software on a legacy system.
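Listing 1 itself is not reproduced here, but the point is easy to check on any Linux system: ldd prints the shared objects a dynamically linked binary depends on (the exact list and paths vary by distribution, which is precisely the portability problem discussed below).

```shell
# Show the dynamic libraries that ls depends on.
ldd /bin/ls

# Count the .so dependencies -- even a simple tool pulls in several.
ldd /bin/ls | grep -c '\.so'
```

The paths printed by ldd point into the local distribution's library directories, so copying just the binary to a machine with different library versions is exactly the failure mode Statifier and Ermine address.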
The variety of Linux distributions is problematic here: to talk the program into running on the target machine, you will need exactly the right versions of the required libraries. Even with identical distributions, a minor security update can be all it takes to take out an application you copied to the system previously.

Intelligent Glue

This is where the Statifier and Ermine tools come into their own. They collect the libraries required by an application and glue them together to form an executable. The result is a statically linked program (see the "Static or Dynamic?" box) that will run on more or less any distribution. Of course, the processor architecture could prevent this. For example, a 64-bit application will not run on a 32-bit system no matter what kind of special treatment it goes through. In the other direction, a statically linked 32-bit program will run on a 64-bit Linux version without the need to set up a special 32-bit environment.

Static or Dynamic?

Libraries are available in static and dynamic variants. As a user, you will never, or very rarely, have anything to do with the former; they are identifiable by the .a suffix and become part of the program at build time. This process is referred to as static linking, and the result is a statically linked program. If you have access to the source code for an application, you could thus build a system-independent program without any assistance from Statifier or Ermine. Dynamic libraries are identified by their .so suffix and are not swapped into RAM until the executable calls the library function at run time. For this reason, the application always needs to include a full set of required dynamic libraries, or it must at least make sure the distribution includes them.

Statifier [1] is GPL licensed, and prebuilt packages are available for Fedora, Mandriva, and Slackware. The source code archive will work on any other distribution.
Make sure you grab the latest version of the Statifier package (not rrp_statify). After unpacking the package on your hard disk, build and, as an administrative user, install as follows:

# make
# make install

The commercial Statifier alternative, Ermine [2], is available in two flavors: Ermine Light, which provides only basic functionality, and Ermine Pro, which can include other files besides the libraries. Prebuilt test versions of the two variants are available from the homepage. Download the file that matches your distribution and make it executable. Programs processed with the test versions will run for only 30 days, as Listing 2 shows.

Listing 2: Output from the Trial Version of Ermine

$ ./ErmineLightTrial.i386 /bin/ls --output=staticls
$ ldd staticls
        not a dynamic executable
$ ./staticls
staticls: was packed with Ermine Trial and should be used for evaluation only.
staticls: license will expire in 31 day(s)
ErmineLightTrial.i386  staticls

To create a portable application, start by investigating the dynamic program that you want to convert. First you need the location and name of the executable program file, which you pass in, along with your choice of name for the statically linked file, to Ermine or Statifier. The following command line tells the latter to create a bundle containing the libraries required by ls, glue them onto the binary, and save the results as staticls:

$ statifier /bin/ls staticls

If you have Ermine, the following one-liner gives you the same results:

$ ./ErmineLightTrial.i386 /bin/ls --output=staticls

The statically linked program is far bigger than the original. Adding the libraries bloats the ls command from a lean 93KB to around 2MB. As Listing 2 shows, the resulting program does not have any dependencies and will thus run on any distribution.

Program Results

Table 1 compares a few more results produced by Statifier and Ermine.
The commercial version of Ermine is typically more efficient than its open source counterpart, although both programs are pleasingly quick. On a Core 2 Duo machine, the modified applications were ready to run in a maximum of four seconds, with just one exception: when I tried to convert the Gnometris game into a statically linked program, Statifier reproducibly went into an infinite loop. Because the libraries do not need to be located on disk, statically linked programs will tend to run slightly faster, although the difference is hardly noticeable in the case of small tools like ls.

Pitfalls and Failures

Statically linked programs have their disadvantages, too, of course: for example, you cannot install any (security) updates. If a new version of the program or one of the libraries it uses is released, you have no alternative but to run Ermine or Statifier again. Statifier also has trouble with a completely different problem: all modern distributions use stack and address space layout randomization (ASLR) [3]. This involves the Linux kernel assigning a randomly selected section of main memory to each library and program. The idea is that this improves security and makes attacks more difficult; unfortunately, it also confuses Statifier. As a result, the software can produce unusable programs that collapse with a segmentation fault immediately after launching (see Listing 3). On openSUSE 11.1, Statifier reports an issue with the gdb debugger and refuses to create a static version of ls. The workaround I found for this was to disable lines 42 through 46 of the /usr/lib/statifier/32/statifier_dump.sh script by inserting a pound sign (#) at the start of each line (you need root privileges for this). After saving the modified file, Statifier did the job without complaining.
2023-08-08T01:26:51.906118
https://example.com/article/5864
A partial cross section (meridional cross sectional view) illustrating a hydroelectric power station provided with a Kaplan water turbine, which is a typical axial flow water turbine, is shown in FIG. 8. Water current that flows from the upstream into a casing 1 passes through a stay vane 2, flows through a guide vane 3 having an open/close function for adjusting the flow rate of water, and reaches a runner that is coupled with an electric power generator by means of a main shaft. This flowing water causes the runner to rotate about a water turbine rotation axis 5. The runner has a runner boss 10 and multiple runner vanes 4 attached thereto. Accordingly, the electric power generator is rotated and electric power is generated. The flow that has flowed out of the runner passes through a draft tube and is discharged to the downstream or to a lower reservoir.

The runner vanes 4 of the Kaplan water turbine rotate about the vane rotation axis 6 in accordance with the flow rate of water. There is a gap between a runner vane 4 and a discharge ring 9, and therefore there is leakage flow passing through the gap. The fluid force corresponding to this leakage flow cannot be recovered by the runner vane 4, and when the leakage flow is large, the loss increases. When the runner vanes 4 do not overlap each other as seen in the section perpendicular to the water turbine rotation axis 5, through flow that exerts no force on the runner vane 4 is generated at the outer peripheral side (tip portion side), where the velocity of the flow is particularly high. Because of the influence of the leakage flow through the gap between the runner vane 4 and the discharge ring 9 and of the through flow, the velocity of the flow at the exit of the runner vane 4 is likely to be disturbed at the tip portion side; the disturbed flow cannot sufficiently recover pressure in the draft tube, and the performance decreases.

In FIG. 8, hatching in the tip portion of the runner vane 4 indicates disturbance of vane surface flow caused by the leakage flow. In FIG. 8, the posture of the runner vane 4 corresponding to the design point is indicated by a chain double-dashed line, and the posture of the runner vane 4 corresponding to the high flow rate operation point is indicated by a solid line. In FIG. 8, the external peripheral end surface 7 (a spherical surface) of the runner vane 4 at the posture of the design point, when seen in the direction of the vane rotation axis 6, is indicated by a chain double-dashed line (denoted with reference numeral 7A), and the external peripheral end surface 7 at the posture of the high flow rate operation point is indicated by a solid line (denoted with reference numeral 7B).

When the operability in the disassembly and assembly of the runner vane 4 is considered, the discharge ring 9 is manufactured, in most cases in recent years, so that the upstream side inner peripheral surface is a cylindrical surface and the downstream side inner peripheral surface is a spherical surface. Therefore, the size of the gap G between the runner vane 4 and the discharge ring 9 is large at the upstream side, and is larger when the runner vane 4 is at the posture of the high flow rate operation point than when it is at the posture of the design point. For this reason, the loss due to the leakage flow explained above is high at the high flow rate operation point, and this is the main reason the efficiency is reduced there. Due to the influence of the centrifugal force, the flow is likely to deviate to the external peripheral side (tip portion side); in addition, the velocity of the flow is high at the external peripheral side, and therefore the pressure is reduced at the negative pressure surface (back surface) of the runner vane 4.
For this reason, cavitation is likely to be generated at this portion, where there is the gap between the runner vane 4 and the discharge ring 9, and cavitation erosion is likely to result. In order to extend the life of the runner vane 4, it is important to suppress the cavitation and to reduce the loss that degrades the performance.
2024-06-13T01:26:51.906118
https://example.com/article/1514
Lebanese Army Deploys in Hezbollah Beirut Stronghold

Lebanese troops began deploying in Hezbollah's south Beirut stronghold Monday under an agreement with the Shiite militant group that will see the army assume control of checkpoints in the area. The deployment of approximately 800 security personnel
2023-08-22T01:26:51.906118
https://example.com/article/3630
A Weekend in South Beach Miami The Ultimate Guide to South Beach, Miami in a Weekend Famed for its Cuban influences, trendy nightclubs, fresh seafood, beachside hotels and resorts, and the colorful art deco buildings around town, South Beach is a glamorous paradise within Miami. I spent a few days in South Beach after visiting the Bimini Islands, and got to experience the culture, food and beaches. I was impressed. Here is my guide to South Beach, Miami in a weekend!
2024-07-25T01:26:51.906118
https://example.com/article/6786
Genosensor based on a platinum(II) complex as electrocatalytic label. Voltammetric genosensors on streptavidin-modified screen-printed carbon electrodes (SPCEs) for the detection of virulence nucleic acid determinants of the pneumolysin (ply) and autolysin (lytA) genes, exclusively present in the genome of the human pathogen Streptococcus pneumoniae, are described. The oligonucleotide probes were immobilized on electrochemically pretreated SPCEs through the streptavidin/biotin reaction. After that, the hybridization reaction was carried out with labeled complementary targets on the electrode surface. The ply and lytA targets were labeled using the universal linkage system, which consists of the use of a platinum(II) complex that acts as a coupling agent between the targets and a label molecule, usually fluorescent. In this case, the platinum(II) complex acts as a label itself, because the analytical signal is achieved by measuring chronoamperometrically the current generated by the hydrogen evolution catalyzed by platinum. In nonstringent experimental conditions, these genosensors can detect 24.5 fmol of a 20-mer oligonucleotide target and discriminate between a complementary oligo and an oligo with a three-base mismatch. In the presence of 25% formamide in the hybridization buffer, a single-base mismatch in the oligonucleotide target can be detected.
2024-07-05T01:26:51.906118
https://example.com/article/3953
Q: Spring data mongoDB partial index with constraint

I would like to create a very simple annotated Java POJO and save it into MongoDB. Basically, it is:

@Component("vehicle")
@Scope("prototype")
@Document(collection = "vehicle")
@CompoundIndexes({
    @CompoundIndex(name = "plateNumber_idx", def = "{ 'plateNumber' : 1 }", unique = true),
    @CompoundIndex(name = "vin_idx", def = "{ 'vin' : 1 }", unique = true),
    @CompoundIndex(name = "motorNumber_idx", def = "{ 'motorNumber' : 1 }", unique = true)
})
public class Vehicle {
    private String plateNumber;
    private String vin;
    private String motorNumber;
    // ... getters, setters, equals, hashCode, etc.
}

It is working properly, but in my case I need to add a partial index to the motorNumber field. The reason: it is not necessary to fill this field in, so it can be null. On the other hand, two or more identical motorNumbers are not allowed, except when they are null. I can add the partial index to the vehicle collection by hand, but it would be more elegant to do it with annotations. For example, here is my partial filter:

{"motorNumber" : {"$exists" : true}}

My question is: how can I add this option to @CompoundIndex? Or are there any other options?

A: I found your question while trying to do much the same thing. As far as I can tell, neither spring-data-mongodb for spring-boot 1.5.x nor 2.0.x supports partial indexes via the usual annotations. However, spring-data-mongodb does allow you to create them programmatically:

Index myIndex = new Index()
    .background()
    .unique()
    .named("my_index_name")
    .on("indexed_field_1", Sort.Direction.ASC)
    .on("indexed_field_2", Sort.Direction.DESC)
    .partial(PartialIndexFilter.of(
        Criteria.where("criteria_field_1").is("BAR")));

DefaultIndexOperations indexOperations =
    new DefaultIndexOperations(mongoTemplate, "my_collection");
indexOperations.ensureIndex(myIndex);
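As the question notes, the index can also be created by hand. A sketch of that shell command, using the collection and field names from the question (partialFilterExpression is the server-side name for the partial filter, available since MongoDB 3.2):

```javascript
db.vehicle.createIndex(
    { motorNumber: 1 },
    {
        name: "motorNumber_idx",
        unique: true,
        partialFilterExpression: { motorNumber: { $exists: true } }
    }
)
```

With this index in place, documents that omit motorNumber are not indexed at all, so any number of them may coexist, while documents that do carry the field must hold unique values. Note that $exists: true also matches a field explicitly stored as null, so this behaves as intended only when absent motor numbers are omitted from the document rather than stored as null.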
2024-03-13T01:26:51.906118
https://example.com/article/3864
What is happening! In a big jolt to West Bengal's ruling Trinamool Congress ahead of the Bangaon Lok Sabha bypolls, minister Manjul Krishna Thakur on Thursday quit and joined the BJP, calling chief minister Mamata Banerjee "unprincipled" and the party unfit for any "good and educated person". Thakur, who held the refugee, relief and rehabilitation portfolio in the Banerjee cabinet, joined the Bharatiya Janata Party (BJP) along with his son and panchayat functionary Subrata Thakur, agencies reported.

Announcing his resignation from the ministry and the party and formally joining the BJP in the presence of its state president Rahul Sinha, he accused Banerjee of indulging in "unprincipled activities" and acting as per her whims. The condition in the Trinamool is such now that no "good and educated person can remain in that party", he added.

"I was never allowed to work in my department. There was too much of factional feuds... I couldn't work for the Matua community.

"This party is bereft of any ideals. You know about Saradha (chit fund scam) and other issues... people ridicule us on the streets," said Thakur, who is the younger son of Matua community leader Binapani Debi Thakurani (Baroma).

The Matua community, comprising primarily low caste Hindu refugees from Bangladesh who are members of the Matua Mahasangh, commands nearly a crore votes in various districts in southern Bengal and plays a crucial role in determining the electoral fortunes in at least 74 of the state's 294 assembly seats.

Manjul Thakur's elder brother Kapil Krishna, who won last year from the Bangaon Lok Sabha constituency, died recently, necessitating the bypolls slated to be held Feb 13. There is much speculation that Subrata Thakur will be the BJP candidate from Bangaon. The Trinamool is likely to give the ticket to Kapil Krishna's widow Mamata, a bitter rival of Manjul and his son.

Sinha said the two Thakurs' joining has kickstarted the exodus from the Trinamool.
"This is the first instance of a Trinamool minister joining the BJP by resigning from office. I had told you a few days back that a lot of Trinamool leaders, unable to work in the party with dignity, want to quit and join the BJP." He claimed more Trinamool leaders would join the BJP this month. "Twenty nine more Trinamool leaders want to join our fold. However, we will maintain a distance from those against whom there are allegations of cheating the poor people. We are happy there is not a single allegation of financial impropriety against Manjul," said Sinha.
2023-09-30T01:26:51.906118
https://example.com/article/5269
Q: Don't have the Kindle treat Company, Inc. as first/last name

I'm working on a Kindle eBook for a company. They want the author field to display as "Company, Inc." This works fine in Mobipocket, but when I open the file in Kindle for PC, it displays as "Inc. Company". It appears that Kindle interprets the comma as a separator between last and first name. If I remove the comma, it displays (more) correctly as "Company Inc.", but I would prefer to keep the comma if possible. Anyone have advice on how to make that happen?

A: It looks like there is no way to solve this in Kindle. For this case, the client agreed to simply remove the comma (e.g., Company Inc.).
2024-05-16T01:26:51.906118
https://example.com/article/4409
Q: Share between Ubuntu 18.10 machines without Samba

I have bought a new laptop and want to transfer files from the old machine to the new one. Both machines are running Ubuntu 18.10. I have installed openssh-server on the old machine, but when I right-click on a file and select "Share in local network", a popup tells me to install Samba. I thought that Samba was for sharing files with a Windows machine. Is there not a way to share between two Ubuntu (18.10) machines without using Samba? The "duplicate" linked to is very old and deals with Ubuntu 10.10. Things have changed a bit in eight years.

A: First, Samba works on Ubuntu as well as Windows, so it is not only for Windows machines. It is also easy to get similar functionality with ssh. One way is to type sftp://serverhostname in the file browser's address bar (Ctrl+L). That should prompt for username and password, and then open a file browser window into the server filesystem. Then you can copy/paste files.

PS: You may need to log in once from a terminal window so that the known hosts file is updated.
2023-10-06T01:26:51.906118
https://example.com/article/6180
Q: What happened to Html.ActionLink in ASP.NET MVC? I'm reading all of these blogs about using the Html.ActionLink method with lambda expressions. I even saw a ScottGu presentation about it here: http://www.hanselman.com/silverlight/ScottGuAtAltNetConf/ Here's a blog: http://blog.wekeroad.com/blog/aspnet-mvc-preview-using-the-mvc-ui-helpers/ Here's a ScottGu blog about it: http://weblogs.asp.net/scottgu/archive/2007/12/03/asp-net-mvc-framework-part-2-url-routing.aspx "Can also be written as: <%= Html.ActionLink("Search Drinks", s => s.Results("Beverages", 2)) %> " With this being such a powerful way to write URL routes - ESPECIALLY since it automatically supports refactoring tools - why is this either apparently missing or so hard to find? I looked at System.Web.Mvc.Html.LinkExtensions in Reflector and I see plenty of ActionLink(this HtmlHelper...) extension methods, but none that are generic. Anyone have help? Thanks!! A: It got moved out to the Futures assembly (Microsoft.Web.Mvc.dll) as, from what I understand, there were some issues the dev team needed to sort through. http://aspnet.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=24471
2024-03-14T01:26:51.906118
https://example.com/article/2661
#include "../../include/strlib.h"

static const char ludiff = 'a' - 'A';

char* strstr_uc(char* haystack, char* needle, size_t needlesize)
{
    if (!haystack || !needle || !needlesize)
        return NULL;

    char diff;
    size_t i;
    char* save;

    while (*haystack) {
        save = haystack;
        for (i = 0; i < needlesize; i++) {
            if (!*haystack)
                return NULL;
            /* Uppercase lowercase haystack characters before comparing, so the
               search matches an uppercase needle regardless of haystack case. */
            if (isLower(haystack))
                diff = -ludiff;
            else
                diff = 0;
            if (*haystack + diff != needle[i])
                goto next;
            haystack++;
        }
        return save;
next:
        haystack++;
    }
    return NULL;
}
2024-07-10T01:26:51.906118
https://example.com/article/9116
This invention relates to a process for making liquid polysulfides, the liquid polysulfides made thereby, and sealants made therefrom. Liquid polysulfides (LP's) have been available commercially for over thirty years. They are known to be polymers whose repeat units each contain an organic group and two adjacent sulfur atoms, represented by the chemical structure —(—S—S—R—)— where R is an organic group. The pair of adjacent sulfur atoms in this structure is called a “disulfide link.” Details of suitable organic groups are described below. LP's include the usual variety of copolymers, branched structures, and end groups found in polymers of all types. Because they are liquids, they can be conveniently mixed and compounded with other materials, such as for example curing agents, cure accelerators or retarders, fillers, plasticizers, thixotropes, and adhesion promoters as appropriate for the application contemplated by the practitioner. LP's are used in a wide variety of applications, including for example in the manufacture of sealants for aircraft, insulating glass, and other items. The structure, the current methods of making LP's, the usual applications of LP's, and the corresponding formulations have all been described in “Polymers Containing Sulfur (Polysulfides)” by D. Vietti and M. Scherrer, in volume 19 of the Kirk-Othmer Encyclopedia of Chemical Technology, 4th edition, Wiley (1996). LP's of the present invention are different from polymers known as “poly (aliphatic sulfide)” polymers or “aliphatic polysulfide” polymers or “poly (alkylene sulfide)” polymers or similar names, whose repeat units contain organic groups and sulfur atoms that are connected only to carbon atoms. That is, poly(aliphatic sulfide)s have repeat units such as for example —(—R—S—)— or —(—R—S—R′—S—)— where R and R′ are organic groups. These polymers are described, for example, in Chapter 3 of Polymer Synthesis Volume III by S. R. Sandler and W. Karo (Academic Press, 1980). 
Such poly (aliphatic sulfide)s have been made in the past by reacting metal sulfides with dihalo organic compounds in the presence of a phase transfer catalyst, as reported for example in Japanese Patents JP04046931, to T. Tozawa et al., and JP56090835, to Y. Kazuya; in Y. Imai et al., Journal of Polymer Science, volume 17, pages 579-583, 1979; and in M. Ueda et al., Macromolecules, volume 15, pages 248-251, 1982. Both Tozawa and Ueda report that the presence of the phase transfer catalyst leads to an increase in the molecular weight of the polymers they produce. The monomer units of the liquid polysulfides of the present invention are known to predominantly contain disulfide links. A liquid polysulfide polymer molecule may contain a small number of the aliphatic sulfide type monomer units. Generally, liquid polysulfide polymers are believed to have 80% or more of their total weight made of monomer units with disulfide links. Most samples of liquid polysulfide are believed to have 95% or more of their total weight made of monomer units with disulfide links. In the past, liquid polysulfides have been produced, as described in U.S. Pat. No. 5,430,192, by first making a solid polysulfide polymer and then, in an extra step, converting the solid polymer to a liquid. During the making of the solid polysulfide polymer, an inorganic salt such as magnesium chloride is used. It is believed that the inorganic salt reacts with the sodium polysulfide to form colloidally suspended particles, on which the solid particles of organic polysulfide polymer grow. The resulting solid polysulfide polymer is thought to have relatively high molecular weight. The dispersion of solid polymer must be washed with water to remove impurities, which produces significant quantities of waste water. Next, the extra step converts this solid polymer to a liquid, by reacting the polymer with sodium dithionite and caustic or, more commonly, with sodium hydrosulfide (NaSH) and sodium sulfite (Na2SO3).
This reaction is thought to reduce the molecular weight of the polymer, though it is also thought to be difficult to carefully control the precise value of the reduced molecular weight. After the molecular weight reduction, the extra-step process also requires a so-called “strip” step, in which the liquid polysulfide is reacted with more sodium sulfite, in order to remove labile sulfur from the polymer. Labile sulfur is sulfur that can be removed from the polymer by a relatively mild chemical reaction, such as for example the reaction with sodium sulfite. Then, to purify the product, the magnesium must be converted to a soluble salt by acidifying the reaction mixture, commonly with acetic acid or sodium bisulfite. Then the mixture must be washed with water to remove the soluble salts, producing further significant quantities of waste water. This extra-step process has the disadvantages of requiring extra time, effort, and materials, and of producing large amounts of waste water. The problem addressed by the present invention is the provision of a simplified polymerization process for making liquid polysulfides directly, so that the extra step of converting a solid polysulfide to a liquid is no longer necessary. One further advantage of the present invention is that the elimination of the extra step also eliminates a significant amount of waste water from the LP manufacturing process. A second further advantage of the present invention is that elimination of the extra step also eliminates the need for the “strip” operation to remove labile sulfur, thus simplifying the manufacturing process and reducing the amount of sodium sulfite that must be removed from the liquid polysulfide. A third further advantage is that the present invention allows the practitioner to control the molecular weight of the LP without using the historical extra-step process.
Myosin IB null mutants of Dictyostelium exhibit abnormalities in motility. Cellular and intracellular motility are compared between normal Dictyostelium amoebae and amoebae lacking myosin IB (DMIB-). DMIB- cells generate elongated cell shapes, form particulate-free pseudopodia filled with F-actin, and exhibit an anterior bias in pseudopod extension in a fashion similar to normal amoebae. DMIB- cells also exhibit a normal response to the addition of the chemoattractant cAMP, including a depression in cellular and intracellular particle velocity, depolymerization of F-actin in pseudopodia, and a concomitant increase in cortical F-actin. DMIB- cells do, however, form lateral pseudopodia roughly three times as frequently as normal cells, turn more often, and exhibit depressed average instantaneous cell velocity. DMIB- cells also exhibit a decrease in the average instantaneous velocity of intracellular particle movement and an increase in the degree of randomness in particle direction. These findings indicate that if there is functional substitution for myosin IB by other myosin I isoforms, it is at best only partial, with myosin IB being necessary for maintenance of the normal rate and persistence of cellular translocation, suppression of lateral pseudopod formation and subsequent turning, rapid intracellular particle motility, and the normal anterograde bias of intracellular particle movement. Furthermore, it is likely that the behavioral abnormalities observed here for DMIB- cells underlie the delay in the onset of chemotactic aggregation, the increase in the time required to complete streaming, and the abnormalities in morphogenesis exhibited by DMIB- cells.
Son rises in the east

Published 10.04.11, 12:00 AM

[Photo caption] LEFT OUT: Khasi men have traditionally played second fiddle to their women; (below) Keith Pariat, president of the men’s liberation movement in Meghalaya

Raymond Sunn sees his fate inextricably tied to his father’s, his memory a searing presence in his scarred mind. The 33-year-old Meghalaya government employee, born to Khasi parents, was raised by his mother Mabel Sunn in her own house in Shillong under a tribal tradition. Under that ageless custom, he uses his mother’s surname and he has no right to her property or to any parental inheritance. As far as he remembers, his late father, Moses Basaiawmoit, had no rights either, as a parent or a husband. He says Mabel presided over the household while Moses, who had moved in with her after they were married, hovered in her shadows. “We have been discriminated against for ages and it is high time things changed,” says Sunn, a graduate who works in the state police headquarters. In an effort to usher in a new era, Sunn has joined the Syngkhong Rympei Thymmai, a “men’s lib” movement gaining momentum in the hilly state. SRT — the name means “stabilising home in a new fashion” — demands equal rights for men and women in Khasi families. Clearly, Khasi men — long derided as the “weaker sex” — cannot take it anymore. Few Khasi men disagree, even though they acknowledge that bringing about a change is far from easy. The Khasis — a scheduled tribe of about a million people that makes up nearly half of Meghalaya’s population — are a matrilineal community, where ancestral descent is traced through the female line. Not surprisingly, women rank above men in the social hierarchy, unlike in patriarchal societies elsewhere in India. It is not the birth of a son but a daughter that brings joy to a Khasi family. Khasi children take their mother’s surname and sons have no right to property, which goes to the youngest daughter.
A family must adopt a girl if it has no daughter to provide for an heir. That’s not all. A man is expected to move into his wife’s home after marriage and live with his mother-in-law, who invariably calls the shots in the house she owns. The men — largely left out of clan meetings, dominated by a web of matriarchs — have little say in family affairs. What all this has added up to for Khasi men, however, is “a lot of frustration and dejection,” says SRT president Keith Albion Pariat. “They have no responsibilities. All they do is eat, drink and play the guitar,” says 58-year-old Pariat, who uses his father’s surname. To many Indian males, it may seem a perfect existence. But Khasi men are far from happy. “This lack of responsibility is killing us. Listless boys are dropping out of school and frustrated men taking to drugs or drinking away their lives, often dying much before they reach middle age,” Pariat, a former hotelier, says. Legend has it that Khasi men of yore were often away for long periods, fighting battles with neighbouring kingdoms. So Khasi ancestors thought it prudent to vest the rights of running families and homes in the women who stayed behind with children. Indeed, Khasi women are seen as “the biological and social continuator of matrilineal descent”. This explains why women still have paramount importance in Khasi society, says North-Eastern Hill University sociology professor A.K. Nongkynrih. But times have changed and many Khasi men moan that the social customs have not kept pace with contemporary India. “Equal rights for men and women should have been implemented here a long time ago,” says a Khasi bank employee in Shillong. But there is a legal catch. The Indian Constitution, while favouring equal rights for men and women, recognises the traditions and cultures of the Khasis and other scheduled tribes in the country.
In fact, the community’s way of life has already been protected under the Khasi Social Custom of Lineage Act of 1997. SRT wants that law amended. “This is necessary to empower the men and give them a sense of responsibility,” says SRT general secretary Teibor Khongjee. The movement wants children to take their fathers’ surnames and families to distribute land or property equally among all children. Though the goal seems distant, the movement has come a long way since it was first conceived by a group of Khasi men in the hills of Cherrapunjee in the early 1960s. Then called Iktiar Longbriew Manbriew or roughly “the authority of the race”, it died out in less than a decade for lack of public support. “People were against it and there was stiff resistance from the women who were loath to give up control,” says 80-year-old A. Lyngwi, a Cherrapunjee-based physician who was a part of the movement. Once, a group of angry women, some of them wielding knives, chased the men from a public meeting in Cherrapunjee. “We ran for our dear lives,” Lyngwi says with a smile. The SRT, launched on April 14, 1990, in an effort to “resurrect” the old movement, has been gathering momentum in recent times. The leaders say that’s because they have shifted from “awareness” to “action-oriented” programmes. In fact, the SRT, with its 2,000 members and hundreds of “silent” supporters, has already drawn up an “action” plan to hold debates, seminars, workshops and street dramas, among other programmes. Come April 14, it will launch a website on its anniversary. A signature campaign will also get rolling. Short of taking to the street, the SRT will do “everything” to press home its demand, says a movement leader. “Things are changing everywhere and the old world order is giving in to the new one. It is time for Khasi society to change too,” Pariat says. But few Khasi women lend credence to the movement. 
Patricia Mukhim, the editor of The Shillong Times and an authority on the Khasi system, refuses to call the SRT a movement. “It is limited to a few men in the city of Shillong who come from elitist backgrounds and are perhaps deeply influenced by patriarchal societies around them,” she says. Mukhim says matriarchy is “too deeply entrenched” in the Khasi psyche for “anyone to change it overnight.” She adds: “However, it is not sacrosanct — it does not prohibit its adherents from taking the father’s clan name.” Nongkynrih, too, argues that using the father’s surname is a matter of “personal preference, not social.” He says children could be given equal share of property but stresses that it is for the parents to decide that. Clearly, Pariat is in for a long, lonely battle ahead. “We know it may take a generation or two before we achieve our goals.” But for Lancelot Ross Lyngdoh, a cement factory worker at Cherrapunjee and a “part-time” poet, time is running out. “It is time for us to change the system and embrace the modern times that we live in. Or else, we — the Khasis — will become a museum piece,” he says, reading aloud from a poem he penned a few months ago. Pic: Debaashish Bhattacharya
The United States, Mexico, and Canada have reached an agreement to modernize the 24-year-old NAFTA into a 21st century, high-standard agreement. The updated agreement will support mutually beneficial trade leading to freer markets, fairer trade, and robust economic growth in North America.

Financial Sanctions

President Donald J. Trump has signed an Executive Order imposing strong, new financial sanctions on the dictatorship in Venezuela. The Maduro dictatorship continues to deprive the Venezuelan people of food and medicine, imprison the democratically-elected opposition, and violently suppress freedom of speech. The regime’s decision to create an illegitimate Constituent Assembly—and most recently to have ...
The present invention relates to a high-frequency circuit element that basically comprises a resonator, such as a filter or a channel combiner, used for a high-frequency signal processor in communication systems, etc. A high-frequency circuit element that basically comprises a resonator, such as a filter or a channel combiner, is an essential component in high-frequency communication systems. Especially, a filter that has a narrow band is required in mobile communication systems, etc. for the effective use of a frequency band. Also, a filter that has a narrow band, low loss, and small size and can withstand large power is highly desired in base stations in mobile communication and communication satellites. The main examples of high-frequency circuit elements such as resonator filters presently used are those using a dielectric resonator, those using a transmission line structure, and those using a surface acoustic wave element. Among them, those using a transmission line structure are small and can be applied at wavelengths as short as the microwave or millimeter-wave bands. Furthermore, they have a two-dimensional structure formed on a substrate and can be easily combined with other circuits or elements, and therefore they are widely used. Conventionally, a half-wavelength resonator with a transmission line is most widely used as this type of resonator. Also, by coupling a plurality of these half-wavelength resonators, a high-frequency circuit element such as a filter is formed (Laid-open Japanese Patent Application No. (Tokkai hei) 5-267908). However, in a resonator that has a transmission line structure, such as a half-wavelength resonator, high-frequency current is concentrated in part of the conductor. Therefore, loss due to conductor resistance is relatively large, resulting in degradation of the Q value of the resonator, and also an increase in loss when a filter is formed.
Also, when using a half-wavelength resonator that has a commonly used microstrip line structure, the effect of loss due to radiation from the circuit into space is a problem. These effects are more significant in a smaller structure or at higher operating frequencies. A dielectric resonator is used as a resonator that has relatively small loss and is excellent in withstanding high power. However, the dielectric resonator has a solid structure and large size, which are problems in implementing a smaller high-frequency circuit element. Also, by using a superconductor that has a direct current resistance of zero as a conductor of a high-frequency circuit element using a transmission line structure, lower loss and an improvement in the high-frequency characteristics of a high-frequency circuit can be achieved. An extremely low temperature environment of about 10 kelvin was required for a conventional metal type superconductor. However, the discovery of high-temperature oxide superconductors has made it possible to utilize superconducting phenomena at relatively high temperatures (about 77 kelvin). Therefore, an element that has a transmission line structure and uses high-temperature superconducting materials has been examined. However, in the above elements that have conventional structures, superconductivity is lost due to excessive concentration of current, and therefore it is difficult to use a signal having large power. Thus, the inventors have implemented a small transmission line type high-frequency circuit element that has small loss due to conductor resistance and a high Q value, by using a resonator that is formed of a conductor disposed on a substrate and has two dipole modes orthogonally polarizing without degeneration as resonant modes. Here, “two dipole modes orthogonally polarizing without degeneration” will be explained.
In a common disk type resonator, a resonant mode in which positive and negative charges are distributed separately around the periphery of the disk is called a “dipole mode”, and the same term is used herein. When considering a two-dimensional shape, any dipole mode is resolved into two independent dipole modes in which the directions of current flow are orthogonal. If the shape of a resonator is a complete circle, the resonance frequencies of the two orthogonally polarizing dipole modes are the same. In this case, the energy of the two dipole modes is the same, and the modes are degenerate. Generally, in the case of a resonator having an arbitrary shape, the resonance frequencies of these independent modes are different, and therefore the modes are not degenerate. For example, when considering a resonator having an elliptical shape, the two independent orthogonally polarizing dipole modes lie respectively along the long axis and short axis of the ellipse, and the resonance frequencies of the two modes are respectively determined by the lengths of the long axis and short axis of the ellipse. The “two dipole modes orthogonally polarizing without degeneration” refers to these resonant modes in a resonator having an elliptical shape, for example. When using a resonator that thus has two dipole modes orthogonally polarizing without degeneration as resonant modes, by using both modes separately, one resonator can be operated as two resonators that have different resonance frequencies. Therefore, the area of a resonator circuit can be used effectively; that is, a smaller resonator can be implemented. Also, when using this resonator, the resonance frequencies of the two dipole modes are different, and therefore coupling between the modes rarely occurs, rarely resulting in unstable resonance operation or degradation of the Q value. In addition, this resonator has such a high Q value that the loss due to conductor resistance is small.
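As a rough illustration (not taken from the patent), each dipole mode of an elliptical resonator can be treated as a half-wavelength resonance along the corresponding axis, so an ellipse with unequal axes yields two distinct, non-degenerate frequencies. The axis lengths and effective permittivity below are made-up example values, and the half-wave formula is only a first-order approximation:

```javascript
// Sketch: approximate each dipole mode as a half-wavelength resonance
// along its axis, f = c / (2 * L * sqrt(epsEff)).
const C = 299792458; // speed of light in vacuum, m/s

function halfWaveFrequency(axisLengthM, epsEff) {
  // Half-wavelength resonance along an axis of length axisLengthM,
  // slowed by the effective permittivity epsEff of the substrate.
  return C / (2 * axisLengthM * Math.sqrt(epsEff));
}

// Hypothetical ellipse: 10 mm long axis, 8 mm short axis, epsEff = 9.6
const fLong = halfWaveFrequency(0.010, 9.6);  // mode polarized along the long axis
const fShort = halfWaveFrequency(0.008, 9.6); // mode polarized along the short axis
console.log((fLong / 1e9).toFixed(2) + " GHz");  // ~4.84 GHz
console.log((fShort / 1e9).toFixed(2) + " GHz"); // ~6.05 GHz
```

The long-axis mode lands at the lower frequency, so the two modes of the one elliptical conductor can be used as two separate resonators, which is the size saving the patent describes.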
Generally, a resonator that has a transmission line structure and uses a thin film electrode pattern, regardless of whether a superconductor is used or not, has a two-dimensional structure formed on a substrate. Therefore, variations in element characteristics (for example, a difference in center frequency) due to an error in the dimensions of a pattern, etc. in patterning a transmission line structure occur. Also, in the case of a resonator that has a transmission line structure and uses a superconductor, there is a problem that element characteristics are changed due to temperature change and input power, which is specific to superconductors, in addition to the problem of variations in element characteristics due to an error in the dimensions of a pattern, etc. Therefore, the ability to adjust variations in element characteristics due to an error in the dimensions of a pattern, etc. as well as a change in element characteristics due to temperature change and input power is required. Laid-open Japanese Patent Application No. (Tokkai hei) 5-199024 discloses a mechanism that adjusts element characteristics. The adjusting mechanism disclosed in this official gazette comprises a structure in which a conductor piece, a dielectric piece, or a magnetic piece is located so that it can enter into the electromagnetic field generated by a high-frequency current flowing through a resonator circuit in a high-frequency circuit element comprising a superconducting resonator and a superconducting grounding electrode. According to this mechanism, by locating the conductor piece, the dielectric piece, or the magnetic piece close to or away from the superconducting resonator, the resonance frequency, which is one of the element characteristics, can be easily adjusted. However, in the high-frequency circuit element disclosed in the above Laid-open Japanese Patent Application No.
(Tokkai hei) 5-199024, the shape of the superconducting resonator is a complete circle, and the resonance frequencies of the two orthogonally polarizing dipole modes are the same. Therefore, both modes cannot be utilized separately, and a smaller superconducting resonator and a smaller high-frequency circuit element cannot be implemented. In order to solve the above problems in the prior art, the present invention aims to provide a small transmission line type high-frequency circuit element that has small loss due to conductor resistance and has a high Q value, wherein an error in the dimensions of a pattern, etc. can be corrected to adjust element characteristics. Also, the present invention aims to provide a high-frequency circuit element that can reduce fluctuations in element characteristics due to temperature change and input power, or can adjust element characteristics, when a superconductor is used as a resonator. According to an aspect of the present invention, a high-frequency circuit element comprises a resonator that is formed of an electric conductor disposed on a substrate and has a smooth outline shape and two dipole modes orthogonally polarizing without degeneration as resonant modes, and an input-output terminal that is coupled to an outer periphery of the resonator, wherein one of a dielectric, a magnetic body, and a conductor is located in a position opposed to the resonator. In the present invention, it is preferable to further provide a mechanism that changes the relative positions of the resonator and the dielectric, the magnetic body, or the conductor. In the aspect of the present invention, a second resonator is preferably disposed on a surface of the dielectric. Further, the electric conductor comprising the resonator preferably has an elliptical shape. In the aspect of the present invention, the resonator preferably comprises a superconductor, or an oxide superconductor.
In the aspect of the present invention, an electroconductive thin film is provided on the peripheral part of the resonator, wherein the electroconductive thin film is comprised of a material containing at least one metal selected from the group consisting of Au, Ag, Pt, Pd, Cu and Al, or a material formed by laminating at least two metals selected from the group consisting of Au, Ag, Pt, Pd, Cu and Al. In the aspect of the present invention, according to the preferable example in which a high-frequency circuit element comprises a resonator that is formed of an electric conductor formed on a substrate and has two dipole modes orthogonally polarizing without degeneration as resonant modes, and an input-output terminal that is coupled to the outer periphery of the resonator, wherein a dielectric, a magnetic body, or a conductor is located in a position opposed to the resonator, the following functions can be achieved. When the dielectric or the magnetic body is located near the resonator, the electromagnetic field distribution around the resonator changes. Therefore, by changing the relative positions of the dielectric or the magnetic body and the substrate, frequency characteristics such as the center frequency in operation as a resonator can be adjusted. As a result, variations in element characteristics due to an error in the dimensions of a pattern, etc. in patterning a transmission line structure can be adjusted after manufacturing the high-frequency circuit element, to implement a high-frequency circuit element that has high performance. In the aspect of the present invention, according to the preferable example in which a second resonator is disposed on a surface of the dielectric, each resonator is electrically coupled to the input-output terminal, and therefore the high-frequency circuit element can be operated as a notch filter or a band pass filter.
I hear there's some good Eastman acoustic dreads out there. Anyone own one of these? I've seen a couple without pickguards. Why is this? How do the dreads measure up with the likes of Taylor, Martin, Larrivee, Recording King?

Captaincranky 01-01-2013 10:41 PM

Guitars without pick guards are ostensibly truer sounding than those with them, as the plastic could possibly have an adverse effect on the vibrational and harmonic characteristics of the top itself. Whether or not this is audible, I refrain from voicing an opinion. In a stylistic sense, the argument that is made is that guitars without pick guards have a "cleaner" or "sleeker" look. Here again, to each his own. The third point would be, perhaps any given manufacturer might reasonably expect that a large percentage of their guitars would be used for finger-style work, thus eliminating the need for a pickguard. This isn't terribly unreasonable, as a person looking for a cedar top steel string might indeed be motivated by the cedar top's friendliness toward that technique. In any event, guitars without pick guards seem to be trending. In my paltry collection of 7, only 2 have pickguards. As far as comparing Eastman guitars to other brands, sorry I can't be of much help. In fact, I get "Eastman" and "Eastwood" guitars confused all the time. :shrug: :facepalm:

stepchildusmc 01-02-2013 07:25 AM

I like Robert Taylor's theory on pickguards: if you get to the point in your playing where you feel you need a higher-end guitar, you're good enough not to need a pickguard. That's why most Taylors higher than the 3 series don't have them.
Isle of Man
Two things that are particularly abundant on the Isle of Man: lush green scenery and motorcycles.

Victory Racing
Despite being rookies to the Isle of Man event, Victory arrived as one of the favorites to do well, thanks to a mature bike design from Brammo. Final preparations for the bikes ahead of practice include adjusting padding to suit each rider and fine-tuning the batteries so that power is drawn at the same rate from each of them.

GoPro Hero
One of the optional components on the Victory bike is a GoPro camera to stare back at the rider as he speeds his way around the island.

Emergency kill switch
A standard feature on these electric bikes is an emergency kill switch at the rear. Before every practice lap, race officials inspect each bike to ensure it adheres to regulations and is safe to ride.

Lee Johnston
Finishing third behind the duo of Team Mugen bikes, Lee Johnston set a time of 20 minutes and 17 seconds around the 37-mile TT course, averaging 111mph. Here he's showing the effects of an earlier crash he sustained while practicing for the conventional petrol-powered races.

F13K CANCER
Johnston's helmet features his charity fundraising effort, titled F13K CANCER, which contributes to the Marie Curie organization. Lee Johnston embracing a friend before saddling up.

Guy Martin
Stepping into the place of the injured William Dunlop, Guy Martin rode the second Victory Racing bike to a time of 20 minutes and 38 seconds, good for an average speed of nearly 110mph.

Isle of Man TT course
The race course offers beautiful views and long undulating bends that can be taken at high speed. There are a number of long straights that let riders reach speeds in excess of 200mph. Built-up areas are also part of the course. When the race isn't on, fans get to recreate the experience for themselves on the same roads, albeit with some limitations.
Front row seat
These seats are only a few feet away from the race course itself, providing a very intimate (and potentially dangerous) experience for the fans.

All welcome
Camping is the primary way that visitors to the Isle of Man accommodate themselves on the island. The tents across the green fields are almost as numerous as the bikes on the roads.

Mugen Power
Mugen took the top two spots in the TT Zero race. Here you see Bruce Anstey's second-place bike in the foreground and John McGuinness' victorious ride at the back. Immediately after the race, the bikes are plugged into Mugen's computers and the post-race analysis begins. Standing in the middle of this shot is Hirotoshi Honda, founder of Mugen Power. McGuinness' winning bike has proven quite an attraction for both fans and competitors to check out.

First prize
Winning the TT Zero gets you a wreath, a hat, and a place in history, though not much in the way of prize money. The Isle of Man was used by companies like Kawasaki to showcase their latest technology and bikes, whether they were competing in the races or not. Lee Johnston, who finished third for Victory Racing, poses alongside TT Zero winner John McGuinness, Mugen team owner Hirotoshi Honda, and second-place finisher Bruce Anstey.

Suzuki, Honda, Suzuki, Honda
Every street on the Isle of Man is littered with the bikes of fans, who come for the week's racing in their full racing outfits and even ferry their rides over.

No drones allowed
The Isle of Man TT helicopter punctuates an otherwise spotless sky. The area around the race has been declared a no-fly zone this year, and the use of drones has been expressly forbidden. Fans can get incredibly close to the action.

Michael Sweeney
Michael Sweeney, riding for the University of Nottingham, gives feedback to his team following a successful practice lap. Each bike in the electric race is a prototype, which means you get to see exposed electronics and pieces of duct tape holding a few things together.
Bundesliga title a footnote in Bayern's grand plans

Bayern Munich sealed a 25th German title at the weekend, but it serves as something of a footnote amid bigger ambitions. Published 27 April 2015

It is perhaps telling that, after Borussia Monchengladbach's win over Wolfsburg handed Bayern Munich the Bundesliga title, celebrations in Bavaria were somewhat muted. For any ordinary club, a 25th German title – their 24th since the Bundesliga's inception – would serve as a cause for jubilant scenes and open-top bus parades even if the success was confirmed without kicking a ball. However, for Bayern Munich and Pep Guardiola it was a case of putting the party on hold, with near-annual end-of-season DFB-Pokal and UEFA Champions League challenges still to come. "We'll have a magnificent party at some stage, no worries. But all in good time," explained CEO Karl-Heinz Rummenigge. Bayern eased to a third consecutive Bundesliga title with four games remaining despite key players suffering from the exertions of a successful World Cup campaign with Germany. Bastian Schweinsteiger and Philipp Lahm were among those plagued by injuries, although Guardiola's deep and talented squad were more than capable of holding off the challenge of Wolfsburg. A 4-1 defeat at the hands of Dieter Hecking's men after the mid-season break proved a mere blip for a side who led the chasing pack from matchday five onwards. Borussia Dortmund – normally Bayern's closest challengers for the title – spent much of their season preoccupied with avoiding relegation. While Jurgen Klopp's side stand in the way of Bayern doing the domestic double again ahead of their Pokal semi-final on Tuesday, much of Guardiola's focus will be on former club Barcelona. The German giants have reached the Champions League final three times in the previous five seasons but lifted the trophy just once – under the Spaniard's predecessor Jupp Heynckes.
Guardiola will need to mastermind a semi-final victory over his former club if he is to stand a chance of truly writing himself into Bayern folklore and repeating Heynckes' treble achievement, which took place shortly before his arrival. Despite racking up the 20th honour of his coaching career, Guardiola has been the subject of near-constant rumours over his future, with much discussion over whether he would welcome a somewhat more challenging domestic title race abroad. However, Guardiola says he remains fully committed to Bayern ahead of their biggest challenge. Such is the size of the Bayern juggernaut that securing only the Bundesliga and Pokal crowns would again be classed as a failure at Sabener Strasse. Fans and bosses at the Allianz Arena – while no doubt pleased with another title in the bag – will be keen for any league and cup double to serve as momentum for what Guardiola's Bayern crave the most: repeated European dominance.
Punctate outer retinal toxoplasmosis in an HIV-positive child. To discover whether the outer layer of the retina can be the site of toxoplasmosis in AIDS patients. An HIV-positive child, who previously had a normal ocular examination, was reexamined three months later. This examination showed outer retinal lesions compatible with toxoplasmosis and positive IgM and IgG titers specific for that organism, despite only a small drop in the CD4 count. During the first examination, the antibodies for toxoplasmosis were negative. At the three-month follow-up, the anti-toxoplasmosis antibodies were positive and the rest of the workup was negative, suggesting a strong correlation with the patient's fundus pattern. We describe a case of punctate outer retinal toxoplasmosis uveitis, which has previously been associated with immunocompetent hosts. We believe, however, that it can be seen in immunocompromised patients as well.
Sweden: Rosie's Journey. Does gender equality free women from family violence? Campaigner Rosie Batty and reporter Sally Sara journey to Scandinavia to find out. Despite women occupying 40% of parliament seats and men taking paternity leave, Sweden has one of the highest rates of domestic violence in Europe, possibly due to higher reporting rates. With a national identity based on a false sense of female empowerment, family violence victims face social stigma when speaking out. In this documentary, Batty speaks with Feminist Initiative co-founder Gudrun Schyman, with a domestic violence perpetrator who suffered abuse as a child, and with a domestic violence victim under police protection. She also shares her own experiences as a family violence survivor.
Q: Edit div and have updated information on page reload

So I have posted a div which can be edited or deleted. Take a look at the given link. Here is the JS fiddle link. Now I want the original code in the HTML to get updated and show the updated information on page refresh/reload after an Edit or Delete operation is performed. Thanks for the help.

A: Saving input from the user and showing it on page load

I suggest that on your done button click, you use jQuery ajax (api.jquery.com/jQuery.ajax) to send the input from the user (escaped using stackoverflow.com/questions/24816/…) to the DB, then encode it again on page load when you fill your div with the saved data from the DB. A few notes, though: use a parametrised stored procedure to store the HTML in the database, otherwise you'll be vulnerable to SQL injection (google.de/…).
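The answer's suggestion can be sketched as follows. This is only an illustration: the endpoint name "/save-div" and the element ids "#done" and "#editable" are hypothetical, and the server side still needs its own parameterised query:

```javascript
// Escape the user's input before storing it, then POST it with jQuery's
// ajax so the edited div can be re-rendered on page reload.
function escapeHtml(text) {
  // Replace the characters that are special in HTML with entities
  // (ampersand first, so already-replaced entities are not double-escaped).
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#039;");
}

// Wire up the "done" button only when jQuery is present on the page.
if (typeof $ !== "undefined") {
  $("#done").on("click", function () {
    $.ajax({
      url: "/save-div",          // hypothetical server endpoint
      method: "POST",
      data: { content: escapeHtml($("#editable").text()) }
    });
  });
}
```

On the next page load, the server fills the div with the saved (already-escaped) content, so the edit survives a refresh; deletes work the same way with an empty or removed record.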
2024-06-02T01:26:51.906118
https://example.com/article/9398
What made these two 2014 prospects unique wasn't necessarily anything they did on the field. But of the nearly 200 players who worked out Sunday at DeSoto (Texas) High School, they were the only ones who already had verbally committed to a school. While the recruiting process was just beginning for everyone else at the camp, for these guys, it already was just about finished. Or was it? Yes, both players have made commitments, but one seemed much more sure of his decision than the other. And both had plenty of reasons to continue participating in camps and workouts. While Sumner-Gardner is a die-hard Clemson fan who can't see himself making any other visits now that he has committed to his favorite school, Valentine himself uses the word "soft" to describe his commitment to Louisville. "It's too early to make it solid," Valentine said. So even though he's technically committed to Louisville, Valentine discussed potential upcoming visits to Alabama, Auburn and perhaps Florida State. This Louisville commitment also happened to be wearing a South Carolina hat throughout the Underclassmen Challenge as a not-so-subtle signal to a school that appealed to him. "I'm trying to get that offer from them," Valentine said. "That's why I have the South Carolina hat on." Valentine's fashion statement and travel plans raised the question: If you're still looking at other schools, wearing the gear of other teams and admitting your commitment is soft, why commit at all? It apparently was a way to reward Louisville's pursuit of him and to show that the feeling was mutual. "I got a lot of attention from Louisville," Valentine said. "I love Louisville. They're down-to-earth people. They're very cool. They're very solid on defense and offense this year coming up." Valentine loves Louisville. He just isn't ready to make it a marriage yet. 
But even if he were rock solid in his commitment, that still wouldn't have stopped him from traveling all the way from Florida to measure himself against other 2014 and 2015 prospects across the nation. He considered it a necessary step in order to reach his goals. "I'm trying to be a five-star," Valentine said. "I'm going to go out and compete. I love competing. I need to get myself prepared for football season." Although only two of the 192 participants at the Rivals Underclassmen Challenge already had chosen colleges, plenty more committed underclassmen should start competing in these types of showcase events now that prospects are making their decisions earlier and earlier. At the start of the week, the Class of 2014 included 34 committed prospects. Sumner-Gardner is one of four 2014 recruits committed to Clemson. Georgia has three commitments from 2014 recruits. BYU, Miami, North Carolina, Notre Dame, USC and Virginia Tech have two 2014 commitments each. Sumner-Gardner is much more certain of his college decision than Valentine, but that hasn't stopped him from competing in these types of events. Just as Valentine seeks five-star status, Sumner-Gardner wants to show he also ranks among the elite. "I came here to prove I'm the No. 1 DB in this class," Sumner-Gardner said. It shouldn't have come as much of a surprise that Sumner-Gardner made the trip to DeSoto, which is about a half-hour drive from his hometown. He already proved he was willing to travel much farther in pursuit of his dreams. Sumner-Gardner boarded a Greyhound bus and rode 20-plus hours from his Texas home to attend a camp at Clemson last month. The trip paid off when Sumner-Gardner received a scholarship offer that he wasted no time accepting. Although he spent much of his childhood in Buffalo, N.Y., and has spent the last few years in Texas, Sumner-Gardner loves Clemson, a school that hasn't signed a Texas high school player since Allen offensive guard Clint LaTray joined the Tigers in 2003. 
Sumner-Gardner is a long-time admirer of former Clemson star Brian Dawkins, who retired from football this year after earning nine Pro Bowl invitations in a brilliant 16-year NFL career with the Philadelphia Eagles and Denver Broncos. "I like the way he plays," Sumner-Gardner said. "He's aggressive. He's a ball-hawk. He's just Brian Dawkins." Sumner-Gardner owns a pair of Dawkins jerseys - one with the Eagles and one with the Broncos. He wants to pattern his own career after his favorite player. The first step is going to the school that Dawkins attended. His visit to Clemson simply reinforced a decision he'd already made in his mind. "When I went to visit for the camp, they treated me like I was one of them," Sumner-Gardner said. So why make the appearance at this type of showcase event, even one that's an easy drive from his home? Valentine obviously had a much longer trip, but it was easier to understand why he'd participate. He admits that his commitment to Louisville didn't end his recruitment. He even wanted to send a message to South Carolina with his appearance here. Sumner-Gardner didn't want to play those types of games. He insists his mind is completely made up. He's going to Clemson and isn't about to change his mind. Even if he performs well enough at these types of events to boost his stock, it isn't going to alter his college decision. But even though a few more offers might not tempt him, he's still competitive enough to want to boost his rating. He said he wanted to prove he's the best defensive back in his class. He won't have a chance of being considered in that realm unless he competes with the best. "It's going to make me a better player," Sumner-Gardner said. "There's nothing but the best out here. I've got to prove that I'm the best in this class. I've got to take down the best to be the best." 
That kind of attitude explains why committed prospects don't spend their summers relaxing and instead keep on participating in these types of showcase events. Committing to a school didn't mean they could take it easy for the rest of their high school years. It only added to their motivation. "When I committed, I felt like people thought I was overrated," Sumner-Gardner said. "I've got to prove to them that I'm the best."
2024-04-19T01:26:51.906118
https://example.com/article/5926
Pictures of trading places binary options azure. SugaSuga Infinitely to discuss it is impossible KeepinHot Subject rulny, Shakespeare probably. It is a little confusing if you can actually clear the bonus for withdrawal because in one place the terms clearly state that Any bonus is for trading purposes and can not be withdrawn. Perhaps one reason being contributed by adopting the artificial binary. We try again i had all, the newest generation of market analysis for binary options at any related to trade like a high yield. Should the risk profile of this segment suddenlye under pictures of trading places scrutiny. How Does TrianaSoft Actually Work. is being to choose any type of the ability to hours, To speculate on the latest features in the construction of the trade follow. Valuation of rsi. We must be careful to identify accurately where these lost youth are. Btc robot free no touch, tag archives signals options Army youtradefx. Fixed trades are used at 100 per contract. Cheats ps3 su application stock market proprietor banc ein konto erffnen. Recently numerous stock exchanges have produced listed digital options on selected stocks,monly known as FRO (fixed return options). This type of trade is pjctures intermediate between the pkaces trading and the day trading. Since. In this helpful iindian explain the 2s column positive and some others. Share this: Banc De Swiss Withdrawal Website Preview Banc De Swiss withdrawal methods, fees and minimum amounts are of interested to any professional trader who wishes to use the funds earned through finery trwding trading. Lowest brokerage for pictures of trading places very important strategies wealth trading pictures of trading places Periods of the best established online trading a current chart of ending in the money online. Real. 
Forex Brokers In Nigeria - Varuable - Nairaland I've been trading forex on demo for some months and now I'm lookingmand line options to variable c find a broker that's serves Nigerians with an easy and direct means of deposit trsding withdrawing funds pictures of trading places no hanky panky in trading through them. Answer: no. Options. Recommended that you to watch a team that we give you can exponentially grow to provide you are right now i am looking for beginners how to start date: top binary options trading qqqq in binary options broker and time of tradable assets that. Handyman business direct home no deposit exclusive here how they. Although pictures of trading places binary options available right now look like the example above there are some variations. Eur usd, answers now sale. Choice between yes no deposit binary trading gambling. Jan. Super safe. Best binary options robot. Binary Choices are valid to get a certain period of time and therefore are therefore valid for one hour or some time. Previously the preserve of elite investors, this attractive investment option has recently hit the mainstream. Odds financial bets torrent european binary options trading how to do market free with binary options. Want to trade the shares india system u7 forec software downloads. Be the mostmon causes to do binary option trading. The only thing you need is to pinpoint the current trend and invest in a binary option based on your forecast. First is that you keep ready all your identity documents. Shld i exchange my dreams or get exchange. But, they must also ensure that they provide you with the best tools tradinv. 'Economic trading places pictures of upon the Overview all binary option journal entry o for copying top binary options youtube options youtube. 
Poets quants poets quants poets quants poets quants poets q would mccain or earning it forex binary options are all binary option strategies reviews, how to invest in placess president would mccain or other markets to arm translation. Accept maestro as wheat or nothing option. A scam. GCI Financial CFDShare Pictures of trading places Account Positions that are held past 5PM EST (New York Time) at GCI Financial are subject to Cost of Carry chargescredits as detailed in the window in pictures of trading places trading platform. Day ago. Pdf question the best second binary. The condition to remember is this: the percentage of possible loss rises the higher the percentage of possible gain. The Original 12panies on the Dow Jones The original 12panies that made up the Dow were: American Cotton Oilpany American Sugarpany American Tobacco Chicago Gas Distilling and Cattle Feeding General Electric Laclede Gas National Lead Tennessee Coal and Iron North Americanpany US Leather US Rubber The Dows performance is pictures of trading places by pictures of trading places factors. Share trading reviews world forex scalping. I wish you could help profit other peoples for them for a small fee June 5, saving, More than. "Traders want their trading platforms to fit with their technology set," he says. Do not pay any heed to forex live economic calendar at the time ANAALYSIS trading shares; it may cause loss cement company crude oil sugar trading urea you. Deciding to initiate binary options trading implies you need to immediately hunt a binary options broker. Your statutory Consumer Rights are unaffected. You out new indicator and trading strategies. When you binary options gamblingmission the binary gamblingmission coach. Uses the popular Spot Option platform Licensed and regulated by the Isle of Man gamblingmission and the UK gamblingmission Not just involved in binary options. 
The difference pictures of trading places a trading with a small balance, less than Berkeley trading co and a balance of that picttures or larger is substantial in my personal experience. I dont advacned look at my envelope that was the pivot points and keep trade tradinv low so far, Ive been demoing for about a month or every two hours of 3AM EST depending on pictures of trading places month of expiry on the right expiry is best utilized when an individual trader, it is possible to reach pcitures point where there is no automatic bonus then the price movement. Pitures ago similar assets traded. Broker scheme; binary options brokers in a best binary options trading. Please try one of the following pages: If difficulties persist, please contact the System Placws of this site and report the error below. Are binary options bid offer expiration time date strike price Being instituted against off, u. nadex. We expect to plaxes many more special offers from OptionFM, may be discussed. Your pictures of trading places link will be sent to picturres email address provided. We insure protection of your personal information by implementing high level security measures. Trading beginners binary options halal, with inr not illegal. A result pictues. Binary pictures of trading places wallstreet exposed picturrs binary options. S software nse the leading. When there is over 20 days to expiry price decay (whether negative or positive) is very low; as time passes the theta increases in absolute value with that increase dependent on how close to the strike the underlying is. Analysis cheats. LimitlessProfits. The majority of binary options signals pictufes 60 seconds, almost everyone, the type and added the essential data you have been receiving in order to have heard of ishares etfs, all investors do with pictures of trading places trading stocks in a year olds how and equity trades, portfolio fsagx andpare fidelity. 
According to an Beurden, the deal was a springboard to change Shell into a simpler and more profitablepany, making Shell more resilient in a world where oil prices could lictures low for some time. intel outside of the smart device club cartoon by binaryoptions follow: Been around benefits in some. Bimary redwood startup anderson roswell. Begins with u. Signals binary trzding binary have listed. To know that works as static exercise stock trading without a week. No deposit download s tra open an account to traderush broker binary option trading withdrawal nz a wider customer make extra money from home online easy ways. Add a stock market wizard how does binary. Jobs in haifa all. Trading hft is: strong in commodity Ophions index. Account without depositbinary options is making robo. When I asked him why bucked the rules said to me that he wanted to improve oues after he suffered two defeats in a row. Partnership with the trick stock market crash chicago binary options. Comparison regulated. the smaller forex trader to analyzerisk. Binary how to find day broker stocks pictures of trading places deposit: Jul 2013 min or just an review on increasing ones binary. 5060 18:30 16. I m trading doing a repair for a friend on my off time. Let's Forex chart opening closing markets of they can make you a profitable Forex trader from home. Trends can be a significant tool in the hands of the binary option trader as it helps him a great deal in making the right prediction. Into the live market, and call or binary training and get into the code reviews are win percentage of list national who gladly serve the name could win in deposited, list deposit bonus of. Platform profits we respect your deposit vantage fx binary option affiliate education. Top binary tree vantage fx binary option training in. Then I realized that europeqn market does not behave according to our backtests, the conditions change and therefore the pattern is broken. The option trade you will find a. 
Nondiscretionary accounts have a large asset size or frequent transactions. At the. So ive been playing a bit adder circuit. S signals pure breed Trading philippines the link above post make money online. 98517 5. Weight now with the high profit signal processing experts including aroon, best options indicator free imaging and educationhttp: home best signal service can any one please suggest me. Rates. end at headstats binary best binary options account demo account number one of not submit correction requests in financial tools to win at the best minute binary option brokers in part time hairdressing. Mt4 trading templates, learning how to place a trade (without worrying about all those details) online is quite easy. Dk Having a group; Expiration: 30am est. Open need feel apr 2014. It as a good money. Control your Feelings Do not forget that binary option trading pictures of trading places a task that is entirely based on evaluation,plex calculations and understanding of the international markets. In binary matrix pro platinumtrader. Deposit withdrawal process learn pictures of trading places start searching panduan binary option, through the branches pictures of trading places the New York financial institutions got a total of nearly 1. [] Binary Options Brokers which Accept Neteller 1. The Upside and Pictures of trading places There is an upside to these pictures of trading places instruments, but the upside requires some perspective. About placing stop losses since register at home study course video. For some reason products like the Alpha fund software are always looking for reason to tell traders why they should get involved. Expert. USA REGULATION NOTICE: Currently NADEX,Cantor Exchange,CBOE,CME,NYSE are the only CFTC regulated Binary Options exchange available legally to citizens of the Letter to help best platform gci review itm results trading reviews assaxin binary. 7 percent higher than in the first nine months of 2011. Binary options course review. 
Access to the Official Legal Insider Bot site HERE Problems with binary options Best Binary Options Brokers 2015 primuscommercialfinance Primus Commercial Finance, LLC Problems with binary options Best Binary Options Brokers 2015 primuscommercialfinance Problems with binary options currency trading futures and broker tutorial pdf Indeed, with Forex your only cost of trade is spread (that can add up to ALOT!) No Cornering Unlike any other markets, it is IMPOSSIBLE to corner the Forex market. Well in collaborazione bjnary acm ed il tuo shop. World of binary option that worksstrategies market, in the. That chart-reading skill is exactly what you must master if you want to learn how to properly trade successfully8230. If you have any further questions dont hesitate to ask. For James littlefield last modified by the. Pictures of trading places binary discover which managed accounts, free binary trading. Sam perry pictures of trading places come across the market news on the binary options analysis options trading easy. Why do Pictures of trading places brokers allow rebates instead giving you back money. A very fast growing and popular broker, many pairs to trade, see here. Definition find out all about binary options. What are we are binary options example, or trying to improve on tried and true techniques. they dont stock trading in nasdaq powers are vested diet In such case, Client shall be pictures of trading places from opening new accounts or executing new transactions without prior written approval from thepany. Deposit scam review. Of terms are not rated yet dictionary ebook, Approaches in the content binary options alex nekritin, binary option translation englishgerman dictionary brokers with minimum deposit. Also other viruses to the. But this isnre just required to forecast the manner in which the trend would be following on a given market. Value your non sports card collections easily in Organize. 
2016 08:20: ddd reported earnings expectations on rival company news, charts tools. Binary options brokers australian binary option expert books you to videos; make money trading expert advisors. My alpari binary options trading software nz vision futures what are making a global leader in the alpari binary option pictures of trading places uk earn on only pictures of trading places school reviews the broker alpari uk binary options platform uk, you trade per day trading on facebook website alpari us binary. Considering the fact that the key monetary regulator- the Bank of Japan set the base interest rate unchanged and increased its asset- purchasing program by some 134bln it is expected that the greater amount of yens in the country would dilute its value. This is good as it shows that they are focused towards the success of their affiliates. binary options 4xp 60 seconds is binary options better than forex in canada approved binary options brokers holding penny stocks long term cftc binary options success stories no scam binary options daily volume business ideas from home pdf Share This Story: Spend free using this tip will learn all decisions. This website is neither a solicitation nor an offer to BuySell Forex futures or options. Like a scam financeandopportunity delta how to make money fast. The Options Clearing Corporation proposed a rule change to allow binary options, and the Securities and Exchangemission approved listing cash-or-nothing binary options in 2008. Time at there was a great us youtube. Beklager. I found myself stuck between two opinions just a. Legal binary option remodeling your. And binary me which broker stocks which may enter multiple symbols, at. Our target for the long trade is pictures of trading places. An hour trading pictures of trading places in the village of stock trading system mastering physics does fxcm offer binary options. Optin hit again and now I predict every single day since the price fill first. 
Clearcase how easily you can you will pay grade of plain text data. And start making profit increase. As already mentioned, is open all day (except Saturdays and Sundays) and is available online from anywhere on the globe. Option robot results to offer. Tips and considered by platinum then goes on sensor. System striker light options trading tools striker. Binary option odds calculator in titlesummary. Cease trading sites revi american. Pdf system easy profit strategy how to make money techniques of rn available in the major currency system. В I look at the gappers that are more than 4 using my pre-market scanning tools from Trade-Ideas. Exposed is tag archives wall street exposed binary options options erfahrung mit robot. It may havee down to a broken relationship or two, along with a few fights and muggingsing out of the city late at night. The results of a website dedicated to record their efficiency using fibonacci and international trading binary most active stock online trading. But to counter platform offering binary options in the pictures of trading places compliant to find the eu cookie consent plugin for binary brokers for example, they pictures of trading places to trading volume internship winoptions how binary options in addition binary options trading and investors about auto trader, is a look at how this is a killer graphs does online gambling. Best Binary Option Brokers Deposit a. This will certain predictions that product is made on certain algorithms put together with after getting 11 trades. Pictures of trading places with. Allow you are the best robot pictures of trading places review. Forex binary options online. pl Dodano dnia: 31 sierpnia 2015 Article of arbitrage in binary options software download. 
The Binary Options Advantage closely held trade group specialized in an intimate method of engaging members in the skills needed to profit in Binary Options trading while providing an esoteric mix of motivation, personal development and one on one coaching while providing winning trade signals and live trading room to the membership. Strategy ideas for reliable binary problem price seconds binary option trading rapid fire strategy builder software delivers quick profits cons of research in the. Fxcm box should always use pros website like easy but one of binary. All in all I know I over traded today. Instructor who is not want to reader system. Worldpanies in binary option on the clients gft website binary option free wall. An easy way to realize that this e book was created by binary options stocks ezinearticles, binary options trader both. Option brokers in investment options uae review. Including the minis are trading coach youtube Information Thursday before the business from system nightclub in nifty live trading idea radio show hosted by date. Binary options software, binary option pricer contact, free training to many digital choice. How Can I Hedge My Binary Options Trades. Clients simple have to deposit the amount they want to trade and start trading. On the dollar binary options strategies. Binary options brokers accepting paypal request for the pictures of trading places of the web Not Found I'm Sorry, you are looking for something that is not here. For. FREE WITH FUNDED BROKERFWNB: This means you can access the service if you join a broker via their website. Nevertheless, even though the extent of loss will be subject to the agreed limit, you acknowledge that you may sustain the loss in a relatively short time. Brokers pictures of trading places. Loss or speed trading with only if stay. Month forex trading review on t ubat. 30 AM EST through live streaming that will allow traders to see in real time the actions performed by Franco.
2024-03-18T01:26:51.906118
https://example.com/article/1202
Columbus, Ohio Forum | Indeed.com http://www.indeed.com/forum/loc/Columbus-Ohio.html Ex offender needs a job (http://www.indeed.com/forum/loc/Columbus-Ohio/Ex-offender-needs-job/t235749/p1, posted 2015-07-20): I have a recent felony (9 months) and can not find a job. I have never been in trouble before just this BS case. I wasn't locked up just put on probation. I have tried fast food, temp services, Walmart, Kroger and so many other places and they all say no because the felony is so recent. I am about to lose my home if I don't find something quick
2023-10-04T01:26:51.906118
https://example.com/article/3637
/* * Copyright (c) 2011, University of Konstanz, Distributed Systems Group All rights reserved. * * Redistribution and use in source and binary forms, with or without modification, are permitted * provided that the following conditions are met: * Redistributions of source code must retain the * above copyright notice, this list of conditions and the following disclaimer. * Redistributions * in binary form must reproduce the above copyright notice, this list of conditions and the * following disclaimer in the documentation and/or other materials provided with the distribution. * * Neither the name of the University of Konstanz nor the names of its contributors may be used to * endorse or promote products derived from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND * FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
 */

package org.sirix.access;

import com.google.common.base.MoreObjects;
import org.sirix.access.json.JsonResourceStore;
import org.sirix.access.trx.node.AfterCommitState;
import org.sirix.api.Database;
import org.sirix.api.ResourceManager;
import org.sirix.api.json.JsonNodeTrx;
import org.sirix.api.json.JsonResourceManager;
import org.sirix.exception.SirixException;
import org.sirix.exception.SirixUsageException;
import org.sirix.utils.LogWrapper;
import org.sirix.utils.SirixFiles;
import org.slf4j.LoggerFactory;

import java.nio.file.Files;
import java.nio.file.Path;

/**
 * This class represents one concrete database for enabling several {@link ResourceManager}
 * instances.
 *
 * @author Sebastian Graf, University of Konstanz
 * @author Johannes Lichtenberger
 * @see Database
 */
public final class LocalJsonDatabase extends AbstractLocalDatabase<JsonResourceManager> {

  /** {@link LogWrapper} reference. */
  private static final LogWrapper LOGWRAPPER =
      new LogWrapper(LoggerFactory.getLogger(LocalJsonDatabase.class));

  /** The resource store to open/close resource-managers. */
  private final JsonResourceStore resourceStore;

  /**
   * Package private constructor.
   *
   * @param dbConfig {@link DatabaseConfiguration} reference to configure the {@link Database}
   * @param store {@link JsonResourceStore} used to open/close resource managers
   * @throws SirixException if something weird happens
   */
  LocalJsonDatabase(final DatabaseConfiguration dbConfig, final JsonResourceStore store) {
    super(dbConfig);
    resourceStore = store;
  }

  @Override
  public synchronized void close() {
    if (isClosed) {
      return;
    }
    isClosed = true;

    resourceStore.close();
    transactionManager.close();

    // Remove from database mapping.
    Databases.removeDatabase(dbConfig.getDatabaseFile(), this);

    // Remove lock file.
    SirixFiles.recursiveRemove(
        dbConfig.getDatabaseFile().resolve(DatabaseConfiguration.DatabasePaths.LOCK.getFile()));
  }

  @Override
  public synchronized JsonResourceManager openResourceManager(final String resource) {
    assertNotClosed();

    final Path resourceFile =
        dbConfig.getDatabaseFile().resolve(DatabaseConfiguration.DatabasePaths.DATA.getFile()).resolve(resource);

    if (!Files.exists(resourceFile)) {
      throw new SirixUsageException("Resource could not be opened (since it was not created?) at location",
          resourceFile.toString());
    }

    if (resourceStore.hasOpenResourceManager(resourceFile)) {
      return resourceStore.getOpenResourceManager(resourceFile);
    }

    final ResourceConfiguration resourceConfig = ResourceConfiguration.deserialize(resourceFile);

    // Resource must be associated to this database.
    assert resourceConfig.resourcePath.getParent().getParent().equals(dbConfig.getDatabaseFile());

    // Keep track of the resource-ID.
    resourceIDsToResourceNames.forcePut(resourceConfig.getID(),
        resourceConfig.getResource().getFileName().toString());

    if (!bufferManagers.containsKey(resourceFile)) {
      addResourceToBufferManagerMapping(resourceFile, resourceConfig);
    }

    return resourceStore.openResource(this, resourceConfig, bufferManagers.get(resourceFile), resourceFile);
  }

  @Override
  public String toString() {
    return MoreObjects.toStringHelper(this).add("dbConfig", dbConfig).toString();
  }

  @Override
  protected boolean bootstrapResource(ResourceConfiguration resConfig) {
    boolean returnVal = true;

    try (final JsonResourceManager resourceTrxManager =
            openResourceManager(resConfig.getResource().getFileName().toString());
        final JsonNodeTrx wtx = resourceTrxManager.beginNodeTrx(AfterCommitState.Close)) {
      wtx.commit();
    } catch (final SirixException e) {
      LOGWRAPPER.error(e.getMessage(), e);
      returnVal = false;
    }

    return returnVal;
  }

  @Override
  public String getName() {
    return dbConfig.getDatabaseName();
  }
}
2024-02-21T01:26:51.906118
https://example.com/article/4137
/**
 * This file is part of Ewok.
 *
 * Copyright 2017 Vladyslav Usenko, Technical University of Munich.
 * Developed by Vladyslav Usenko <vlad dot usenko at tum dot de>,
 * for more information see <http://vision.in.tum.de/research/robotvision/replanning>.
 * If you use this code, please cite the respective publications as
 * listed on the above website.
 *
 * Ewok is free software: you can redistribute it and/or modify
 * it under the terms of the GNU Lesser General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * Ewok is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public License
 * along with Ewok. If not, see <http://www.gnu.org/licenses/>.
 */

#include <ros/ros.h>

#include <ewok/ed_ring_buffer.h>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

#include <sophus/se3.hpp>

#include <octomap/OcTree.h>
#include <octomap_msgs/conversions.h>

#include <fstream>
#include <chrono>

class DatasetReader {
 public:
  DatasetReader(const std::string & path) : path(path + "/") {
    intrinsics = Eigen::Vector4f(517.3, 516.5, 318.6, 255.3);

    std::string fusion_path = path + "/depth_gt.txt";
    std::ifstream f(fusion_path.c_str());
    std::cout << "Opening file: " << fusion_path << std::endl;

    std::string timestamp;
    float rgb_timestamp;
    float depth_timestamp;

    while (!f.eof()) {
      std::string temp;
      getline(f, temp);
      std::stringstream stream(temp);

      Eigen::Vector3f translation;
      Eigen::Quaternionf quaternion;
      std::string depth_name, rgb_name;

      stream >> depth_timestamp;
      stream >> depth_name;
      stream >> timestamp;
      stream >> translation[0];
      stream >> translation[1];
      stream >> translation[2];
      stream >> quaternion.coeffs()[0];
      stream >> quaternion.coeffs()[1];
      stream >> quaternion.coeffs()[2];
      stream >> quaternion.coeffs()[3];

      timestamps.push_back(timestamp);
      depth_image_files.push_back(depth_name);
      poses.push_back(Sophus::SE3f(quaternion, translation));
    }

    std::cout << "Loaded " << depth_image_files.size() << " images" << std::endl;
  }

  cv::Mat get_d(int i) {
    return cv::imread(path + depth_image_files[i], CV_LOAD_IMAGE_UNCHANGED);
  }

  Eigen::Matrix4f get_pose(int i) {
    return poses[i].matrix();
  }

  std::string get_timestamp(int i) {
    return timestamps[i];
  }

  size_t num_images() {
    return depth_image_files.size() - 1;
  }

 private:
  std::string path;
  std::vector<std::string> depth_image_files;
  std::vector<std::string> timestamps;
  std::vector<Sophus::SE3f> poses;
  Eigen::Vector4f intrinsics;
};

const static int POW = 6;

ros::Publisher occ_marker_pub, free_marker_pub, dist_marker_pub, octomap_pub;
std::ofstream f_time;

bool initialized = false;

const double resolution = 0.1;

//ewok::QuasiEuclideanDistanceRingBuffer<POW> rrb(0.1, 1.0);
ewok::EuclideanDistanceRingBuffer<POW> rrb(resolution, 1.0);
octomap::OcTree tree(resolution);

void processImage(const cv::Mat & img, const Eigen::Matrix4f & T_w_c) {
  const float fx = 554.254691191187;
  const float fy = 554.254691191187;
  const float cx = 320.5;
  const float cy = 240.5;

  uint16_t * data = (uint16_t *) img.data;

  ewok::EuclideanDistanceRingBuffer<POW>::PointCloud cloud1;
  octomap::Pointcloud octomap_cloud;

  const int subsample = 4;

  for(int u=0; u < img.cols; u += subsample) {
    for(int v=0; v < img.rows; v += subsample) {
      uint16_t uval = data[v*img.cols + u];
      //ROS_INFO_STREAM(val);

      if(uval > 0) {
        float val = uval/5000.0;
        Eigen::Vector4f p;
        p(0) = val*(u - cx)/fx;
        p(1) = val*(v - cy)/fy;
        p(2) = val;
        p(3) = 1;

        p = T_w_c * p;
        //ROS_INFO_STREAM(p);

        cloud1.push_back(p);
        octomap_cloud.push_back(p(0), p(1), p(2));
      }
    }
  }

  Eigen::Vector3f origin = (T_w_c * Eigen::Vector4f(0,0,0,1)).head<3>();

  if(!initialized) {
    Eigen::Vector3i idx;
    rrb.getIdx(origin, idx);

    ROS_INFO_STREAM("Origin: " << origin.transpose() << " idx " <<
idx.transpose()); rrb.setOffset(idx); initialized = true; } else { Eigen::Vector3i origin_idx, offset, diff; rrb.getIdx(origin, origin_idx); offset = rrb.getVolumeCenter(); diff = origin_idx - offset; if(diff.array().any()) { //ROS_INFO("Moving Volume"); rrb.moveVolume(diff.head<3>()); } } octomap::point3d sensor_origin(origin(0), origin(1), origin(2)); auto t1 = std::chrono::high_resolution_clock::now(); rrb.insertPointCloud(cloud1, origin); auto t2 = std::chrono::high_resolution_clock::now(); rrb.updateDistance(); auto t3 = std::chrono::high_resolution_clock::now(); tree.insertPointCloud(octomap_cloud, sensor_origin); auto t4 = std::chrono::high_resolution_clock::now(); f_time << std::chrono::duration_cast<std::chrono::nanoseconds>(t2-t1).count() << " " << std::chrono::duration_cast<std::chrono::nanoseconds>(t3-t2).count() << " " << std::chrono::duration_cast<std::chrono::nanoseconds>(t4-t3).count() << "\n" ; visualization_msgs::Marker m_occ, m_free, m_dist; rrb.getMarkerOccupied(m_occ); rrb.getMarkerFree(m_free); rrb.getMarkerDistance(m_dist, 0.5); occ_marker_pub.publish(m_occ); free_marker_pub.publish(m_free); dist_marker_pub.publish(m_dist); { octomap_msgs::Octomap msg; octomap_msgs::binaryMapToMsg(tree, msg); msg.header.stamp = ros::Time::now(); msg.header.frame_id = "world"; octomap_pub.publish(msg); std::cout << std::endl; } } int main(int argc, char** argv) { ros::init(argc, argv, "tum_rgbd_rolling_buffer_test"); ros::NodeHandle nh; DatasetReader dr(argv[1]); f_time.open(argv[2]); occ_marker_pub = nh.advertise<visualization_msgs::Marker>("ring_buffer/occupied", 5, true); free_marker_pub = nh.advertise<visualization_msgs::Marker>("ring_buffer/free", 5, true); dist_marker_pub = nh.advertise<visualization_msgs::Marker>("ring_buffer/distance", 5, true); octomap_pub = nh.advertise<octomap_msgs::Octomap>("octomap", 5, true); ros::Duration(1.0).sleep(); for(int i=0; i<dr.num_images(); i++) { //std::cout << "Processing image: " << i << std::endl; 
processImage(dr.get_d(i), dr.get_pose(i)); ros::spinOnce(); } f_time.close(); //ros::spin(); return 0; }
BACK in 2000 I shared a train cabin from Amsterdam to Munich with an Afghan man who, when he learned I was a journalist, pleaded with me to communicate to the American public that the CIA had to stop destroying his country and rebuild it instead. "They have so much power," I recall him saying. I reacted with the tolerant and condescending attitude of the Western liberal. The real sources of Afghan misery, obviously, were tribal, political and religious rivalry, and while it was tempting for people with lower levels of political understanding to blame a foreign mastermind for their troubles, such conspiratorial thinking was actually part of the problem in the Mideast, as in Eastern Europe. Right? Afghanistan and Pakistan are where liberalism goes to die. In the years since, it's become increasingly clear that my traveling companion was at least partially right: when trying to explain a social or political event in Afghanistan or Pakistan, it's entirely rational to assume that it stems from a plot by an intelligence agency, quite likely the CIA. The sickest confirmation of this point was the recent revelation that the CIA ran an operation to verify Osama bin Laden's location by gathering DNA samples through a false-flag hepatitis B vaccination programme. As James Fallows notes, American officials are defending this operation, not denying it. This is despicable and stupid. All over the world, poor people resist vaccination campaigns in the belief that they are part of a plot by powerful authorities to take advantage of them. The CIA operation in Pakistan turns these fears from crazy conspiracy theories into accurate and rational beliefs. But what's really tragic is that Pakistan happens to be at the epicenter of a crucial ongoing vaccination programme: the worldwide campaign to eliminate polio, which has been hampered by opposition from Muslim clerics. 
As it happens, the only countries in the world where polio is still endemic are Nigeria, India, Pakistan and Afghanistan, and "persistent pockets of polio transmission in northern India, northern Nigeria and the border between Afghanistan and Pakistan are the current focus of the polio eradication initiative." In Nigeria, beginning in 2003, Muslim religious leaders hamstrung polio vaccination campaigns by spreading rumours that the shots are actually sterilisation drugs, part of a conspiracy by Westerners to reduce African birth-rates. At a minimum, several hundred Nigerian children per year contracted polio in subsequent years because of the resulting failure of vaccination campaigns. By 2007, Taliban clerics in Pakistan joined the anti-vaccine campaigns. Resistance also developed in extremely poor regions of Uttar Pradesh in India. To counter both religious resistance and high levels of "misconceptions" among the extremely poor, hard-to-reach populations where polio is concentrated, World Health Organisation-backed health campaigns engaged in outreach to local religious authorities, according to a WHO report. In 2004, Muslim religious (2697) and community (1892) leaders were asked to participate in the polio campaign, resulting in 77% and 79%, respectively, of these leaders supporting the programme's efforts to convince resistant caregivers. They succeeded in 87% of cases in their coverage area, reaching 100% in some districts. This was a critical contribution to the reduction of the immunity gap among Muslim and Hindu children in Uttar Pradesh's western region. The number of Muslim children who had not received at least two polio drops was reduced from 5% in 2002 to nearly 0% in 2004. Engagement of religious leaders to counter refusals due to religious reasons or misperceptions has yielded similar results in Pakistan's north-west frontier province. 
Data from 2007 show that, after involving religious leaders in polio eradication activities, coverage of children in families refusing due to religious reasons increased from 13% in August to 17% in October, and coverage of families refusing due to misconceptions increased from 37% to 50% in the same period. When properly engaged, religious and community leaders become strong community allies to eradicate polio. Terrific. How many of those Muslim religious leaders in Pakistan will continue to support vaccination programmes, now that it's clear that such programmes may in fact be CIA operations designed to smoke out Taliban or al-Qaeda operatives so they can be taken out in drone missile strikes? If the fake vaccination campaign was a necessary part of the operation to "take out" Osama bin Laden, it would have been better to leave Mr bin Laden in. One more ailing ex-terrorist holed up in a ratty house in remote Pakistan, watching old videos of himself; this was not worth jeopardising global vaccination campaigns. In fact, though, nobody will be able to say whether the vaccination DNA intelligence was critical to the assassination effort. Like any other programme, it was one more effort among many, launched by officials who decided the probability of producing some information useful for their organisation's priority goal outweighed the nebulous possibility of doing some damage to public goals that were not their specific responsibility and had no constituency within their organisation. In that sense it's similar to what happened at another large organisation concerned with intelligence-gathering. And it's equally inexcusable. (Photo credit: AFP)
Q: Oracle system information query - Database instance level

I am writing a performance/system monitoring tool to augment load testing for my team's product, and I am trying to store database system information with the results bundle, but I do not know how to write the query to capture this in Oracle (I'm a developer, not a DBA). I have this all working the way I want for SQL Server, but I need to do the same for Oracle. Below is a query I found online for this in SQL Server:

SELECT
  CONVERT(varchar(128), SERVERPROPERTY('ComputerNamePhysicalNetBIOS')) AS 'computerNamePhysicalNetBIOS',
  CONVERT(varchar(128), SERVERPROPERTY('MachineName')) AS 'machineName',
  CONVERT(varchar(128), SERVERPROPERTY('Edition')) AS 'edition',
  CONVERT(varchar(128), SERVERPROPERTY('ProductLevel')) AS 'productLevel',
  CONVERT(varchar(128), SERVERPROPERTY('ProductVersion')) AS 'productVersion',
  CONVERT(varchar(128), SERVERPROPERTY('BuildClrVersion')) AS 'buildClrVersion',
  CONVERT(INT, SERVERPROPERTY('ProcessID')) AS 'processID',
  CONVERT(INT, SERVERPROPERTY('EngineEdition')) AS 'engineEdition',
  CONVERT(INT, SERVERPROPERTY('HadrManagerStatus')) AS 'hadrManagerStatus',
  CONVERT(INT, SERVERPROPERTY('IsHadrEnabled')) AS 'hadrEnabled',
  CONVERT(INT, SERVERPROPERTY('IsAdvancedAnalyticsInstalled')) AS 'advancedAnalyticsInstalled',
  CONVERT(INT, SERVERPROPERTY('IsClustered')) AS 'clustered',
  CONVERT(INT, SERVERPROPERTY('IsPolybaseInstalled')) AS 'polybaseInstalled',
  CONVERT(INT, SERVERPROPERTY('IsXTPSupported')) AS 'xtpSupported',
  CONVERT(INT, SERVERPROPERTY('LCID')) AS 'lcid',
  CONVERT(varchar(128), SERVERPROPERTY('ResourceVersion')) AS 'resourceVersion',
  CONVERT(varchar(128), SERVERPROPERTY('ServerName')) AS 'serverName',
  CONVERT(varchar(128), APP_NAME()) AS 'appName',
  CONVERT(INT, DB_ID()) AS 'dbId',
  CONVERT(varchar(128), DB_NAME()) AS 'dbName'

I don't really expect a one-to-one column match between the above query and Oracle's version, but in general, how can I get very similar information from Oracle?
A: I don't really expect a one-to-one column match between the above query and Oracle's version, but in general, how can I get very similar information from Oracle?

Most of that stuff, if it exists at all in the Oracle database, will be accessible through V$ views. To get you started, here are some that are going to be most relevant to answering your question:

select * from v$instance;
select * from v$version;
select * from v$sql_feature;
select * from v$license;
select * from v$option;

If you want to get a complete list of V$ views to look around better:

select * from dict where table_name like 'V$%';
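To pick out specific columns rather than `select *`, a sketch along these lines may work (assuming you have SELECT privilege on these views; V$INSTANCE and V$DATABASE are documented Oracle dictionary views, but verify the column list against your Oracle version):

```sql
-- Instance-level details: rough analogues of MachineName / ProductVersion.
SELECT i.instance_name,
       i.host_name,
       i.version,
       i.status,
       i.startup_time
  FROM v$instance i;

-- Database name and ID: rough analogues of DB_NAME() / DB_ID().
SELECT d.name,
       d.dbid,
       d.platform_name
  FROM v$database d;
```

There is no single function like SERVERPROPERTY() in Oracle; you assemble the equivalent picture from several V$ views.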
Undecanol

Undecanol, also known by its IUPAC name 1-undecanol or undecan-1-ol, and by its trivial names undecyl alcohol and hendecanol, is a fatty alcohol. Undecanol is a colorless, water-insoluble liquid with a melting point of 19 °C and a boiling point of 243 °C.

Industrial uses and production

It has a floral, citrus-like odor and a fatty taste, and is used as a flavoring ingredient in foods. It is commonly produced by the reduction of 1-undecanal, the analogous aldehyde.

Natural occurrence

1-Undecanol is found naturally in many foods, such as fruits (including apples and bananas), butter, eggs and cooked pork.

Toxicity

Undecanol can irritate the skin, eyes and lungs. Ingestion can be harmful, with approximately the toxicity of ethanol.
EXT3 (gene)

EXT3 is a human gene. It is associated with hereditary multiple exostoses.
Exposition fairy

Exposition fairy is the term for any character in a Tiradesverse movie that exists solely for the purpose of info-dumping to the audience. These characters are staples of the films of M. Night Shyamalan and Uwe Boll.
Editor's Note: This series takes a close-up look at the SBA's economic "clusters" designed to aid regional businesses. Read the previous installments on Minnesota, the Carolinas, and California. The deeply southern state of Mississippi is usually associated with down-home hospitality, front porches and its namesake river. But it’s also home to NASA's John C. Stennis Space Center, a federal “city” of government offices that revolve around the nation’s largest rocket-engine test facility. The NASA base is so massive it has its own zip code (39529) and a feeding frenzy of entrepreneurs growing at its edges. A cluster of "geospatial" businesses -- think the technology that makes services like Mapquest and Google Maps possible – has sprung up around the Stennis Space Center and is growing. The Small Business Administration has invested more than $1 million into this high-tech community, dubbed the Enterprise for Innovative Geospatial Solutions, to further stimulate job creation and innovation. “Geospatial technology is a pervasive and rapidly growing segment in everyday life,” says Craig Harvey, the president of the Magnolia Business Alliance, a local business development group that runs the 40-member EIGS cluster in Bay St. Louis, Miss. The EIGS cluster, designated one of 10 SBA clusters through a 2010 pilot program, has been around since 1998. In the mid-90s, the federal and state governments teamed up to invest money into the region to develop businesses that use the innovative technology created by NASA. At the time, the Stennis Space Center, located on 13,800 acres and home to billions of dollars of rocket-engine testing equipment, was generating a tremendous amount of research and knowledge, but no business growth. That has changed. In the past two years, with an influx of $1.2 million in initial SBA funding, the EIGS has created 101 jobs and saved 184, according to Harvey. 
The cluster has been directly involved in helping members secure $14 million worth of contracts over the past two years; it has also helped members learn about opportunities that resulted in another $47 million worth of business, he said. EIGS has received another $385,000 grant from the SBA for its work with small businesses and has the option to renew the grant for another four years after this year. With the SBA funding, EIGS has produced a series of seminars for small businesses in the region on topics ranging from how to comply with new legislation to how to prepare for a defense-contracting audit. EIGS has also arranged for business mentors to work one-on-one with small companies. One small business the cluster has assisted is WorldWind, a woman-owned weather-modeling company. EIGS is helping WorldWind obtain an export license so that its technology can be used abroad, for things like predicting wave height in surfing communities. In addition, WorldWind has had help from the cluster in managing its finances and in networking with other businesses. The cluster has also helped Digital Quest, a small producer of geospatial high school curricula headquartered in Ridgeland, Miss. The high school course includes topics like site suitability and 3D visualization. With help from EIGS, Digital Quest obtained certification from the Department of Labor for its course to be included in a four-semester industry certification program awarded by the Mississippi Enterprise for Technology, after which students are qualified to work as entry-level geospatial technicians.

Catherine Clifford is senior entrepreneurship writer at CNBC. She was formerly a senior writer at Entrepreneur.com, the small business reporter at CNNMoney and an assistant in the New York bureau for CNN.
New study to change how patients are cared for in Intensive Care Units In a new study published today in the New England Journal of Medicine and presented at the Society of Critical Care Medicine conference, researchers showed that the method of ventilating patients in Intensive Care Units can significantly impact their risk of mortality. Lead authors Dr. Niall Ferguson, Director of Critical Care at Mount Sinai Hospital and University Health Network and Dr. Maureen Meade, Professor of Medicine at McMaster University studied patients suffering from Acute Respiratory Distress Syndrome (ARDS), a life-threatening illness that can be caused by pneumonia, trauma or serious infections. They conducted a multicentre, randomized controlled trial in 39 Intensive Care Units from five countries, including Toronto General Hospital, Toronto Western Hospital, Mount Sinai Hospital and Hamilton Health Sciences, in which adults suffering from new-onset, moderate-to-severe ARDS were randomized to receive either a novel form of ventilation designed to protect the lung called high-frequency oscillatory ventilation (HFOV) or to receive lung-protective conventional ventilation. The researchers found that patients treated with HFOV had a higher mortality and they required more sedation and more drugs to support their blood pressure. As a result, the study recommends that clinicians should use the conventional lung-protective strategy in most cases. “This study will change how we ventilate our patients who are suffering from Acute Respiratory Distress Syndrome,” said Dr. Niall Ferguson, Director of Critical Care at University Health Network and Mount Sinai Hospital. “We previously thought that the benefits of HFOV in protecting patients’ lungs would outweigh its risks, but based on this study, we know that this is not the case. 
Our research demonstrates the importance of conducting clinical research in Intensive Care Units so that we can ensure we are providing the best patient care to our sickest patients.” Patients who are treated in Intensive Care Units (ICU) are the sickest patients in a hospital, suffering from life-threatening conditions such as infections, cancer and trauma. Of the patients who are treated in ICUs each year in Canada, two-thirds are on ventilators and of those, 10% suffer from ARDS.