source | text |
|---|---|
https://en.wikipedia.org/wiki/Cinnamon%20%28desktop%20environment%29 | Cinnamon is a free and open-source desktop environment for Linux and Unix-like operating systems, deriving from GNOME 3 but following traditional desktop metaphor conventions.
The Linux Mint team began developing Cinnamon as a reaction to the April 2011 release of GNOME 3, in which the conventional desktop metaphor of GNOME 2 was abandoned in favor of GNOME Shell. Following several attempts to extend GNOME 3 such that it would suit the Linux Mint design goals, the Mint developers forked several GNOME 3 components to build an independent desktop environment. Separation from GNOME was completed in Cinnamon 2.0, which was released in October 2013. Applets and desklets are no longer compatible with GNOME 3.
As the distinguishing factor of Linux Mint, Cinnamon has generally received favorable coverage by the press, in particular for its ease of use and gentle learning curve. With respect to its conservative design model, Cinnamon is similar to the Xfce, MATE, GNOME 2 (and GNOME Flashback) desktop environments.
History
Like several other desktop environments based on GNOME, including Canonical's Unity, Cinnamon was a product of dissatisfaction with the GNOME team's abandonment of a traditional desktop experience in April 2011. Until then, GNOME (i.e. GNOME 2) had included the traditional desktop metaphor, but in GNOME 3 this was replaced with GNOME Shell, which lacked a taskbar-like panel and other basic features of a conventional desktop. The elimination of these elementary features was unacceptable to the developers of distributions such as Mint and Ubuntu, which are aimed at users who want an interface they are immediately comfortable with.
To overcome these differences, the Linux Mint team initially set out to develop extensions for the GNOME Shell to replace the abandoned features. The results of this effort were the "Mint GNOME Shell Extensions" (MGSE). Meanwhile, the MATE desktop environment had also been forked from GNOME 2. Linux Mint 12, r |
https://en.wikipedia.org/wiki/Infodemiology | Infodemiology was defined by Gunther Eysenbach in the early 2000s as information epidemiology. It is an area of scientific research focused on scanning the internet for user-contributed health-related content, with the ultimate goal of improving public health. It is also defined as the science of mitigating public health problems resulting from an infodemic.
Origin of term
Eysenbach first used the term in the context of measuring and predicting the quality of health information on the Web (i.e., measuring the "supply" side of information).
He later included in his definition methods and techniques which are designed to automatically measure and track health information "demand" (e.g., by analyzing search queries) as well as "supply" (e.g., by analyzing postings on webpages, in blogs, and news articles, for example through GPHIN) on the Internet with the overarching goal of informing public health policy and practice. In 2013, the Infovigil Project was launched in an effort to bring the research community together to help realize this goal. It is funded by the Canadian Institutes of Health Research.
Eysenbach demonstrated his point by showing a correlation between flu-related searches on Google (demand data) and flu-incidence data. The method was shown to be better and more timely (i.e., it can predict public health events earlier) than traditional syndromic surveillance methods such as reports by sentinel physicians.
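As an illustration of this demand-side correlation idea, a minimal sketch (the weekly counts and variable names below are invented placeholders, not real surveillance data):

```python
# A minimal sketch of correlating search-query volume (demand data) with
# disease incidence. The weekly numbers are synthetic placeholders only.
import numpy as np

flu_searches = np.array([120, 150, 310, 560, 900, 740, 400, 210])  # hypothetical weekly query counts
flu_incidence = np.array([8, 11, 25, 48, 80, 66, 33, 15])          # hypothetical weekly case counts

r = np.corrcoef(flu_searches, flu_incidence)[0, 1]
print(f"Pearson correlation: {r:.3f}")  # a high r suggests searches track incidence
```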
Application
Researchers have applied an infodemiological approach to studying the spread of HIV/AIDS, SARS and influenza, vaccination uptake, antibiotic consumption, the incidence of multiple sclerosis, patterns of alcohol consumption, the efficacy of using the social web for personalization of health treatment, the contexts of status epilepticus patients, factors of abdominal pain and its impact on quality of life, and the effectiveness of the Great American Smokeout anti-smoking awareness event. Applications outside the field of health care include urban |
https://en.wikipedia.org/wiki/Syntelic | Syntelic attachment occurs when both sister chromatids are attached to a single spindle pole.
Normal cell division distributes the genome equally between two daughter cells, with each chromosome attaching to an ovoid structure called the spindle. During the division process, errors commonly occur in attaching the chromosomes to the spindle, estimated to affect 86 to 90 percent of chromosomes.
Such attachment errors are common during the early stages of spindle formation, but they are mostly corrected before the start of anaphase.
Successful cell division requires identification and correction of any dangerous errors before the cell splits in two.
If the syntelic attachment continues, it causes both sister chromatids to be segregated to a single daughter cell.
Causes
Microtubules extend from the spindle poles and attach to the first kinetochore they encounter. Because this process is stochastic and not facilitated or directed, the first microtubules to come into contact with a kinetochore may not have originated at the correct spindle pole. Normally, the sister kinetochores are on opposing sides of the chromosomes, facing outward toward their respective spindle poles. This arrangement enhances the likelihood of properly bi-oriented chromosomes and is sometimes referred to as a mechanism for 'avoidance' of syntelic attachment. However, sometimes the kinetochores are found on the same side of the centromere, and this error cannot be corrected stochastically. Instead, the spindle must actively exert forces on one of the two kinetochores to relocate it to the proper, outer edge of the centromere. If the geometry and orientation of the two kinetochores are not corrected, the cells can still effectively achieve bi-orientation through the employment of error correction mechanisms.
Polyploid cells, and tetraploids in particular, experience an increased number of syntelic attachments, which contributes to their genomic instability. This phenomenon of increased rates of s |
https://en.wikipedia.org/wiki/Genome%20evolution | Genome evolution is the process by which a genome changes in structure (sequence) or size over time. The study of genome evolution involves multiple fields such as structural analysis of the genome, the study of genomic parasites, gene and ancient genome duplications, polyploidy, and comparative genomics. Genome evolution is a constantly changing and evolving field due to the steadily growing number of sequenced genomes, both prokaryotic and eukaryotic, available to the scientific community and the public at large.
History
Since the first sequenced genomes became available in the late 1970s, scientists have been using comparative genomics to study the differences and similarities between various genomes. Genome sequencing has progressed over time to include more and more complex genomes, including the eventual sequencing of the entire human genome in 2001. By comparing genomes of both close relatives and distant ancestors, the stark differences and similarities between species began to emerge, as well as the mechanisms by which genomes are able to evolve over time.
Prokaryotic and eukaryotic genomes
Prokaryotes
Prokaryotic genomes have two main mechanisms of evolution: mutation and horizontal gene transfer. A third mechanism, sexual reproduction, is prominent in eukaryotes and also occurs in bacteria. Prokaryotes can acquire novel genetic material through the process of bacterial conjugation in which both plasmids and whole chromosomes can be passed between organisms. An often-cited example of this process is the transfer of antibiotic resistance utilizing plasmid DNA. Another mechanism of genome evolution is provided by transduction, whereby bacteriophages introduce new DNA into a bacterial genome. The main mechanism of sexual interaction is natural genetic transformation, which involves the transfer of DNA from one prokaryotic cell to another through the intervening medium. Transformation is a common mode of DNA transfer and at least 67 prokaryotic species are kn |
https://en.wikipedia.org/wiki/Oncology%20information%20system | Oncology Information System (OIS) is a software solution that manages departmental, administrative and clinical activities in cancer care. It aggregates information into a complete oncology-specific electronic health record to support medical information management. The OIS allows the capture of patient history information, the documentation of the treatment response, medical prescription of the treatment, the storage of patient documentation and the capture of activities for billing purposes.
Unlike a hospital information system (HIS), which is intended to manage patient records more generally, or radiological information system (RIS), intended to track and manage radiology requests and workflow, the OIS supports the delivery of integrated care and long-term treatment for cancer patients by collecting data during various phases of treatment, maintaining a history of treatment fractions, screening, prevention, diagnosis, image reviews, palliative care and end-of-life care. An OIS will be designed around the specific requirements of chemotherapy, radiotherapy and other supportive activities.
Basic features of an OIS
An OIS generally supports the following features:
Treatment workflow
Doctor's prescription
Patient register
Management of the treatment schedule
Management of patient documents
Financial control
Health Level 7 (HL7) and DICOM RT interoperability |
https://en.wikipedia.org/wiki/Pac-Man | Pac-Man, originally called Puck Man in Japan, is a 1980 maze action video game developed and released by Namco for arcades. In North America, the game was released by Midway Manufacturing as part of its licensing agreement with Namco America. The player controls Pac-Man, who must eat all the dots inside an enclosed maze while avoiding four colored ghosts. Eating large flashing dots called "Power Pellets" causes the ghosts to temporarily turn blue, allowing Pac-Man to eat them for bonus points.
Game development began in early 1979, directed by Toru Iwatani with a nine-man team. Iwatani wanted to create a game that could appeal to women as well as men, because most video games of the time had themes of war or sports. Although the inspiration for the Pac-Man character was the image of a pizza with a slice removed, Iwatani has said he also rounded out the Japanese character for mouth, kuchi (口). The in-game characters were made to be cute and colorful to appeal to younger players. The original Japanese title of Puck Man was derived from the Japanese phrase "Paku paku taberu", which refers to gobbling something up; the title was changed to Pac-Man for the North American release.
Pac-Man was a widespread critical and commercial success, leading to several sequels, merchandise, and two television series, as well as a hit single by Buckner & Garcia. The character of Pac-Man has become the official mascot of Bandai Namco Entertainment. The game remains one of the highest-grossing and best-selling games, generating more than $14 billion in revenue and 43 million units in sales combined, and has an enduring commercial and cultural legacy, commonly listed as one of the greatest video games of all time.
Gameplay
Pac-Man is an action maze chase video game; the player controls the eponymous character through an enclosed maze. The objective of the game is to eat all of the dots placed in the maze while avoiding four colored ghosts—Blinky (red), Pinky (pink), Inky (cyan), and Clyde (ora |
https://en.wikipedia.org/wiki/Ternary%20fission | Ternary fission is a comparatively rare (0.2 to 0.4% of events) type of nuclear fission in which three charged products are produced rather than two. As in other nuclear fission processes, other uncharged particles such as multiple neutrons and gamma rays are produced in ternary fission.
Ternary fission may happen during neutron-induced fission or in spontaneous fission (the type of radioactive decay). About 25% more ternary fission happens in spontaneous fission compared to the same fission system formed after thermal neutron capture, illustrating that these processes remain physically slightly different, even after the absorption of the neutron, possibly because of the extra energy present in the nuclear reaction system of thermal neutron-induced fission.
Quaternary fission, at 1 per 10 million fissions, is also known (see below).
Products
The most common nuclear fission process is "binary fission", which produces two charged, asymmetrical fission products with the most probable product masses near 95±15 u and 135±15 u. However, in this conventional fission of large nuclei, the binary process happens merely because it is the most energetically probable.
In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, the alternative ternary fission process produces three positively charged fragments (plus neutrons, which are not charged and not counted in this reckoning). The smallest of the charged products may range from so small a charge and mass as a single proton (Z=1), up to as large a fragment as the nucleus of argon (Z=18).
Although particles as large as argon nuclei may be produced as the smaller (third) charged product in the usual ternary fission, the most common small fragments from ternary fission are helium-4 nuclei, which make up about 90% of the small fragment products. This high incidence is related to the stability (high binding energy) of the alpha particle, which makes more energy available to the reaction. The second-most common |
https://en.wikipedia.org/wiki/Generative%20pre-trained%20transformer | Generative pre-trained transformers (GPT) are a type of large language model (LLM) and a prominent framework for generative artificial intelligence. The first GPT was introduced in 2018 by OpenAI. GPT models are artificial neural networks that are based on the transformer architecture, pre-trained on large data sets of unlabelled text, and able to generate novel human-like content. As of 2023, most LLMs have these characteristics and are sometimes referred to broadly as GPTs.
OpenAI has released very influential GPT foundation models that have been sequentially numbered, to comprise its "GPT-n" series. Each of these was significantly more capable than the previous, due to increased size (number of trainable parameters) and training. The most recent of these, GPT-4, was released in March 2023. Such models have been the basis for their more task-specific GPT systems, including models fine-tuned for instruction following, which in turn power the ChatGPT chatbot service.
The term "GPT" is also used in the names and descriptions of such models developed by others. For example, other GPT foundation models include a series of models created by EleutherAI, and recently seven models created by Cerebras. Also, companies in different industries have developed task-specific GPTs in their respective fields, such as Salesforce's "EinsteinGPT" (for CRM) and Bloomberg's "BloombergGPT" (for finance).
History
Initial developments
Generative pretraining (GP) was a long-established concept in machine learning applications, but the transformer architecture was not available until 2017 when it was invented by employees at Google. That development led to the emergence of large language models such as BERT in 2018 which was a pre-trained transformer (PT) but not designed to be generative (BERT was an "encoder-only" model). Also around that time, in 2018, OpenAI published its article entitled "Improving Language Understanding by Generative Pre-Training," in which it introduced the first |
https://en.wikipedia.org/wiki/Human%20accelerated%20regions | Human accelerated regions (HARs), first described in August 2006, are a set of 49 segments of the human genome that are conserved throughout vertebrate evolution but are strikingly different in humans. They are named according to their degree of difference between humans and chimpanzees (HAR1 showing the largest degree of human-chimpanzee differences). Found by scanning through genomic databases of multiple species, some of these highly mutated areas may contribute to human-specific traits. Others may represent loss of functional mutations, possibly due to the action of biased gene conversion rather than adaptive evolution.
Several of the HARs encompass genes known to produce proteins important in neurodevelopment. HAR1 is a 106-base pair stretch found on the long arm of chromosome 20 overlapping with part of the RNA genes HAR1F and HAR1R. HAR1F is active in the developing human brain. The HAR1 sequence is found (and conserved) in chickens and chimpanzees but is not present in the fish or frogs that have been studied. There are 18 base-pair differences between humans and chimpanzees, far more than expected given its history of conservation.
HAR2 includes HACNS1 a gene enhancer "that may have contributed to the evolution of the uniquely opposable human thumb, and possibly also modifications in the ankle or foot that allow humans to walk on two legs". Evidence to date shows that of the 110,000 gene enhancer sequences identified in the human genome, HACNS1 has undergone the most change during the evolution of humans following the split with the ancestors of chimpanzees. The substitutions in HAR2 may have resulted in loss of binding sites for a repressor, possibly due to biased gene conversion.
HAR genes
HAR01: HAR1F & HAR1R
HAR02: CENTG2 including the HACNS1 module
HAR03: MAD1L1
HAR04: ?
HAR05: WNK1
HAR06: WWOX
HAR07: ?
HAR08: POU6F2
HAR09: PTPRT
HAR10: FHIT
HAR11: DMD
HAR12: ?
HAR20: PPARGC1A
HAR21: NPAS3 - association with psychiatric disorders
HAR23: MGC27016
|
https://en.wikipedia.org/wiki/Cronobacter%20sakazakii | Cronobacter sakazakii, which before 2007 was named Enterobacter sakazakii, is an opportunistic Gram-negative, rod-shaped, pathogenic bacterium that can live in very dry places, a trait known as xerotolerance. C. sakazakii utilizes a number of genes to survive desiccation, and this xerotolerance may be strain-specific. The majority of C. sakazakii cases occur in adults, but low-birth-weight preterm neonates and older infants are at the highest risk. The pathogen is a rare cause of invasive infection in infants, with historically high case fatality rates (40–80%).
In infants it can cause bacteraemia, meningitis and necrotizing enterocolitis. Most neonatal C. sakazakii infection cases have been associated with the use of powdered infant formula, with some strains able to survive in a desiccated state for more than two years. However, not all cases have been linked to contaminated infant formula. In November 2011, several shipments of Kotex tampons were recalled due to a Cronobacter (E. sakazakii) contamination. In one study, the pathogen was found in 12% of field vegetables and 13% of hydroponic vegetables.
All Cronobacter species, except C. condimenti, have been linked retrospectively to clinical cases of infection in either adults or infants. However, multilocus sequence typing has shown that the majority of neonatal meningitis cases in the past 30 years, across 6 countries, have been associated with only one genetic lineage of the species Cronobacter sakazakii called 'Sequence Type 4' or 'ST4', and therefore this clone appears to be of greatest concern with infant infections.
The bacterium is ubiquitous, being isolated from a range of environments and foods; the majority of Cronobacter cases occur in the adult population. However, it is the association with intrinsically or extrinsically contaminated powdered formula which has attracted the main attention. According to multilocus sequence analysis (MLSA) the genus originated ~40 MYA, and the most clinically significant |
https://en.wikipedia.org/wiki/Zolt%C3%A1n%20Szab%C3%B3%20%28mathematician%29 | Zoltán Szabó (born November 24, 1965) is a professor of mathematics at Princeton University known for his work on Heegaard Floer homology.
Education and career
Szabó received his B.A. from Eötvös Loránd University in Budapest, Hungary in 1990, and he received his Ph.D. from Rutgers University in 1994.
Together with Peter Ozsváth, Szabó created Heegaard Floer homology, a homology theory for 3-manifolds. For this contribution to the field of topology, Ozsváth and Szabó were awarded the 2007 Oswald Veblen Prize in Geometry. In 2010, he was elected honorary member of the Hungarian Academy of Sciences.
Selected publications
Grid Homology for Knots and Links, American Mathematical Society, (2015) |
https://en.wikipedia.org/wiki/Seedless%20fruit | A seedless fruit is a fruit developed to possess no mature seeds. Since eating seedless fruits is generally easier and more convenient, they are considered commercially valuable.
Most commercially produced seedless fruits have been developed from plants whose fruits normally contain numerous relatively large hard seeds distributed throughout the flesh of the fruit.
Varieties
Common varieties of seedless fruits include watermelons, tomatoes, and grapes (such as Termarina rossa). Additionally, there are numerous seedless citrus fruits, such as oranges, lemons and limes.
A development of the last twenty years has been that of seedless sweet peppers (Capsicum annuum). The seedless plant combines male sterility in the pepper plant (commonly occurring) with the ability to set seedless fruits (natural fruit-setting without fertilization). In male-sterile plants, the parthenocarpy expresses itself only sporadically on the plant, with deformed fruits. It has been reported that plant hormones (such as auxins and gibberellins), normally provided by the developing seeds in the ovary, promote fruit set and growth to produce seedless fruits. Initially, without seeds in the fruit, vegetative propagation was essential. However, now – as with seedless watermelon – seedless peppers can be grown from seeds.
Biological description
Seedless fruits can develop in one of two ways: either the fruit develops without fertilization (parthenocarpy), or pollination triggers fruit development, but the ovules or embryos abort without producing mature seeds (stenospermocarpy). Seedless banana and watermelon fruits are produced on triploid plants, whose three sets of chromosomes make it very unlikely for meiosis to successfully produce spores and gametophytes. This is because one of the three copies of each chromosome cannot pair with another appropriate chromosome before separating into daughter cells, so these extra third copies end up randomly distributed between the two daughter cells from meiosis 1, resul |
https://en.wikipedia.org/wiki/Cyclic%20surgery%20theorem | In three-dimensional topology, a branch of mathematics, the cyclic surgery theorem states that, for a compact, connected, orientable, irreducible three-manifold M whose boundary is a torus T, if M is not a Seifert-fibered space and r,s are slopes on T such that their Dehn fillings have cyclic fundamental group, then the distance between r and s (the minimal number of times that two simple closed curves in T representing r and s must intersect) is at most 1. Consequently, there are at most three Dehn fillings of M with cyclic fundamental group. The theorem appeared in a 1987 paper written by Marc Culler, Cameron Gordon, John Luecke and Peter Shalen. |
https://en.wikipedia.org/wiki/Suprascapular%20notch | The suprascapular notch (or scapular notch) is a notch in the superior border of the scapula, just medial to the base of the coracoid process. It is converted into the suprascapular canal by the suprascapular ligament.
Structure
This notch is converted into a foramen by the suprascapular ligament, and serves for the passage of the suprascapular nerve. The suprascapular vessels vary in number as well as in their course as they run at the suprascapular notch site. The suprascapular artery passes above the suprascapular ligament in most cases. The suprascapular vein may pass through the suprascapular notch or it may instead pass superior to the suprascapular ligament.
Types
Two main classification systems exist, with others being modified approaches of the same principle.
Typing based on subjective observation of the suprascapular notch shape. Introduced by and modified by .
There are six basic types of scapular notch:
Type I: Notch is absent. The superior border forms a wide depression from the medial angle to the coracoid process.
Type II: Notch is a blunted V-shape occupying the middle third of the superior border.
Type III: Notch is U-shaped with nearly parallel margins.
Type IV: Notch is V-shaped and very small. A shallow groove is frequently formed for the suprascapular nerve adjacent to the notch.
Type V: Notch is minimal and U-shaped with a partially ossified ligament.
Type VI: Notch is a foramen as the ligament is completely ossified.
Typing based on parametric measurements of depth to upper width ratio of the suprascapular notch introduced by and modified by .
There are five basic types of scapular notch:
Type I: Depth larger than upper width.
Type II: Depth equal to upper width.
Type III: Depth is smaller than upper width.
Type IV: Notch is a foramen.
Type V: Discrete notch.
The second method of suprascapular notch typing yields a more practical approach to the clinical diagnosis of suprascapular nerve entrapment.
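As an illustration of the parametric rule just described, a minimal sketch (the function name, the millimeter units, and the boolean flags are assumptions for illustration, not part of any published tool):

```python
# A minimal sketch of the parametric (depth-to-upper-width) typing rule
# described above. Function name, units (mm), and flags are illustrative
# assumptions only.
def notch_type(depth_mm: float, upper_width_mm: float,
               is_foramen: bool = False, is_discrete: bool = False) -> str:
    if is_foramen:                    # ligament completely ossified
        return "Type IV"
    if is_discrete:                   # discrete notch
        return "Type V"
    if depth_mm > upper_width_mm:
        return "Type I"
    if depth_mm == upper_width_mm:
        return "Type II"
    return "Type III"                 # depth smaller than upper width

print(notch_type(8.0, 12.5))  # -> "Type III"
```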
Clinical significance |
https://en.wikipedia.org/wiki/Mathematical%20Reviews | Mathematical Reviews is a journal published by the American Mathematical Society (AMS) that contains brief synopses, and in some cases evaluations, of many articles in mathematics, statistics, and theoretical computer science. The AMS also publishes an associated online bibliographic database called MathSciNet, which contains an electronic version of Mathematical Reviews and additionally contains citation information for over 3.5 million items.
Reviews
Mathematical Reviews was founded by Otto E. Neugebauer in 1940 as an alternative to the German journal Zentralblatt für Mathematik, which Neugebauer had also founded a decade earlier, but which under the Nazis had begun censoring reviews by and of Jewish mathematicians. The goal of the new journal was to give reviews of every mathematical research publication. As of November 2007, the Mathematical Reviews database contained information on over 2.2 million articles. The authors of reviews are volunteers, usually chosen by the editors because of some expertise in the area of the article. It and Zentralblatt für Mathematik are the only comprehensive resources of this type. (The Mathematics section of Referativny Zhurnal is available only in Russian and is smaller in scale and difficult to access.) Often reviews give detailed summaries of the contents of the paper, sometimes with critical comments by the reviewer and references to related work. However, reviewers are not encouraged to criticize the paper, because the author does not have an opportunity to respond. The author's summary may be quoted when it is not possible to give an independent review, or when the summary is deemed adequate by the reviewer or the editors. Only bibliographic information may be given when a work is in an unusual language, when it is a brief paper in a conference volume, or when it is outside the primary scope of the Reviews. Originally the reviews were written in several languages, but later an "English only" policy was introduced. Selected |
https://en.wikipedia.org/wiki/Antimatter-catalyzed%20nuclear%20pulse%20propulsion | Antimatter-catalyzed nuclear pulse propulsion (also antiproton-catalyzed nuclear pulse propulsion) is a variation of nuclear pulse propulsion based upon the injection of antimatter into a mass of nuclear fuel to initiate a nuclear chain reaction for propulsion when the fuel does not normally have a critical mass.
Technically, the process is not a "catalyzed" reaction because the antiprotons (antimatter) used to start the reaction are consumed; if they were present as a catalyst, the particles would be unchanged by the process and could be used to initiate further reactions. Although antimatter particles may be produced by the reaction itself, they are not used to initiate or sustain chain reactions.
Description
Typical nuclear pulse propulsion has the downside that the minimal size of the engine is defined by the minimal size of the nuclear bombs used to create thrust, which is a function of the amount of critical mass required to initiate the reaction. A conventional thermonuclear bomb design consists of two parts: the primary, which is almost always based on plutonium, and a secondary using fusion fuel, which is normally deuterium in the form of lithium deuteride, and tritium (which is created during the reaction as lithium is transmuted to tritium). There is a minimal size for the primary (about 10 kilograms for plutonium-239) to achieve critical mass. More powerful devices scale up in size primarily through the addition of fusion fuel for the secondary. Of the two, the fusion fuel is much less expensive and gives off far fewer radioactive products, so from a cost and efficiency standpoint, larger bombs are much more efficient. However, using such large bombs for spacecraft propulsion demands much larger structures able to handle the stress. There is a tradeoff between the two demands.
By injecting a small amount of antimatter into a subcritical mass of fuel (typically plutonium or uranium), fission of the fuel can be forced. An anti-proton has a negative electric charge |
https://en.wikipedia.org/wiki/2023%20World%20Jigsaw%20Puzzle%20Championship | The 2023 World Jigsaw Puzzle Championship was the third edition of the World Jigsaw Puzzle Championships organized by the World Jigsaw Puzzle Federation (WJPF). It was held between 20 and 24 September in Valladolid, Spain.
Events and rules
The Championship included three events: individual, pairs, and team. Each event had classification rounds and a grand final.
Individual event
First round (6 groups): Each participant assembles a 500-piece jigsaw puzzle within a maximum of 90 minutes. The fastest participants from each country (up to 30 countries), plus the remainder (up to 60) in order of classification, move on to the semifinals.
Semifinals (2 groups): Each participant assembles a 500-piece jigsaw puzzle within a maximum of 90 minutes. The fastest participants from each country, plus the remainder (up to 90) in order of classification, move on to the final.
Final (180 participants): Each participant assembles a 500-piece jigsaw puzzle within a maximum of 90 minutes, and the fastest to finish it is the world champion.
Pairs event
Classification round (3 groups): Each pair assembles a 500-piece jigsaw puzzle within a maximum of 90 minutes. The fastest pairs from each country (up to 15 countries), plus the remainder (up to 30) in order of classification, move on to the final.
Final (90 pairs): Each pair assembles a 1000-piece jigsaw puzzle within a maximum of 2 hours, and the fastest pair to finish it wins the competition.
Team event
Classification round (2 groups): Teams of 4 members assemble two 1000-piece jigsaw puzzles in a maximum period of 3 hours. The best two teams from each country (up to 25 countries) and the rest of the teams (up to 50), in order of classification, qualify for the final.
Final (100 teams): Teams of 4 members assemble two 1000-piece jigsaw puzzles in a maximum period of 3 hours. The fastest team to finish them is the champion. A team must complete one puzzle before starting the other.
Sch |
https://en.wikipedia.org/wiki/Minsk%20family%20of%20computers | The Minsk family of mainframe computers was developed and produced in the Byelorussian SSR from 1959 to 1975.
Models
The MINSK-1 was a vacuum-tube digital computer that went into production in 1960.
The MINSK-2 was a solid-state digital computer that went into production in 1962.
The MINSK-22 was a modified version of Minsk-2 that went into production in 1965.
The MINSK-23 went into production in 1966.
The most advanced model was Minsk-32, developed in 1968. It supported COBOL, FORTRAN and ALGAMS (a version of ALGOL). This and earlier versions also used a machine-oriented language called AKI (AvtoKod "Inzhener", i.e., "Engineer's Autocode"). It stood somewhere between the native assembly language SSK (Sistema Simvolicheskogo Kodirovaniya, or "System of symbolic coding") and higher-level languages, like FORTRAN.
The word size was 31 bits for Minsk-1 and 37 bits for the other models.
At one point the Minsk-222 (an upgraded prototype based on the most popular model, the Minsk-22) and the Minsk-32 were considered as a potential base for a future unified line of mutually compatible mainframes, which would later become the ES EVM line. Despite their popularity among users, their good match with the Soviet technological base, and their familiarity to both programmers and technicians, they lost out to the proposal to copy the IBM System/360 line of mainframes: the possibility of simply copying all the software existing for it was deemed more important.
See also
Mark Nemenman |
https://en.wikipedia.org/wiki/Physical%20Review | Physical Review is a peer-reviewed scientific journal established in 1893 by Edward Nichols. It publishes original research as well as scientific and literature reviews on all aspects of physics. It is published by the American Physical Society (APS). The journal is in its third series, and is split in several sub-journals each covering a particular field of physics. It has a sister journal, Physical Review Letters, which publishes shorter articles of broader interest.
History
Physical Review commenced publication in July 1893, organized by Cornell University professor Edward Nichols and helped by the new president of Cornell, J. Gould Schurman. The journal was managed and edited at Cornell in upstate New York from 1893 to 1913 by Nichols, Ernest Merritt, and Frederick Bedell. The 33 volumes published during this time constitute Physical Review Series I.
The American Physical Society (APS), founded in 1899, took over its publication in 1913 and started Physical Review Series II. The journal remained at Cornell under editor-in-chief G. S. Fulcher from 1913 to 1926, before relocating to the University of Minnesota under editor John Torrence Tate, Sr. In 1929, the APS started publishing Reviews of Modern Physics, a venue for longer review articles.
During the Great Depression, wealthy scientist Alfred Loomis anonymously paid the journal's fees for authors who could not afford them.
After Tate's death in 1950, the journals were managed on an interim basis still in Minnesota by E. L. Hill and J. William Buchta until Samuel Goudsmit and Simon Pasternack were appointed and the editorial office moved to Brookhaven National Laboratory on Eastern Long Island, New York. In July 1958, the sister journal Physical Review Letters was introduced to publish short articles of particularly broad interest, initially edited by George L. Trigg, who remained as editor until 1988.
In 1970, Physical Review split into sub-journals Physical Review A, B, C, and D. A fifth m |
https://en.wikipedia.org/wiki/Human%20Heredity%20and%20Health%20in%20Africa | Human Heredity and Health in Africa, or H3Africa, is an initiative to study the genomics and medical genetics of African people. Its goals are to build the continent's research infrastructure, train researchers and clinicians, and to study questions of scientific and medical interest to Africans. The H3Africa Consortium was formally launched in 2012 in Addis Ababa and has grown to include research projects across 32 countries, a pan-continental bioinformatics network, and the first whole genome sequencing of many African ethnolinguistic groups.
Origin
The H3Africa initiative was conceived to address inequalities in global health and genetic research. Though significant progress had been made in genomics, African scientists were typically not involved in collaborations beyond sample collection, and very few medical genetics studies were carried out on African populations despite their considerable genetic variation. One of the goals of the consortium became to train and retain African scientists and to develop genomic infrastructure of the continent in support of such studies. The policy framework for the initiative was centred around fairness in genomics and avoiding exploitation while building the continent's research capacity. This led to a strong emphasis on African leadership and giving African researchers preferential access to resources like funding, samples, and data.
During a meeting in Cairo in 2007, the African Society of Human Genetics (AfSHG) membership agreed to spearhead an African Genome Project (AGP). The AGP would have four components: population genetics, medical genetics, training, and infrastructure. Among its goals, it would sample at least 100 ethnic groups from the continent, develop a large-scale resource to study the gene-environment interplay of diseases in Africa, train African scientists, and establish laboratories and local research capacity. At the 2009 AfSHG meeting in Yaoundé, Cameroon, the concept was renamed Human Heredity and Health |
https://en.wikipedia.org/wiki/Temperate%20Southern%20Africa | Temperate Southern Africa is a biogeographic region of the Earth's seas, comprising the temperate waters of southern Africa, where the Atlantic Ocean and Indian Ocean meet. It includes the coast of South Africa and Namibia, and reaches into southern Angola. It also includes the remote islands of Amsterdam and Saint-Paul, to the east in the southern Indian Ocean.
Temperate Southern Africa is a marine realm, one of the great biogeographic divisions of the world's ocean basins.
The boundary between the Temperate Southern Africa and Western Indo-Pacific marine realms is near Lake St. Lucia, in South Africa near the border with Mozambique. The realm extends up the Atlantic coast of Africa to Tômbua in southern Angola, where it transitions to the Tropical Atlantic realm.
Subdivisions
The Temperate Southern Africa realm is further subdivided into three marine provinces. Benguela province includes the Atlantic portion of the realm, influenced by the cold Benguela Current. Agulhas province includes the rest of South Africa's southern and eastern coasts, which are influenced by the warm Agulhas Current. The Cape of Good Hope is the boundary between the Benguela and Agulhas provinces. Amsterdam and St. Paul islands are a separate province.
The Benguela and Agulhas marine provinces are each divided into two marine ecoregions.
Benguela province
Namib ecoregion
Namaqua ecoregion
Agulhas province
Agulhas ecoregion
Natal ecoregion
Amsterdam–St Paul province
Amsterdam–St Paul ecoregion
See also
Marine ecoregions of the South African exclusive economic zone |
https://en.wikipedia.org/wiki/Crest%20and%20trough | A crest point on a wave is the maximum value of upward displacement within a cycle. A crest is a point on a surface wave where the displacement of the medium is at a maximum. A trough is the opposite of a crest: the minimum or lowest point in a cycle.
When the crests and troughs of two sine waves of equal amplitude and frequency intersect or collide, while being in phase with each other, the result is called constructive interference and the magnitudes double (above and below the line). When in antiphase – 180° out of phase – the result is destructive interference: the resulting wave is the undisturbed line having zero amplitude.
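A small numerical sketch of this superposition behavior (frequencies, amplitudes, and sample counts are arbitrary illustrative choices):

```python
# Adding two equal-amplitude, equal-frequency sine waves: in phase the
# amplitude doubles; 180 degrees out of phase they cancel. Values synthetic.
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
wave = np.sin(2 * np.pi * 5 * t)                        # 5 Hz, amplitude 1

in_phase = wave + np.sin(2 * np.pi * 5 * t)             # constructive
antiphase = wave + np.sin(2 * np.pi * 5 * t + np.pi)    # destructive

print(f"peak, constructive: {in_phase.max():.2f}")           # ~2.0
print(f"peak, destructive:  {np.abs(antiphase).max():.2e}")  # ~0 (cancellation)
```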
See also
Crest factor
Superposition principle
Wave |
https://en.wikipedia.org/wiki/Alpha%20shape | In computational geometry, an alpha shape, or α-shape, is a family of piecewise linear simple curves in the Euclidean plane associated with the shape of a finite set of points. They were first defined by Edelsbrunner, Kirkpatrick and Seidel (1983). The alpha-shape associated with a set of points is a generalization of the concept of the convex hull, i.e. every convex hull is an alpha-shape but not every alpha shape is a convex hull.
Characterization
For each real number α, define the concept of a generalized disk of radius 1/α as follows:
If α = 0, it is a closed half-plane;
If α > 0, it is a closed disk of radius 1/α;
If α < 0, it is the closure of the complement of a disk of radius −1/α.
Then an edge of the alpha-shape is drawn between two members of the finite point set whenever there exists a generalized disk of radius 1/α containing none of the point set and which has the property that the two points lie on its boundary.
If α = 0, then the alpha-shape associated with the finite point set is its ordinary convex hull.
Alpha complex
Alpha shapes are closely related to alpha complexes, subcomplexes of the Delaunay triangulation of the point set.
Each edge or triangle of the Delaunay triangulation may be associated with a characteristic radius, the radius of the smallest empty circle containing the edge or triangle. For each real number α, the α-complex of the given set of points is the simplicial complex formed by the set of edges and triangles whose radii are at most 1/α.
The union of the edges and triangles in the α-complex forms a shape closely resembling the α-shape; however, it differs in that it has polygonal edges rather than edges formed from arcs of circles. More specifically, Edelsbrunner showed that the two shapes are homotopy equivalent. (In this later work, Edelsbrunner used the name "α-shape" to refer to the union of the cells in the α-complex, and instead called the related curvilinear shape an α-body.)
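A minimal sketch of the triangle-level filter described above, assuming α > 0 and 2-D points in general position (each Delaunay triangle has an empty circumcircle, so its characteristic radius is its circumradius); SciPy's Delaunay triangulation is used, and the edge simplices are omitted for brevity:

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_complex_triangles(points: np.ndarray, alpha: float) -> np.ndarray:
    """Keep Delaunay triangles whose circumradius is at most 1/alpha."""
    tri = Delaunay(points)
    kept = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # side lengths opposite each vertex
        la, lb, lc = (np.linalg.norm(b - c), np.linalg.norm(a - c),
                      np.linalg.norm(a - b))
        # triangle area from the 2x2 determinant
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        if area == 0:          # degenerate (collinear) triangle
            continue
        circumradius = (la * lb * lc) / (4.0 * area)
        if circumradius <= 1.0 / alpha:
            kept.append(simplex)
    return np.array(kept)

pts = np.random.default_rng(0).random((40, 2))
print(len(alpha_complex_triangles(pts, alpha=5.0)), "triangles kept")
```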
Examples
This technique can be employed to reconstruct a Fermi surface from the electron |
https://en.wikipedia.org/wiki/Playmobil | Playmobil is a German line of toys produced by the Brandstätter Group (Geobra Brandstätter GmbH & Co KG), headquartered in Zirndorf, Germany. The signature Playmobil toy is a 7.5 cm (3.0 in) tall (1:24 scale) human figure with a smiling face. A wide range of accessories, buildings and vehicles, as well as many sorts of animals, are also part of the Playmobil line.
Playmobil toys are produced in themed series of sets as well as individual special figures and playsets. New products and product lines developed by a 50-strong development team are introduced frequently, and older sets are discontinued. Promotional and one-off products are sometimes produced in very limited quantities. These practices have helped give rise to a sizeable community of collectors. Collector activities extend beyond collecting and free-form play and include customization, miniature wargaming, the creation of photo stories and stop motion films, or simply the use of figures as decoration.
History
Playmobil was invented by German inventor Hans Beck (1929–2009), considered the "Father of Playmobil". Beck received training as a cabinetmaker and was also an avid hobbyist of model airplanes, a product he pitched to the company Geobra Brandstätter. Horst Brandstätter, the owner of the company, asked Beck to develop toy figures for children instead. (The company had originally been a producer of casket ornaments and handles.)
Beck spent three years from 1971 to 1974 developing what became Playmobil. Beck conducted research that allowed him to develop a toy that would not be too complex but would nevertheless be flexible. He felt that too much flexibility would get in the way of children's imaginations, and too much rigidity would cause frustration. The toy he conceived would fit in a child's hand and its facial design was based on children's drawings: a large head, a big smile, and no nose. "I would put the little figures in their hands without saying anything about what they were," Beck remarked. "They accepted them right |
https://en.wikipedia.org/wiki/American%20Association%20of%20Veterinary%20Parasitologists | The American Association of Veterinary Parasitologists is a professional association for veterinary parasitology. Despite the name, it primarily serves both the United States and Canada, and to a lesser degree the entire world. The AAVP connects veterinary parasitologists to each other and provides recommendations as to research and practice methods.
Journals
As part of its professional development and education mission the AAVP publishes:
Veterinary Parasitology along with Elsevier |
https://en.wikipedia.org/wiki/Ferber%20method | The Ferber method, or Ferberization, is a technique invented by Richard Ferber to solve infant sleep problems. It involves "sleep-training" children to self-soothe by allowing the child to cry for a predetermined amount of time at intervals before receiving external comfort.
"Cry it out"
The "Cry It Out" (CIO) approach can be traced back to the book The Care and Feeding of Children written by Emmett Holt in 1894. CIO is any sleep-training method which allows a baby to cry for a specified period before the parent will offer comfort. "Ferberization" is one such approach. Ferber does not advocate simply leaving a baby to cry, but rather supports giving the baby time to learn to self-soothe, by offering comfort and support from the parent at predetermined intervals. The best age to attempt Ferber's sleep training method is around 6 months-old.
Other CIO methods, such as Marc Weissbluth's extinction method, are often mistakenly referred to as "Ferberization", though they fall outside of the guidelines Ferber recommended. "Ferberization" is referred to as graduated extinction by Weissbluth. Some pediatricians feel that any form of CIO is unnecessary and damaging to a baby.
Ferberization summarized
Ferber discusses and outlines a wide range of practices to teach an infant to sleep. The term Ferberization is now popularly used to refer to the following techniques:
Take steps to prepare the baby to sleep. This includes night-time rituals and day-time activities.
At bedtime, leave the child in bed and leave the room.
Return at progressively increasing intervals to comfort the baby, but do not pick them up. For example, on the first night, some scenarios call for returning first after three minutes, then after five minutes, and thereafter each ten minutes, until the baby is asleep.
Each subsequent night, return at intervals longer than the night before. For example, the second night may call for returning first after five minutes, then after ten minutes, and thereafter |
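A toy sketch of such a graduated schedule (the interval formulas below are invented to reproduce the example numbers above and are not clinical guidance):

```python
# A toy sketch of the graduated-interval idea described above. The starting
# intervals and nightly increments are illustrative assumptions only.
def check_in_intervals(night: int, checks: int = 5) -> list[int]:
    """Minutes to wait before each successive check-in on a given night."""
    first = 3 + 2 * (night - 1)    # 3, 5, 7, ... minutes
    second = 5 * night             # 5, 10, 15, ... minutes
    rest = 10 + 2 * (night - 1)    # 10, 12, 14, ... minutes, repeated
    return ([first, second] + [rest] * checks)[:checks]

for night in (1, 2, 3):
    print(f"night {night}: {check_in_intervals(night)} minutes")
```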
https://en.wikipedia.org/wiki/Center%20frequency | In electrical engineering and telecommunications, the center frequency of a filter or channel is a measure of a central frequency between the upper and lower cutoff frequencies. It is usually defined as either the arithmetic mean or the geometric mean of the lower cutoff frequency and the upper cutoff frequency of a band-pass system or a band-stop system.
Typically, the geometric mean is used in systems based on certain transformations of lowpass filter designs, where the frequency response is constructed to be symmetric on a logarithmic frequency scale. The geometric center frequency corresponds to a mapping of the DC response of the prototype lowpass filter, which is a resonant frequency sometimes equal to the peak frequency of such systems, for example as in a Butterworth filter.
The arithmetic definition is used in more general situations, such as in describing passband telecommunication systems, where filters are not necessarily symmetric but are treated on a linear frequency scale for applications such as frequency-division multiplexing. |
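A minimal comparison of the two definitions, using the classic 300–3400 Hz telephone passband as an arbitrary example:

```python
# Comparing the arithmetic and geometric center-frequency definitions for
# an example passband of 300 Hz to 3400 Hz (values chosen for illustration).
import math

f_low, f_high = 300.0, 3400.0
arithmetic_center = (f_low + f_high) / 2      # 1850.0 Hz
geometric_center = math.sqrt(f_low * f_high)  # ~1010.0 Hz

print(f"arithmetic: {arithmetic_center:.1f} Hz")
print(f"geometric:  {geometric_center:.1f} Hz")
```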
https://en.wikipedia.org/wiki/Dynamic%20modulus | Dynamic modulus (sometimes complex modulus) is the ratio of stress to strain under vibratory conditions (calculated from data obtained from either free or forced vibration tests, in shear, compression, or elongation). It is a property of viscoelastic materials.
Viscoelastic stress–strain phase-lag
Viscoelasticity is studied using dynamic mechanical analysis where an oscillatory force (stress) is applied to a material and the resulting displacement (strain) is measured.
In purely elastic materials the stress and strain occur in phase, so that the response of one occurs simultaneously with the other.
In purely viscous materials, there is a phase difference between stress and strain, where strain lags stress by a 90 degree ($\pi/2$ radian) phase lag.
Viscoelastic materials exhibit behavior somewhere in between that of purely viscous and purely elastic materials, exhibiting some phase lag in strain.
Stress and strain in a viscoelastic material can be represented using the following expressions:
Strain: $\varepsilon = \varepsilon_0 \sin(\omega t)$
Stress: $\sigma = \sigma_0 \sin(\omega t + \delta)$
where
$\omega$ is the frequency of strain oscillation,
$t$ is time,
$\delta$ is the phase lag between stress and strain.
The stress relaxation modulus $G(t)$ is the ratio of the stress remaining at time $t$ after a step strain $\varepsilon$ was applied at time $t = 0$:
$G(t) = \sigma(t) / \varepsilon$,
which is the time-dependent generalization of Hooke's law.
For visco-elastic solids, $G(t)$ converges to the equilibrium shear modulus:
$G_\infty = \lim_{t \to \infty} G(t)$.
The Fourier transform of the shear relaxation modulus $G(t)$ is $\hat{G}^*(\omega) = \hat{G}'(\omega) + i\hat{G}''(\omega)$ (see below).
Storage and loss modulus
The storage and loss modulus in viscoelastic materials measure the stored energy, representing the elastic portion, and the energy dissipated as heat, representing the viscous portion. The tensile storage and loss moduli are defined as follows:
Storage: $E' = \dfrac{\sigma_0}{\varepsilon_0} \cos\delta$
Loss: $E'' = \dfrac{\sigma_0}{\varepsilon_0} \sin\delta$
Similarly we also define shear storage and shear loss moduli, $G'$ and $G''$.
Complex variables can be used to express the moduli $E^*$ and $G^*$ as follows:
$E^* = E' + iE''$
$G^* = G' + iG''$
where $i$ is the imaginary unit.
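A small numeric sketch of these definitions, with invented values for peak stress, peak strain, and phase lag (not measured data):

```python
# Computing storage and loss moduli from the definitions above.
# sigma_0, epsilon_0, and delta are illustrative assumptions only.
import math

sigma_0 = 2.0e6           # peak stress, Pa (example value)
epsilon_0 = 0.01          # peak strain, dimensionless (example value)
delta = math.radians(15)  # phase lag (example value)

E_storage = (sigma_0 / epsilon_0) * math.cos(delta)  # elastic (stored) part
E_loss = (sigma_0 / epsilon_0) * math.sin(delta)     # viscous (dissipated) part

print(f"E'  = {E_storage:.3e} Pa")
print(f"E'' = {E_loss:.3e} Pa")
print(f"tan(delta) = {E_loss / E_storage:.3f}")      # loss tangent, E''/E'
```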
Ratio between loss and storage modulus
The ratio of the loss modulus to storag |
https://en.wikipedia.org/wiki/Beryllium-8 | Beryllium-8 (8Be, Be-8) is a radionuclide with 4 neutrons and 4 protons. It is an unbound resonance and nominally an isotope of beryllium. It decays into two alpha particles with a half-life on the order of 8.19×10⁻¹⁷ seconds. This has important ramifications in stellar nucleosynthesis as it creates a bottleneck in the creation of heavier chemical elements. The properties of 8Be have also led to speculation on the fine tuning of the Universe, and theoretical investigations on cosmological evolution had 8Be been stable.
Discovery
The discovery of beryllium-8 occurred shortly after the construction of the first particle accelerator in 1932. Physicists John Douglas Cockcroft and Ernest Walton performed their first experiment with their accelerator at the Cavendish Laboratory in Cambridge, in which they irradiated lithium-7 with protons. They reported that this populated a nucleus with A = 8 that near-instantaneously decays into two alpha particles. This activity was observed again several months later, and was inferred to originate from 8Be.
Properties
Beryllium-8 is unbound with respect to alpha emission by 92 keV; it is a resonance having a width of 6 eV. The nucleus of helium-4 is particularly stable, having a doubly magic configuration and a larger binding energy per nucleon than 8Be. As the total energy of 8Be is greater than that of two alpha particles, the decay into two alpha particles is energetically favorable, and the synthesis of 8Be from two 4He nuclei is endothermic. The decay of 8Be is facilitated by the structure of the 8Be nucleus; it is highly deformed, and is believed to be a molecule-like cluster of two alpha particles that are very easily separated. Furthermore, while other alpha nuclides have similar short-lived resonances, 8Be is exceptional in that its ground state is already such a resonance. The unbound system of two α-particles has a low Coulomb barrier, which is what enables its existence for any significant length of time. Namely, 8Be decays with a half-life o |
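As a back-of-the-envelope check on the 92 keV figure above, the decay energy can be taken from atomic masses (the mass values below are assumptions quoted from standard mass tables):

```python
# Estimating the 8Be -> 2 alpha decay energy from atomic masses.
# Mass values are assumed from standard tables, not computed here.
U_TO_KEV = 931494.10  # 1 atomic mass unit in keV/c^2

m_be8 = 8.00530510    # atomic mass of 8Be in u (assumed table value)
m_he4 = 4.00260325    # atomic mass of 4He in u (assumed table value)

q_kev = (m_be8 - 2 * m_he4) * U_TO_KEV
print(f"Q(8Be -> 2 alpha) ~ {q_kev:.0f} keV")  # ~92 keV, matching the text
```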
https://en.wikipedia.org/wiki/APMonitor | Advanced process monitor (APMonitor) is a modeling language for differential algebraic (DAE) equations. It is a free web-service or local server for solving representations of physical systems in the form of implicit DAE models. APMonitor is suited for large-scale problems and solves linear programming, integer programming, nonlinear programming, nonlinear mixed integer programming, dynamic simulation, moving horizon estimation, and nonlinear model predictive control. APMonitor does not solve the problems directly, but calls nonlinear programming solvers such as APOPT, BPOPT, IPOPT, MINOS, and SNOPT. The APMonitor API provides exact first and second derivatives of continuous functions to the solvers through automatic differentiation and in sparse matrix form.
Programming language integration
Julia, MATLAB, and Python are mathematical programming languages that have APMonitor integration through web-service APIs. The GEKKO Optimization Suite is a recent extension of APMonitor with complete Python integration. The interfaces are built-in optimization toolboxes or modules to both load and process solutions of optimization problems. APMonitor is an object-oriented modeling language and optimization suite that relies on programming languages to load, run, and retrieve solutions. APMonitor models and data are compiled at run-time and translated into objects that are solved by an optimization engine such as APOPT or IPOPT. The optimization engine is not specified by APMonitor, allowing several different optimization engines to be switched out. The simulation or optimization mode is also configurable, to reconfigure the model for dynamic simulation, nonlinear model predictive control, moving horizon estimation or general problems in mathematical optimization.
As a first step in solving the problem, a mathematical model is expressed in terms of variables and equations such as the Hock & Schittkowski Benchmark Problem #71 used to test the performance of nonlinear programming |
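As an illustration, a minimal sketch of that benchmark in GEKKO, the Python extension of APMonitor mentioned above (these are standard documented GEKKO calls, though defaults and solver options vary by version):

```python
# A minimal sketch of the Hock & Schittkowski #71 benchmark in GEKKO.
# Treat solver choice and option defaults as version-dependent assumptions.
from gekko import GEKKO

m = GEKKO(remote=False)                           # solve locally
x1, x2, x3, x4 = [m.Var(value=v, lb=1, ub=5) for v in (1, 5, 5, 1)]

m.Equation(x1 * x2 * x3 * x4 >= 25)               # inequality constraint
m.Equation(x1**2 + x2**2 + x3**2 + x4**2 == 40)   # equality constraint
m.Minimize(x1 * x4 * (x1 + x2 + x3) + x3)         # objective

m.solve(disp=False)
# known optimum is near (1.000, 4.743, 3.821, 1.379)
print([v.value[0] for v in (x1, x2, x3, x4)])
```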
https://en.wikipedia.org/wiki/Data%20generating%20process | In statistics and in empirical sciences, a data generating process is a process in the real world that "generates" the data one is interested in. Usually, scholars do not know the real data generating model. However, it is assumed that those real models have observable consequences: the distributions of the data in the population. Those distributions, or models, can be represented via mathematical functions, and there are many such distribution functions; for example, the normal distribution, the Bernoulli distribution, the Poisson distribution, etc. |
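A minimal sketch of the idea, assuming a normal distribution as the (normally unknown) data generating process and using invented parameter values:

```python
# Pretend nature's unknown data generating process is normal; a scholar
# only observes samples and infers the parameters. All values illustrative.
import numpy as np

rng = np.random.default_rng(42)
true_mean, true_sd = 10.0, 2.0                      # the "real" DGP, hidden in practice
sample = rng.normal(true_mean, true_sd, size=1000)  # its observable consequences

# Inference from the observable data back toward the DGP's parameters:
print(f"estimated mean: {sample.mean():.2f}, "
      f"estimated sd: {sample.std(ddof=1):.2f}")
```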
https://en.wikipedia.org/wiki/AMOLED | AMOLED (active-matrix organic light-emitting diode, ) is a type of OLED display device technology. OLED describes a specific type of thin-film-display technology in which organic compounds form the electroluminescent material, and active matrix refers to the technology behind the addressing of pixels.
Since 2007, AMOLED technology has been used in mobile phones, media players, TVs and digital cameras, and it has continued to make progress toward low-power, low-cost, high-resolution and large-size (for example, 88-inch and 8K resolution) applications.
Design
An AMOLED display consists of an active matrix of OLED pixels generating light (luminescence) upon electrical activation that have been deposited or integrated onto a thin-film transistor (TFT) array, which functions as a series of switches to control the current flowing to each individual pixel.
Typically, this continuous current flow is controlled by at least two TFTs at each pixel (to trigger the luminescence), with one TFT to start and stop the charging of a storage capacitor and the second to provide a voltage source at the level needed to create a constant current to the pixel, thereby eliminating the need for the very high currents required for passive-matrix OLED operation.
TFT backplane technology is crucial in the fabrication of AMOLED displays. In AMOLEDs, the two primary TFT backplane technologies, polycrystalline silicon (poly-Si) and amorphous silicon (a-Si), are currently in use, offering the potential for directly fabricating the active-matrix backplanes at low temperatures (below 150 °C) onto flexible plastic substrates for producing flexible AMOLED displays.
History
AMOLED was developed in 2006. Samsung SDI was one of the main investors in the technology, and many other display companies were also developing it. One of the earliest consumer electronics products with an AMOLED display was the BenQ-Siemens S88 mobile handset and, in 2007, the iriver Clix 2 portable media player. In 2008 it a |
https://en.wikipedia.org/wiki/Brosl%20Hasslacher | Brosl Hasslacher (May 13, 1941 – November 11, 2005) was a theoretical physicist.
Brosl Hasslacher was born in New York City in 1941 and obtained a bachelor's degree in physics from Harvard University in 1962. He did his Ph.D. with D. Z. Freedman and C. N. Yang at the State University of New York at Stony Brook. After having several postdoctoral and research positions at the Institute for Advanced Study in Princeton, New Jersey, Caltech, the ENS in Paris, and CERN, he settled for more than twenty years at the Theoretical Division of the Los Alamos National Laboratory. There he was involved in theoretical, experimental, and numerical work in theoretical physics, high-energy physics, nonlinear dynamics, fluid dynamics, nanotechnology, and robotics.
In the 1970s, he worked on the extended hadron model, collaborating with A. Neveu.
During the 1980s, Hasslacher pioneered with Uriel Frisch and Yves Pomeau the lattice-gas method for discrete simulation of fluid flow.
As part of the Los Alamos National Laboratory's Center for Nonlinear Studies, Hasslacher worked with Mitchell Feigenbaum and contributed ideas to chaos theory.
In the 1990s, Hasslacher worked with Mark Tilden on several papers concerning biomorphic engineering. He is largely credited with using nonlinear dynamics to describe and design Tilden's BEAM robotics.
In 1994, Hasslacher's UNIX account (bhass) at Los Alamos National Laboratory was used by hacker Kevin Mitnick to break into computer security expert Tsutomu Shimomura's computers.
He retired from Los Alamos National Laboratory in 2003.
Notable papers
B. Hasslacher, A. Neveu, "Dynamic charges in field theories", Nuclear Physics (1979).
B. Hasslacher, M.J. Perry, "Spin networks are simplicial quantum gravity", Physics Letters (1981).
Frisch, U., B. Hasslacher, and Y. Pomeau, "Lattice Gas Automata for the Navier–Stokes Equation", Phys. Rev. Lett. (1986).
B. Hasslacher, "Spontaneous curvature in a class of lattice field theories", Physica D (1991).
B. Hasslacher, M. |
https://en.wikipedia.org/wiki/Functional%20symptom | A functional symptom is a medical symptom with no known physical cause. In other words, there is no structural or pathologically defined disease to explain the symptom. The use of the term 'functional symptom' does not assume psychogenesis, only that the body is not functioning as expected. Functional symptoms are increasingly viewed within a framework in which 'biological, psychological, interpersonal and healthcare factors' should all be considered to be relevant for determining the aetiology and treatment plans.
Historically, there has often been fierce debate about whether certain problems are predominantly related to an abnormality of structure (disease) or are psychosomatic in nature, and what are at one stage posited to be functional symptoms are sometimes later reclassified as organic as investigative techniques improve. It is well established that psychosomatic symptoms are a real phenomenon, so this potential explanation is often plausible; however, the commonality of a range of psychological symptoms and functional weakness does not imply that one causes the other. For example, symptoms associated with migraine, epilepsy, schizophrenia, multiple sclerosis, stomach ulcers, chronic fatigue syndrome, Lyme disease and many other conditions have all tended historically at first to be explained largely as physical manifestations of the patient's psychological state of mind, until such time as new physiological knowledge was eventually gained. Another specific example is functional constipation, which may have psychological or psychiatric causes. However, one type of apparently functional constipation, anismus, may have a neurological (physical) basis.
Whilst misdiagnosis of functional symptoms does occur, in neurology, for example, this appears to occur no more frequently than of other neurological or psychiatric syndromes. However, in order to be quantified, misdiagnosis has to be recognized as such, which can be problematic in such a challenging field as me |
https://en.wikipedia.org/wiki/Pdr1p | Pdr1p (Pleiotropic Drug Resistance 1p) is a transcription factor found in yeast and is a key regulator of genes involved in general drug response. It induces the expression of ATP-binding cassette transporter, which can export toxic substances out of the cell, allowing cells to survive under general toxic chemicals. It binds to DNA sequences that contain certain motifs called pleiotropic drug response element (PDRE). Pdr1p is encoded by a gene called PDR1 (also known as YGL013C) on chromosome VII.
Transcriptional role
Pdr1p is a main regulator of PDR genes and is known to target about 50 genes. Pdr1p binds to the sequence 5'-TCCGYGGR-3' of the PDRE, which is located within the promoter sequences of its target genes. 218 genes are reported to possess a PDRE. Pdr1p is observed to bind PDRE sites on DNA at a basal level and also after stimulation with toxins. This shows that the Pdr1p-DNA interaction is not dependent on toxic stimulation. This also suggests an involvement of activator(s) or co-activator(s) that induce PDR genes along with Pdr1p. Pdr1p has a functional homolog called Pdr3p, encoded by the gene PDR3. Pdr3p is known to be regulated both by Pdr1p and by Pdr3p itself. Pdr1p can form a homodimer with itself or a heterodimer with Pdr3p.
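As an illustrative aside (an addition, not from the source): the PDRE consensus 5'-TCCGYGGR-3' uses IUPAC ambiguity codes (Y = C or T, R = A or G), so scanning a promoter sequence for candidate sites is a simple pattern match:

```python
# Scan a promoter sequence for the PDRE consensus 5'-TCCGYGGR-3'.
# IUPAC codes: Y matches C/T, R matches A/G. The sequence below is made up.
import re

PDRE = re.compile(r"TCCG[CT]GG[AG]")

promoter = "ATGCTCCGTGGATTACGGATCCGCGGGCA"  # hypothetical promoter fragment
for match in PDRE.finditer(promoter):
    print(match.start(), match.group())    # position and matched motif
```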
Loss-of-function studies of both PDR1 and PDR3 revealed that the Pdr1p mutant shows lower tolerance (grows less in culture) of organic toxins such as cycloheximide and oligomycin. This confirms that Pdr1p confers a stronger drug-response phenotype than Pdr3p. However, Pdr3p is still crucial for PDR responses, since cells containing loss-of-function mutations in both the PDR1 and PDR3 genes were not able to grow at all in the presence of those two toxins.
Both Pdr1p and Pdr3p regulate Pdr5p, which is an ATP-binding cassette transporter. A single amino acid substitution mutation, which is a gain of function mutation of Pdr1p denoted as pdr1-3 (F815S, substitution mutation of Phenylalanine at 815th of the polypeptide by Serine) leads to an over-expres |
https://en.wikipedia.org/wiki/Isotope%20analysis | Isotope analysis is the identification of isotopic signature, abundance of certain stable isotopes of chemical elements within organic and inorganic compounds. Isotopic analysis can be used to understand the flow of energy through a food web, to reconstruct past environmental and climatic conditions, to investigate human and animal diets, for food authentification, and a variety of other physical, geological, palaeontological and chemical processes. Stable isotope ratios are measured using mass spectrometry, which separates the different isotopes of an element on the basis of their mass-to-charge ratio.
Tissues affected
Isotopic oxygen is incorporated into the body primarily through ingestion, at which point it is used in the formation of, for archaeological purposes, bones and teeth. The oxygen is incorporated into the hydroxylcarbonic apatite of bone and tooth enamel.
Bone is continually remodelled throughout the lifetime of an individual. Although the rate of turnover of isotopic oxygen in hydroxyapatite is not fully known, it is assumed to be similar to that of collagen; approximately 10 years. Consequently, should an individual remain in a region for 10 years or longer, the isotopic oxygen ratios in the bone hydroxyapatite would reflect the oxygen ratios present in that region.
Teeth are not subject to continual remodelling and so their isotopic oxygen ratios remain constant from the time of formation. The isotopic oxygen ratios, then, of teeth represent the ratios of the region in which the individual was born and raised. Where deciduous teeth are present, it is also possible to determine the age at which a child was weaned. Breast milk production draws upon the body water of the mother, which has higher levels of 18O due to the preferential loss of 16O through sweat, urine, and expired water vapour.
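For context (an addition, not stated in the source), stable oxygen isotope abundances such as those discussed here are conventionally reported in delta notation, in per mil (‰) relative to a standard such as VSMOW:

```latex
% Delta notation for oxygen isotope ratios, relative to a standard (e.g. VSMOW):
\delta^{18}\mathrm{O} =
  \left(
    \frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{sample}}}
         {\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{standard}}}
    - 1
  \right) \times 1000
```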
While teeth are more resistant to chemical and physical changes over time, both are subject to post-depositional diagenesis. As such, isotopic analysis makes |
https://en.wikipedia.org/wiki/IGB%20Eletr%C3%B4nica | IGB Eletrônica S.A. (Portuguese for IGB Electronics), doing business as Gradiente, is a Brazilian consumer electronics company based in Manaus, and with offices in São Paulo. The company designs and markets many product lines, including video (e.g. televisions, DVD players), audio, home theater, high end acoustics, office and mobile stereo, wireless, mobile/smart phones, and tablets for the Brazilian market.
History
The company was founded in 1964. In 1993 it founded Playtronic, a fully owned subsidiary that licensed the manufacturing of Nintendo consoles in Brazil; while publishing games for various systems, it also provided Portuguese translations of some games (among them, South Park and Shadow Man for the Nintendo 64). However, the partnership with Nintendo ended in 2003 because of the high price of the dollar at the time.
In 1997, Gradiente established a joint venture with Finland-based telecommunications manufacturing firm Nokia, where they were granted the license to manufacture variants of Nokia mobile phones locally under the Nokia and Gradiente brand names.
The Gradiente iPhone case
In 2000, Gradiente, now legally known as IGB Eletrônica SA, filed for the brand name "iphone" with Brazil's INPI (National Institute of Industrial Property, the trademark authority). The Brazilian government granted full ownership of the brand only in 2008, and since January 2012 the company has been selling Android-based smartphones under this name. The trademark is fully owned by IGB Eletrônica SA, which released its Android-powered iphone neo one under the Gradiente brand. The iphone neo one, sold for R$599 (about US$287), is a dual-SIM handset running Android 2.3.4 Gingerbread. It has a 3.7-inch, 320 × 480 display, a 700 MHz CPU, 2 GB of expandable storage, Bluetooth, 3G, WiFi and a 5 / 0.3-megapixel camera.
See also
List of phonograph manufacturers
Gradiente Expert |
https://en.wikipedia.org/wiki/Project%20Narwhal | Project Narwhal is the name of a computer program used by Barack Obama's 2012 presidential campaign. It was countered by the Mitt Romney campaign's Project Orca, so named because the orca is one of the few predators of the narwhal.
Development
Project Narwhal was developed six to seven days a week, 14 hours a day, by a staff of highly experienced workers from companies such as Twitter, Google, Facebook, Craigslist, Quora, Orbitz, and Threadless. The intent of the program was to link previously separate repositories of information, so that all the data gathered about each individual voter was available to all arms of the campaign. In testing Narwhal, the team, in campaign CTO Harper Reed's words, role-played "every possible disaster situation," including three role-plays in which all the systems would go down on election day. These "game day" practices prepared them for actual disasters when Amazon Web Services went down on October 21, 2012, and Hurricane Sandy threatened the technology infrastructure in the Eastern United States.
See also
Cambridge Analytica
Catalist
Contingency table
Data dredging
Dan Wagner (data scientist)
The Groundwork
Harper Reed
Herd behavior
Left-wing politics
Michael Slaby
ORCA (computer system)
Psychographic
Predictive Analytics
Project Houdini |
https://en.wikipedia.org/wiki/GreenPeak%20Technologies | GreenPeak Technologies was a fabless company based in Utrecht, Netherlands, developing semiconductor products and software for the IEEE 802.15.4 and Zigbee wireless market segment. Zigbee technology is used for Smart Home data communications and to facilitate the Internet of Things, the term used to refer to devices designed to be operated and managed by internet-enabled controllers and management systems.
Industry standards
Wireless sensor applications prosper best within the sphere of industry standards. Standards offer OEMs the freedom to purchase from a larger pool of suppliers and most importantly, standards allow devices from different vendors to interoperate, a feature which is paramount in applications ranging from building automation to industrial automation.
GreenPeak's development is based on open industry standards in addition to the IEEE 802.15.4 wireless network standard. GreenPeak is a member of the Global Semiconductor Alliance and supports the open global standards of the Zigbee Alliance.
Products
GreenPeak’s product offering contains communication controller chips for Zigbee Smart Home and IoT applications for IEEE 802.15.4.
The product portfolio features small size, ultra low-power communications chips, with integrated software, tools and reference designs for seamless integration into residential and consumer electronics applications.
Acquisition by Qorvo
On 17 April 2016, it was announced that GreenPeak had agreed to be acquired by Qorvo. After the acquisition, GreenPeak would continue to function in the Netherlands, Belgium and Hong Kong as the "Low-power Wireless Systems" business unit in the Infrastructure and Defense Products (IDP) group of Qorvo. Cees Links, former CEO of GreenPeak Technologies, was announced to take over as the General Manager of the Wireless Connectivity business unit. |
https://en.wikipedia.org/wiki/IASME | IASME Governance is an Information Assurance standard that is designed to be simple and affordable to help improve the cyber security of Small and medium-sized enterprises (SMEs).
The IASME Governance technical controls are aligned with the Cyber Essentials scheme, and certification to the IASME standard includes certification to Cyber Essentials. The IASME Governance standard was developed in 2010 and has proven to be very effective at improving the security of supply chains for large organisations. The standard maps closely to the international ISO/IEC 27001 information assurance standard.
Background
IASME Governance was originally developed as an academic-SME partnership that attracted a lot of interest from government and small businesses.
Research towards the IASME model was undertaken in the UK during 2009–10, after an acknowledgement that the current international information assurance standard (ISO/IEC 27001) was complex for resource-strapped SMEs, creating a weakness in the supply chain. IASME was developed during 2010–11 and launched in 2011. It has been revised regularly to keep pace with changes to the risk environment of SMEs. The development process with SMEs was explained in a published international SME conference paper.
The IASME Governance standard follows the same implementation pattern used by the international standards community including PDCA (Plan-Do-Check-Act) principles and the Information Security Management System (ISMS) which provides a management framework. Both are refined and expressed in business terms recognisable by organisations of all sizes.
The IASME Governance standard was developed and piloted with the help of small businesses mostly in the West Midlands of the UK with encouraging results. The standard has been shown to be useful to SMEs both in the UK and internationally.
Large organisations can use the IASME Governance standard in their supply chains to understand and reduce supplier risk. An article exp |
https://en.wikipedia.org/wiki/History%20of%20HIV/AIDS | AIDS is caused by a human immunodeficiency virus (HIV), which originated in non-human primates in Central and West Africa. While various sub-groups of the virus acquired human infectivity at different times, the present pandemic had its origins in the emergence of one specific strain – HIV-1 subgroup M – in Léopoldville in the Belgian Congo (now Kinshasa in the Democratic Republic of the Congo) in the 1920s.
There are two types of HIV: HIV-1 and HIV-2. HIV-1 is more virulent, easily transmitted and is the cause of the vast majority of HIV infections globally. The pandemic strain of HIV-1 is closely related to a virus found in chimpanzees of the subspecies Pan troglodytes troglodytes, which live in the forests of the Central African nations of Cameroon, Equatorial Guinea, Gabon, the Republic of the Congo, and the Central African Republic. HIV-2 is less transmittable and is largely confined to West Africa, along with its closest relative, a virus of the sooty mangabey (Cercocebus atys atys), an Old World monkey inhabiting southern Senegal, Guinea-Bissau, Guinea, Sierra Leone, Liberia, and western Ivory Coast.
Transmission from non-humans to humans
Research in this area is conducted using molecular phylogenetics, comparing viral genomic sequences to determine relatedness.
HIV-1 from chimpanzees and gorillas to humans
Scientists generally accept that the known strains (or groups) of HIV-1 are most closely related to the simian immunodeficiency viruses (SIVs) endemic in wild ape populations of West Central African forests. In particular, each of the known HIV-1 strains is either closely related to the SIV that infects the chimpanzee subspecies Pan troglodytes troglodytes (SIVcpz) or closely related to the SIV that infects western lowland gorillas, called SIVgor. The pandemic HIV-1 strain (group M or Main) and a rare strain found only in a few Cameroonian people (group N) are clearly derived from SIVcpz strains endemic in Pan troglodytes troglodytes chimpanzee popu |
https://en.wikipedia.org/wiki/Corpus%20of%20Contemporary%20American%20English | The Corpus of Contemporary American English (COCA) is a one-billion-word corpus of contemporary American English. It was created by Mark Davies, retired professor of corpus linguistics at Brigham Young University (BYU).
Content
The Corpus of Contemporary American English (COCA) is composed of one billion words as of November 2021. The corpus has grown steadily: in 2009 it contained more than 385 million words; in 2010 it grew to 400 million words; and by March 2019 it had reached 560 million words.
As of November 2021, the Corpus of Contemporary American English is composed of 485,202 texts. According to the corpus website, the current corpus (November 2021) is composed of texts that include 24-25 million words for each year 1990–2019.
For each year contained in the corpus (1990–2019), the corpus is evenly divided between six registers/genres: TV/movies, spoken, fiction, magazine, newspaper, and academic (see Texts and Registers page of the COCA website). In addition to the six registers that were previously listed, COCA (as of November 2021) also contains 125,496,215 words from blogs, and 129,899,426 from websites, making it a corpus that is truly composed of contemporary English (see Texts and Register page of COCA).
The texts come from a variety of sources:
Spoken: (85 million words) Transcripts of unscripted conversation from nearly 150 different TV and radio programs.
Fiction: (81 million words) Short stories and plays, first chapters of books 1990–present, and movie scripts.
Popular magazines: (86 million words) Nearly 100 different magazines, from a range of domains such as news, health, home and gardening, women's, financial, religion, and sports.
Newspapers: (81 million words) Ten newspapers from across the US, with text from different sections of the newspapers, such as local news, opinion, sports, and the financial section.
Academic journals: (81 million words) Nearly 100 different peer-reviewed journals. These were se |
https://en.wikipedia.org/wiki/Syledis | Syledis (SYstem LEger pour mesure la DIStance) was a terrestrial radio navigation and locating system. The system operated in the UHF segment of 420-450 MHz. It was manufactured in France by Sercel S.A., headquarters Carquefou, and was operational during the 1980s and until about 1995, providing positioning and navigational support for the petroleum sector in the North Sea and to other scientific projects. Syledis has been replaced by GPS.
Functioning
Determination of the position of mobile vehicles, such as vessels, using Syledis is accomplished by measuring the transit time of radio waves between the mobiles and radio stations at known points. There are two modes of operation, an active range mode and a passive pseudo-range mode.
Further explanation of the functioning of the Syledis positioning system
A vessel is equipped with a transmitter that transmits a coded signal to at least three radio beacons, each placed at a known point. The beacons send the code back to the transmitter. The returned coded signal is placed in a timeslot to determine the origin of the returned code; to that end, each specific beacon is assigned a specified timeslot in advance.
The elapsed time is proportional to twice the distance between the transmitter and the beacons. After the distance to the beacons is derived the position of the vessel can be calculated. The transmitter computes the distances to the beacons and a computer, connected with the transmitter, computes the position of the vessel.
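As an illustrative sketch (an addition, not from the source): the range to each beacon follows from the round-trip time as r = c·t/2, and the position can then be estimated from three or more ranges by least squares. A toy 2-D example with made-up beacon coordinates and timings:

```python
# Toy two-way-ranging position fix from three fixed beacons (all values made up).
# Range from round-trip time: r = c * t / 2; position refined by Gauss-Newton.
import numpy as np

C = 299_792_458.0                                 # speed of light, m/s
beacons = np.array([[0.0, 0.0], [10_000.0, 0.0], [0.0, 8_000.0]])
round_trip_times = np.array([4.219e-5, 2.984e-5, 5.661e-5])  # seconds, hypothetical
ranges = C * round_trip_times / 2.0

pos = np.array([5_000.0, 5_000.0])                # initial guess
for _ in range(10):
    diffs = pos - beacons                         # vectors from beacons to estimate
    dists = np.linalg.norm(diffs, axis=1)
    residuals = dists - ranges
    jacobian = diffs / dists[:, None]             # d(dist)/d(pos): unit vectors
    step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
    pos -= step

print(pos)   # estimated vessel position in metres, near (6000, 2000) here
```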
Possible causes for measurement inaccuracy
The Syledis system has a measurement sensitivity that can be expressed in centimetres. Weather conditions can change the wave propagation speed, which for electromagnetic waves in air is almost the speed of light. In wet conditions (rain or snow) the wave propagation is slightly slower than in dry conditions, which introduces an inaccuracy in determining the correct position of the vessel.
Another important factor is the length of the cables used |
https://en.wikipedia.org/wiki/Plant%20badge | A clan badge, sometimes called a plant badge, is a badge or emblem, usually a sprig of a specific plant, that is used to identify a member of a particular Scottish clan. They are usually worn affixed to the bonnet behind the Scottish crest badge, or pinned at the shoulder of a lady's tartan sash. According to popular lore, clan badges were used by Scottish clans as a means of identification in battle. An authentic example of plants being used in this way (though not by a clan) was the sprigs of oats used by troops under the command of Montrose during the sack of Aberdeen. Similar items are known to have been used by military forces in Scotland, such as paper, or the "White Cockade" (a bunch of white ribbon) worn by the Jacobites.
Authenticity
Despite popular lore, many clan badges attributed to Scottish clans would be completely impractical for use as a means of identification. Many would be unsuitable, even for a modern clan gathering, let alone a raging clan battle. Also, a number of the plants (and flowers) attributed as clan badges are only available during certain times of year. Even though it is maintained that clan badges were used long before the Scottish crest badges used today, according to a former Lord Lyon King of Arms the oldest symbols used at gatherings were heraldic flags such as the banner, standard and pinsel.
There is much confusion as to why some clans have been attributed more than one clan badge. Several 19th century writers variously attributed plants to clans, many times contradicting each other. It has been claimed by one writer that if a clan gained new lands it may have also acquired that district's "badge" and used it along with their own clan badge. It is clear however, that there are several large groups of clans which share badges and also share a historical connection. The Clan Donald group (clans Macdonald, Macdonald of Clanranald, Macdonell of Glengarry, MacDonald of Keppoch) and clans/septs which have been associated with Clan Donald (l |
https://en.wikipedia.org/wiki/Biology%20Monte%20Carlo%20method | Biology Monte Carlo methods (BioMOCA) have been developed at the University of Illinois at Urbana-Champaign to simulate ion transport in an electrolyte environment through ion channels or nano-pores embedded in membranes. It is a 3-D particle-based Monte Carlo simulator for analyzing and studying the ion transport problem in ion channel systems or similar nanopores in wet/biological environments. The system simulated consists of a protein forming an ion channel (or an artificial nanopore such as a carbon nanotube, CNT), with a membrane (i.e. lipid bilayer) that separates two ion baths on either side. BioMOCA is based on two methodologies, namely the Boltzmann transport Monte Carlo (BTMC) and particle-particle-particle-mesh (P3M) methods. The first uses a Monte Carlo method to solve the Boltzmann equation, while the latter splits the electrostatic forces into short-range and long-range components.
Backgrounds
In full-atomic molecular dynamics simulations of ion channels, most of the computational cost is for following the trajectory of water molecules in the system. In BioMOCA, however, the water is treated as a continuum dielectric background medium. In addition, the protein atoms of the ion channel are modeled as static point charges embedded in a finite volume with a given dielectric coefficient, as is the lipid membrane, which is treated as a static dielectric region inaccessible to ions. In fact, the only non-static particles in the system are the ions. Their motion is assumed to be classical; they interact with other ions through electrostatic interactions and a pairwise Lennard-Jones potential. They also interact with the water background medium, which is modeled using a scattering mechanism.
The ensemble of ions in the simulation region is propagated synchronously in time and 3-D space by integrating the equations of motion using the second-order accurate leapfrog scheme. Ion positions r and forces F are defined at time steps t and t + dt. The ion velocities are def
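For illustration (an addition, not BioMOCA's actual code), the leapfrog scheme staggers velocities at half steps: v(t+dt/2) = v(t−dt/2) + (F(t)/m)·dt, then r(t+dt) = r(t) + v(t+dt/2)·dt. A minimal single-particle sketch with a made-up force law:

```python
# Minimal leapfrog (velocity-staggered) integrator for one particle.
# Illustrative only; the force law, mass and step size are made up.
import numpy as np

def leapfrog(r, v_half, force, mass, dt, steps):
    """Positions r live at integer steps, velocities v_half at half steps."""
    for _ in range(steps):
        r = r + v_half * dt                      # drift: r(t+dt) from v(t+dt/2)
        v_half = v_half + force(r) / mass * dt   # kick: v(t+3dt/2) from F(t+dt)
        yield r

harmonic = lambda r: -1.0 * r                    # toy restoring force, F = -k r
trajectory = list(leapfrog(np.array([1.0, 0.0, 0.0]),
                           np.array([0.0, 0.0, 0.0]),
                           harmonic, mass=1.0, dt=0.01, steps=1000))
print(trajectory[-1])
```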
https://en.wikipedia.org/wiki/3D%20reconstruction%20from%20multiple%20images | 3D reconstruction from multiple images is the creation of three-dimensional models from a set of images. It is the reverse process of obtaining 2D images from 3D scenes.
The essence of an image is a projection from a 3D scene onto a 2D plane, during which process the depth is lost. The 3D point corresponding to a specific image point is constrained to be on the line of sight. From a single image, it is impossible to determine which point on this line corresponds to the image point. If two images are available, then the position of a 3D point can be found as the intersection of the two projection rays. This process is referred to as triangulation. The key to this process is the relations between multiple views, which convey the information that corresponding sets of points must contain some structure, and that this structure is related to the poses and the calibration of the cameras.
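As a hedged sketch (an addition, not from the source), given two camera projection matrices and a pair of corresponding image points, triangulation can be done with the standard linear (DLT) method, solving a small homogeneous system by SVD; the matrices and points below are invented:

```python
# Linear (DLT) two-view triangulation: recover X from x1 = P1 X, x2 = P2 X.
# Camera matrices and the 3D point below are made up for illustration.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """x1, x2: (u, v) image points; P1, P2: 3x4 projection matrices."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null-space vector = last row of V^T
    X = vt[-1]
    return X[:3] / X[3]              # dehomogenize

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted along x
X_true = np.array([0.5, 0.2, 4.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))                           # ~ [0.5, 0.2, 4.0]
```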
In recent decades, there has been an important demand for 3D content for computer graphics, virtual reality and communication, triggering a change in emphasis for the requirements. Many existing systems for constructing 3D models are built around specialized hardware (e.g. stereo rigs), resulting in a high cost, which cannot satisfy the requirements of these new applications. This gap stimulates the use of digital imaging facilities (such as cameras). An early method was proposed by Tomasi and Kanade, who used an affine factorization approach to extract 3D from image sequences. However, the assumption of orthographic projection is a significant limitation of this system.
Processing
The task of converting multiple 2D images into 3D model consists of a series of processing steps:
Camera calibration determines the intrinsic and extrinsic parameters, without which, at some level, no arrangement of algorithms can work. The dotted line between calibration and depth determination represents that camera calibration is usually required for determining depth.
Depth determination serves as the most challe |
https://en.wikipedia.org/wiki/MRNA%20surveillance | mRNA surveillance mechanisms are pathways utilized by organisms to ensure fidelity and quality of messenger RNA (mRNA) molecules. There are a number of surveillance mechanisms present within cells. These mechanisms function at various steps of the mRNA biogenesis pathway to detect and degrade transcripts that have not properly been processed.
Overview
The translation of messenger RNA transcripts into proteins is a vital part of the central dogma of molecular biology. mRNA molecules are, however, prone to a host of fidelity errors, which can cause errors in the translation of mRNA into quality proteins. RNA surveillance mechanisms are methods cells use to assure the quality and fidelity of mRNA molecules. This is generally achieved through marking aberrant mRNA molecules for degradation by various endogenous nucleases.
mRNA surveillance has been documented in bacteria and yeast. In eukaryotes, these mechanisms are known to function in both the nucleus and the cytoplasm. Fidelity checks of mRNA molecules in the nucleus result in the degradation of improperly processed transcripts before export into the cytoplasm. Transcripts are subject to further surveillance once in the cytoplasm, where surveillance mechanisms assess mRNA transcripts for the absence or presence of premature stop codons.
Three surveillance mechanisms are currently known to function within cells: the nonsense-mediated mRNA decay pathway (NMD); the nonstop-mediated mRNA decay pathway (NSD); and the no-go-mediated mRNA decay pathway (NGD).
Nonsense-mediated mRNA decay
Overview
Nonsense-mediated decay is involved in detection and decay of mRNA transcripts which contain premature termination codons (PTCs). PTCs can arise in cells through various mechanisms: germline mutations in DNA; somatic mutations in DNA; errors in transcription; or errors in post transcriptional mRNA processing. Failure to recognize and decay these mRNA transcripts can result in the production of truncated proteins whic |
https://en.wikipedia.org/wiki/Propositional%20directed%20acyclic%20graph | A propositional directed acyclic graph (PDAG) is a data structure that is used to represent a Boolean function. A Boolean function can be represented as a rooted, directed acyclic graph of the following form:
Leaves are labeled with ⊤ (true), ⊥ (false), or a Boolean variable.
Non-leaves are ∧ (logical and), ∨ (logical or) and ¬ (logical not).
∧- and ∨-nodes have at least one child.
¬-nodes have exactly one child.
Leaves labeled with ⊤ (⊥) represent the constant Boolean function which always evaluates to 1 (0). A leaf labeled with a Boolean variable x is interpreted as the assignment x = 1, i.e. it represents the Boolean function which evaluates to 1 if and only if x = 1. The Boolean function represented by a ∧-node is the one that evaluates to 1 if and only if the Boolean functions of all its children evaluate to 1. Similarly, a ∨-node represents the Boolean function that evaluates to 1 if and only if the Boolean function of at least one child evaluates to 1. Finally, a ¬-node represents the complementary Boolean function of its child, i.e. the one that evaluates to 1 if and only if the Boolean function of its child evaluates to 0.
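To make these semantics concrete, here is a small illustrative sketch (an addition, not from the source) of a PDAG as a data structure with a recursive evaluator; the node encoding and example formula are invented:

```python
# Tiny PDAG evaluator. Shared sub-nodes give the "directed acyclic" part:
# the same node object may appear as a child of several parents.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    kind: str                      # "var", "true", "false", "and", "or", "not"
    name: str = ""                 # variable name for "var" leaves
    children: tuple = field(default_factory=tuple)

def evaluate(node, assignment):
    if node.kind == "true":  return True
    if node.kind == "false": return False
    if node.kind == "var":   return assignment[node.name]
    if node.kind == "and":   return all(evaluate(c, assignment) for c in node.children)
    if node.kind == "or":    return any(evaluate(c, assignment) for c in node.children)
    if node.kind == "not":   return not evaluate(node.children[0], assignment)
    raise ValueError(node.kind)

x, y = Node("var", "x"), Node("var", "y")
shared = Node("and", children=(x, y))              # reused by two parents below
f = Node("or", children=(shared, Node("not", children=(shared,))))
print(evaluate(f, {"x": True, "y": False}))        # tautology: always True
```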
PDAG, BDD, and NNF
Every binary decision diagram (BDD) and every negation normal form (NNF) is also a PDAG with some particular properties.
See also
Data structure
Boolean satisfiability problem
Proposition |
https://en.wikipedia.org/wiki/Asx%20motif | The Asx motif is a commonly occurring feature in proteins and polypeptides. It consists of four or five amino acid residues with either aspartate or asparagine as the first residue (residue i). It is defined by two internal hydrogen bonds. One is between the side chain oxygen of residue i and the main chain NH of residue i+2 or i+3; the other is between the main chain oxygen of residue i and the main chain NH of residue i+3 or i+4.
When one of the hydrogen bonds is between the main chain oxygen of residue i and the side chain NH of residue i+3 the motif incorporates a beta turn. When one of the hydrogen bonds is between the side chain oxygen of residue i and the main chain NH of residue i+2 the motif incorporates an Asx turn.
As with Asx turns, a significant proportion of Asx motifs occur at the N-terminus of an alpha helix with the Asx as the N cap residue. Asx motifs have thus often been described as helix capping features.
A related motif is the ST motif which has serine or threonine as the first residue. |
https://en.wikipedia.org/wiki/Arthur%20Patschke | Arthur Patschke (13 April 1865 – 1934) was a German aether theorist, engineer and opponent of the theory of relativity.
Biography
Patschke was born in Braniewo, Kingdom of Prussia, in the Ermland region of East Prussia. He was the son of a mill owner. Patschke studied at Mittweida College of Technology and graduated in 1887 as a mechanical engineer. During 1912–1914 he studied electrical engineering at the Royal Technical College of Charlottenburg. He was a designer of steam turbines.
In 1900, he began to construct a rotating steam engine which he had designed and presented at the Commercial and Industrial Exposition in Düsseldorf, in 1902. He also developed a "transverse steam turbine". Patschke was influenced by his professional experience of engineering and aimed to show "that the earth is a universal turbine, a universal ether turbine in large." In 1907, he moved to Berlin and worked for Siemens-Schuckert, a German electrical engineering company.
A strict mechanist, Patschke proposed the "Universal Law of Force" which stated that "bodies floating in gasses (heavenly bodies, planets, atoms) can only move forward when they have received force from behind". His universal mechanical theory (Universal Law of Force) was outlined in his book Elektromechanik, in 1921. His theory held that atoms have central significance since the pressure force from the movement of atoms is the primordial force from which all natural forces originate.
Patschke is best remembered as an opponent of Albert Einstein's theory of relativity and was convinced that all mechanical phenomena in the universe could be traced to the activity of tiny aether particles. In Patschke's scientific worldview, aether attained a quasi-religious status as the key to unlock all the mysteries of the universe. Patschke's theory proposed the existence of a "primordial force mass", a universal aether that causes all mechanical phenomena (gravitation, electricity, magnetism, heat, light and chemical processes).
Patschke stated |
https://en.wikipedia.org/wiki/Herbert%20Wilf | Herbert Saul Wilf (June 13, 1931 – January 7, 2012) was an American mathematician, specializing in combinatorics and graph theory. He was the Thomas A. Scott Professor of Mathematics in Combinatorial Analysis and Computing at the University of Pennsylvania. He wrote numerous books and research papers. Together with Neil Calkin he founded The Electronic Journal of Combinatorics in 1994 and was its editor-in-chief until 2001.
Biography
Wilf was the author of numerous papers and books, and was adviser and mentor to many students and colleagues. His collaborators include Doron Zeilberger and Donald Knuth. One of Wilf's former students is Richard Garfield, the creator of the collectible card game Magic: The Gathering. He also served as a thesis advisor for E. Roy Weintraub in the late 1960s.
Wilf died of a progressive neuromuscular disease in 2012.
Awards
In 1998, Wilf and Zeilberger received the Leroy P. Steele Prize for Seminal Contribution to Research for their joint paper, "Rational functions certify combinatorial identities" (Journal of the American Mathematical Society, 3 (1990) 147–158). The prize citation reads: "New mathematical ideas can have an impact on experts in a field, on people outside the field, and on how the field develops after the idea has been introduced. The remarkably simple idea of the work of Wilf and Zeilberger has already changed a part of mathematics for the experts, for the high-level users outside the area, and the area itself." Their work has been translated into computer packages that have simplified hypergeometric summation.
In 2002, Wilf was awarded the Euler Medal by the Institute of Combinatorics and its Applications.
Selected publications
1971: (editor with Frank Harary) Mathematical Aspects of Electrical Networks Analysis, SIAM-AMS Proceedings, Volume 3, American Mathematical Society
1998: (with N. J. Calkin) "The Number of Independent Sets in a Grid Graph", SIAM Journal on Discrete Mathematics
Books
A=B ( |
https://en.wikipedia.org/wiki/Palatini%20variation | In general relativity and gravitation the Palatini variation is nowadays thought of as a variation of a Lagrangian with respect to the connection.
In fact, as is well known, the Einstein–Hilbert action for general relativity was first formulated purely in terms of the spacetime metric $g_{\mu\nu}$. In the Palatini variational method one takes as independent field variables not only the ten components $g_{\mu\nu}$ but also the forty components of the affine connection $\Gamma^{\lambda}_{\mu\nu}$, assuming, a priori, no dependence of the $\Gamma^{\lambda}_{\mu\nu}$ on the $g_{\mu\nu}$ and their derivatives.
The reason the Palatini variation is considered important is that it means that the use of the Christoffel connection in general relativity does not have to be added as a separate assumption; the information is already in the Lagrangian. For theories of gravitation which have more complex Lagrangians than the Einstein–Hilbert Lagrangian of general relativity, the Palatini variation sometimes gives more complex connections and sometimes tensorial equations.
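For concreteness (an addition in standard notation, not quoted from the source), the Einstein–Hilbert action treated à la Palatini reads:

```latex
% Einstein-Hilbert action with metric and connection treated as independent:
S[g, \Gamma] = \frac{1}{2\kappa} \int \sqrt{-g}\, g^{\mu\nu} R_{\mu\nu}(\Gamma)\, d^4x
% Varying with respect to \Gamma yields \nabla_\lambda (\sqrt{-g}\, g^{\mu\nu}) = 0,
% which forces \Gamma to be the Levi-Civita (Christoffel) connection of g;
% varying with respect to g then gives the vacuum Einstein field equations.
```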
Attilio Palatini (1889–1949) was an Italian mathematician who received his doctorate from the University of Padova, where he studied under Levi-Civita and Ricci-Curbastro.
The history of the subject, and Palatini's connection with it, are not straightforward (see references). In fact, it seems that what the textbooks now call "Palatini formalism" was actually invented in 1925 by Einstein, and as the years passed, people tended to mix up the Palatini identity and the Palatini formalism.
See also
Palatini identity
Self-dual Palatini action
Tetradic Palatini action |
https://en.wikipedia.org/wiki/Dead%20space%20%28physiology%29 | Dead space is the volume of air that is inhaled but does not take part in gas exchange, because it either remains in the conducting airways or reaches alveoli that are not perfused or are poorly perfused. This means that not all the air in each breath is available for the exchange of oxygen and carbon dioxide. Mammals breathe in and out of their lungs, wasting the part of the inhalation which remains in the conducting airways, where no gas exchange can occur.
Components
Total dead space (also known as physiological dead space) is the sum of the anatomical dead space and the alveolar dead space.
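As a hedged addition (standard respiratory physiology, not stated in this excerpt), physiological dead space is commonly estimated with the Bohr equation, which compares arterial and mixed-expired carbon dioxide partial pressures:

```latex
% Bohr equation for the physiological dead-space fraction of a tidal breath:
\frac{V_D}{V_T} = \frac{P_{a\mathrm{CO}_2} - P_{E\mathrm{CO}_2}}{P_{a\mathrm{CO}_2}}
% V_D: dead-space volume; V_T: tidal volume;
% P_aCO2: arterial CO2 partial pressure; P_ECO2: mixed-expired CO2 partial pressure.
```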
Benefits do accrue to a seemingly wasteful design for ventilation that includes dead space.
Carbon dioxide is retained, making a bicarbonate-buffered blood and interstitium possible.
Inspired air is brought to body temperature, increasing the affinity of hemoglobin for oxygen, improving O2 uptake.
Particulate matter is trapped on the mucus that lines the conducting airways, allowing its removal by mucociliary transport.
Inspired air is humidified, improving the quality of airway mucus.
In humans, about a third of every resting breath has no change in O2 and CO2 levels. In adults, it is usually in the range of 150 mL.
Dead space can be increased (and better envisioned) by breathing through a long tube, such as a snorkel. Although one end of the snorkel is open to the air, when the wearer breathes in, they inhale a significant quantity of air that remained in the snorkel from the previous exhalation. Therefore, a snorkel increases the person's dead space by adding even more airway that does not participate in gas exchange.
Anatomical dead space
Anatomical dead space is the volume of the conducting airways (from the nose, mouth and trachea to the terminal bronchioles). These conduct gas to the alveoli but no gas exchange occurs here. In healthy lungs where the alveolar dead space is small, Fowler's method accurately measures the anatomic dead space using a single breath nit |
https://en.wikipedia.org/wiki/Phred%20quality%20score | A Phred quality score is a measure of the quality of the identification of the nucleobases generated by automated DNA sequencing. It was originally developed for the computer program Phred to help in the automation of DNA sequencing in the Human Genome Project. Phred quality scores are assigned to each nucleotide base call in automated sequencer traces. The FASTQ format encodes phred scores as ASCII characters alongside the read sequences. Phred quality scores have become widely accepted to characterize the quality of DNA sequences, and can be used to compare the efficacy of different sequencing methods. Perhaps the most important use of Phred quality scores is the automatic determination of accurate, quality-based consensus sequences.
Definition
Phred quality scores $Q$ are logarithmically related to the base-calling error probabilities $P$ and defined as
$Q = -10 \log_{10} P$.
This relation can also be written as
$P = 10^{-Q/10}$.
For example, if Phred assigns a quality score of 30 to a base, the chances that this base is called incorrectly are 1 in 1000.
The Phred quality score is the negative ratio of the error probability to the reference level of $P = 1$, expressed in decibels (dB).
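A small illustrative sketch (an addition, not from the source) of these conversions, including the common Phred+33 ASCII offset used by the FASTQ format mentioned above:

```python
# Convert between Phred scores, error probabilities, and FASTQ (Phred+33) characters.
import math

def phred_to_error_prob(q):
    return 10 ** (-q / 10)

def error_prob_to_phred(p):
    return -10 * math.log10(p)

def decode_fastq_quality(quality_string, offset=33):
    """Decode a FASTQ quality line into per-base Phred scores."""
    return [ord(ch) - offset for ch in quality_string]

print(phred_to_error_prob(30))        # 0.001 -> 1 error in 1000 calls
print(error_prob_to_phred(0.001))     # 30.0
print(decode_fastq_quality("II?5+"))  # [40, 40, 30, 20, 10]
```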
History
The idea of sequence quality scores can be traced back to the original description of the SCF file format by Staden's group in 1992. In 1995, Bonfield and Staden proposed a method to use base-specific quality scores to improve the accuracy of consensus sequences in DNA sequencing projects.
However, early attempts to develop base-specific quality scores had only limited success.
The first program to develop accurate and powerful base-specific quality scores was the program Phred. Phred was able to calculate highly accurate quality scores that were logarithmically linked to the error probabilities. Phred was quickly adopted by all the major genome sequencing centers as well as many other laboratories; the vast majority of the DNA sequences produced during the Human Genome Project were processed with Phred.
|
https://en.wikipedia.org/wiki/Allergic%20transfusion%20reaction | An allergic transfusion reaction is when a blood transfusion results in an allergic reaction. It is among the most common transfusion reactions to occur. Reported rates depend on the degree of active surveillance versus passive reporting to the blood bank. Overall, allergic reactions are estimated to complicate up to 3% of all transfusions. The incidence of allergic transfusion reactions is associated with the amount of plasma in the product. More than 90% of these reactions occur during transfusion.
Signs and symptoms
Cause
Allergic reactions from blood transfusion may occur from the presence of allergy-causing antigens within the donor's blood, or transfusion of antibodies from a donor who has allergies, followed by antigen exposure.
An allergic transfusion reaction is a type of transfusion reaction that is defined according to the Centers for Disease Control and Prevention (CDC) as:
Diagnosis
An allergic transfusion reaction is diagnosed if two or more of the following occur within 4 hours of cessation of transfusion:
Conjunctival edema
Edema of lips, tongue and uvula
Erythema and edema of the periorbital area
Generalized flushing
Hypotension
Localized angioedema
Maculopapular rash
Pruritus (itching)
Respiratory distress; bronchospasm
Urticaria (hives)
A probable diagnosis results if any one of the following occurs within 4 hours of cessation of transfusion:
Conjunctival edema
Edema of lips, tongue and uvula
Erythema and edema of periorbital area
Localized angioedema
Maculopapular rash
Pruritus (itching)
Urticaria (hives)
The UK hemovigilance reporting system (SHOT) has classified allergic reactions into mild, moderate and severe. Reactions can occur that have features of both allergic and febrile reactions.
Mild
A rash, urticaria, or flushing
Moderate
Wheeze (bronchospasm) or angioedema, but with normal blood pressure and no respiratory compromise. There may or may not be an associated rash or urticaria.
Severe
This can be due to:
Severe breathing problems (Bro |
https://en.wikipedia.org/wiki/%C4%86uk%20converter | The Ćuk converter is a type of buck-boost converter with low ripple current. A Ćuk converter can be seen as a combination of a boost converter and a buck converter, having one switching device and a mutual capacitor to couple the energy.
Similar to the buck-boost converter with inverting topology, the output voltage of a non-isolated Ćuk converter is typically inverted, with lower or higher values with respect to the input voltage. Usually in DC converters the inductor is used as the main energy-storage component; in the Ćuk converter, the main energy-storage component is the capacitor. It is named after Slobodan Ćuk of the California Institute of Technology, who first presented the design.
Non-isolated Ćuk converter
There are variations on the basic Ćuk converter. For example, the coils may share a single magnetic core, which drops the output ripple and adds efficiency. Because the power transfer flows continuously via the capacitor, this type of switcher has minimized EMI radiation. A Ćuk converter can be made to allow energy to flow bidirectionally by replacing the diode with a second switch.
Operating principle
A non-isolated Ćuk converter comprises two inductors, two capacitors, a switch (usually a transistor), and a diode. Its schematic can be seen in figure 1. It is an inverting converter, so the output voltage is negative with respect to the input voltage.
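As a hedged addition (the standard continuous-conduction-mode result, not derived in this excerpt), the ideal voltage transfer ratio of the non-isolated Ćuk converter in terms of the switch duty cycle D is:

```latex
% Ideal Cuk converter transfer ratio in continuous conduction mode,
% D = duty cycle of the switch (0 < D < 1); the output is inverted:
\frac{V_o}{V_s} = -\frac{D}{1 - D}
% D < 0.5 steps the magnitude down, D > 0.5 steps it up.
```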
The main advantage of this converter is the continuous currents at the input and output of the converter. The main disadvantage is the high current stress on the switch.
The capacitor C1 is used to transfer energy. It is connected alternately to the input and to the output of the converter via the commutation of the transistor and the diode (see figures 2 and 3).
The two inductors L1 and L2 are used to convert respectively the input voltage source (Vs) and the output voltage source (Vo) into current sources. At a short time scale, an inductor can be considered as a current source as it maintains a constant current. This conversion |
https://en.wikipedia.org/wiki/Kosher%20certification%20agency | A kosher certification agency is an organization or certifying authority that grants a hechsher ("seal of approval") to ingredients, packaged foods, beverages, and certain materials, as well as food-service providers and facilities in which kosher food is prepared or served. This certification verifies that the ingredients, production process including all machinery, and/or food-service process complies with the standards of kashrut (Jewish dietary law) as stipulated in the Shulchan Arukh, the benchmark of religious Jewish law. The certification agency employs mashgichim (rabbinic field representatives) to make periodic site visits and oversee the food-production or food-service process in order to verify ongoing compliance. Each agency has its own trademarked symbol that it allows manufacturers and food-service providers to display on their products or in-store certificates; use of this symbol can be revoked for non-compliance. Each agency typically has a "certifying rabbi" (Rav Hamachshir) who determines the exact kashrut standards to be applied and oversees their implementation.
A kosher certification agency's purview extends only to those areas mandated by Jewish law. Kosher certification is not a substitute for government or private food safety testing and enforcement.
Scope
As of 2014, there are more than 1,100 kosher certification agencies. These include international, national, regional, Israeli, specialty, and non-Orthodox agencies. Specialty agencies endorse ethical business practices, animal welfare, and environmental awareness on the part of the food producer. Non-Orthodox agencies accept leniences in certain aspects of food production and business operation (such as operating on Shabbat) that Orthodox agencies do not.
Agencies
The largest kosher certification agencies in the United States, known as the "Big Five", certify more than 80% of the kosher food sold in the US. These five agencies are: the OU, OK, KOF-K, Star-K, and CRC.
While the OU, O |
https://en.wikipedia.org/wiki/I.%20M.%20Dharmadasa | I.M. Dharmadasa is Professor of Applied Physics and leads the Electronic Materials and Solar Energy (solar cells and other semiconductor devices) Group at Sheffield Hallam University, UK. Dharmadasa has worked in semiconductor research since becoming a PhD student at Durham University as a Commonwealth Scholar in 1977, under the supervision of the late Sir Gareth Roberts. His interest in the electrodeposition of thin-film solar cells grew when he joined the Apollo Project at BP Solar in 1988. He continued this area of research on joining Sheffield Hallam University in 1990.
Career and research
He has published over 200 refereed and conference papers, has six British patents on thin film solar cells and has made over 175 conference presentations. He has made five book contributions and is the author of the book Advances in Thin Film Solar cells, which was published in 2012. Dharmadasa has also successfully supervised 20 Ph.D. and M.Phil. candidates and 14 years of PDRA support. He has gained research council and international government funding, and was included in the 2001 Research Assessment Exercise for Metallurgy and Materials which gained the top rating of five.
His recent scientific breakthroughs [1-2], which are fundamental to describing the photovoltaic activity of cadmium telluride/cadmium sulfide solar cells, were summarised in a "new theoretical model for CdTe". Based on these novel ideas he has reported a higher efficiency of 18% for the cadmium telluride/cadmium sulfide cell [3], compared with 16.5% reported by NREL in the United States in 2002. He currently focuses on low-cost methods to develop thin film solar cells based on electrodeposited copper indium gallium selenide materials, where he has reported efficiencies of 15.9% to date, compared with the highest value of 19.5% reported by NREL [4] using more expensive techniques. His article 'Fermi level pinning and effects on CuInGaSe2-based thin-film solar cells' was selected to be part of the Semiconductor
https://en.wikipedia.org/wiki/Paracrystallinity | In materials science, paracrystalline materials are defined as having short- and medium-range ordering in their lattice (similar to the liquid crystal phases) but lacking crystal-like long-range ordering at least in one direction.
Origin and definition
The words "paracrystallinity" and "paracrystal" were coined by the late Friedrich Rinne in the year 1933. Their German equivalents, e.g. "Parakristall", appeared in print one year earlier.
A general theory of paracrystals has been formulated in a basic textbook, and then further developed/refined by various authors.
Rolf Hosemann's definition of an ideal paracrystal is: "The electron density distribution of any material is equivalent to that of a paracrystal when there is for every building block one ideal point so that the distance statistics to other ideal points are identical for all of these points. The electron configuration of each building block around its ideal point is statistically independent of its counterpart in neighboring building blocks. A building block corresponds then to the material content of a cell of this "blurred" space lattice, which is to be considered a paracrystal."
Theory
Ordering is the regularity in which atoms appear in a predictable lattice, as measured from one point. In a highly ordered, perfectly crystalline material, or single crystal, the location of every atom in the structure can be described exactly measuring out from a single origin. Conversely, in a disordered structure such as a liquid or amorphous solid, the location of the nearest and, perhaps, second-nearest neighbors can be described from an origin (with some degree of uncertainty) and the ability to predict locations decreases rapidly from there out. The distance at which atom locations can be predicted is referred to as the correlation length $\xi$. A paracrystalline material exhibits a correlation somewhere between the fully amorphous and fully crystalline.
The primary, most accessible source of crystallinity informa |
https://en.wikipedia.org/wiki/Pellonulinae | Pellonulinae is a subfamily of freshwater herrings belonging to the family Clupeidae. Extant species are found in Asia, Africa and Australia, and members of the family occurred in North America in the Eocene. |
https://en.wikipedia.org/wiki/Janet%20Scott%20Salmon%20Blyth | Janet Scott Salmon Blyth (19 February 1902 - 1972) was a Scottish geneticist who specialised in poultry genetics and husbandry in the interwar and post-war decades and played a prominent role in establishing the Poultry Research Centre, one of several institutions that would eventually be amalgamated to form the Roslin Institute.
Early life and university
Janet Scott Salmon Blyth was born on 19 February 1902 at Logie Farm in Flisk in East Fife to farmer James Blyth and his wife Janet McLaren Blyth (née Osler). Growing up, she had two younger siblings, James and William, and a large complement of tenants, labourers, and their families for company at the Fife farm.
Blyth undertook her undergraduate degree at the University of Edinburgh from 1918 to 1921, gaining a BSc in Agricultural Sciences in 1921.
She began her PhD at the same institution later that year, based at the university’s Department of Animal Breeding, then under the directorship of noted pioneering geneticist Francis Albert Eley Crew. She was awarded her doctorate in 1925 for her thesis researching the individual wool types and yield produced by the offspring of the Department’s ovine hybridisation programme.
Her early career research in this field, along with that of academic peers such as J.A. Fraser Roberts and F. Fraser Darling, would impact ovine research globally for the next quarter of a century.
Research career
In 1926, she was reported to have begun working “as an assistant to” Alan William Greenwood in poultry genetic research, under the supervision of Crew, who had been a keen poultry breeder since his youth. Greenwood only began his doctoral research in 1923, so the actual dynamic of their professional relationship at this stage in their careers requires further investigation.
In 1928, Blyth was a keynote speaker at the 5th meeting of the Royal Physical Society of Edinburgh chaired by zoologist Marion Newbigin, where she delivered an illustrated paper on her early research endea |
https://en.wikipedia.org/wiki/SuperH | SuperH (or SH) is a 32-bit reduced instruction set computing (RISC) instruction set architecture (ISA) developed by Hitachi and currently produced by Renesas. It is implemented by microcontrollers and microprocessors for embedded systems.
At the time of introduction, SuperH was notable for having fixed-length 16-bit instructions in spite of its 32-bit architecture. Using smaller instructions had consequences: the register file was smaller and instructions were generally in two-operand format. However, for the market the SuperH was aimed at, this was a small price to pay for the improved memory and processor cache efficiency.
Later versions of the design, starting with SH-5, included both 16- and 32-bit instructions, with the 16-bit versions mapping onto the 32-bit version inside the CPU. This allowed the machine code to continue using the shorter instructions to save memory, while not demanding the amount of instruction decoding logic needed if they were completely separate instructions. This concept is now known as a compressed instruction set and is also used by other companies, the most notable example being ARM for its Thumb instruction set.
In 2015, many of the original patents for the SuperH architecture expired and the SH-2 CPU was reimplemented as open source hardware under the name J2.
History
SH-1 and SH-2
The SuperH processor core family was first developed by Hitachi in the early 1990s. The design concept was for a single instruction set (ISA) that would be upward compatible across a series of CPU cores.
In the past, this sort of design problem would have been solved using microcode, with the low-end models in the series performing non-implemented instructions as a series of more basic instructions. For instance, a "long multiply" (multiplying two 32-bit registers to produce a 64-bit product) might be implemented in hardware on high-end models but instead be performed as a series of additions on low-end models.
One of the key realizations during the |
https://en.wikipedia.org/wiki/Wind%20speed | In meteorology, wind speed, or wind flow speed, is a fundamental atmospheric quantity caused by air moving from high to low pressure, usually due to changes in temperature. Wind speed is now commonly measured with an anemometer.
Wind speed affects weather forecasting, aviation and maritime operations, construction projects, growth and metabolism rate of many plant species, and has countless other implications. Wind direction is usually almost parallel to isobars (and not perpendicular, as one might expect), due to Earth's rotation.
Units
The metre per second (m/s) is the SI unit for velocity and the unit recommended by the World Meteorological Organization for reporting wind speeds, and is used, among others, in weather forecasts in the Nordic countries. Since 2010 the International Civil Aviation Organization (ICAO) also recommends metres per second for reporting wind speed when approaching runways, replacing its former recommendation of using kilometres per hour (km/h).
For historical reasons, other units such as miles per hour (mph), knots (kn) or feet per second (ft/s) are also sometimes used to measure wind speeds. Historically, wind speeds have also been classified using the Beaufort scale, which is based on visual observations of specifically defined wind effects at sea or on land.
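The conversions between these units are fixed constants; a small Python helper makes the relationships explicit (the example wind speed is an arbitrary illustration):

MS_PER_MPH  = 0.44704         # 1 mph  = 0.44704 m/s (exact)
MS_PER_KNOT = 1852.0 / 3600   # 1 knot = 1852 m per hour, about 0.5144 m/s
MS_PER_FTS  = 0.3048          # 1 ft/s = 0.3048 m/s (exact)

def to_metres_per_second(value, unit):
    factors = {'m/s': 1.0, 'km/h': 1 / 3.6, 'mph': MS_PER_MPH,
               'kn': MS_PER_KNOT, 'ft/s': MS_PER_FTS}
    return value * factors[unit]

# A 10-knot wind reported on a runway approach:
print(round(to_metres_per_second(10, 'kn'), 2))  # 5.14 m/s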
Factors affecting wind speed
Wind speed is affected by a number of factors and situations, operating on varying scales (from micro to macro scales). These include the pressure gradient, Rossby waves and jet streams, and local weather conditions. There are also links to be found between wind speed and wind direction, notably with the pressure gradient and terrain conditions.
Pressure gradient is a term describing the difference in air pressure between two points in the atmosphere or on the surface of the Earth. It is vital to wind speed, because the greater the difference in pressure, the faster the wind flows (from the high to the low pressure) to balance out the variation. Th |
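As a rough quantitative sketch of this relationship, the acceleration a pressure difference imparts per unit mass of air is a = (1/rho) * dp/dx; the lines below illustrate it with assumed, purely illustrative numbers (standard sea-level air density):

RHO_AIR = 1.225  # kg/m^3, standard sea-level air density (assumed)

def pressure_gradient_accel(delta_p_pa, distance_m, rho=RHO_AIR):
    # a = (1/rho) * (dp/dx): larger pressure differences over the same
    # distance drive stronger acceleration of the air, hence faster wind.
    return delta_p_pa / (rho * distance_m)

# A 4 hPa (400 Pa) pressure drop across 100 km of surface:
print(pressure_gradient_accel(400.0, 100_000.0))  # ~3.3e-3 m/s^2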
https://en.wikipedia.org/wiki/Message%20authentication%20code | In cryptography, a message authentication code (MAC), sometimes known as an authentication tag, is a short piece of information used for authenticating and integrity checking a message. In other words, to confirm that the message came from the stated sender (its authenticity) and has not been changed (its integrity). The MAC value allows verifiers (who also possess a secret key) to detect any changes to the message content.
Terminology
The term message integrity code (MIC) is frequently substituted for the term MAC, especially in communications to distinguish it from the use of the latter as media access control address (MAC address). However, some authors use MIC to refer to a message digest, which aims only to uniquely but opaquely identify a single message. RFC 4949 recommends avoiding the term message integrity code (MIC), and instead using checksum, error detection code, hash, keyed hash, message authentication code, or protected checksum.
Definitions
Informally, a message authentication code system consists of three algorithms:
A key generation algorithm selects a key from the key space uniformly at random.
A signing algorithm efficiently returns a tag given the key and the message.
A verifying algorithm efficiently verifies the authenticity of the message given the same key and the tag. That is, it returns accepted when the message and tag have not been tampered with or forged, and returns rejected otherwise.
A secure message authentication code must resist attempts by an adversary to forge tags for arbitrary, selected, or all messages, including under known- or chosen-message attacks. It should be computationally infeasible to compute a valid tag of a given message without knowledge of the key, even if, in the worst case, we assume the adversary knows the tag of every message except the one in question.
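A concrete instance of such a triple of algorithms, sketched with HMAC-SHA256 from Python's standard library (the key size and messages below are illustrative assumptions):

import hmac, hashlib, os

def generate_key(nbytes=32):
    # key generation: a key chosen uniformly at random from the key space
    return os.urandom(nbytes)

def sign(key, message):
    # signing: returns a tag given the key and the message
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key, message, tag):
    # verifying: constant-time comparison to resist timing attacks
    return hmac.compare_digest(sign(key, message), tag)

key = generate_key()
tag = sign(key, b"wire transfer: $100")
assert verify(key, b"wire transfer: $100", tag)      # accepted
assert not verify(key, b"wire transfer: $999", tag)  # rejected (forgery)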
Formally, a message authentication code (MAC) system is a triple of efficient algorithms (G, S, V) satisfying:
G (key-generator) gives the key k on input 1^n, |
https://en.wikipedia.org/wiki/Gyrator%E2%80%93capacitor%20model | The gyrator–capacitor model, sometimes also called the capacitor-permeance model, is a lumped-element model for magnetic circuits that can be used in place of the more common resistance–reluctance model. The model makes permeance elements analogous to electrical capacitance (see magnetic capacitance section) rather than electrical resistance (see magnetic reluctance). Windings are represented as gyrators, interfacing between the electrical circuit and the magnetic model.
The primary advantage of the gyrator–capacitor model compared to the magnetic reluctance model is that the model preserves the correct values of energy flow, storage and dissipation. The gyrator–capacitor model is an example of a group of analogies that preserve energy flow across energy domains by making power conjugate pairs of variables in the various domains analogous. It fills the same role as the impedance analogy for the mechanical domain.
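As a small numerical sketch of the analogy (core dimensions below are assumed; the uniform-core permeance formula is the standard one), the permeance of a core section plays the role of a capacitance, and the energy stored at a given magnetomotive force follows the familiar E = C*V^2/2 pattern:

from math import pi

MU0 = 4 * pi * 1e-7  # vacuum permeability, H/m

def magnetic_capacitance(mu_r, area_m2, length_m):
    # Permeance of a uniform core section: the 'capacitor' C_M of the model.
    return MU0 * mu_r * area_m2 / length_m

def stored_energy(c_m, mmf_ampere_turns):
    # Energy at magnetomotive force F, by analogy with E = C*V^2/2.
    return 0.5 * c_m * mmf_ampere_turns ** 2

c_m = magnetic_capacitance(mu_r=2000, area_m2=1e-4, length_m=0.1)
print(c_m, stored_energy(c_m, 100.0))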
Nomenclature
Magnetic circuit may refer to either the physical magnetic circuit or the model magnetic circuit. Elements and dynamical variables that are part of the model magnetic circuit have names that start with the adjective magnetic, although this convention is not strictly followed. Elements or dynamical variables in the model magnetic circuit may not have a one to one correspondence with components in the physical magnetic circuit. Symbols for elements and variables that are part of the model magnetic circuit may be written with a subscript of M. For example, $C_M$ would be a magnetic capacitor in the model circuit.
Electrical elements in an associated electrical circuit may be brought into the magnetic model for ease of analysis. Model elements in the magnetic circuit that represent electrical elements are typically the electrical dual of the electrical elements. This is because transducers between the electrical and magnetic domains in this model are usually represented by gyrators. A gyrator will transform an element into its dual. For example, a magn |
https://en.wikipedia.org/wiki/Gilliflower | A gilliflower or gillyflower () is:
The carnation or a similar plant of the genus Dianthus, especially the Clove Pink Dianthus caryophyllus.
Matthiola incana, also known as stock.
Several other plants, such as the wallflower, which have fragrant flowers.
The name derives from the French giroflée, ultimately from the Greek karyophyllon ("nut-leaf"), the spice called clove; the association derives from the flower's scent.
Gilliflowers have been claimed to be used as payment for peppercorn rent in medieval feudal tenure contracts. For example, in 1262 in Bedfordshire an area of land called The Hyde was held by someone "for the rent of one clove of gilliflower", and Elmore Court in Gloucester was granted to the Guise family by John De Burgh for the rent of "The clove of one Gillyflower" each year. In Kent in the 13th century Bartholomew de Badlesmere, upon an exchange made between King Edward I and himself, received a royal grant in fee of a manor and chapel, to hold in socage, "by the service of paying one pair of clove gilliflowers", by the hands of the Sheriff. However, it is more likely that the rent was paid in the form of actual cloves (in Latin, gariofilum; the flower was later named after the spice, via French), cloves and peppercorns both being exotic spices.
The rose and gillyflower appear on the station badge of RAF Waterbeach in Cambridgeshire, and subsequently on the badge of 39 Engineer Regiment based at Waterbeach Barracks. A rose and gillyflower were demanded by the owner of the land on which Waterbeach Abbey was built, in the 12th century.
An old recipe for gilliflower wine is mentioned in Cornish Recipes Ancient & Modern, dated to 1753.
Gilliflowers are mentioned by Mrs. Lovett in the song "Wait" from the Sondheim musical Sweeney Todd and in the novel La Faute de l'Abbé Mouret (aka Abbe Mouret's Transgression or the Sin of the Father Mouret) by Émile Zola as part of the Les Rougon-Macquart series. Charles Ryder has them growing under his window when he is a |
https://en.wikipedia.org/wiki/International%20Neuroethics%20Society | The International Neuroethics Society (INS) is a professional organization that studies the social, legal, ethical, and policy implications of advances in neuroscience. Its mission is to encourage and inspire research and dialogue on the responsible use of advances in brain science. The current INS President is Joseph J. Fins, MD.
History
The INS was formed as the Neuroethics Society in May 2006 in Asilomar, California by a multidisciplinary group of 13 members, including neuroscientists, psychologists, philosophers, bioethicists and lawyers. This group formed the INS following the first meeting solely devoted to neuroethics held in San Francisco in 2002, entitled 'Neuroethics: Mapping the Field'. This meeting was co-hosted by Stanford University and the University of California, San Francisco (UCSF), and sponsored by the Dana Foundation. This event prompted the attending and future founders of the INS to meet again and discuss the creation of a society devoted to neuroethics. The formation of the Neuroethics Society was formally announced in July 2006.
The founding president of the INS was Professor Steven Hyman, who served as president from 2006 to 2014. Hyman stated that the role of the Society was to study the issues related to the nervous system that are not neatly contained within traditional bioethics, as well as to bridge the gap between advances in neuroscience and the world of policy and ethics.
The Neuroethics Society was renamed the International Neuroethics Society in 2011, prior to the Society's 2011 Annual Meeting, to reflect its international membership and mission.
The official journal of the INS is the American Journal of Bioethics-Neuroscience (AJOB-Neuroscience), which has Paul Root Wolpe as its Editor-in-Chief. The journal launched in 2007 as a section of the American Journal of Bioethics and became an independent journal in 2010, publishing four issues a year.
Past Presidents of the Society include: Nita Farahany (2019–2021), Hank Greely |
https://en.wikipedia.org/wiki/Rain%2C%20Rain%2C%20Rain | "Rain, Rain, Rain" is a song, originally released in English in 1973 by German singer and music producer Simon Butterfly (real name: Bernd Simon).
In the same year it was adapted into French under the title "Viens, viens" and recorded by Marie Laforêt, and into Italian as "Lei, lei", recorded by both Dalida and Laforêt.
Background and writing
The original was produced by Bernd Simon (Simon Butterfly) himself.
Charts
"Rain, Rain Rain" by Simon Butterfly
"Viens, viens" by Marie Laforêt |
https://en.wikipedia.org/wiki/Closest%20pair%20of%20points%20problem | The closest pair of points problem or closest pair problem is a problem of computational geometry: given points in metric space, find a pair of points with the smallest distance between them. The closest pair problem for points in the Euclidean plane was among the first geometric problems that were treated at the origins of the systematic study of the computational complexity of geometric algorithms.
Time bounds
Randomized algorithms that solve the problem in linear time are known, in Euclidean spaces whose dimension is treated as a constant for the purposes of asymptotic analysis. This is significantly faster than the O(n²) time (expressed here in big O notation) that would be obtained by a naive algorithm that finds the distances between all pairs of points and selects the smallest.
It is also possible to solve the problem without randomization, in random-access machine models of computation with unlimited memory that allow the use of the floor function, in near-linear time. In even more restricted models of computation, such as the algebraic decision tree, the problem can be solved in the somewhat slower O(n log n) time bound, and this is optimal for this model, by a reduction from the element uniqueness problem. Both sweep line algorithms and divide-and-conquer algorithms with this slower time bound are commonly taught as examples of these algorithm design techniques.
Linear-time randomized algorithms
A linear expected time randomized algorithm of Rabin (1976), modified slightly by Richard Lipton to make its analysis easier, proceeds as follows, on an input set consisting of n points in a d-dimensional Euclidean space:
Select n pairs of points uniformly at random, with replacement, and let d be the minimum distance of the selected pairs.
Round the input points to a square grid of points whose size (the separation between adjacent grid points) is d, and use a hash table to collect together pairs of input points that round to the same grid point.
For each input point, compute the distance |
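A compact sketch of this grid-based approach in Python (2-D points assumed and the function name illustrative; this shows the rounding-and-hashing idea, not the full linear-time algorithm or its analysis):

import math, random
from collections import defaultdict

def closest_pair(points):
    n = len(points)
    # Step 1: sample n random pairs; d = smallest sampled distance.
    d = min(math.dist(points[i], points[j])
            for i, j in (random.sample(range(n), 2) for _ in range(n)))
    if d == 0:  # duplicate points: the closest distance is zero
        return 0.0
    # Step 2: round each point to a square grid of cell size d (hash table).
    grid = defaultdict(list)
    for p in points:
        grid[(math.floor(p[0] / d), math.floor(p[1] / d))].append(p)
    # Step 3: any pair closer than d must lie in the same or an adjacent
    # cell, so only those candidate pairs need to be compared.
    best = d
    for (cx, cy), bucket in grid.items():
        for p in bucket:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for q in grid.get((cx + dx, cy + dy), ()):
                        if p is not q:
                            best = min(best, math.dist(p, q))
    return best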
https://en.wikipedia.org/wiki/Exercise%20%28mathematics%29 | A mathematical exercise is a routine application of algebra or other mathematics to a stated challenge. Mathematics teachers assign mathematical exercises to develop the skills of their students. Early exercises deal with addition, subtraction, multiplication, and division of integers. Extensive courses of exercises in school extend such arithmetic to rational numbers. Various approaches to geometry have based exercises on relations of angles, segments, and triangles. The topic of trigonometry gains many of its exercises from the trigonometric identities. In college mathematics exercises often depend on functions of a real variable or application of theorems. The standard exercises of calculus involve finding derivatives and integrals of specified functions.
Usually instructors prepare students with worked examples: the exercise is stated, then a model answer is provided. Often several worked examples are demonstrated before students are prepared to attempt exercises on their own. Some texts, such as those in Schaum's Outlines, focus on worked examples rather than theoretical treatment of a mathematical topic.
Overview
In primary school students start with single digit arithmetic exercises. Later most exercises involve at least two digits. A common exercise in elementary algebra calls for factorization of polynomials. Another exercise is completing the square in a quadratic polynomial. An artificially produced word problem is a genre of exercise intended to keep mathematics relevant. Stephen Leacock described this type:
The student of arithmetic who has mastered the first four rules of his art and successfully striven with sums and fractions finds himself confronted by an unbroken expanse of questions known as problems. These are short stories of adventure and industry with the end omitted and, though betraying a strong family resemblance, are not without a certain element of romance.
A distinction between an exercise and a mathematical problem was made by Alan |
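To make the flavour of such exercises concrete, here is a worked completing-the-square exercise of the kind mentioned above (my own illustrative example, in LaTeX):

\[
x^{2} + 6x + 5 \;=\; \left(x^{2} + 6x + 9\right) - 9 + 5 \;=\; (x+3)^{2} - 4 .
\]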
https://en.wikipedia.org/wiki/Oceanography | Oceanography (), also known as oceanology, sea science and ocean science, is the scientific study of the oceans. It is an Earth science, which covers a wide range of topics, including ecosystem dynamics; ocean currents, waves, and geophysical fluid dynamics; plate tectonics and seabed geology; and fluxes of various chemical substances and physical properties within the ocean and across its boundaries. These diverse topics reflect multiple disciplines that oceanographers utilize to glean further knowledge of the world ocean, including astronomy, biology, chemistry, climatology, geography, geology, hydrology, meteorology and physics. Paleoceanography studies the history of the oceans in the geologic past. An oceanographer is a person who studies many matters concerned with oceans, including marine geology, physics, chemistry, and biology.
History
Early history
Humans first acquired knowledge of the waves and currents of the seas and oceans in pre-historic times. Observations on tides were recorded by Aristotle (384–322 BC) and later by Strabo. Early exploration of the oceans was primarily for cartography and mainly limited to their surfaces and to the animals that fishermen brought up in nets, though depth soundings by lead line were taken.
The Portuguese campaign of Atlantic navigation is the earliest example of a large, systematic scientific project, sustained over many decades, studying the currents and winds of the Atlantic.
The work of Pedro Nunes (1502–1578) is remembered in the navigation context for the determination of the loxodromic curve: the course of constant bearing between two points on the surface of a sphere, which appears as a straight line when the sphere is represented on a suitable two-dimensional map. When he published his "Treatise of the Sphere" (1537), mostly a commentated translation of earlier work by others, he included a treatise on geometrical and astronomic methods of navigation. There he states clearly that Portuguese navigations were not an adventurous endeavour:
"nam se fezeram indo a acertar: mas partiam os n |
https://en.wikipedia.org/wiki/THG%20Ingenuity%20Cloud%20Services | THG Ingenuity Cloud Services, formerly UK2 Group, is a global provider of internet services. It forms part of THG Ingenuity, an e-commerce services platform. Its services include web hosting, virtual private servers, domain name registration and management, dedicated servers and a content delivery network.
History
1998–2010
The group's first brand, UK2.Net, was launched in the UK by Danish entrepreneur Bo Bendtsen in October 1998 as a low-cost, no-frills provider of Internet domain names. By 2000, it had become the UK's largest web hosting company, with an estimated 435,000 customers.
Bendtsen retired from the company in 2002 but maintained a controlling stake. Following his departure, the company struggled to maintain its market position under chief executive Erik Anderson. Over the next three years, its customer base shrank by around 50%. In 2006 Anderson stepped down, and the group appointed Ditlev Bredahl as chief executive.
The next three years were marked by a series of strategic acquisitions, within and outside the UK. In 2006, UK2 Group acquired UK domain name business Another.com. In 2007, it purchased the reseller business of US web hosting company Stargate, which it integrated and rebranded as Resell.biz. It also acquired Stargate's shared hosting business, which was integrated and rebranded as US2.net. In 2007 it also purchased US shared hosting and VPS provider midPhase Services, including the company's ANHosting.com and Autica.com brands.
In 2008, it acquired ICANN-accredited domain registrar Naming Web, and US-based web-hosting brand WingSix from ServerCentral. It also acquired Australian community-focused web-hosting provider Dotable, US-based shared and dedicated hosting provider WestHost, including its data center in Salt Lake City, Utah, and UK-based dedicated and managed hosting company Virtual Internet.
That year, it also created the 10TB.com brand, partnering with cloud provider SoftLayer Technologies to offer a global, high-bandwidth dedi |
https://en.wikipedia.org/wiki/Carter%20constant | The Carter constant is a conserved quantity for motion around black holes in the general relativistic formulation of gravity. Its SI base units are kg2⋅m4⋅s−2. Carter's constant was derived for a spinning, charged black hole by Australian theoretical physicist Brandon Carter in 1968. Carter's constant along with the energy, axial angular momentum, and particle rest mass provide the four conserved quantities necessary to uniquely determine all orbits in the Kerr–Newman spacetime (even those of charged particles).
Formulation
Carter noticed that the Hamiltonian for motion in Kerr spacetime was separable in Boyer–Lindquist coordinates, allowing the constants of such motion to be easily identified using Hamilton–Jacobi theory. The Carter constant can be written as follows:
$C = p_\theta^2 + \cos^2\theta \left( a^2 \left(m^2 - E^2\right) + \frac{L_z^2}{\sin^2\theta} \right)$,
where $p_\theta$ is the latitudinal component of the particle's angular momentum, $E$ is the conserved energy of the particle, $L_z$ is the particle's conserved axial angular momentum, $m$ is the rest mass of the particle, and $a$ is the spin parameter of the black hole. Because functions of conserved quantities are also conserved, any function of $C$ and the three other constants of the motion can be used as a fourth constant in place of $C$. This results in some confusion as to the form of Carter's constant. For example it is sometimes more convenient to use
$K = C + (L_z - a E)^2$
in place of $C$. The quantity $K$ is useful because it is always non-negative. In general any fourth conserved quantity for motion in the Kerr family of spacetimes may be referred to as "Carter's constant".
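A direct numerical transcription of these two expressions (a hedged sketch: geometrized units are assumed, and the symbols follow the definitions just given):

from math import cos, sin, pi

def carter_constant(p_theta, theta, a, E, L_z, m):
    # C = p_theta^2 + cos^2(theta) * (a^2 (m^2 - E^2) + L_z^2 / sin^2(theta))
    return (p_theta ** 2
            + cos(theta) ** 2 * (a ** 2 * (m ** 2 - E ** 2)
                                 + L_z ** 2 / sin(theta) ** 2))

def carter_K(p_theta, theta, a, E, L_z, m):
    # The always non-negative variant K = C + (L_z - a*E)^2.
    return carter_constant(p_theta, theta, a, E, L_z, m) + (L_z - a * E) ** 2

# For an equatorial orbit (theta = pi/2, p_theta = 0), C vanishes:
print(carter_constant(0.0, pi / 2, 0.9, 0.95, 2.5, 1.0))  # ~0 (rounding only)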
As generated by a Killing tensor
Noether's theorem states that each conserved quantity of a system generates a continuous symmetry of that system. Carter's constant is related to a higher order symmetry of the Kerr metric generated by a second order Killing tensor field $K_{\mu\nu}$ (different from the quantity $K$ used above). In component form:
$C = K_{\mu\nu} u^\mu u^\nu$,
where $u^\mu$ is the four-velocity of the particle in motion. The components of the Killing tensor in Boyer–Lindquist coordinates are:
$K_{\mu\nu} = 2\Sigma \, l_{(\mu} n_{\nu)} + r^2 g_{\mu\nu}$,
where $l_\mu$ and $n_\nu$ are the componen |
https://en.wikipedia.org/wiki/BmKK2%20toxin | In molecular biology, the BmKK2 toxins are a family of scorpion toxins. They belong to the scorpion toxin subfamily alpha-KTx 14. They include a novel short-chain peptide from the Asian scorpion Mesobuthus martensii Karsch, a potassium channel blocker composed of 31 amino acid residues. The peptide adopts a classical alpha/beta-scaffold for alpha-KTxs. BmKK2 selectively inhibits the delayed rectifier K+ current, but does not affect the fast transient K+ current.
In comparison with typical short-chain scorpion toxins (e.g., CTX and NTX), the alpha helix is shorter and the beta-sheet element is smaller (each strand consists of only two residues). There is an alpha-mode binding between the toxin and the channels. It has a lower activity towards Kv channels and it is predicted that it may prefer a type of SK channel with a narrower entryway as a specific receptor. |
https://en.wikipedia.org/wiki/ISO%202146 | ISO 2146 is an ISO standard defining an information model for "registry services for libraries and related organisations". Operating at a higher level than item-level standards such as MARC, it takes as principal elements parties (people or organisations), collections (of books, data, etc.), services and activities (grants, projects, etc.).
The first edition of ISO 2146 was published in 1972, as "Directories of libraries, information and documentation centres"; the second edition was published in 1988. The third edition was initially driven by the need to support interlibrary loan services online, but it has been broadened in scope to encompass the rules for registries operating in a network environment to provide the information about collections, parties, activities and services needed by libraries and related organizations to manage their collections and deliver information and documentation services across a range of applications and domains.
The third edition reached publication stage in March 2010.
Usage
The Research Data Australia service (formerly the ANDS Collections Registry) run by the Australian National Data Service uses RIF-CS, a profile of ISO 2146, as a data interchange format.
External links
ISO 2146:2010: Information and documentation - Registry services for libraries and related organizations
Research Data Australia |
https://en.wikipedia.org/wiki/Journal%20of%20Radiation%20Research | The Journal of Radiation Research is a bimonthly peer-reviewed scientific journal covering research on radiation and oncology. It was established in 1960 and is published by Oxford University Press. Its editor-in-chief is Kenshi Komatsu (Kyoto University).
It is an affiliated journal of the Japan Radiation Research Society and the Japanese Society for Radiation Oncology. In 1998 the journal absorbed the Japanese Society for Radiation Oncology's former title, the Journal of JASTRO. This extended the scope of the journal to include medical and oncology research.
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts Service
Index Medicus/MEDLINE/PubMed
Science Citation Index Expanded
Current Contents/Life Sciences
BIOSIS Previews
Scopus
According to the Journal Citation Reports, the journal has a 2019 impact factor of 2.014 and a five-year impact factor of 2.063. |
https://en.wikipedia.org/wiki/Autoimmune%20hemolytic%20anemia | Autoimmune hemolytic anemia (AIHA) occurs when antibodies directed against the person's own red blood cells (RBCs) cause them to burst (lyse), leading to an insufficient number of oxygen-carrying red blood cells in the circulation. The lifetime of the RBCs is reduced from the normal 100–120 days to just a few days in serious cases. The intracellular components of the RBCs are released into the circulating blood and into tissues, leading to some of the characteristic symptoms of this condition. The antibodies are usually directed against high-incidence antigens, therefore they also commonly act on allogenic RBCs (RBCs originating from outside the person themselves, e.g. in the case of a blood transfusion). AIHA is a relatively rare condition, with an incidence of 5–10 cases per 1 million persons per year in the warm-antibody type and 0.45 to 1.9 cases per 1 million persons per year in the cold antibody type. Autoimmune hemolysis might be a precursor of later onset systemic lupus erythematosus.
The terminology used in this disease is somewhat ambiguous. Although MeSH uses the term "autoimmune hemolytic anemia", some sources prefer the term "immunohemolytic anemia" so drug reactions can be included in this category. The National Cancer Institute considers "immunohemolytic anemia", "autoimmune hemolytic anemia", and "immune complex hemolytic anemia" to all be synonyms.
Signs and symptoms
Symptoms of AIHA may be due to the underlying anemia, including shortness of breath or dyspnea, fatigue, headache, muscle weakness and pallor. In cold agglutinin disease (cold antibody type), agglutination and impaired passage of red blood cells through capillaries in the extremities cause acrocyanosis and Raynaud phenomenon, with gangrene as a rare complication.
Spherocytes are found in immunologically mediated hemolytic anemias. Signs of hemolysis that are present in AIHA include low hemoglobin (blood count), alterations in levels of cell markers of hemolysis; including elevated la |
https://en.wikipedia.org/wiki/Geodin | Geodin is an antibiotic against Gram-positive bacteria with the molecular formula C17H8Cl2O7. Geodin is produced by the fungus Aspergillus terreus. |
https://en.wikipedia.org/wiki/List%20of%20computer%20worms |
See also
Timeline of notable computer viruses and worms
Comparison of computer viruses
List of trojan horses |
https://en.wikipedia.org/wiki/Polynomial%20solutions%20of%20P-recursive%20equations | In mathematics, a P-recursive equation can be solved for polynomial solutions. Sergei A. Abramov in 1989 and Marko Petkovšek in 1992 described an algorithm which finds all polynomial solutions of those recurrence equations with polynomial coefficients. The algorithm computes a degree bound for the solution in a first step. In a second step an ansatz for a polynomial of this degree is used and the unknown coefficients are computed by a system of linear equations. This article describes this algorithm.
In 1995 Abramov, Bronstein and Petkovšek showed that the polynomial case can be solved more efficiently by considering power series solutions of the recurrence equation in a specific power basis (i.e. not the ordinary monomial basis).
Other algorithms which compute rational or hypergeometric solutions of a linear recurrence equation with polynomial coefficients also use algorithms which compute polynomial solutions.
Degree bound
Let $K$ be a field of characteristic zero and $\sum_{k=0}^{r} p_k(n)\, y(n+k) = f(n)$ a recurrence equation of order $r$ with polynomial coefficients $p_k \in K[n]$, polynomial right-hand side $f \in K[n]$ and unknown polynomial sequence $y(n) \in K[n]$. Furthermore $\deg$ denotes the degree of a polynomial (with $\deg(0) = -\infty$ for the zero polynomial) and $\operatorname{lc}$ denotes the leading coefficient of the polynomial. From the degrees and leading coefficients of the coefficient polynomials, rewritten in terms of the falling factorial basis, one computes a bound $b \in \mathbb{N}_0$ such that $\deg(y) \leq b$. This is called a degree bound for the polynomial solution $y$. This bound was shown by Abramov and Petkovšek.
Algorithm
The algorithm consists of two steps. In a first step the degree bound is computed. In a second step an ansatz with a polynomial of that degree with arbitrary coefficients in $K$ is made and plugged into the recurrence equation. Then the different powers of $n$ are compared and a system of linear equations for the coefficients of the ansatz is set up and solved. This is called the method of undetermined coefficients. The algorithm returns the general polynomial solution of a recurrence equation.
algorithm polynomial_solutions is
input: Linear recurrence |
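A minimal runnable sketch of the second step, assuming the degree bound b is already known (sympy is used for the linear algebra; the function and argument names are illustrative, not from the original algorithm statement):

import sympy as sp

def polynomial_solutions(p, f, b, n=sp.Symbol('n')):
    # Solve sum_k p[k](n) * y(n+k) = f(n) for polynomials y of degree <= b
    # by the method of undetermined coefficients.
    coeffs = sp.symbols(f'c0:{b + 1}')                 # unknown coefficients
    y = sum(c * n**i for i, c in enumerate(coeffs))    # ansatz polynomial
    lhs = sum(p_k * y.subs(n, n + k) for k, p_k in enumerate(p))
    eqs = sp.Poly(sp.expand(lhs - f), n).all_coeffs()  # compare powers of n
    sol = sp.solve(eqs, coeffs, dict=True)
    return sp.expand(y.subs(sol[0])) if sol else None

# Example: y(n+1) - y(n) = 2n + 1 with degree bound 2 yields n**2 + c0,
# the general polynomial solution (c0 remains a free constant):
print(polynomial_solutions([-1, 1], 2 * sp.Symbol('n') + 1, 2))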
https://en.wikipedia.org/wiki/Comma%20code | A comma code is a type of prefix-free code in which a comma, a particular symbol or sequence of symbols, occurs at the end of a code word and never occurs otherwise. This is an intuitive way to express arrays.
For example, Fibonacci coding is a comma code in which the comma is 11. 11 and 1011 are valid Fibonacci code words, but 101, 0111, and 11011 are not.
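A short sketch of why the comma property makes decoding trivial: because '11' ends every word and never occurs inside one, a single left-to-right scan recovers the word boundaries (Python; the bit strings are illustrative):

def split_fibonacci_stream(bits):
    # Split a concatenated Fibonacci-coded bit string into code words:
    # the pair '11' is the comma that terminates each word.
    words, start, prev = [], 0, '0'
    for i, b in enumerate(bits):
        if b == '1' and prev == '1':
            words.append(bits[start:i + 1])
            start, prev = i + 1, '0'
        else:
            prev = b
    return words

# '11' (the code for 1) followed by '1011' (the code for 4):
assert split_fibonacci_stream('111011') == ['11', '1011']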
Examples
Unary coding, in which the comma is 0. This allows NULL values (when the code and comma is a single 0, the value can be taken as a NULL or a 0).
Fibonacci coding, in which the comma is 11.
All Huffman codes can be converted to comma codes by prepending a 1 to the entire code and using a single 0 as a code and the comma.
A word can accordingly be defined as a sequence of symbols ending in a comma, the equivalent of a space character.
The "50% commas in all data" axiom: all implied data, specifically variable-length bijective data, can be shown to consist of exactly 50% commas.
All scrambled data, or suitably curated same-length data, exhibits so-called implied probability.
Such data, which can be termed 'generic data', can be analysed using any interleaving unary code as a header, where additional bijective bits (equal in number to the length of the unary code just read) are read as data while the unary code serves as an introduction or header for the data. This header serves as a comma. The data can be read in an interleaving fashion between each bit of the header, or in a post-read fashion in which the data is only read after the entire unary header code is read, as in Chen-Ho encoding.
It can be seen by random walk techniques and by statistical summation that all generic data has a header or comma of an average of 2 bits and data of an additional 2 bits (minimum 1).
This also allows for an inexpensive base-increase algorithm before transmission in non-binary communication channels, like base-3 or base-5 communication channels.
Where '?' is '1' or '2' for the value of the bijective digit that requires no further process |
https://en.wikipedia.org/wiki/CRYL1 | Crystallin, lambda 1 is a protein that in humans is encoded by the CRYL1 gene. |
https://en.wikipedia.org/wiki/Surface%20tension%20biomimetics | Surface tension is one of the areas of interest in biomimetics research. Surface tension forces will only begin to dominate gravitational forces below length scales on the order of the fluid's capillary length, which for water is about 2 millimeters. Because of this scaling, biomimetic devices that utilize surface tension will generally be very small, however there are many ways in which such devices could be used.
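The capillary length mentioned here follows from the standard balance between surface tension and gravity; a quick check with approximate room-temperature water properties (assumed values) lands in the low-millimetre range claimed above:

from math import sqrt

def capillary_length(surface_tension_n_per_m, density_kg_per_m3, g=9.81):
    # lambda = sqrt(gamma / (rho * g)): below this length scale,
    # surface tension forces dominate gravitational forces.
    return sqrt(surface_tension_n_per_m / (density_kg_per_m3 * g))

print(capillary_length(0.072, 998.0))  # ~2.7e-3 m, i.e. a few millimetres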
Applications
Coatings
A lotus leaf is well known for its ability to repel water and self-clean. Yuan and his colleagues fabricated a negative mold of a lotus leaf from polydimethylsiloxane (PDMS) to capture the tiny hierarchical structures integral for the leaf's ability to repel water, known as the lotus effect. The lotus leaf's surface was then replicated by allowing a copper sheet to flow into the negative mold with the assistance of ferric chloride and pressure. The result was a lotus leaf-like surface inherent on the copper sheet. Static water contact angle measurements of the biomimetic surface were taken to be 132° after etching the copper and 153° after a stearic acid surface treatment to mimic the lotus leaf's waxy coating. A surface that mimics the lotus leaf could have numerous applications by providing water repellent outdoor gear.
Various species of floating fern are able to sustain a layer of air between the fern and the surrounding water when they are submerged. Like the lotus leaf, floating fern species have tiny hierarchical structures that prevent water from wetting the plant surface. Mayser and Barthlott demonstrated this ability by submerging different species of the floating fern salvinia in water inside a pressure vessel to study how the air barrier between the leaf and surrounding water reacts to changes in pressure that would be similar to those experienced by the hull of a ship. Further research is ongoing into using these hierarchical structures in coatings on ship hulls to reduce viscous drag effects.
Biomedi |
https://en.wikipedia.org/wiki/Heartbeat%20%28computing%29 | In computer science, a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a computer system. The heartbeat mechanism is a common technique in mission-critical systems for providing high availability and fault tolerance of network services: it detects failures of nodes or daemons that belong to a network cluster, administered by a master server, so that the system can automatically adapt and rebalance, using the remaining redundant nodes on the cluster to take over the load of failed nodes and provide constant service. Usually a heartbeat message is sent between machines at a regular interval on the order of seconds. If the endpoint does not receive a heartbeat for a time, usually a few heartbeat intervals, the machine that should have sent the heartbeat is assumed to have failed. Heartbeat messages are typically sent non-stop on a periodic or recurring basis from the originator's start-up until the originator's shutdown. When the destination identifies a lack of heartbeat messages during an anticipated arrival period, it may determine that the originator has failed, shut down, or is generally no longer available.
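A minimal sketch of the send/monitor pattern over UDP (the address, interval and miss limit are illustrative assumptions, not part of any particular product):

import socket, time

INTERVAL = 2.0    # seconds between heartbeats
MISS_LIMIT = 3    # intervals missed before declaring failure

def send_heartbeats(addr=('127.0.0.1', 9999), node_id=b'node-1'):
    # Originator: emit a small datagram every INTERVAL seconds.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        sock.sendto(node_id, addr)
        time.sleep(INTERVAL)

def monitor(addr=('127.0.0.1', 9999)):
    # Destination: if no heartbeat arrives within MISS_LIMIT intervals,
    # assume the originator has failed and trigger recovery.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(addr)
    sock.settimeout(INTERVAL * MISS_LIMIT)
    while True:
        try:
            sock.recvfrom(64)              # heartbeat received; node is alive
        except socket.timeout:
            print('node presumed failed')  # hand load to a redundant node
            break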
Heartbeat protocol
A heartbeat protocol is generally used to negotiate and monitor the availability of a resource, such as a floating IP address, and the procedure involves sending network packets to all the nodes in the cluster to verify its reachability. Typically when a heartbeat starts on a machine, it will perform an election process with other machines on the heartbeat network to determine which machine, if any, owns the resource. On heartbeat networks of more than two machines, it is important to take into account partitioning, where two halves of the network could be functioning but not able to communicate with each other. In a situation such as this, it is important that the resource is only owned by o |
https://en.wikipedia.org/wiki/Reciprocal%20gamma%20function | In mathematics, the reciprocal gamma function is the function
$f(z) = \frac{1}{\Gamma(z)},$
where $\Gamma(z)$ denotes the gamma function. Since the gamma function is meromorphic and nonzero everywhere in the complex plane, its reciprocal is an entire function. As an entire function, it is of order 1 (meaning that $\log M(r)$ grows no faster than $r^{1+\varepsilon}$ for every $\varepsilon > 0$, where $M(r)$ is the maximum modulus of $1/\Gamma$ on $|z| = r$), but of infinite type (meaning that $\log M(r)$ grows faster than any multiple of $r$, since its growth is approximately proportional to $r \log r$ in the left-half plane).
The reciprocal is sometimes used as a starting point for numerical computation of the gamma function, and a few software libraries provide it separately from the regular gamma function.
Karl Weierstrass called the reciprocal gamma function the "factorielle" and used it in his development of the Weierstrass factorization theorem.
Infinite product expansion
Following from the infinite product definitions for the gamma function, due to Euler and Weierstrass respectively, we get the following infinite product expansions for the reciprocal gamma function:
$\frac{1}{\Gamma(z)} = z \prod_{n=1}^{\infty} \frac{1 + z/n}{\left(1 + 1/n\right)^{z}}$
$\frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^{\infty} \left(1 + \frac{z}{n}\right) e^{-z/n}$
where $\gamma$ is the Euler–Mascheroni constant. These expansions are valid for all complex numbers $z$.
Taylor series
Taylor series expansion around 0 gives:
$\frac{1}{\Gamma(z)} = z + \gamma z^{2} + \left( \frac{\gamma^{2}}{2} - \frac{\pi^{2}}{12} \right) z^{3} + \cdots$
where $\gamma$ is the Euler–Mascheroni constant. For $k > 2$, the coefficient $a_k$ for the $z^k$ term can be computed recursively as
$a_k = \frac{1}{k-1} \left( \gamma\, a_{k-1} - \zeta(2)\, a_{k-2} + \zeta(3)\, a_{k-3} - \cdots + (-1)^{k}\, \zeta(k-1)\, a_{1} \right)$
where $\zeta$ is the Riemann zeta function. An integral representation for these coefficients was found by Fekih-Ahmed (2014).
For small $k$, these give $a_1 = 1$, $a_2 = \gamma \approx 0.5772$ and $a_3 = \gamma^2/2 - \pi^2/12 \approx -0.6559$.
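The recursion above is easy to run at high precision; a small sketch with mpmath (the precision setting is an arbitrary choice):

from mpmath import mp, euler, zeta

mp.dps = 30  # working decimal precision

def reciprocal_gamma_coeffs(kmax):
    # a_1 = 1, a_2 = euler, then the alternating-zeta recursion quoted above.
    a = {1: mp.mpf(1), 2: +euler}
    for k in range(3, kmax + 1):
        s = euler * a[k - 1]
        for j in range(2, k):
            s += (-1) ** (j + 1) * zeta(j) * a[k - j]
        a[k] = s / (k - 1)
    return a

print(reciprocal_gamma_coeffs(4)[4])  # ~ -0.0420026, the known a_4 value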
Fekih-Ahmed (2014) also gives an asymptotic approximation for $a_k$ involving $W_{-1}$, the minus-first branch of the Lambert W function.
The Taylor expansion around $z = 1$ has the same (but shifted) coefficients; it is the expansion of
$\frac{1}{\Gamma(1+z)} = \frac{1}{\Pi(z)}$
(the reciprocal of Gauss' pi-function).
Asymptotic expansion
As $|z|$ goes to infinity at a constant $\arg(z)$ we have:
$\ln\left(\frac{1}{\Gamma(z)}\right) \sim -z \ln z + z + \tfrac{1}{2} \ln\frac{z}{2\pi} \qquad \left(\left|\arg(z)\right| < \pi\right)$
Contour integral representation
An integral representation due to Hermann Hankel is
$\frac{1}{\Gamma(z)} = \frac{1}{2\pi i} \oint_H (-t)^{-z} e^{-t} \, dt,$
where $H$ is the Hankel contour, that is, the path encircling 0 in the positive direction, beginning at and returning to positive infinity with respect to the |
https://en.wikipedia.org/wiki/Mir-3180%20microRNA%20precursor%20family | In molecular biology, mir-3180 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms. The mir-3180 microRNA precursor is a short non-coding RNA gene that is part of an RNA gene family (miRBase accession MIPF0000894) which contains mir-3180-1, mir-3180-2, mir-3180-3, mir-3180-4 and mir-3180-5. These have now been predicted or experimentally confirmed in a range of human cancers. mir-3180 was initially identified only in humans (Homo sapiens).
Species distribution
The presence of mir-3180 has been detected in a range of catarrhine primates. It has been identified in primates including human (Homo sapiens), Sumatran orangutan (Pongo abelii), western lowland gorilla (Gorilla gorilla gorilla), northern white-cheeked gibbon (Nomascus leucogenys), green monkey (Chlorocebus sabaeus), olive baboon (Papio anubis) and rhesus monkey (Macaca mulatta). In some of these species the presence of mir-3180 has been shown experimentally; in others the genes encoding mir-3180 have been predicted computationally.
Genomic location
The mir-3180 genes are found on chromosome 16. In humans there are five mir-3180 loci, containing the five genes encoding these miRNAs (mir-3180-1, mir-3180-2, mir-3180-3, mir-3180-4 and mir-3180-5). The mir-3180 genes have the following locations:
hsa-mir-3180-1 (chr16): 14911220–14911313, strand +
hsa-mir-3180-2 (chr16): 16309879–16309966, strand +
hsa-mir-3180-3 (chr16): 18402178–18402271, strand -
hsa-mir-3180-4 (chr16): 15154850–15155002, strand -
hsa-mir-3180-5 (chr16): 2135977–2136129, strand -
Association with cancer
Recently there has been much interest in abnormal levels of expression of microRNAs in cancers. Upregulation of mir-3180 has been reported; in particular, increased levels of mir-3180 have been found in colon cancer.
See also
MicroRNA |
https://en.wikipedia.org/wiki/Everglades%20Nutrient%20Removal%20Project | The Everglades Nutrient Removal Project (ENRP) was a demonstration-scale wetland project proposed by the Everglades Forever Act. Functioning as a prototype for the much larger scale Everglades Construction Project, the ENRP was designed to model the process of using stormwater treatment areas (STAs) to remove nutrients, especially phosphorus, from agricultural runoff entering the Everglades.
Description
Changes in the biotic integrity of the Everglades ecosystem have been largely attributed to the introduction of nutrient-rich runoff from the Everglades Agricultural Area. In 1994, the Everglades Forever Act authorized a 40,000-acre construction project (the Everglades Construction Project) that would use STAs as a way to clean water headed for Everglades National Park of nutrients that would throw the fragile ecosystem out of balance. Never before had a project of that size been managed, and so the ENRP was created as an opportunity to gain perspective on the construction and operation of wetlands for nutrient removal. It was designed using 3,815 acres of land, as opposed to the 40,000+ acres proposed for the final project. Its primary goal was to reduce the levels of phosphorus entering Water Conservation Area 1 (WCA-1) and to offer critical data and insight into the design and operation of the much larger scale project to come.
Management
The South Florida Water Management District (SFWMD) conducted construction, research and monitoring of the project. This required building structural elements like levees and pump stations and also establishing vegetation. Particular emergent plants such as cattails were employed for their ability to take up phosphorus from the water as a means of short-term removal, as well as absorbing nutrients into dead plant matter and soil particles for long-term removal.
Information gained from the experiment allowed for SFWMD to both anticipate potential problems and ensure that optimized phosphorus retention results could be achieved b |
https://en.wikipedia.org/wiki/Homogeneous%20tree | In descriptive set theory, a tree $T$ over a product set $Y \times Z$ is said to be homogeneous if there is a system of measures $\langle \mu_s \mid s \in {}^{<\omega} Y \rangle$ such that the following conditions hold:
$\mu_s$ is a countably-additive measure on $\{ t \mid \langle s, t \rangle \in T \}$.
The measures are in some sense compatible under restriction of sequences: if $s_1 \subseteq s_2$, then $\mu_{s_1}(X) = 1$ if and only if $\mu_{s_2}(\{ t \mid t \upharpoonright \mathrm{lh}(s_1) \in X \}) = 1$.
If $x$ is in the projection of $[T]$, the ultrapower by the tower of measures $\langle \mu_{x \upharpoonright n} \mid n \in \omega \rangle$ is wellfounded.
An equivalent definition is produced when the final condition is replaced with the following:
If $x$ is in the projection of $[T]$ and $\langle X_n \mid n \in \omega \rangle$ is such that $\mu_{x \upharpoonright n}(X_n) = 1$ for all $n$, then there is $f \in {}^{\omega} Z$ such that $f \upharpoonright n \in X_n$ for all $n$. This condition can be thought of as a sort of countable completeness condition on the system of measures.
$T$ is said to be $\kappa$-homogeneous if each $\mu_s$ is $\kappa$-complete.
Homogeneous trees are involved in Martin and Steel's proof of projective determinacy. |
https://en.wikipedia.org/wiki/Capitalization%20rate | Capitalization rate (or "cap rate") is a real estate valuation measure used to compare different real estate investments. Although there are many variations, the cap rate is generally calculated as the ratio between the annual rental income produced by a real estate asset to its current market value. Most variations depend on the definition of the annual rental income and whether it is gross or net of annual costs, and whether the annual rental income is the actual amount received (initial yields), or the potential rental income that could be received if the asset was optimally rented (ERV yield).
Basic formula
The rate is calculated in a simple fashion as follows:
capitalization rate = annual net operating income / current market value
Some investors may calculate the cap rate differently.
In instances where the purchase or market value is unknown, investors can determine the capitalization rate using a different equation based upon historical risk premiums.
Explanatory examples
For example, if a building is purchased for a $1,000,000 sale price and it produces $100,000 in positive net operating income (the amount left over after fixed costs and variable costs are subtracted from gross lease income) during one year, then:
$100,000 / $1,000,000 = 0.10 = 10%
The asset's capitalization rate is ten percent; one-tenth of the building's cost is paid by the year's net proceeds.
If the owner bought the building twenty years ago for $200,000 that is now worth $400,000, his cap rate is
$100,000 / $400,000 = 0.25 = 25%.
The investor must take into account the opportunity cost of keeping their money tied up in this investment. By keeping this building, they are losing the opportunity of investing $400,000 (by selling the building at its market value and investing the proceeds). This is why the current value of the investment, not the actual initial investment, should be used in the cap rate calculation. Thus, for the owner of the building who bought it twenty years ago for $200,000, the real cap rate is twenty-five percent, not fifty percent, and they have $400,000 invested, not $200,000.
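The arithmetic of the two worked examples, in a couple of lines of Python (numbers as in the examples above):

def cap_rate(annual_net_operating_income, current_market_value):
    # capitalization rate = annual NOI / current market value
    return annual_net_operating_income / current_market_value

assert cap_rate(100_000, 1_000_000) == 0.10  # purchase example
assert cap_rate(100_000, 400_000) == 0.25    # long-time owner, current value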
As another example of why the current value |
https://en.wikipedia.org/wiki/Lanthanum%20aluminate-strontium%20titanate%20interface | The interface between lanthanum aluminate (LaAlO3) and strontium titanate (SrTiO3) is a notable materials interface because it exhibits properties not found in its constituent materials. Individually, LaAlO3 and SrTiO3 are non-magnetic insulators, yet LaAlO3/SrTiO3 interfaces can exhibit electrical metallic conductivity, superconductivity, ferromagnetism, large negative in-plane magnetoresistance, and giant persistent photoconductivity. The study of how these properties emerge at the LaAlO3/SrTiO3 interface is a growing area of research in condensed matter physics.
Emergent properties
Conductivity
Under the right conditions, the LaAlO3/SrTiO3 interface is electrically conductive, like a metal. The angular dependence of Shubnikov–de Haas oscillations indicates that the conductivity is two-dimensional, leading many researchers to refer to it as a two-dimensional electron gas (2DEG). Two-dimensional does not mean that the conductivity has zero thickness, but rather that the electrons are confined to only move in two directions. It is also sometimes called a two-dimensional electron liquid (2DEL) to emphasize the importance of inter-electron interactions.
Conditions necessary for conductivity
Not all LaAlO3/SrTiO3 interfaces are conductive. Typically, conductivity is achieved only when:
The LaAlO3/SrTiO3 interface is oriented along the (001), (110) or (111) crystallographic directions
The LaAlO3 and SrTiO3 are crystalline and epitaxial
The SrTiO3 side of the interface is TiO2-terminated (causing the LaAlO3 side of the interface to be LaO-terminated)
The LaAlO3 layer is at least 4 unit cells thick
Conductivity can also be achieved when the SrTiO3 is doped with oxygen vacancies; however, in that case, the interface is technically LaAlO3/SrTiO3−x instead of LaAlO3/SrTiO3.
Hypotheses for conductivity
The source of conductivity at the LaAlO3/SrTiO3 interface has been debated for years. SrTiO3 is a wide-band gap semiconductor that can be doped n-type in a variety of ways. |
https://en.wikipedia.org/wiki/KIF2A | Kinesin-like protein KIF2A is a protein that in humans is encoded by the KIF2A gene.
Kinesins, such as KIF2, are microtubule-associated motor proteins. For background information on kinesins, see MIM 148760.[supplied by OMIM] |
https://en.wikipedia.org/wiki/Primus%20Telecommunications%20%28Australia%29 | iPrimus is an Australian telecommunications company and wholly owned subsidiary of Vocus Communications.
iPrimus primarily focuses on fixed, mobile, and broadband services. Primus Telecom was the first telecommunications carrier to receive a licence when full deregulation and competition was introduced in Australia in 1997 and has network facilities across Australia. Primus operates its own fibre network in the five major capital cities: Sydney, Melbourne, Brisbane, Adelaide and Perth.
History
Market entry (1997–1999)
Primus originally entered the Australian telecommunications market in 1996 by acquiring Australia's fourth largest telecommunications service provider, Axicorp. Primus Telecom was one of the first carriers to be granted a telecommunications licence in 1997. Prior to this, Australia's telecommunications industry was operated as a duopoly between the former Australian Government-owned Telecom Australia (later rebranded Telstra) and Optus, before the market was opened to full competition in July 1997.
1998 saw Primus expand their operations further in Australia with the acquisition of Eclipse Telecommunications Pty. Ltd. and Hotkey Internet Services Pty Limited.
In 1999 Primus' parent company Primus Telecommunications Inc. purchased the retail operations of Telegroup Inc., a long-distance telephone company based in Fairfield, Iowa, which was in Chapter 11 bankruptcy. This acquisition included the Australian subsidiary of Telegroup, which allowed Primus to improve its capabilities in the Australian market.
1999 was also the year Primus Australia's U.S. parent company launched the iPrimus brand to spearhead the organisation's strategy of aggressively gaining retail market share by offering 'one-stop' shopping for bundled voice, data, Internet and cellular services.
Primus' growth (2000–2003)
During the period from 2000–2003 Primus achieved some significant growth. They were one of many telco providers to benefit from the collapse of One.Tel by actively p |
https://en.wikipedia.org/wiki/Spherical%20Earth | Spherical Earth or Earth's curvature refers to the approximation of the figure of the Earth as a sphere.
The earliest documented mention of the concept dates from around the 5th century BC, when it appears in the writings of Greek philosophers. In the 3rd century BC, Hellenistic astronomy established the roughly spherical shape of Earth as a physical fact and calculated the Earth's circumference. This knowledge was gradually adopted throughout the Old World during Late Antiquity and the Middle Ages. A practical demonstration of Earth's sphericity was achieved by Ferdinand Magellan and Juan Sebastián Elcano's circumnavigation (1519–1522).
The concept of a spherical Earth displaced earlier beliefs in a flat Earth: In early Mesopotamian mythology, the world was portrayed as a disk floating in the ocean with a hemispherical sky-dome above, and this forms the premise for early world maps like those of Anaximander and Hecataeus of Miletus. Other speculations on the shape of Earth include a seven-layered ziggurat or cosmic mountain, alluded to in the Avesta and ancient Persian writings (see seven climes).
The realization that the figure of the Earth is more accurately described as an ellipsoid dates to the 17th century, as described by Isaac Newton in Principia. In the early 19th century, the flattening of the earth ellipsoid was determined to be of the order of 1/300 (Delambre, Everest). The modern value as determined by the US DoD World Geodetic System since the 1960s is close to 1/298.25.
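The flattening figure translates directly into the polar radius; a two-line check (WGS84-style equatorial radius assumed):

a = 6378.137            # equatorial radius, km (WGS84)
f = 1 / 298.257223563   # modern flattening value
b = a * (1 - f)         # implied polar radius
print(round(b, 3))      # ~6356.752 km, about 21 km less than the equator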
Cause
Earth is massive enough that the pull of gravity maintains its roughly spherical shape. Most of its deviation from spherical stems from the centrifugal force caused by rotation around its north-south axis. This force deforms the sphere into an oblate ellipsoid.
Formation
The Solar System formed from a dust cloud that was at least partially the remnant of one or more supernovas that produced heavy elements by nucleosynthesis. Grains of matter accreted through electrostatic i |
https://en.wikipedia.org/wiki/List%20of%20q-analogs | This is a list of q-analogs in mathematics and related fields.
Algebra
Iwahori–Hecke algebra
Quantum affine algebra
Quantum enveloping algebra
Quantum group
Analysis
Jackson integral
q-derivative
q-difference polynomial
Quantum calculus
Combinatorics
LLT polynomial
q-binomial coefficient
q-Pochhammer symbol
q-Vandermonde identity
Orthogonal polynomials
q-Bessel polynomials
q-Charlier polynomials
q-Hahn polynomials
q-Jacobi polynomials:
Big q-Jacobi polynomials
Continuous q-Jacobi polynomials
Little q-Jacobi polynomials
q-Krawtchouk polynomials
q-Laguerre polynomials
q-Meixner polynomials
q-Meixner–Pollaczek polynomials
q-Racah polynomials
Probability and statistics
Gaussian q-distribution
q-exponential distribution
q-Weibull distribution
Tsallis q-Gaussian
Tsallis entropy
Special functions
Basic hypergeometric series
Elliptic gamma function
Hahn–Exton q-Bessel function
Jackson q-Bessel function
q-exponential
q-gamma function
q-theta function
See also
Lists of mathematics topics
Q-analogs |
https://en.wikipedia.org/wiki/Caribicus%20anelpistus | Caribicus anelpistus, the Altagracia giant galliwasp, is a possibly extinct species of lizard of the Diploglossidae family endemic to the Dominican Republic on the Caribbean island of Hispaniola.
Taxonomy
Along with the other members of its genus, it was formerly classified in the genus Celestus.
Conservation
Due to habitat loss and predation by the small Indian mongoose, it is considered critically endangered, if not extinct. Known only from the holotype, it has not been seen since 1977 in San Cristobal Province, though a giant galliwasp sighted in the vicinity of Jarabacoa in 2004 may be this species. |
https://en.wikipedia.org/wiki/SERENDIP | SERENDIP (Search for Extraterrestrial Radio Emissions from Nearby Developed Intelligent Populations) is a Search for Extra-Terrestrial Intelligence (SETI) program originated by the Berkeley SETI Research Center at the University of California, Berkeley.
SERENDIP takes advantage of ongoing "mainstream" radio telescope observations as a "piggy-back" or "commensal" program. Rather than having its own observation program, SERENDIP analyzes deep space radio telescope data that it obtains while other astronomers are using the telescope.
Background
The initial SERENDIP instrument was a 100-channel analog radio spectrometer covering 100 kHz of bandwidth. Subsequent instruments have been significantly more capable, with the number of channels doubling roughly every year. These instruments have been deployed at a large number of telescopes including the NRAO 90m telescope at Green Bank and the Arecibo 305m telescope.
SERENDIP observations have been conducted at frequencies between 400 MHz and 5 GHz, with most observations near the so-called Cosmic Water Hole (1.42 GHz (21 cm) neutral hydrogen and 1.66 GHz hydroxyl transitions).
Projects
SERENDIP V was installed at the Arecibo Observatory in June 2009. The digital back-end instrument was an FPGA-based 128 million-channel digital spectrometer covering 200 MHz of bandwidth. It took data commensally with the seven-beam Arecibo L-band Feed Array (ALFA).
The next generation of SERENDIP experiments, SERENDIP VI was deployed in 2014 at both Arecibo and the Green Bank Telescope. SERENDIP VI will also look for fast radio bursts, in collaboration with scientists from University of Oxford and West Virginia University.
Findings
The program has found around 400 suspicious signals, but there is not enough data to prove that they originate from extraterrestrial intelligence. In September–October 2004 the media reported on radio source SHGb02+14a and a possibly artificial origin, but scrutiny has not been able to confirm its connection with an ext |
https://en.wikipedia.org/wiki/Eomaia | Eomaia ("dawn mother") is a genus of extinct fossil mammals containing the single species Eomaia scansoria, discovered in rocks that were found in the Yixian Formation, Liaoning Province, China, and dated to the Barremian Age of the Lower Cretaceous, about 125 million years ago. The single fossil specimen of this species is about 10 centimetres in length and virtually complete. An estimate of the body weight is between 20 and 25 grams. It is exceptionally well-preserved for a 125-million-year-old specimen. Although the fossil's skull is squashed flat, its teeth, tiny foot bones, cartilages and even its fur are visible.
Description
The Eomaia fossil shows clear traces of hair. However, this is not the earliest clear evidence of hair in the mammalian lineage, as fossils of Volaticotherium, and the docodont Castorocauda, discovered in rocks dated to about 164 million years ago, also have traces of fur.
Eomaia scansoria possessed several features in common with placental mammals that distinguish them from metatherians, the group that includes modern marsupials. These include an enlarged malleolus ("little hammer") at the bottom of the tibia (the larger of the two shin bones), a joint between the first metatarsal bone and the medial cuneiform bone in the foot which is offset further back than the joint between the second metatarsal and intermediate cuneiform bones (in metatherians these joints are level with each other), as well as various features of jaws and teeth.
However, E. scansoria is not a true placental mammal as it lacks some features that are specific to placentals. These include the presence of a malleolus at the bottom of the fibula, the smaller of the two shin bones, a complete mortise and tenon upper ankle joint, where the rearmost bones of the foot fit into a socket formed by the ends of the tibia and fibula, and an atypical ancestral eutherian dental formula of 5.1.5.3/4.1.5.3. Eomaia had five upper and four lower incisors (much more typical for metatherians) and five premolars to three molars. Placental mammals have only up to three incisors on eac |
https://en.wikipedia.org/wiki/Guarded%20logic | Guarded logic is a choice set of dynamic logic involved in choices, where outcomes are limited.
A simple example of guarded logic is the choice "if X is true, then Y, else Z", which can be expressed in dynamic logic as (X?;Y)∪(~X?;Z). This shows a guarded logical choice: if X holds, then X?;Y is equal to Y and ~X?;Z is blocked, and Y∪block is also equal to Y. Hence, when X is true, the primary performer of the action can only take the Y branch, and when X is false, the Z branch.
A real-world analogue is the idea of a paradox: something cannot be both true and false. A guarded logical choice is one where any change in the truth value affects all decisions made down the line.
History
Before the use of guarded logic, two major frameworks were used to interpret modal logic: mathematical logic and database theory (artificial intelligence), both grounded in first-order predicate logic. Both identified sub-classes of first-order logic that could be used efficiently in decidable languages suitable for research. But neither could explain powerful fixed-point extensions to modal-style logics.
Later Moshe Y. Vardi made a conjecture that a tree model would work for many modal style logics. The guarded fragment of first-order logic was first introduced by Hajnal Andréka, István Németi and Johan van Benthem in their article Modal languages and bounded fragments of predicate logic. They successfully transferred key properties of description, modal, and temporal logic to predicate logic. It was found that the robust decidability of guarded logic could be generalized with a tree model property. The tree model can also be a strong indication that guarded logic extends modal framework which retains the basics of modal logics.
Modal logics are generally characterized by their invariance under bisimulation. Invariance under bisimulation is also the root of the tree model property, which helps in defining connections to automata theory.
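As a rough illustration of what invariance under bisimulation involves, the following sketch checks whether a candidate relation R is a bisimulation between two finite labelled transition systems; the representation (label maps plus successor maps) and the function name is_bisimulation are assumptions made for this example:

```python
def is_bisimulation(R, labels1, succ1, labels2, succ2):
    for s, t in R:
        if labels1[s] != labels2[t]:  # related states must agree on labels
            return False
        for s2 in succ1[s]:           # forth: each move of s ...
            if not any((s2, t2) in R for t2 in succ2[t]):
                return False          # ... must be matched by some move of t
        for t2 in succ2[t]:           # back: and each move of t by some move of s
            if not any((s2, t2) in R for s2 in succ1[s]):
                return False
    return True

# A two-state loop and its one-state folding: bisimilar despite different shape,
# so no modal (bisimulation-invariant) formula can tell them apart.
labels1, succ1 = {0: "p", 1: "p"}, {0: [1], 1: [0]}
labels2, succ2 = {0: "p"}, {0: [0]}
assert is_bisimulation({(0, 0), (1, 0)}, labels1, succ1, labels2, succ2)
```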
Types of guarded logic
Within guarded logic there exist numerous guarded objects.
https://en.wikipedia.org/wiki/ADAM17 | A disintegrin and metalloprotease 17 (ADAM17), also called TACE (tumor necrosis factor-α-converting enzyme), is a 70-kDa enzyme that belongs to the ADAM protein family of disintegrins and metalloproteases.
Chemical characteristics
ADAM17 is an 824-amino acid polypeptide.
Function
ADAM17 is understood to be involved in the processing of tumor necrosis factor alpha (TNF-α) at the surface of the cell, and from within the intracellular membranes of the trans-Golgi network. This process, which is also known as 'shedding', involves the cleavage and release of a soluble ectodomain from membrane-bound pro-proteins (such as pro-TNF-α), and is of known physiological importance. ADAM17 was the first 'sheddase' to be identified, and is also understood to play a role in the release of a diverse variety of membrane-anchored cytokines, cell adhesion molecules, receptors, ligands, and enzymes.
Cloning of the TNF-α gene revealed it to encode a 26 kDa type II transmembrane pro-polypeptide that becomes inserted into the cell membrane during its maturation. At the cell surface, pro-TNF-α is biologically active, and is able to induce immune responses via juxtacrine intercellular signaling. However, pro-TNF-α can undergo a proteolytic cleavage at its Ala76-Val77 amide bond, which releases a soluble 17 kDa extracellular domain (ectodomain) from the pro-TNF-α molecule. This soluble ectodomain is the cytokine commonly known as TNF-α, which is of pivotal importance in paracrine signaling. This proteolytic liberation of soluble TNF-α is catalyzed by ADAM17.
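As a purely schematic illustration of the shedding step (not the enzyme's actual chemistry), the following toy sketch models cleavage at the Ala76-Val77 bond as a string split; the 233-residue length assumed for human pro-TNF-α and the helper shed_ectodomain are illustrative assumptions:

```python
def shed_ectodomain(pro_tnf: str, cleave_after: int = 76):
    """Split a residue string after position `cleave_after` (1-indexed).

    For a type II transmembrane protein such as pro-TNF-alpha (N-terminus
    inside the cell), the C-terminal fragment is the soluble ectodomain
    that ADAM17 releases as mature TNF-alpha.
    """
    stub = pro_tnf[:cleave_after]        # residues 1..76 stay membrane-anchored
    ectodomain = pro_tnf[cleave_after:]  # residues 77.. are shed as TNF-alpha
    return stub, ectodomain

stub, tnf = shed_ectodomain("X" * 233)   # dummy chain of assumed length
assert len(tnf) == 157                   # ~157 residues ≈ the 17 kDa mature cytokine
```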
More recently, ADAM17 was identified as a crucial mediator of resistance to radiotherapy. Radiotherapy can induce a dose-dependent increase in furin-mediated cleavage of the ADAM17 proform to active ADAM17, which results in enhanced ADAM17 activity in vitro and in vivo. It was also shown that radiotherapy activates ADAM17 in non-small cell lung cancer, which results in the shedding of multiple survival factors and growth factor pathway activation.
https://en.wikipedia.org/wiki/D%C3%A9cio%20Villares | Décio Rodrigues Villares (1 December 1851, in Rio de Janeiro – 21 June 1931, in Rio de Janeiro) was a Brazilian painter, sculptor, caricaturist, and graphic designer. He is best known for helping to design the blue disc on the Brazilian Flag and his designs for the monument honoring Júlio de Castilhos.
Biography
His father, José Rodrigues Villares, was a Lieutenant Colonel, a member of the Nova Iguaçu city council, and a participant in the Liberal rebellions of 1842. Although his family was not wealthy, they were politically connected (his uncle, Manoel Rodrigues Villares (1804-1878), served as a Minister of the Supreme Federal Court), so he was able to gain entrance to the Colégio Pedro II and the Academia Imperial de Belas Artes. There, he studied with Victor Meirelles and Pedro Américo.
In 1870, he began providing caricatures for the satirical magazine Comédia Social, published by Américo and his younger brother, who was also a student there at that time. For unknown reasons, he was frequently absent from his classes, did not participate in exhibitions and did not compete for the travel scholarship. Eventually, he dropped out. For the next nine years, he travelled, going initially (1872) to Paris.
There, he studied in the workshops of Alexandre Cabanel. In 1874, he was awarded a gold medal at the Salon for his painting of Paolo Malatesta and Francesca da Rimini, which was praised by a notoriously difficult-to-please art critic. During that period, he was first exposed to the positivist philosophy of Auguste Comte and abandoned Catholicism. This would result in his being rejected for a teaching position at the Académie des Beaux-Arts. He then went from Paris to Florence, where Pedro Américo maintained a studio. He may have studied sculpture with Rodolfo Bernardelli. It is not known exactly how long he stayed in Italy, although letters indicate that he was still there in 1878. It is possible that he returned briefly to France.
What is known for certain is t |