source | text |
|---|---|
https://en.wikipedia.org/wiki/Multiplex%20%28sensor%29 | Multiplex sensor is a hand-held multiparametric optical sensor developed by Force-A. The sensor is the result of 15 years of research on plant autofluorescence conducted by the CNRS (National Center for Scientific Research) and the University of Paris-Sud Orsay. It provides accurate and complete information on the physiological state of the crop, allowing real-time, non-destructive measurements of chlorophyll and polyphenol contents in leaves and fruits.
Technology
Multiplex assesses chlorophyll and polyphenol indices by exploiting two properties of plant fluorescence: the re-absorption of fluorescence by chlorophyll and the screening effect of polyphenols.
The sensor is an optical head which contains:
Optical sources (UV, blue, green and red)
Detectors (blue-green or yellow, red and far-red (NIR))
Applications
Alongside other data, Multiplex is designed to provide input for decision support systems (DSS) for a range of crops, including:
Fertilization applications
Crop quality assessments (nitrogen status, maturity, freshness and disease detection)
As a standalone sensor, Multiplex is a tool for rapid collection of information on the chlorophyll and flavonoid contents of a plant, for use in ecophysiological research. |
https://en.wikipedia.org/wiki/Chaperone-mediated%20autophagy | Chaperone-mediated autophagy (CMA) refers to the chaperone-dependent selection of soluble cytosolic proteins that are then targeted to lysosomes and directly translocated across the lysosome membrane for degradation. The unique features of this type of autophagy are the selectivity on the proteins that are degraded by this pathway and the direct shuttling of these proteins across the lysosomal membrane without the requirement for the formation of additional vesicles (Figure 1).
Molecular components and steps
The proteins that are degraded through CMA are cytosolic proteins or proteins from other compartments once they reach the cytosol. Therefore, some of the components that participate in CMA are present in the cytosol while others are located at the lysosomal membrane (Table I).
Understanding of how proteins are specifically selected for degradation in all forms of autophagy advanced as studies uncovered the role of chaperones such as hsc70. Although hsc70 targets cytosolic proteins to CMA based on recognition of a specific amino acid sequence, it works differently when targeting proteins to macro- or microautophagy.
In one mechanism for a protein to be a CMA substrate, it must have in its amino acid sequence a pentapeptide motif biochemically related to KFERQ. This CMA-targeting motif is recognized by a cytosolic chaperone, the heat shock cognate protein of 70 kDa (hsc70), which targets the substrate to the lysosome surface. This substrate protein-chaperone complex binds to lysosome-associated membrane protein type 2A (LAMP-2A), which acts as the receptor for this pathway. LAMP-2A, a single-span membrane protein, is one of three splice variants of the single gene lamp2. The other two isoforms, LAMP-2B and LAMP-2C, are involved in macroautophagy and vesicular trafficking, respectively. Substrate proteins undergo unfolding after binding to LAMP-2A, in a process likely mediated by the membrane-associated hsc70 and its co-chaperones Bag1, hip, hop and hsp40, also detected at the l |
https://en.wikipedia.org/wiki/Expensive%20Tape%20Recorder | Expensive Tape Recorder is a digital audio program written by David Gross while a student at the Massachusetts Institute of Technology. Gross developed the idea with Alan Kotok, a fellow member of the Tech Model Railroad Club. The recorder and playback system ran in the late 1950s or early 1960s on MIT's TX-0 computer on loan from Lincoln Laboratory.
The name
Gross referred to this project by this name casually, in the context of Expensive Typewriter and other programs that took their names in the spirit of "Colossal Typewriter". It is unclear whether the "typewriter" programs were named for the US$3 million development cost of the TX-0, or for the retail price of the DEC PDP-1, a descendant of the TX-0 installed next door at MIT in 1961. The PDP-1 was one of the least expensive computers money could buy, at about US$120,000 in 1962. The program has been referred to as a hack, perhaps in the historical sense, in the MIT hack sense, or in the sense of Hackers: Heroes of the Computer Revolution, a book by Steven Levy.
The project
Gross recalled and very briefly described the project in a 1984 Computer Museum meeting. A person associated with the Tixo Web site spoke with Gross and Kotok, and posted the only other description known.
Influence
According to Kotok, the project was "digital recording more than 20 years ahead of its time." In 1984, when Jack Dennis asked if they could recognize Beethoven, Computer Museum meeting minutes record the authors as saying, "It wasn't bad, considering." Digital audio pioneer Thomas Stockham worked with Dennis and, like Kotok, helped develop a contemporary debugger. Whether he was first influenced by Expensive Tape Recorder or more by the work of Kenneth N. Stevens is unknown.
See also
PDP-1
Digital recording
Expensive Typewriter
Expensive Desk Calculator
Expensive Planetarium
Harmony Compiler
Notes |
https://en.wikipedia.org/wiki/Core-Plus%20Mathematics%20Project | Core-Plus Mathematics is a high school mathematics program consisting of a four-year series of print and digital student textbooks and supporting materials for teachers, developed by the Core-Plus Mathematics Project (CPMP) at Western Michigan University, with funding from the National Science Foundation. Development of the program started in 1992. The first edition, entitled Contemporary Mathematics in Context: A Unified Approach, was completed in 1995. The third edition, entitled Core-Plus Mathematics: Contemporary Mathematics in Context, was published by McGraw-Hill Education in 2015.
Key Features
The first edition of Core-Plus Mathematics was designed to meet the curriculum, teaching, and assessment standards from the National Council of Teachers of Mathematics and the broad goals outlined in the National Research Council report, Everybody Counts: A Report to the Nation on the Future of Mathematics Education. Later editions were designed to also meet the American Statistical Association Guidelines for Assessment and Instruction in Statistics Education (GAISE) and most recently the standards for mathematical content and practice in the Common Core State Standards for Mathematics (CCSSM).
The program puts an emphasis on teaching and learning mathematics through mathematical modeling and mathematical inquiry. Each year, students learn mathematics in four interconnected strands: algebra and functions, geometry and trigonometry, statistics and probability, and discrete mathematical modeling.
First Edition (1994-2003)
The program originally comprised three courses, intended to be taught in grades 9 through 11. Later, authors added a fourth course intended for college-bound students.
Second Edition (2008-2011)
The course was re-organized around interwoven strands of algebra and functions, geometry and trigonometry, statistics and probability, and discrete mathematics. Lesson structure was updated, and technology tools, including the CPMP-Tools software, were introduced. |
https://en.wikipedia.org/wiki/Badouel%20intersection%20algorithm | The Badouel ray-triangle intersection algorithm, named after its inventor Didier Badouel, is a fast method for calculating the intersection of a ray and a triangle in three dimensions without requiring precomputation of the equation of the plane containing the triangle.
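The following is a minimal Python sketch of the two-stage test this describes: intersect the ray with the triangle's plane (with the normal computed on the fly rather than precomputed), then decide containment via 2D barycentric coordinates after dropping the dominant axis of the normal, as in Badouel's Graphics Gems chapter. The function names and the eps tolerance are illustrative choices, not part of the original article.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the ray parameter t at the hit point, or None on a miss."""
    e1 = tuple(v1[i] - v0[i] for i in range(3))
    e2 = tuple(v2[i] - v0[i] for i in range(3))
    n = cross(e1, e2)                      # plane normal, computed per call
    denom = dot(n, direction)
    if abs(denom) < eps:                   # ray parallel to the plane
        return None
    t = dot(n, tuple(v0[i] - origin[i] for i in range(3))) / denom
    if t < 0:                              # intersection behind the origin
        return None
    p = tuple(origin[i] + t * direction[i] for i in range(3))
    # Drop the dominant axis of the normal for a well-conditioned 2D problem.
    k = max(range(3), key=lambda i: abs(n[i]))
    u, v = [i for i in range(3) if i != k]
    pu, pv = p[u] - v0[u], p[v] - v0[v]
    det = e1[u] * e2[v] - e1[v] * e2[u]
    if abs(det) < eps:                     # degenerate triangle
        return None
    beta = (pu * e2[v] - pv * e2[u]) / det   # barycentric coordinates
    gamma = (e1[u] * pv - e1[v] * pu) / det
    if beta >= 0 and gamma >= 0 and beta + gamma <= 1:
        return t
    return None

# Example: a ray straight down onto the unit right triangle in the XY plane.
print(ray_triangle((0.2, 0.3, 1.0), (0, 0, -1),
                   (0, 0, 0), (1, 0, 0), (0, 1, 0)))  # 1.0
```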
External links
"An Efficient Ray-Polygon Intersection" by Didier Badouel, from Graphics Gems I
Computational geometry |
https://en.wikipedia.org/wiki/Vermicompost | Vermicompost (vermi-compost) is the product of the decomposition process using various species of worms, usually red wigglers, white worms, and other earthworms, to create a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture.
Vermicast (also called worm castings, worm humus, worm poop, worm manure, or worm faeces) is the end-product of the breakdown of organic matter by earthworms. These excreta have been shown to contain reduced levels of contaminants and a higher saturation of nutrients than the organic materials before vermicomposting.
Vermicompost contains water-soluble nutrients which may be extracted as vermiwash and is an excellent, nutrient-rich organic fertilizer and soil conditioner. It is used in gardening and sustainable, organic farming.
Vermicomposting can also be applied for treatment of sewage. A variation of the process is vermifiltration (or vermidigestion) which is used to remove organic matter, pathogens, and oxygen demand from wastewater or directly from blackwater of flush toilets.
Overview
Vermicomposting has gained popularity in both industrial and domestic settings because, compared with conventional composting, it treats organic wastes more quickly; in manure composting, it also yields products with lower salinity levels.
The earthworm species (or composting worms) most often used are red wigglers (Eisenia fetida or Eisenia andrei), though European nightcrawlers (Eisenia hortensis, synonym Dendrobaena veneta) and red earthworm (Lumbricus rubellus) could also be used. Red wigglers are recommended by most vermicomposting experts, as they have some of the best appetites and breed very quickly. Users refer to European nightcrawlers by a variety of other names, including dendrobaenas, dendras, Dutch nightcrawlers, and Belgian nightcrawlers.
Containing water-soluble nutrients, v |
https://en.wikipedia.org/wiki/Mono%20%28software%29 | Mono is a free and open-source .NET Framework-compatible software framework. Originally developed by Ximian, it was later acquired by Novell and is now led by Xamarin, a subsidiary of Microsoft, and the .NET Foundation. Mono can run on many software systems.
History
When Microsoft first announced their .NET Framework in June 2000 it was described as "a new platform based on Internet standards", and in December of that year the underlying Common Language Infrastructure was published as an open standard, "ECMA-335", opening up the potential for independent implementations. Miguel de Icaza of Ximian believed that .NET had the potential to increase programmer productivity and began investigating whether a Linux version was feasible. Recognizing that their small team could not expect to build and support a full product, they launched the Mono open-source project, on July 19, 2001, at the O'Reilly conference.
After three years of development, Mono 1.0 was released on June 30, 2004. Mono evolved from its initial focus of a developer platform for Linux desktop applications to supporting a wide range of architectures and operating systems - including embedded systems.
Novell acquired Ximian in 2003. After Novell was acquired by Attachmate in April 2011, Attachmate announced hundreds of layoffs for the Novell workforce, putting in question the future of Mono.
On May 16, 2011, Miguel de Icaza announced in his blog that Mono would continue to be supported by Xamarin, a company he founded after being laid off from Novell. The original Mono team had also moved to the new company. Xamarin planned to keep working on Mono and to rewrite the proprietary .NET stacks for iOS and Android from scratch, because Novell still owned MonoTouch and Mono for Android at the time. After this announcement, the future of the project was questioned, MonoTouch and Mono for Android being in direct competition with the existing commercial offerings now owned by Attachmate, and considering th |
https://en.wikipedia.org/wiki/Retrograde%20%28music%29 | A melodic line that is the reverse of a previously or simultaneously stated line is said to be its retrograde or cancrizans ( "walking backward", medieval Latin, from cancer "crab"). An exact retrograde includes both the pitches and rhythms in reverse. An even more exact retrograde reverses the physical contour of the notes themselves, though this is possible only in electronic music. Some composers choose to subject just the pitches of a musical line to retrograde, or just the rhythms. In twelve-tone music, reversal of the pitch classes alone—regardless of the melodic contour created by their registral placement—is regarded as a retrograde.
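As a simple illustration of these distinctions (not drawn from the article), a melody can be modeled as a list of (pitch, duration) pairs, and the variants computed by reversing one or both components:

```python
melody = [("C4", 1.0), ("E4", 0.5), ("G4", 0.5), ("B4", 2.0)]

# Exact retrograde: pitches and rhythms reversed together.
exact_retrograde = list(reversed(melody))

# Pitch-only retrograde: reverse the pitch sequence, keep the rhythm.
pitches = [p for p, _ in melody]
durations = [d for _, d in melody]
pitch_retrograde = list(zip(reversed(pitches), durations))

print(exact_retrograde)  # [('B4', 2.0), ('G4', 0.5), ('E4', 0.5), ('C4', 1.0)]
print(pitch_retrograde)  # [('B4', 1.0), ('G4', 0.5), ('E4', 0.5), ('C4', 2.0)]
```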
In modal and tonal music
In treatises
Retrograde was not mentioned in theoretical treatises prior to 1500. Nicola Vicentino (1555) discussed the difficulty in finding canonic imitation: "At times, the fugue or canon cannot be discovered through the systems mentioned above, either because of the impediment of rests, or because one part is going up while another is going down, or because one part starts at the beginning and the other at the end. In such cases a student can begin at the end and work back to the beginning in order to find where and in which voice he should begin the canons." Vicentino derided those who achieved purely intellectual pleasure from retrograde (and similar permutations): "A composer of such fancies must try to make canons and fugues that are pleasant and full of sweetness and harmony. He should not make a canon in the shape of a tower, a mountain, a river, a chessboard, or other objects, for these compositions create a loud noise in many voices, with little harmonic sweetness. To tell the truth, a listener is more likely to be induced to vexation than to delight by these disproportioned fancies, which are devoid of pleasant harmony and contrary to the goal of the imitation of the nature of the words."
Thomas Morley (1597) described retrograde in the context of canons and mentioned a work by Byrd. Fri |
https://en.wikipedia.org/wiki/Sulston%20score | The Sulston score is an equation used in DNA mapping to numerically assess the likelihood that a given "fingerprint" similarity between two DNA clones is merely a result of chance. Used as such, it is a test of statistical significance. That is, low values imply that the similarity is significant, suggesting that two DNA clones overlap one another and that the given similarity is not just a chance event. The equation is named after John Sulston, the lead author of the paper that first proposed its use.
The overlap problem in mapping
Each clone in a DNA mapping project has a "fingerprint", i.e. a set of DNA fragment lengths inferred from (1) enzymatically digesting the clone, (2) separating these fragments on a gel, and (3) estimating their lengths based on gel location. For each pairwise clone comparison, one can establish how many lengths from each set match up. Cases having at least one match indicate that the clones might overlap, because matches may represent the same DNA. However, the underlying sequences for each match are not known. Consequently, two fragments whose lengths match may still represent different sequences. In other words, matches do not conclusively indicate overlaps. The problem is instead one of using matches to probabilistically classify overlap status.
Mathematical scores in overlap assessment
Biologists have used a variety of means (often in combination) to discern clone overlaps in DNA mapping projects. While many are biological, i.e. looking for shared markers, others are basically mathematical, usually adopting probabilistic and/or statistical approaches.
Sulston score exposition
The Sulston score is rooted in the concepts of Bernoulli and binomial processes, as follows. Consider two clones, α and β, having m and n measured fragment lengths, respectively, where m ≥ n. That is, clone α has at least as many fragments as clone β, but usually more. The Sulston score is the probability that at least h fragment |
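A minimal sketch of the binomial-tail computation this sets up, under the simplifying assumption that each of clone β's n fragment lengths independently matches a fragment of clone α with the same probability p (in practice derived from the gel's measurement tolerance); the variable names follow the notation above and the example numbers are illustrative:

```python
from math import comb

def sulston_like_score(n: int, h: int, p: float) -> float:
    """P(at least h of n fragment lengths match purely by chance)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(h, n + 1))

# Example: clone beta has n = 10 fragments, h = 7 of them match clone alpha,
# and a single fragment matches by chance with probability p = 0.1. The tiny
# score says such a similarity is very unlikely to be coincidental,
# suggesting a genuine overlap.
print(sulston_like_score(10, 7, 0.1))  # ~9.1e-06
```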
https://en.wikipedia.org/wiki/Antheridiogen | Antheridiogens are a class of chemicals secreted by fern gametophytes that have "been shown to influence production of male gametangia and thus mating systems in a large number of terrestrial fern species". Antheridiogens are only observed in homosporous fern species, as all gametophytes are potentially bisexual (have the ability to produce both archegonia and antheridia).
Background
The first study regarding antheridiogen was published by Walter Döpp in 1950. In this article, he explains the discovery of a molecule, which he titled "A-substanz", that caused premature formation of antheridia when agar media was reused after cultivation of Pteridium aquilinum. A majority of the studies regarding antheridiogen were done by two researchers, Ulrich Näf and H. Schraudolf.
Sex-determination pathway
The way in which antheridiogen determines sex in ferns is a "spatiotemporally split gibberellin synthesis pathway". Gibberellins are a group of hormones that control plant processes. In the first step of this process, gametophytes, or prothalli, express gibberellin (GA)-specific genes, producing a GA intermediate that is then secreted into the external environment. In the second step, antheridiogens are taken up by neighboring gametophytes in the colony and undergo a series of molecular changes that allow them to induce or suppress the formation of antheridia or archegonia. This helps regulate the sex ratio of the colony.
The timing with which antheridiogen affects the gender of growing gametophytes is still under study. One theory states that "the spores that germinate first develop as hermaphrodites and secrete antheridiogen, while those that germinate later or develop more slowly become male under the influence of the secreted antheridiogen". Depending on the ratio of males to hermaphrodites, either outcrossing or inbreeding is selected for by the population.
Studies performed on Ceratopteris richardii have proven that a growing gametophyte is only abl |
https://en.wikipedia.org/wiki/EIF4A | The eukaryotic initiation factor-4A (eIF4A) family consists of 3 closely related proteins EIF4A1, EIF4A2, and EIF4A3. These factors are required for the binding of mRNA to 40S ribosomal subunits. In addition these proteins are helicases that function to unwind double-stranded RNA.
Background
The mechanisms governing the basic subsistence of eukaryotic cells are immensely complex; it is therefore unsurprising that regulation occurs at a number of stages of protein synthesis, and the regulation of translation has become a well-studied field. Human translational control is of increasing research interest, as it is implicated in a range of diseases. Orthologs of many of the factors involved in human translation are shared by a range of eukaryotic organisms, some of which are used as model systems for the investigation of translation initiation and elongation, for example: sea urchin eggs upon fertilization, rodent brain, and rabbit reticulocytes. Monod and Jacob were among the first to propose that "the synthesis of individual proteins may be provoked or suppressed within a cell, under the influence of specific external agents, and the relative rates at which different proteins may be profoundly altered, depending upon external conditions". Almost half a century after the flurry of postulations arising from the revelation of the central dogma of molecular biology, of which the preceding supposition by Monod and Jacob is an example, contemporary researchers still have much to learn about the modulation of genetic expression. Synthesis of protein from mature messenger RNA in eukaryotes is divided into translation initiation, elongation, and termination; of these stages, initiation is the rate-limiting step. Within translation initiation, the bottleneck occurs shortly before the ribosome binds the 5' m7GTP cap, a step facilitated by a number of proteins; it is at this stage that constraints born of stress, amino acid starvation, etc. take effect.
|
https://en.wikipedia.org/wiki/Null-move%20heuristic | In computer chess programs, the null-move heuristic is a heuristic technique used to enhance the speed of the alpha–beta pruning algorithm.
Rationale
Alpha–beta pruning speeds the minimax algorithm by identifying cutoffs, points in the game tree where the current position is so good for the side to move that best play by the other side would have avoided it. Since such positions could not have resulted from best play, they and all branches of the game tree stemming from them can be ignored. The faster the program produces cutoffs, the faster the search runs. The null-move heuristic is designed to guess cutoffs with less effort than would otherwise be required, whilst retaining a reasonable level of accuracy.
The null-move heuristic is based on the fact that most reasonable chess moves improve the position for the side that played them. So, if the player whose turn it is to move can forfeit the right to move (or make a null move – an illegal action in chess) and still have a position strong enough to produce a cutoff, then the current position would almost certainly produce a cutoff if the current player actually moved.
Implementation
In employing the null-move heuristic, the program first forfeits the turn of the side to move, then performs an alpha–beta search on the resulting position to a shallower depth than it would otherwise have searched the current position. If this shallow search produces a cutoff, the program assumes that a full-depth search without the forfeited turn would also have produced one. Because a shallow search is faster than a deeper one, the cutoff is found sooner, accelerating the computer chess program. If the shallow search fails to produce a cutoff, the program must perform the full-depth search.
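A condensed sketch of this procedure in negamax form follows. The Position interface (make_null_move, children, evaluate, in_check) is hypothetical, and R, the depth reduction for the shallower null-move search, is conventionally 2 or 3:

```python
R = 2  # null-move depth reduction (a common conventional value)

def negamax(pos, depth, alpha, beta):
    if depth <= 0:
        return pos.evaluate()   # static evaluation from the side to move

    # Null-move heuristic: forfeit the turn (illegal in real chess) and
    # search the resulting position to a reduced depth. If the opponent
    # cannot bring the score below beta even after a free move, assume the
    # full-depth search would also have produced a cutoff. Real engines
    # skip this when in check or in zugzwang-prone endgames, where the
    # heuristic's underlying assumption fails.
    if depth > R + 1 and not pos.in_check():
        null_pos = pos.make_null_move()          # same board, side flipped
        score = -negamax(null_pos, depth - 1 - R, -beta, -beta + 1)
        if score >= beta:
            return beta                          # null-move cutoff

    # Otherwise fall back to the ordinary full-width alpha-beta search.
    for child in pos.children():
        score = -negamax(child, depth - 1, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```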
This approach makes two assumptions. First, it assumes that the disadvantage of forfeiting one's turn is greater than the disadvantage of performing a shallower search. |
https://en.wikipedia.org/wiki/Andrea%20Rossi%20%28entrepreneur%29 | Andrea Rossi (born 3 June 1950) is an Italian entrepreneur who claimed to have invented a cold fusion device.
In the 1970s, Rossi claimed to have invented a process to convert organic waste into petroleum, and in 1978 he founded a company named Petroldragon to process waste. In 1989 the company was shut down by the Italian government amid allegations of fraud, and Rossi was arrested. In 1996 Rossi moved to the United States, and from 2001 to 2003 he worked under a U.S. Army contract to make a thermoelectric device that, while promising to be superior to other devices, produced only around 1/1000 of the claimed performance.
In 2008 Rossi attempted to patent a device called an Energy Catalyzer (or E-Cat), which was a purported cold fusion or Low-Energy Nuclear Reaction (LENR) thermal power source. Rossi claimed that the device produces massive amounts of excess heat that could be used to produce electricity, but independent attempts to reproduce the effect failed.
Biography
Andrea Rossi was born on 3 June 1950, in Milan.
In 1973, Rossi graduated in philosophy from the University of Milan, writing a thesis on Albert Einstein's theory of relativity and its interrelationship with Edmund Husserl's phenomenology. Although Rossi also holds a degree in chemical engineering, this degree was granted by Kensington University in California, which was later shut down as a diploma mill.
Andrea Rossi is married to Maddalena Pascucci.
Business ventures
Petroldragon
In 1974, Rossi registered a patent for an incineration system. In 1978, he wrote The Incineration of Waste and River Purification, published in Milan by Tecniche Nuove.
He then founded Petroldragon, a company that was paid to process toxic waste, claiming to use Rossi's process to convert the waste into usable petroleum products.
In 1989 Italian customs seized several Petroldragon waste deposit sites and assets. Investigations showed that petroleum supposedly produced by the company had never been placed on the mark |
https://en.wikipedia.org/wiki/FORECAST%20%28model%29 | FORECAST is a management-oriented, stand-level, forest-growth and ecosystem-dynamics model. The model was designed to accommodate a wide variety of silvicultural and harvesting systems and natural disturbance events (e.g., fire, wind, insect epidemics) in order to compare and contrast their effect on forest productivity, stand dynamics, and a series of biophysical indicators of non-timber values.
Model description
Projection of stand growth and ecosystem dynamics is based upon a representation of the rates of key ecological processes regulating the availability of, and competition for, light and nutrient resources (a representation of moisture effects on soil processes, plant physiology and growth, and the consequences of moisture competition is being added). The rates of these processes are calculated from a combination of historical bioassay data (such as biomass accumulation in plant components and changes in stand density over time) and measures of certain ecosystem variables (including decomposition rates, photosynthetic saturation curves, and plant tissue nutrient concentrations) by relating ‘biologically active’ biomass components (foliage and small roots) to calculated values of nutrient uptake, the capture of light energy, and net primary production.
Using this ‘internal calibration’ or hybrid approach, the model generates a suite of growth properties for each tree and understory plant species that is to be represented in a subsequent simulation. These growth properties are used to model growth as a function of resource availability and competition. They include (but are not limited to): (1) photosynthetic efficiency per unit foliage biomass and its nitrogen content based on relationships between foliage nitrogen, simulated self-shading, and net primary productivity after accounting for litterfall and mortality; (2) nutrient uptake requirements based on rates of biomass accumulation and literature- or field-based measures of nutrient concentrations in diff |
https://en.wikipedia.org/wiki/52-hertz%20whale | The 52-hertz whale, colloquially referred to as 52 Blue, is an individual whale of unidentified species that calls at the unusual frequency of 52 hertz. This pitch is higher than that of the other whale species with migration patterns most closely resembling the 52-hertz whale's: the blue whale (10 to 39 Hz) and the fin whale (20 Hz). Its call has been detected regularly in many locations since the late 1980s, and it appears to be the only individual emitting a whale call at this frequency. However, the whale itself has never been sighted; it has only been heard via hydrophones. It has been described as the "world's loneliest whale", though potential recordings of a second 52-hertz whale, heard elsewhere at the same time, have been sporadically found since 2010.
52 Hz is equivalent to the musical note G#1, which is the 12th lowest key on a conventional 88-key piano keyboard; or, the 4th finger position on the lowest string (E1) of a double bass.
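This identification can be checked with the standard twelve-tone equal-temperament mapping with A4 = 440 Hz (an illustrative calculation, not from the article):

```python
from math import log2

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz: float) -> str:
    midi = round(69 + 12 * log2(freq_hz / 440.0))  # 69 = MIDI number of A4
    return NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

print(nearest_note(52.0))  # G#1
```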
Characteristics
The sonic signature is that of a whale, albeit at a unique frequency. The call patterns resemble neither blue nor fin whales, being much higher in frequency, shorter, and more frequent. Blue whales usually vocalize at 10–39 Hz, fin whales at 20 Hz. The calls of this whale are highly variable in their pattern of repetition, duration, and sequence, although they are easily identifiable due to their frequency and characteristic clustering. The calls have deepened slightly to around 50 hertz since 1992, suggesting the whale has grown or matured.
The migration track of the 52-hertz whale is unrelated to the presence or movement of other whale species. Its movements have been somewhat similar to that of blue whales, but its timing has been more like that of fin whales. It is detected in the Pacific Ocean every year beginning in August–December, and moves out of range of the hydrophones in January–February. It travels as far north as the Aleutian and Kodiak Islands, and as far south as the California |
https://en.wikipedia.org/wiki/Bandwidth%20throttling | Bandwidth throttling consists of the intentional limitation of the communication speed (in bytes or kilobytes per second) of incoming (received) or outgoing (sent) data in a network node or network device.
The data speed and rendering may be limited depending on various parameters and conditions.
Overview
Limiting the speed of data sent by a data originator (a client computer or a server computer) is much more efficient than limiting it in an intermediate network device between client and server: in the first case usually no network packets are lost, while in the second case network packets can be lost or discarded whenever the incoming data speed exceeds the bandwidth limit or the capacity of the device and data packets cannot be temporarily stored in a buffer queue (because it is full or does not exist); the purpose of such a buffer queue is to absorb brief peaks of incoming data.
In the second case, discarded data packets can be resent by the transmitter and received again.
When a low-level network device discards incoming data packets, it can usually also notify the data transmitter of that fact, so that the transmission speed is slowed down (see also network congestion).
NOTE: Bandwidth throttling should not be confused with rate limiting, which operates on client requests at the application-server level and/or at the network-management level (i.e. by inspecting protocol data packets). Rate limiting can also help keep peaks of data speed under control.
These bandwidth limitations can be implemented:
at the application level (a client program or a server program, e.g. an FTP server, a web server, etc.), which can be run and configured to throttle data sent through the network, or even to throttle data received from the network (by reading data at no more than a throttled amount per second);
at the network level (typically done by an ISP).
Throttling at the application level (by a client/server program) is usually perfectly legitimate, because it is a choice of the client manager or the server manager (the server administrator) to limit |
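As an illustration of the first, application-level approach, a sender can pace its own writes so that nothing has to be discarded downstream; the function below is a minimal sketch with illustrative names and chunk size, not any particular program's API:

```python
import time

def throttled_send(data: bytes, send, rate_bps: float, chunk: int = 4096):
    """Write 'data' via send() at no more than rate_bps bytes per second."""
    start = time.monotonic()
    sent = 0
    for i in range(0, len(data), chunk):
        piece = data[i:i + chunk]
        send(piece)
        sent += len(piece)
        # Sleep until the running average rate falls back under the cap.
        target = start + sent / rate_bps   # earliest time next send is due
        delay = target - time.monotonic()
        if delay > 0:
            time.sleep(delay)

# Example: cap writes into an in-memory sink at ~10 kB/s (takes ~3 s).
buf = bytearray()
throttled_send(b"x" * 30_000, buf.extend, rate_bps=10_000)
```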
https://en.wikipedia.org/wiki/Sclerochronology | Sclerochronology is the study of periodic physical and chemical features in the hard tissues of animals that grow by accretion, including invertebrates and coralline red algae, and the temporal context in which they formed. It is particularly useful in the study of marine paleoclimatology. The term was coined in 1974 following pioneering work on nuclear test atolls by Knutson and Buddemeier and comes from the three Greek words skleros (hard), chronos (time) and logos (science), which together refer to the use of the hard parts of living organisms to order events in time. It is, therefore, a form of stratigraphy. Sclerochronology focuses primarily upon growth patterns reflecting annual, monthly, fortnightly, tidal, daily, and sub-daily (ultradian) increments of time.
The regular time increments are controlled by biological clocks, which, in turn, are caused by environmental and astronomical pacemakers.
Familiar examples include:
annual bandings in reef coral skeletons
annual, fortnightly, daily and ultradian growth increments in mollusk shells
annual bandings in the ear bones of fish, called otoliths.
Sclerochronology is analogous to dendrochronology, the study of annual rings in trees, and equally seeks to deduce organismal life history traits as well as to reconstruct records of environmental and climatic change through space and time.
Use in paleoclimatic study
The science of sclerochronology as applied to hard parts of various organism groups is now routinely used for paleoceanographic and paleoclimate reconstructions. The study includes isotopic and elemental proxies, sometimes termed sclerochemistry.
Improvements in imaging techniques have now realised the potential to decipher coral banding at daily resolution, although biological 'vital' effects may blur the climate signal at such a high resolution.
See also
Paleoceanography |
https://en.wikipedia.org/wiki/Mushroom%20Networks | Mushroom Networks, Inc. is a telecommunications company in San Diego, California.
Mushroom Networks was founded in 2004, by electrical and computer engineers Rene L. Cruz, Cahit Akin, and Rajesh Mishra. It was spun off from the University of California, San Diego (UCSD) and got support from the von Liebig Center. Mushroom Networks also received initial funding and support from ITU Ventures, a Los Angeles investment firm.
The company focuses solely on networking products utilizing its "Broadband Bonding" technology. In various configurations, these products provide the end user with the appearance of larger bandwidth by aggregating multiple network services through load balancing and/or channel bonding.
History
In February 2008, Mushroom Networks announced the Truffle (BBNA6401), a new broadband bonding network appliance that can bond up to six high-speed internet connections to improve network speed.
In July 2008, the company introduced another product, the Porcini (BBNA4422), a smaller broadband bonding appliance that can bond up to four high-speed internet connections plus a wireless USB port card to improve internet speeds.
In September 2008, the company introduced the PortaBella (BBNA2242), combining multiple cellular data cards into a single Internet Protocol connection. This product provides bandwidth where wired connectivity is not an option.
In June 2009, Mushroom Networks announced the second-generation PortaBella (BBNA141), billed as the fastest wireless cellular internet connection. This small portable device has room for four wireless modems on the front side, with an Ethernet port in the back.
In July 2009, the company announced the second generation of the Truffle (BBNA6401), calling it the Truffle (BBNA5201G). The second generation has higher throughput, enhanced quality of service (QoS), and automatic failover, as well as other enterprise-level features.
In April, 2010, Mushroom Networks announces the Tele |
https://en.wikipedia.org/wiki/8250%20UART | The 8250 UART (universal asynchronous receiver-transmitter) is an integrated circuit designed for implementing the interface for serial communications. The part was originally manufactured by the National Semiconductor Corporation. It was commonly used in PCs and related equipment such as printers or modems. The 8250 included an on-chip programmable bit rate generator, allowing use for both common and special-purpose bit rates which could be accurately derived from an arbitrary crystal oscillator reference frequency.
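As a worked illustration of that bit-rate arithmetic: the 8250 family derives its bit clock by dividing the crystal reference by a programmable 16-bit divisor, with 16 clock ticks per bit. The sketch below assumes the conventional PC crystal of 1.8432 MHz; the rates shown are common examples, not values from the text:

```python
CRYSTAL_HZ = 1_843_200  # the PC serial port's customary crystal frequency

def divisor_for(baud: int, crystal_hz: int = CRYSTAL_HZ) -> int:
    """16-bit divisor latch value: crystal / (16 * baud), rounded."""
    return round(crystal_hz / (16 * baud))

for baud in (300, 2400, 9600, 38400, 115200):
    d = divisor_for(baud)
    actual = CRYSTAL_HZ / (16 * d)
    print(f"{baud:>6} baud -> divisor {d:>4} (actual {actual:.1f})")
```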
The chip designations carry suffix letters for later versions of the same chip series. For example, the original 8250 was soon followed by the 8250A and 8250B versions that corrected some bugs. In particular, the original 8250 could repeat transmission of a character if the CTS line was asserted asynchronously during the first transmission attempt.
Due to the high demand, other manufacturers soon began offering compatible chips. Western Digital offered the WD8250 chip under the Async Communications Interface Adapter (ACIA) and Async Communications Element (ACE) names.
The 16450(A) UART, commonly used in IBM PC/AT-series computers, improved on the 8250 by permitting higher serial line speeds.
With the introduction of multitasking operating systems on PC hardware, such as OS/2, Windows NT or various flavours of UNIX, the short time available to serve character-by-character interrupt requests became a problem, therefore the IBM PS/2 serial ports introduced the 16550(A) UARTs that had a built-in 16 byte FIFO or buffer memory to collect incoming characters.
Later models added larger memories, supported higher speeds, combined multiple ports on one chip and finally became part of the now-common Super I/O circuits combining most input/output logic on a PC motherboard.
Blocks
The line interface consists of:
SOUT, SIN, /RTS, /DTR, DSR, /DCD, /CTS, /RI
Clock interface:
XIN, XOUT, /BAUDOUT, RCLK
Computer interface:
D0..D7, /RD, /WR, INTRPT, MR, A0, A1, A2 |
https://en.wikipedia.org/wiki/Melinoessa%20fulvescens | Melinoessa fulvescens is a species of moth in the family Geometridae, native to Sierra Leone and Gambia. It was described by Dru Drury in 1782 as Phalaena fulvata, a name which was pre-occupied (see Cidaria fulvata). The current, slightly different, specific name was given by L. B. Prout in 1916.
Description
Upper Side. Antennae setaceous and yellow. Thorax and abdomen yellow. Wings deep straw-coloured, the anterior having a small black spot placed near the middle of the anterior edges. A small narrow line of a silverish colour runs along the external edges of these wings, beginning near the tips and, continuing along the edges of the posterior ones, ends at the abdominal corners.
Under Side. Breast, legs, and abdomen whiteish. Wings coloured as on the upper side, but dappled with minute reddish streaks. Margins of the wings entire. Wing-span somewhat more than 1½ inches (40 mm). |
https://en.wikipedia.org/wiki/Quigley%20scale | The Quigley scale is a descriptive, visual system of phenotypic grading that defines seven classes between "fully masculinized" and "fully feminized" genitalia. It was proposed by pediatric endocrinologist Charmian A. Quigley et al. in 1995. It is similar in function to the Prader scale and is used to describe genitalia in cases of androgen insensitivity syndrome, including complete androgen insensitivity syndrome, partial androgen insensitivity syndrome and mild androgen insensitivity syndrome.
Schematic representation
Staging
The first six grades of the scale, grades 1 through 6, are differentiated by the degree of genital masculinization. Quigley describes the scale as depicting "severity" or "defective masculinization". Grade 1 is indicated when the external genitalia are fully masculinized, and corresponds to mild androgen insensitivity syndrome. Grades 6 and 7 are indicated when the external genitalia are fully feminized, corresponding to complete androgen insensitivity syndrome.
Grades 2 through 5 quantify four degrees of decreasingly masculinized (increasingly feminized) genitalia that lie in the interim and correspond to partial androgen insensitivity syndrome.
Grade 7 is indistinguishable from grade 6 until puberty, and is thereafter differentiated by the presence of secondary terminal hair. Grade 6 is indicated when secondary terminal hair is present, whereas grade 7 is indicated when it is absent.
Controversy
While the scale has been defined as a grading system for feminized or undermasculinized genitalia, the concept that atypical genitals are necessarily abnormal is contested. An opinion paper by the Swiss National Advisory Centre for Biomedical Ethics advises that "not infrequently" variations from sex norms may not be pathological or require medical treatment. Similarly, an Australian Senate Committee report on involuntary sterilization determined that research "regarding |
https://en.wikipedia.org/wiki/Fixed-pattern%20noise | Fixed-pattern noise (FPN) is the term given to a particular noise pattern on digital imaging sensors, often noticeable during longer-exposure shots, in which particular pixels are prone to giving intensities above the average intensity.
Overview
FPN is a general term that identifies a temporally constant lateral non-uniformity (forming a constant pattern) in an imaging system with multiple detector or picture elements (pixels). It is characterised by the same pattern of variation in pixel brightness occurring in images taken under the same illumination conditions. The problem arises from small differences in the individual responsivity of the sensor array (including any local post-amplification stages) that might be caused by variations in pixel size, material, or interference with the local circuitry. It might be affected by changes in the environment, such as different temperatures, exposure times, etc.
The term "fixed pattern noise" usually refers to two parameters. One is the dark signal non-uniformity (DSNU), which is the offset from the average across the imaging array at a particular setting (temperature, integration time) but no external illumination and the photo response non-uniformity (PRNU), which describes the gain or ratio between optical power on a pixel versus the electrical signal output. The latter is often simplified as a single value measured at e.g. 50% saturation level, implying a linear approximation of the not perfectly linear photo response non-linearity (PRNL). Often PRNU as defined above is subdivided in pure "(offset) FPN" which is the part not dependent on temperature and integration time, and the integration time and temperature dependent "DSNU".
Sometimes pixel noise, the average deviation from the array average under different illumination and temperature conditions, is specified. Pixel noise thus gives a number (commonly expressed as an rms value) that characterizes FPN across all permitted imaging conditions |
https://en.wikipedia.org/wiki/Donald%20S.%20Malecki | Donald S. Malecki (May 15, 1933 – December 12, 2014) was an author and speaker noted for expertise in Property and Casualty insurance.
Biography
Malecki was born in Syracuse, New York on May 15, 1933. He served for four years in the United States Air Force, participating in the Korean War, and was discharged in 1955. That year he married Norma Plotti. He graduated from Syracuse University with a Bachelor of Science degree. Malecki started in insurance with the Fireman's Insurance Company in 1960 as a fire underwriter trainee, and later worked for Continental Insurance Company as a supervising casualty underwriter. He began writing for National Underwriter in 1966 and continued with that magazine until 1984, eventually becoming editor. He received his Chartered Property Casualty Underwriter designation in 1970. In 1975 he was approached by the American Institute for Chartered Property Casualty Underwriters to write his first book. This book, Commercial Liability Risk Management and Insurance, became the first of three books written by Malecki that are now part of the curriculum. He formed his own consulting company in 1977, and that year also began writing for RoughNotes. He had also written for the International Risk Management Institute and served on the Commercial Lines Industry Liaison Panel of Insurance Services Office, while publishing the newsletter Malecki on Insurance. In the 2000s he was a regular speaker at the annual CPCU convention. He died on December 12, 2014. At the time of his death he was a principal at Malecki Deimling Nielander and Associates, LLC.
Books
Commercial Liability Risk Management and Insurance (1978)
Insuring the Lease Exposure (1981)
The CGL Book (1986)
The Additional Insured Book (1991)
The MCS-90 Book: Truckers Versus Insurers and the Government Makes Three (2004) |
https://en.wikipedia.org/wiki/OSM-9 | OSM-9 also known as OSMotic avoidance abnormal family member 9 is a protein which in the nematode worm C. elegans is encoded by the osm-9 gene.
Function
The OSM-9 protein is required for responses to some olfactory and osmotic stimuli, as well as for the mechanosensory response to nose touch. The osm-9 gene encodes a protein with ankyrin repeats that is closely related in sequence to the mammalian TRPV ion channels. OSM-9 controls the biosynthesis of serotonin via regulation of the expression of the enzyme tryptophan hydroxylase. |
https://en.wikipedia.org/wiki/Metabolic%20waste | Metabolic wastes or excrements are substances left over from metabolic processes (such as cellular respiration) which cannot be used by the organism (they are surplus or toxic) and must therefore be excreted. This includes nitrogen compounds, water, CO2, phosphates, sulphates, etc. Animals treat these compounds as excreta. Plants have metabolic pathways which transform some of them (primarily the oxygen-containing compounds) into useful substances.
All metabolic wastes are excreted as solutes in water through the excretory organs (nephridia, Malpighian tubules, kidneys), with the exception of CO2, which is excreted together with water vapor through the lungs. The elimination of these compounds enables the chemical homeostasis of the organism.
Nitrogen wastes
The nitrogen compounds through which excess nitrogen is eliminated from organisms are called nitrogenous wastes or nitrogen wastes. They are ammonia, urea, uric acid, and creatinine. All of these substances are produced from protein metabolism. In many animals, the urine is the main route of excretion for such wastes; in some, it is the feces.
Ammonotelism
Ammonotelism is the excretion of ammonia and ammonium ions. Ammonia (NH3) forms upon oxidation of amino groups (-NH2), which are removed from proteins when they are converted into carbohydrates. It is a substance very toxic to tissues and extremely soluble in water. Only one nitrogen atom is removed with it. A lot of water is needed for the excretion of ammonia: about 0.5 L of water per 1 g of nitrogen, to keep ammonia levels in the excretory fluid below the level in body fluids and prevent toxicity. Thus, marine organisms excrete ammonia directly into the water and are called ammonotelic. Ammonotelic animals include crustaceans, platyhelminths, cnidarians, poriferans, echinoderms, and other aquatic invertebrates.
Ureotelism
The excretion of urea is called ureotelism. Land animals, mainly amphibians and mammals, convert |
https://en.wikipedia.org/wiki/%C4%8Cechie | Čechie is the personification of the Czech nation, used in the 19th century as a reaction to the personifications of competing nationalisms, such as Germania or Austria.
See also
Flag and Coat of arms of the Czech Republic
Czech Vašek, personification of the Czechs
Deutscher Michel, personification of the German people |
https://en.wikipedia.org/wiki/Antonio%20Ruberti | Antonio Ruberti (24 January 1927 – 4 September 2000) was an Italian politician and engineer. He was a member of the Italian Government and a European Commissioner as well as a professor of engineering at La Sapienza University.
Biography
Antonio Ruberti was born in Aversa in the province of Caserta, Campania.
He trained as an engineer and taught control engineering and systems theory as the first head of the Department of Science and Engineering of La Sapienza university in Rome, a university of which he was later Rector.
In 1987, he joined the Italian government as Minister for the Coordination of Scientific and Technological Research. He held this position for five years. In 1992 Ruberti was elected to the Chamber of Deputies among the ranks of the Italian Socialist Party, where he sat until 1993, when he was appointed by the Italian government to the European Commission chaired by Delors, with the portfolio covering science, research, technological development and education. Ruberti served as a commissioner only until 1995, but during this short mandate he launched a series of new initiatives, including the Socrates and Leonardo da Vinci programmes, the European Week of Scientific Culture, and the European Science and Technology Forum. After leaving the Commission, Ruberti was once more elected to the Chamber of Deputies, where he chaired the Committee for European Union Policies.
He died in Rome in 2000. |
https://en.wikipedia.org/wiki/Octatomic%20element | An octatomic element is a chemical element that, when stable at standard conditions for temperature and pressure, exists as molecules of eight atoms grouped together. The canonical example is sulfur, S8, but red selenium is also an octatomic element stable at room temperature. Octaoxygen is also known, but it is extremely unstable.
See also
Diatomic element |
https://en.wikipedia.org/wiki/Magnetic%20susceptibility | In electromagnetism, the magnetic susceptibility (denoted $\chi$, the Greek letter chi) is a measure of how much a material will become magnetized in an applied magnetic field. It is the ratio of magnetization $M$ (magnetic moment per unit volume) to the applied magnetizing field intensity $H$: $\chi = M/H$. This allows a simple classification, into two categories, of most materials' responses to an applied magnetic field: an alignment with the magnetic field, $\chi > 0$, called paramagnetism, or an alignment against the field, $\chi < 0$, called diamagnetism.
Magnetic susceptibility indicates whether a material is attracted into or repelled out of a magnetic field. Paramagnetic materials align with the applied field and are attracted to regions of greater magnetic field. Diamagnetic materials are anti-aligned and are pushed away, toward regions of lower magnetic field. On top of the applied field, the magnetization of the material adds its own magnetic field, causing the field lines to concentrate in paramagnetism or be excluded in diamagnetism. Quantitative measures of magnetic susceptibility also provide insight into the structure of materials, such as their bonding and energy levels. Furthermore, it is widely used in geology for paleomagnetic studies and structural geology.
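A short numeric illustration of the defining ratio and sign convention above; the susceptibility values are representative room-temperature figures (assumed here for illustration) and the applied field strength is arbitrary:

```python
MU0 = 4e-7 * 3.141592653589793           # vacuum permeability (T*m/A)

def magnetization(chi: float, H: float) -> float:
    return chi * H                        # M = chi * H, in A/m

def flux_density(chi: float, H: float) -> float:
    return MU0 * (1 + chi) * H            # B = mu0 * (H + M)

H = 1e5                                   # applied field, A/m
for name, chi in [("aluminium (paramagnetic, chi > 0)", 2.2e-5),
                  ("water (diamagnetic, chi < 0)", -9.0e-6)]:
    print(name, magnetization(chi, H), flux_density(chi, H))
```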
The magnetizability of materials comes from the atomic-level magnetic properties of the particles of which they are made. Usually, this is dominated by the magnetic moments of electrons. Electrons are present in all materials, but without any external magnetic field, the magnetic moments of the electrons are usually either paired up or random so that the overall magnetism is zero (the exception to this usual case is ferromagnetism). The fundamental reasons why the magnetic moments of the electrons line up or do not are very complex and cannot be explained by classical physics. However, a useful simplification is to measure the magnetic susceptibility of a material and apply the macroscopic form of Maxwell's equations. This allows clas |
https://en.wikipedia.org/wiki/List%20of%20NP-complete%20problems | This is a list of some of the more commonly known problems that are NP-complete when expressed as decision problems. As there are hundreds of such problems known, this list is in no way comprehensive. Many problems of this type can be found in Garey & Johnson (1979).
Graphs and hypergraphs
Graphs occur frequently in everyday applications. Examples include biological or social networks, which contain hundreds, thousands and even billions of nodes in some cases (e.g. Facebook or LinkedIn).
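Because the problems in this list are stated as decision problems, membership in NP can be illustrated with a polynomial-time certificate check. The sketch below (illustrative, not part of the list) verifies a claimed clique with O(k²) edge lookups, even though finding a large clique is NP-complete:

```python
def is_clique(adjacency: dict, vertices: list) -> bool:
    """adjacency maps each vertex to the set of its neighbours."""
    return all(v in adjacency[u]
               for i, u in enumerate(vertices)
               for v in vertices[i + 1:])

graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: set()}
print(is_clique(graph, [1, 2, 3]))  # True
print(is_clique(graph, [1, 2, 4]))  # False
```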
1-planarity
3-dimensional matching
Bandwidth problem
Bipartite dimension
Capacitated minimum spanning tree
Route inspection problem (also called Chinese postman problem) for mixed graphs (having both directed and undirected edges). The problem is solvable in polynomial time if the graph has all undirected or all directed edges. Variants include the rural postman problem.
Clique cover problem
Clique problem
Complete coloring, a.k.a. achromatic number
Cycle rank
Degree-constrained spanning tree
Domatic number
Dominating set, a.k.a. domination number
NP-complete special cases include the edge dominating set problem, i.e., the dominating set problem in line graphs. NP-complete variants include the connected dominating set problem and the maximum leaf spanning tree problem.
Feedback vertex set
Feedback arc set
Graph coloring
Graph homomorphism problem
Graph partition into subgraphs of specific types (triangles, isomorphic subgraphs, Hamiltonian subgraphs, forests, perfect matchings) is known to be NP-complete. Partition into cliques is the same problem as coloring the complement of the given graph. A related problem is to find a partition that is optimal in terms of the number of edges between parts.
Grundy number of a directed graph.
Hamiltonian completion
Hamiltonian path problem, directed and undirected.
Graph intersection number
Longest path problem
Maximum bipartite subgraph or (especially with weighted edges) maximum cut.
Maximum common subgraph isomorphism problem
Maximum independent set
Maximum Induced pat |
https://en.wikipedia.org/wiki/Bionomics | Bionomics (Greek: bio = life; nomos = law) has two different meanings:
the first is the comprehensive study of an organism and its relation to its environment. Translated from the French word bionomie, its first use in English dates to the period 1885–1890. In this sense the word is synonymous with the term now referred to as "ecology".
the other is an economic discipline which studies economy as a self-organized evolving ecosystem.
An example of studies of the first type is in Richard B. Selander's Bionomics, Systematics and Phylogeny of Lytta, a Genus of Blister Beetles (Coleoptera, Meloidae), Illinois Biological Monographs: number 28, 1960.
In relation to territory, Ingegnoli speaks of Landscape Bionomics, defining Landscape as the "level of biological organization integrating complex systems of plants, animals and humans in a living Entity recognizable in a territory as characterized by suitable emerging properties in a determined spatial configuration". (Ingegnoli, 2011, 2015; Ingegnoli, Bocchi, Giglio, 2017)
Bionomics as an economic discipline is used by Igor Flor of "Bionomica, the International Bionomics Institute" |
https://en.wikipedia.org/wiki/Choquet%20theory | In mathematics, Choquet theory, named after Gustave Choquet, is an area of functional analysis and convex analysis concerned with measures which have support on the extreme points of a convex set C. Roughly speaking, every vector of C should appear as a weighted average of extreme points, a concept made more precise by generalizing the notion of weighted average from a convex combination to an integral taken over the set E of extreme points. Here C is a subset of a real vector space V, and the main thrust of the theory is to treat the cases where V is an infinite-dimensional (locally convex Hausdorff) topological vector space along lines similar to the finite-dimensional case. The main concerns of Gustave Choquet were in potential theory. Choquet theory has become a general paradigm, particularly for treating convex cones as determined by their extreme rays, and so for many different notions of positivity in mathematics.
The two ends of a line segment determine the points in between: in vector terms the segment from v to w consists of the λv + (1 − λ)w with 0 ≤ λ ≤ 1. The classical result of Hermann Minkowski says that in Euclidean space, a bounded, closed convex set C is the convex hull of its extreme point set E, so that any c in C is a (finite) convex combination of points e of E. Here E may be a finite or an infinite set. In vector terms, by assigning non-negative weights w(e) to the e in E, almost all 0, we can represent any c in C as
$$c = \sum_{e \in E} w(e)\, e,$$
with
$$\sum_{e \in E} w(e) = 1, \qquad w(e) \ge 0.$$
In any case the w(e) give a probability measure supported on a finite subset of E. For any affine function f on C, its value at the point c is
$$f(c) = \sum_{e \in E} w(e)\, f(e).$$
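A concrete finite-dimensional instance of this representation (an illustration, not part of the original text): a point of a planar triangle as a weighted average of its extreme points, with an affine function commuting with the averaging:

```python
v0, v1, v2 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)   # extreme points E
w = {v0: 0.2, v1: 0.5, v2: 0.3}                    # weights summing to 1

# The represented point c = sum over E of w(e) * e.
c = tuple(sum(we * e[i] for e, we in w.items()) for i in range(2))
print(c)  # (0.5, 0.3)

def f(p):
    return 2 * p[0] + 3 * p[1] + 1                 # an affine function on C

print(f(c))                                        # 2.9
print(sum(we * f(e) for e, we in w.items()))       # 2.9, equal to f(c)
```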
In the infinite dimensional setting, one would like to make a similar statement.
Choquet's theorem
Choquet's theorem states that for a compact convex subset C of a normed space V, given c in C there exists a probability measure w supported on the set E of extreme points of C such that, for any affine function f on C,
$$f(c) = \int_E f(e)\, dw(e).$$
In practice V will be a Banach space. The original Krein–Milm |
https://en.wikipedia.org/wiki/Orchestra%20Control%20Engine | Orchestra Control Engine is a suite of software components (based on Linux/RTAI) used for the planning, development and deployment of real-time control applications for industrial machines and robots.
Orchestra Control Engine has been developed by Sintesi SpA in partnership with the Italian National Research Council and in collaboration with international industrial companies in the field of robotics and production systems.
Sintesi SpA is a company that develops mechatronic components and solutions. It has specialized in measurement, control and design technologies for robotics and production systems.
Main features
Orchestra Control Engine is flexible because it can be customized, visually. The solutions created are open (based on an open-source framework) and extensible. Modular components of the software allow a user to develop, debug and test control applications; for example, previously developed algorithms can be divided into functional units and reused indefinitely, with all the units working together. The software can be distributed among various remote hardware devices, which may be hundreds of meters apart. It is also scalable, in that hardware can be selected to provide the best cost and performance for a particular operation. The system's parameters can be quickly reconfigured, both online and at run time.
Suite components
Linux/RTAI provides Orchestra Control Engine's hard real-time behaviour. Its open-source characteristics allow changes to fit users' requirements. Non-hard-real-time components of Orchestra Control Engine can be used on non-Linux platforms such as Microsoft Windows or Macintosh.
Orchestra Core
A hard real-time multithreaded engine operates in multicore/multiprocessor architectures. Within the scheme, modules can be filled in with more or less complex algorithms which control the process. The run-time engine loads the modules, and the user can adapt the modules to the topology (an illustrative sketch follows below). For complex topology, multiple
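Purely as an illustration of that module-loading scheme (the class and method names below are invented, not Sintesi's API), a run-time engine that loads pluggable control modules and steps them in a periodic loop might look like this:

```python
import time

class PModule:
    """A trivial proportional-control module; real modules could hold
    arbitrarily complex algorithms."""
    def __init__(self, kp):
        self.kp = kp
    def step(self, setpoint, measured):
        return self.kp * (setpoint - measured)

class Engine:
    """Loads modules and runs them in a fixed-period loop; a real hard
    real-time engine would rely on an RT scheduler, not time.sleep."""
    def __init__(self, period_s):
        self.period_s, self.modules = period_s, []
    def load(self, module):
        self.modules.append(module)
    def run(self, cycles, setpoint, measured):
        for _ in range(cycles):
            for m in self.modules:
                measured += 0.1 * m.step(setpoint, measured)  # toy plant response
            time.sleep(self.period_s)
        return measured

engine = Engine(period_s=0.001)
engine.load(PModule(kp=2.0))
print(engine.run(cycles=100, setpoint=1.0, measured=0.0))     # approaches 1.0
```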
https://en.wikipedia.org/wiki/Social%20learning%20theory | Social learning theory is a theory of learning process and social behavior which proposes that new behaviors can be acquired by observing and imitating others. It states that learning is a cognitive process that takes place in a social context and can occur purely through observation or direct instruction, even in the absence of motor reproduction or direct reinforcement. In addition to the observation of behavior, learning also occurs through the observation of rewards and punishments, a process known as vicarious reinforcement. When a particular behavior is rewarded regularly, it will most likely persist; conversely, if a particular behavior is constantly punished, it will most likely desist. The theory expands on traditional behavioral theories, in which behavior is governed solely by reinforcements, by placing emphasis on the important roles of various internal processes in the learning individual.
History and theoretical background
In the 1940s, B. F. Skinner delivered a series of lectures on verbal behavior, putting forth a more empirical approach to the subject than existed in psychology at the time. In them, he proposed the use of stimulus-response theories to describe language use and development, and that all verbal behavior was underpinned by operant conditioning. He did however mention that some forms of speech derived from words and sounds that had previously been heard (echoic response), and that reinforcement from parents allowed these 'echoic responses' to be pared down to that of understandable speech. While he denied that there was any "instinct or faculty of imitation", Skinner's behaviorist theories formed a basis for redevelopment into Social Learning Theory.
At around the same time, Clark Leonard Hull, an American psychologist, was a strong proponent of behaviorist stimulus-response theories, and headed a group at Yale University's Institute of Human Relations. Under him, Neal Miller and John Dollard aimed to come up with a reinterpretation of psychoan |
https://en.wikipedia.org/wiki/Envy%20minimization | In computer science and operations research, the envy minimization problem is the problem of allocating discrete items among agents with different valuations over the items, such that the amount of envy is as small as possible.
Ideally, from a fairness perspective, one would like to find an envy-free item allocation - an allocation in which no agent envies another agent. That is: no agent prefers the bundle allocated to another agent. However, with indivisible items this might be impossible. One approach for coping with this impossibility is to turn the problem to an optimization problem, in which the loss function is a function describing the amount of envy. In general, this optimization problem is NP-hard, since even deciding whether an envy-free allocation exists is equivalent to the partition problem. However, there are optimization algorithms that can yield good results in practice.
Defining the amount of envy
There are several ways to define the objective function (the amount of envy) for minimization. Some of them are:
The number of envious agents;
The number of envy relations (- edges in the envy graph);
The maximum envy-ratio, where the envy ratio of i in j in allocation X is defined as $\mathrm{EnvyRatio}(X,i,j) := \max\left(1, \frac{u_i(X_j)}{u_i(X_i)}\right)$; so the ratio is 1 if i does not envy j, and it is larger when i envies j.
Similarly, one can consider the sum of envy-ratios, or their product.
The maximum, the sum or the product of the envy-difference.
Minimizing the envy-ratio
With general valuations, any deterministic algorithm that minimizes the maximum envy-ratio requires a number of queries which is exponential in the number of goods in the worst case.
With additive and identical valuations:
The following greedy algorithm finds an allocation whose maximum envy-ratio is at most 1.4 times the optimum (a sketch in code follows the two steps below):
Order the items by descending value;
While there are more items, give the next item to an agent with the smallest total value.
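A minimal Python sketch of this rule, assuming additive identical valuations given as a plain list of item values:

```python
import heapq

def greedy_allocation(values, n_agents):
    """Give each item, in descending order of value, to an agent whose
    bundle currently has the smallest total value."""
    heap = [(0, i, []) for i in range(n_agents)]     # (bundle value, agent, bundle)
    heapq.heapify(heap)
    for v in sorted(values, reverse=True):
        total, i, bundle = heapq.heappop(heap)       # agent with smallest total
        bundle.append(v)
        heapq.heappush(heap, (total + v, i, bundle))
    return {i: bundle for _, i, bundle in heap}

print(greedy_allocation([8, 7, 6, 5, 4], n_agents=2))  # {0: [8, 5, 4], 1: [7, 6]}
```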
There is a PTAS for max-envy-ratio minimization. Furthermore, when the numbe |
https://en.wikipedia.org/wiki/L%C3%A9vy%27s%20constant | In mathematics Lévy's constant (sometimes known as the Khinchin–Lévy constant) occurs in an expression for the asymptotic behaviour of the denominators of the convergents of continued fractions.
In 1935, the Soviet mathematician Aleksandr Khinchin showed that the denominators qn of the convergents of the continued fraction expansions of almost all real numbers satisfy
Soon afterward, in 1936, the French mathematician Paul Lévy found the explicit expression for the constant, namely

$$e^{\pi^2 / (12 \ln 2)} = 3.275822918721811\ldots$$
The term "Lévy's constant" is sometimes used to refer to (the logarithm of the above expression), which is approximately equal to 1.1865691104… The value derives from the asymptotic expectation of the logarithm of the ratio of successive denominators, using the Gauss-Kuzmin distribution. In particular, the ratio has the asymptotic density function
for and zero otherwise. This gives Lévy's constant as
.
The base-10 logarithm of Lévy's constant, which is approximately 0.51532041…, is half of the reciprocal of the limit in Lochs' theorem.
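A quick numerical check of the theorem (a sketch; the 400-bit precision, 100 terms and 200 samples are arbitrary choices): expand random high-precision rationals by the Euclidean algorithm, track the convergent denominators, and compare (ln q_n)/n with π²/(12 ln 2).

```python
import math, random

def levy_exponent(num, den, n):
    """(ln q_n)/n for the continued fraction of num/den in (0, 1)."""
    p, q = den, num                    # Euclid on (den, num) yields a_1, a_2, ...
    d_prev, d = 0, 1                   # convergent denominators: q_{-1}=0, q_0=1
    for _ in range(n):
        a, r = divmod(p, q)
        d_prev, d = d, a * d + d_prev  # recurrence q_k = a_k q_{k-1} + q_{k-2}
        if r == 0:
            break                      # rationals have finite expansions
        p, q = q, r
    return math.log(d) / n

random.seed(0)
samples = [levy_exponent(random.getrandbits(400) | 1, 1 << 400, 100)
           for _ in range(200)]
print(sum(samples) / len(samples), math.pi ** 2 / (12 * math.log(2)))  # both near 1.1865...
```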
See also
Khinchin's constant |
https://en.wikipedia.org/wiki/Alaska%20statistical%20areas | The U.S. state of Alaska currently has four statistical areas that have been delineated by the Office of Management and Budget (OMB). On March 6, 2020, the OMB delineated two metropolitan statistical areas and two micropolitan statistical areas in Alaska. The most populous of these statistical areas is the Anchorage, AK Metropolitan Statistical Area with a 2020 Census population of 398,328.
Statistical areas
The Office of Management and Budget (OMB) has designated more than 1,000 statistical areas for the United States and Puerto Rico. These statistical areas are important geographic delineations of population clusters used by the OMB, the United States Census Bureau, planning organizations, and federal, state, and local government entities.
The OMB defines a core-based statistical area (commonly referred to as a CBSA) as "a statistical geographic entity consisting of the county or counties (or county-equivalents) associated with at least one core of at least 10,000 population, plus adjacent counties having a high degree of social and economic integration with the core as measured through commuting ties with the counties containing the core." The OMB further divides core-based statistical areas into metropolitan statistical areas (MSAs) that have "a population of at least 50,000" and micropolitan statistical areas (μSAs) that have "a population of at least 10,000, but less than 50,000."
The OMB defines a combined statistical area (CSA) as "a geographic entity consisting of two or more adjacent core-based statistical areas with employment interchange measures of at least 15%." The primary statistical areas (PSAs) include all combined statistical areas and any core-based statistical area that is not a constituent of a combined statistical area.
Table
The table below describes the 4 United States statistical areas, 19 organized boroughs and 11 census areas in the State of Alaska with the following information:
The core based statistical area (CBSA) as designated by the OMB.
The |
https://en.wikipedia.org/wiki/Eureka%20Jack%20Mystery | Since 2012 various theories have emerged, based on the Argus account of the Battle of the Eureka Stockade dated 4 December 1854 and an affidavit sworn by Private Hugh King three days later as to a flag being seized from a prisoner detained at the stockade, that a Union Jack, known as the Eureka Jack may also have been flown by the rebels. Readers of the Argus were told that: "The flag of the diggers, 'The Southern Cross,' as well as the 'Union Jack,' which they had to hoist underneath, were captured by the foot police."
Ray Wenban depicted the Eureka Jack in a 1958 pictorial history series for students. In honour of the 160th anniversary of the battle in 2014, the Australian Flag Society released "Fall Back with the Eureka Jack," which illustrates Gregory Blake's two-flag theory in folk art.
Erroneous reporting theory
In his Eureka: The Unfinished Revolution, Peter FitzSimons has stated:
However, Hugh King, who was a private in the 40th (the 2nd Somersetshire) Regiment of Foot, swore in a signed contemporaneous affidavit that he recalled:
During the committal hearings for the Eureka rebels, there would be another Argus report dated 9 December 1854 concerning the seizure of a second flag at the stockade in the following terms:
Hugh King was called upon to give further testimony live under oath in the matter of Timothy Hayes. In doing so, he went into more detail than in his written affidavit, as the report states that the flag like a Union Jack was found:
Chartist battle flag theory
Military historian and author of Eureka Stockade: A Ferocious and Bloody Battle Gregory Blake, conceded that the rebels may have flown two battle flags as they claimed to be defending their British rights. Blake leaves open the possibility that the flag being carried by the prisoner had been souvenired from the flag pole as the routed garrison was fleeing the stockade. Once taken by Constable John King, the Eureka Flag was placed beneath his tunic in the same fashion as the suspected |
https://en.wikipedia.org/wiki/Dayna%20Communications | Dayna Communications, Inc., was a privately-held American computer company, active from 1984 to 1997 and based in Salt Lake City, Utah. It primarily manufactured networking products for Apple Computer's computing platforms, including the Macintosh, PowerBook and Newton (although some of its later networking products were platform-independent and could work on PCI-based IBM PC compatibles). In 1997, the company was acquired by Intel for nearly $14 million.
History
Dayna Communications was founded by William Sadleir in Salt Lake City in 1984, with $1.6 million in start-up capital.
In May 1985, the company delivered the MacCharlie, a hardware add-on for the Macintosh 128K that was essentially a headless IBM PC clone, complete with one or two 5.25-inch floppy drives, that clipped onto the side of the Mac. It connected to the Mac via a serial cable; users could run PC software through a terminal application provided through included floppy disks. The product received positive reviews, with The New York Times calling it "a brilliant idea" that gave Apple the potential to "grow in businesses or households already committed to IBM hardware and software". The product was however a market failure, with Sadleir overspending on advertising while ignoring the needs of customers he had surveyed, the majority of which specifically wanted a means of transferring files captured in the IBM PC's FAT filesystem to the Mac while not necessarily desiring a means of running IBM PC software on the Mac. Dayna nearly went bankrupt amid debt to creditors, but after securing $2.5 million in investment capital from Norman Lear of Act III Communications, Sadleir was able to avoid Chapter 11 bankruptcy before releasing the FT100, a retooling of the MacCharlie that leaned on the file interoperability aspect of the MacCharlie while removing any unnecessary components. It sold for less than half the street price of the MacCharlie and even reused the latter's packaging. Released in November 1986, |
https://en.wikipedia.org/wiki/Phenolic%20paper | Phenolic paper is a material often used to make printed circuit board substrates (the flat board to which the components and traces are attached). It is a very tough board made of wood fibre and phenolic polymers. It is most commonly brown in colour, and is a fibre reinforced plastic. These PCB materials are known as FR-1 and FR-2 – FR-2 is rated to 130°C, FR-1 is rated to 105°C. |
https://en.wikipedia.org/wiki/Soft%20chemistry | Soft chemistry (also known as chimie douce) is a type of chemistry that uses reactions at ambient temperature, in open reaction vessels, similar to those occurring in biological systems.
Aims
The aim of soft chemistry is to synthesize materials by drawing on the capabilities of living beings, even relatively simple ones, such as diatoms capable of producing glass from dissolved silicates. It is a branch of materials science that differs from conventional solid-state chemistry, with its intense energy inputs, in seeking to exploit the chemical inventiveness of the living world. The specialty emerged in the 1980s under the label "chimie douce", first used in print by the French chemist Jacques Livage in Le Monde on 26 October 1977. Following its success in French, the term soft chemistry has been employed as such since the early twenty-first century in scientific publications in English and other languages. Its mode of synthesis generally resembles the reactions involved in organic polymerizations: reactive species are brought together in solution and polycondense without any essential energy input. The fundamental interest of this kind of mineral polymerization, obtained at room temperature, is that it preserves the organic molecules or microorganisms one wishes to incorporate. The products obtained by means of the so-called soft chemistry sol-gel route can be sorted into several types:
mineral structures of various qualities (smoothness, uniformity, etc.)
mixed structures combining inorganic and organic molecules on mineral structures
wrappers for complex molecules and even microorganisms, maintaining or optimizing their beneficial characteristics.
Early results included the creation of glasses and ceramics with new properties. These more or less composite structures have been mobilized in a wide range of applications, from health to the needs of space exploration. Beyond its mode of synthesis, a compound bearing the soft chemistry label combines the advantages of the mineral (resistance, transparency
https://en.wikipedia.org/wiki/Outron | An outron is a nucleotide sequence at the 5' end of the primary transcript of a gene that is removed by a special form of RNA splicing during maturation of the final RNA product. Whereas intron sequences are located inside the gene, outron sequences lie outside the gene.
Characteristics
The outron is an intron-like sequence possessing similar characteristics such as the G+C content and a splice acceptor site that is the signal for trans-splicing. Such a trans-splice site is essentially defined as an acceptor (3') splice site without an upstream donor (5') splice site.
In eukaryotes such as euglenozoans, dinoflagellates, sponges, nematodes, cnidarians, ctenophores, flatworms, crustaceans, chaetognaths, rotifers, and tunicates, the length of spliced leader (SL) outrons range from 30 to 102 nucleotides (nt), with the SL exon length ranging from 16 to 51 nt, and the full SL RNA length ranging from 46 to 141 nt.
Processing
In standard cis-splicing, the donor splice site in upstream position is required together with an acceptor site located on downstream position on the same pre-RNA molecule.
By contrast, the SL trans-splicing relies on a 3' acceptor splice site on the outron, and a 5' donor splice site (GU dinucleotide) located on a separate RNA molecule, the SL RNA.
Moreover, the outron of the premature mRNA contains a branchpoint adenosine — followed by a downstream polypyrimidine tract — which interacts with the intron-like portion of the SL RNA to form a 'Y' branched byproduct, reminiscent of the lasso structure formed during intron splicing. Nuclear machinery then resolves this 'Y' branching structure by trans-splicing the SL RNA sequence to the 3′ trans-splice acceptor site (AG dinucleotide) of the pre-mRNA.
When outrons are processed, the SL exon is trans-spliced to distinct, unpaired, downstream acceptor sites adjacent to each open reading frame of the polycistronic pre-mRNA, leading to distinct mature capped transcripts.
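As an illustration of these sequence features, candidate 3' trans-splice acceptor sites can be located with a simple motif scan. The sketch below is a toy (the sequence and spacing parameters are made up, and real site-calling would also check for the absence of an upstream GT/GU donor site):

```python
import re

# Look for a polypyrimidine tract (C/T run) followed, a short distance
# downstream, by the AG acceptor dinucleotide, on the DNA sense strand.
pre_mrna = "GGGATTTTTCCTTTCTTTCAGATGGCAAA"          # made-up pre-mRNA region
for m in re.finditer(r"[CT]{6,}[ACGT]{0,10}?AG", pre_mrna):
    print(m.start(), m.group())                     # candidate acceptor site
```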
See also |
https://en.wikipedia.org/wiki/Parrot%20virtual%20machine | Parrot was a register-based process virtual machine designed to run dynamic languages efficiently. It is possible to compile Parrot assembly language and Parrot intermediate representation (PIR, an intermediate language) to Parrot bytecode and execute it. Parrot is free and open-source software.
Parrot was started by the Perl community and is developed with help from the open-source and free software communities. As a result, it is focused on license compatibility with Perl (Artistic License 2.0), platform compatibility across a broad array of systems, processor architecture compatibility across most modern processors, speed of execution, small size (around 700k depending on platform), and the flexibility to handle the varying demands made by Raku and other modern dynamic languages.
Version 1.0, with a stable application programming interface (API) for development, was released on March 17, 2009. The last version was release 8.1.0 "Andean Parakeet". Parrot was officially discontinued in August 2021, after being supplanted by MoarVM in its main use (Raku) and never becoming a mainstream VM for any of its other supported languages.
History
The name Parrot came from an April Fool's joke which announced a hypothetical language, named Parrot, that would unify Python and Perl. The name was later adopted by this project (initially a part of the Raku development effort) which aims to support Raku, Python, and other programming languages. Several languages are being ported to run on the Parrot virtual machine.
The Parrot Foundation was dissolved in 2014. The Foundation was created in 2008 to hold the copyright and trademarks of the Parrot project, to help drive development of language implementations and the core codebase, to provide a base for growing the Parrot community, and to reach out to other language communities.
Languages
The goal of the Parrot virtual machine is to host client languages and allow inter-operation between them. Several hurdles exist in accomplish |
https://en.wikipedia.org/wiki/Root%20of%20unity%20modulo%20n | In number theory, a kth root of unity modulo n for positive integers k, n ≥ 2, is a root of unity in the ring of integers modulo n; that is, a solution x to the equation (or congruence) $x^k \equiv 1 \pmod{n}$. If k is the smallest such exponent for x, then x is called a primitive kth root of unity modulo n. See modular arithmetic for notation and terminology.
The roots of unity modulo $n$ are exactly the integers that are coprime with $n$. In fact, these integers are roots of unity modulo $n$ by Euler's theorem, and the other integers cannot be roots of unity modulo $n$, because they are zero divisors modulo $n$.
A primitive root modulo $n$ is a generator of the group of units of the ring of integers modulo $n$. There exist primitive roots modulo $n$ if and only if $\lambda(n) = \varphi(n)$, where $\lambda$ and $\varphi$ are respectively the Carmichael function and Euler's totient function.
A root of unity modulo $n$ is a primitive $k$th root of unity modulo $n$ for some divisor $k$ of $\lambda(n)$, and, conversely, there are primitive $k$th roots of unity modulo $n$ if and only if $k$ is a divisor of $\lambda(n)$.
Roots of unity
Properties
If x is a kth root of unity modulo n, then x is a unit (invertible) whose inverse is $x^{k-1}$. That is, x and n are coprime.
If x is a unit, then it is a (primitive) kth root of unity modulo n, where k is the multiplicative order of x modulo n.
If x is a kth root of unity and $x - 1$ is not a zero divisor, then $\sum_{j=0}^{k-1} x^j \equiv 0 \pmod{n}$, because

$$(x - 1) \sum_{j=0}^{k-1} x^j = x^k - 1 \equiv 0 \pmod{n}.$$
Number of kth roots
For the lack of a widely accepted symbol, we denote the number of kth roots of unity modulo n by $f(n,k)$.
It satisfies a number of properties:
$f(n,1) = 1$ for all $n \ge 2$
$f(n, \lambda(n)) = \varphi(n)$, where λ denotes the Carmichael function and $\varphi$ denotes Euler's totient function
$n \mapsto f(n,k)$ is a multiplicative function
$k \mid \ell$ implies $f(n,k) \mid f(n,\ell)$, where the bar denotes divisibility
$f(n, \operatorname{lcm}(a,b)) \cdot f(n, \gcd(a,b)) = f(n,a) \cdot f(n,b)$, where lcm denotes the least common multiple
For prime $p$, $\forall i \in \mathbb{N}\ \exists j \in \mathbb{N}\colon f(n, p^i) = p^j$. The precise mapping from $i$ to $j$ is not known. If it were known, then together with the previous law it would yield a way to evaluate $f$ quickly.
Examples
Let $n = 7$ and $k = 3$. In this case, there are three cube roots of unity (1, 2, and 4). When $n = 11$, however, there is only one cube root of unity, the unit 1 itself.
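These examples are easy to verify by brute force; a short Python sketch enumerating kth roots of unity modulo n:

```python
from math import gcd

def kth_roots_of_unity(n, k):
    """All solutions x of x**k = 1 (mod n); all are units, i.e. coprime with n."""
    return [x for x in range(1, n) if gcd(x, n) == 1 and pow(x, k, n) == 1]

def primitive_kth_roots(n, k):
    """Those roots whose multiplicative order is exactly k."""
    return [x for x in kth_roots_of_unity(n, k)
            if all(pow(x, d, n) != 1 for d in range(1, k))]

print(kth_roots_of_unity(7, 3), kth_roots_of_unity(11, 3))   # [1, 2, 4] [1]
print(primitive_kth_roots(7, 3))                             # [2, 4]
```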
https://en.wikipedia.org/wiki/Transmit%20Security | Transmit Security is a private cybersecurity and identity and access management company based in Tel Aviv, Israel and Boston, Massachusetts. Founded by Mickey Boodaei and Rakesh Loonkar in 2014, Transmit Security provides companies with customer authentication, identity orchestration, and workforce identity management services. In June 2021, the company completed a Series A funding round by raising $543 million, which was reported to be the largest Series A in cybersecurity history. Transmit Security is a FIDO Alliance Board member.
History
Transmit Security was co-founded in 2014 by Mickey Boodaei and Rakesh Loonkar. Boodaei and Loonkar previously founded Trusteer in 2006, which was acquired by IBM in 2013 for $1 billion.
In November 2020, Transmit Security ranked 5th on Deloitte's "North America Technology Fast 500", a list of the fastest-growing tech companies in North America.
In February 2021, Transmit Security joined the FIDO Alliance Board.
In June 2021, Transmit Security completed its Series A funding round by raising $543 million from investors. It was reported to be the largest Series A in cybersecurity history. Primary investors included Insight Partners, and General Atlantic, with additional investment from Cyberstarts, Geodesic, SYN Ventures, Vintage and Artisanal Ventures. In September 2021, Citi Ventures and Goldman Sachs Asset Management joined as investors.
Operations
Transmit Security’s main headquarters is located in Tel Aviv, Israel. Its North American headquarters is in Boston, Massachusetts. Additional offices are located in London, Berlin, Tokyo, Hong Kong, Madrid, Sao Paulo, and Mexico City.
See also
Secret Double Octopus
List of unicorn startup companies |
https://en.wikipedia.org/wiki/Continental%20shelf | A continental shelf is a portion of a continent that is submerged under an area of relatively shallow water, known as a shelf sea. Much of this shelf area was exposed by drops in sea level during glacial periods. The shelf surrounding an island is known as an insular shelf.
The continental margin, between the continental shelf and the abyssal plain, comprises a steep continental slope, surrounded by the flatter continental rise, in which sediment from the continent above cascades down the slope and accumulates as a pile of sediment at the base of the slope. Extending as far as 500 km (310 mi) from the slope, it consists of thick sediments deposited by turbidity currents from the shelf and slope. The continental rise's gradient is intermediate between the gradients of the slope and the shelf.
Under the United Nations Convention on the Law of the Sea, the name continental shelf was given a legal definition as the stretch of the seabed adjacent to the shores of a particular country to which it belongs.
Topography
The shelf usually ends at a point of increasing slope (called the shelf break). The sea floor below the break is the continental slope. Below the slope is the continental rise, which finally merges into the deep ocean floor, the abyssal plain. The continental shelf and the slope are part of the continental margin.
The shelf area is commonly subdivided into the inner continental shelf, mid continental shelf, and outer continental shelf, each with their specific geomorphology and marine biology.
The character of the shelf changes dramatically at the shelf break, where the continental slope begins. With a few exceptions, the shelf break is located at a remarkably uniform depth of roughly 140 m (460 ft); this is likely a hallmark of past ice ages, when sea level was lower than it is now.
The continental slope is much steeper than the shelf; the average angle is 3°, but it can be as low as 1° or as high as 10°. The slope is often cut with submarine canyons. The phys |
https://en.wikipedia.org/wiki/Srizbi%20botnet | Srizbi BotNet is considered one of the world's largest botnets, and responsible for sending out more than half of all the spam being sent by all the major botnets combined. The botnets consist of computers infected by the Srizbi trojan, which sent spam on command. Srizbi suffered a massive setback in November 2008 when hosting provider McColo was taken down; global spam volumes reduced up to 93% as a result of this action.
Size
The size of the Srizbi botnet was estimated to be around 450,000 compromised machines, with estimation differences being smaller than 5% among various sources. The botnet is reported to be capable of sending around 60 billion spam messages a day, which is more than half of the total of the approximately 100 billion spam messages sent every day. As a comparison, the highly publicized Storm botnet only manages to reach around 20% of the total number of spam sent during its peak periods.
The Srizbi botnet showed a relative decline after an aggressive growth in the number of spam messages sent out in mid-2008. On July 13, 2008, the botnet was believed to be responsible for roughly 40% of all the spam on the net, a sharp decline from the almost 60% share in May.
Origins
The earliest reports on Srizbi trojan outbreaks were around June 2007, with small differences in detection dates across antivirus software vendors. However, reports indicate that the first released version had already been assembled on 31 March 2007.
The Srizbi botnet is considered by some experts to be the second largest botnet on the Internet. However, there is controversy surrounding the Kraken botnet; it may be that Srizbi is in fact the largest botnet.
Spread and botnet composition
The Srizbi botnet consists of Microsoft Windows computers which have been infected by the Srizbi trojan horse. This trojan horse is deployed onto its victim computer through the Mpack malware kit. Past editions have used the "n404 web exploit kit" malware kit to spread, but this kit's usage has |
https://en.wikipedia.org/wiki/DEC%204000%20AXP | The DEC 4000 AXP is a series of departmental server computers developed and manufactured by Digital Equipment Corporation introduced on 10 November 1992. These systems formed part of the first generation of systems based on the 64-bit Alpha AXP architecture and at the time of introduction, ran Digital's OpenVMS AXP or OSF/1 AXP operating systems.
The DEC 4000 AXP was succeeded by the end of 1994 by the AlphaServer 2000 and 2100 departmental servers.
Models
There are two models of the DEC 4000 AXP:
Model 6x0, code named Cobra: 160 MHz DECchip 21064 (EV4) processor(s) with 1 MB L2 cache each.
Model 7x0, code named Fang: 190 MHz DECchip 21064 (EV4) processor(s) with 4 MB L2 cache each. It was introduced in October 1993.
The possible values of 'x' are 1 and 2; this digit specifies the number of microprocessors in the system.
Description
The DEC 4000 AXP systems are two-way symmetric multiprocessing (SMP) capable and are housed in either a BA640 half-height cabinet or a BA641 19-inch rackmountable enclosure containing two backplanes, a system backplane and a storage backplane. Plugged into the system backplane were one or two CPU modules, one to four memory modules, an I/O module, and up to six Futurebus+ Profile B modules; in the storage backplane were one to four fixed-media mass storage compartments and one removable-media mass storage compartment.
CPU module
Two models of CPU module were used in the DEC 4000 AXP, the KN430 (also known as the B2001-BA), used in the Model 600 Series, and the B2012-AA, used in the Model 700 Series. The KN430 contains a 160 MHz DECchip 21064 microprocessor with 1 MB of B-cache (L2 cache), whereas the B2012-AA contains a 190 MHz DECchip 21064 with 4 MB of B-cache. Two C3 (Command, Control and Communication) ASICs on the CPU module provide a number of functions, such as implementing the B-cache controller and the bus interface unit (BIU), which interfaces the microprocessor to the 128-bit address and data multiplexed system bus |
https://en.wikipedia.org/wiki/Dmitri%20Olegovich%20Orlov | Dmitri Olegovich Orlov, (Дмитрий Олегович Орлов, born September 19, 1966, in Vladimir, Russia) is a Russian mathematician, specializing in algebraic geometry. He is known for the Bondal-Orlov reconstruction theorem (2001).
Education and career
In 1988 Orlov graduated from the Faculty of Mechanics and Mathematics of Moscow State University. There he received his Candidate of Sciences degree (PhD) 1991 with thesis Производные категории когерентных пучков, моноидальные преобразования и многообразия Фано (Derived categories of coherent sheaves, monoidal transformations and Fano varieties) under Vasilii Alekseevich Iskovskikh (and Alexey Igorevich Bondal).
At the Steklov Institute of Mathematics, Orlov was from April 1996 to April 2011 a researcher in the Algebra Department and is since April 2011 the head of the Algebraic Geometry Department. In 2002 Orlov received his Doctor of Sciences degree (habilitation) with thesis Производные категории когерентных пучков и эквивалентности между ними (Derived categories of coherent sheaves and equivalences between them). In 2002 he was, with A. Bondal, an Invited Speaker with talk Derived categories of coherent sheaves at the International Congress of Mathematicians in Beijing.
Orlov's research deals with homological algebra (derived categories, triangulated categories), algebraic geometry (derived algebraic geometry, homological mirror symmetry, quasicoherent sheaves), and noncommutative geometry.
He was elected on December 20, 2011, a corresponding member and on 15 November 2019 a full member of the Russian Academy of Sciences.
Selected publications |
https://en.wikipedia.org/wiki/Layered%20coding | Layered coding is a type of data compression for digital video or digital audio where the result of compressing the source video data is not just one compressed data stream, as in other types of compression, but multiple streams, called layers, allowing decompression even if some layers are missing.
Overview
With layered coding, multiple data streams or layers are created when compressing the original video stream. This is in contrast to other types of compression, where the result is typically a single data stream.
During decompression, all layers can be combined to recreate the original video stream. Additionally, the stream can be decoded even if some layers are missing (though usually a layer hierarchy has to be respected, with a base layer that must be available). If layers are missing, the resulting stream will have reduced visual quality, but will still be usable.
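A toy numeric illustration of the idea (not any particular codec): encode samples as a coarsely quantised base layer plus a residual enhancement layer, so the base layer alone still decodes to a usable, lower-quality signal.

```python
import numpy as np

signal = np.array([0.12, 0.48, 0.91, 0.33, 0.75])
step = 0.25
base = np.round(signal / step) * step      # base layer: coarse quantisation
enhancement = signal - base                # enhancement layer: the residual

print(base)                                # decodable on its own (reduced quality)
print(base + enhancement)                  # both layers: original recovered
```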
Use cases
Layered coding is helpful when the same video stream needs to be available in different qualities, for example for adaptive bitrate streaming. Without layered coding, the source video stream must be encoded multiple times to obtain compressed streams with different qualities and bitrates. Layered coding requires encoding only once, because streams of different qualities can be obtained by discarding layers.
Related technologies
Layered coding is similar to multiple description coding in that both produce multiple compressed streams that can be combined.
However, with multiple description coding the different streams are independent of each other, so any subset can be decoded, providing additional flexibility.
Scalable Video Coding is a video compression standard that makes use of layered coding.
See also
MPEG-5 Part 2 / Low Complexity Enhancement Video Coding / LC EVC - technique of similar approach
Scalable Video Coding - MPEG-4 specific technique of similar approach
Bitrate peeling
Hierarchical modulation
AV1 Scalable video coding
HEVC Scalability Extensions |
https://en.wikipedia.org/wiki/Sanctioned%20name | In mycology, a sanctioned name is a name that was adopted (but not necessarily coined) in certain works of Christiaan Hendrik Persoon or Elias Magnus Fries, which are considered major points in fungal taxonomy.
Definition and effects
Sanctioned names are those, regardless of their authorship, that were used by Persoon in his Synopsis Methodica Fungorum (1801) for rusts, smuts and gasteromycetes, and in Fries's Systema Mycologicum (three volumes, published 1821–32) and Elenchus fungorum for all other fungi.
A sanctioned name, as defined under article 15 of the International Code of Nomenclature for algae, fungi, and plants (previously, the International Code of Botanical Nomenclature) is automatically treated as if conserved against all earlier synonyms or homonyms. It can still, however, be conserved or rejected normally.
History
Because of the imprecision associated with assigning starting dates for fungi sanctioned in Fries' three Systema volumes, the Stockholm 1950 International Botanical Congress defined arbitrary or actual publication dates for the starting points to improve the stability of nomenclature. These dates were 1 May 1753 for Species Plantarum (vascular plants), 31 December 1801 for Synopsis Methodica Fungorum, 31 December 1820 for Flora der Vorwelt (fossil plants), and 1 January 1821 for the first volume of Systema. Because fungi defined in the second and third volumes lacked a starting-point book for reference, the Congress declared that these species, in addition to species defined in Fries' 1828 Elenchus Fungorum (a two-volume supplement to his Systema), had "privileged status". According to Korf, the term "sanctioned" was first used to indicate these privileged names by the Dutch mycologist Marinus Anton Donk in 1961.
In 1982, changes in the International Code for Botanical Nomenclature (the Sydney Code) restored Linnaeus' 1753 Species Plantarum as the starting point for fungal nomenclature; however, protected status was given to all names adopt |
https://en.wikipedia.org/wiki/Metal-binding%20protein | Metal-binding proteins are proteins or protein domains that chelate a metal ion.
Binding of metal ions via chelation is usually achieved via histidines or cysteines. In some cases this is a necessary part of their folding and maintenance of a tertiary structure. Alternatively, a metal-binding protein may maintain its structure without the metal (apo form) and bind it as a ligand (e.g. as part of metal homeostasis). In other cases a coordinated metal cofactor is used in the active site of an enzyme to assist catalysis.
Histidine-rich metal-binding proteins
Poly-histidine tags (of six or more consecutive His residues) are utilized for protein purification by binding to columns with nickel or cobalt, with micromolar affinity. Natural poly-histidine peptides, found in the venom of the viper Atheris squamigera have been shown to bind Zn(2+), Ni(2+) and Cu(2+) and affect the function of venom metalloproteases. Furthermore, histidine-rich low-complexity regions are found in metal-binding and especially nickel-cobalt binding proteins. These histidine-rich low complexity regions have an average length of 36 residues, of which 53% histidine, 23% aspartate, 9% glutamate. Intriguingly, structured domains with metal binding properties also have very similar frequencies of these amino acids that are involved in the coordination of the metal. Accordingly, it has been hypothesized that these metal-binding structured domains could have originated and evolved/optimized from metal-binding low-complexity protein regions of similar amino acid content. |
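A simple sliding-window scan illustrates how such histidine-rich regions might be flagged; the sketch uses the ~36-residue window length cited above, while the toy sequence and the 30% histidine threshold are assumptions.

```python
def his_rich_windows(seq, width=36, min_his_frac=0.30):
    """Return (start, window) pairs whose histidine fraction meets the cutoff."""
    return [(i, seq[i:i + width])
            for i in range(len(seq) - width + 1)
            if seq[i:i + width].count("H") / width >= min_his_frac]

toy = "MKTA" + "HHDHEDHHHDHHEHHDHHHDEHHHHDDHHEHHHDHH" + "GLVA"  # made-up protein
for start, window in his_rich_windows(toy):
    print(start, window)
```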
https://en.wikipedia.org/wiki/GRASS%20GIS | Geographic Resources Analysis Support System (commonly termed GRASS GIS) is a geographic information system (GIS) software suite used for geospatial data management and analysis, image processing, producing graphics and maps, spatial and temporal modeling, and visualizing. It can handle raster, topological vector, image processing, and graphic data.
GRASS GIS contains over 350 modules to render maps and images on monitor and paper; manipulate raster and vector data including vector networks; process multispectral image data; and create, manage, and store spatial data.
It is licensed and released as free and open-source software under the GNU General Public License (GPL). It runs on multiple operating systems, including macOS, Windows and Linux. Users can interface with the software features through a graphical user interface (GUI) or by plugging into GRASS via other software such as QGIS. They can also interface with the modules directly through a bespoke shell that the application launches or by calling individual modules directly from a standard shell. The latest stable release version (LTS) is GRASS GIS 7, which has been available since 2015.
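For instance, modules can be driven from Python via the grass.script interface. The sketch below assumes an already-initialised GRASS session and an existing raster map named "elevation" in the current mapset:

```python
import grass.script as gs

# Run the r.slope.aspect module to derive slope and aspect rasters from a DEM.
gs.run_command("r.slope.aspect", elevation="elevation",
               slope="slope", aspect="aspect", overwrite=True)

# Query metadata of the new raster with r.info (-g prints key=value pairs).
print(gs.read_command("r.info", map="slope", flags="g"))
```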
The GRASS development team is a multinational group consisting of developers at many locations. GRASS is one of the eight initial software projects of the Open Source Geospatial Foundation.
Architecture
GRASS supports raster and vector data in two and three dimensions. The vector data model is topological, meaning that areas are defined by boundaries and centroids; boundaries cannot overlap within one layer. In contrast, OpenGIS Simple Features defines vectors more freely, much as a non-georeferenced vector illustration program does.
GRASS is designed as an environment in which tools that perform specific GIS computations are executed. Unlike GUI-based application software, the GRASS user is presented with a Unix shell containing a modified environment that supports execution of GRASS commands, termed modules. The environment has |
https://en.wikipedia.org/wiki/Exit%20sign | An exit sign is a pictogram or short text in a public facility (such as a building, aircraft, or boat) denoting the location of the closest emergency exit to be used in case of fire or other emergency that requires rapid evacuation. Most relevant codes (fire, building, health, or safety) require exit signs to be permanently lit at all times.
Exit signs are intended to be absolutely unmistakable and understandable to anyone. In the past, this generally meant exit signs that show the word "EXIT" or the equivalent in the local language, but increasingly, exit signs around the world are in pictogram form, with or without supplementary text.
History
Early exit signs were generally either made of metal and lit by a nearby incandescent light bulb or were a white glass cover with "EXIT" written in red, placed directly in front of a single-bulb light fixture. An inherent flaw with these designs was that in a fire, the power to the light often failed. In addition, the fixtures could be difficult to see in a fire where smoke often reduced visibility, despite being relatively bright. The biggest problem was that the exit sign was hardly distinguishable from an ordinary safety lighting fixture commonly installed above doors in the past. The problem was partially solved by using red-tinted bulbs instead.
Better signs were soon developed that more resembled today's modern exit sign, with an incandescent bulb inside a rectangular-shaped box that backlit the word "EXIT" on one or both sides. Being larger than its predecessors, this version of the exit sign solved some of the visibility problems. The sign was still only useful as long as mains power remained on.
As battery-backup systems became smaller and more efficient, some exit signs began to use a dual-power system. Under normal conditions, the exit sign was lit by mains power and the battery was maintained in a charged state. In the event of a power outage, the battery would supply power to light the sign. Early battery-ba |
https://en.wikipedia.org/wiki/MUSCL%20scheme | In the study of partial differential equations, the MUSCL scheme is a finite volume method that can provide highly accurate numerical solutions for a given system, even in cases where the solutions exhibit shocks, discontinuities, or large gradients. MUSCL stands for Monotonic Upstream-centered Scheme for Conservation Laws, and the term was introduced in a seminal paper by Bram van Leer (van Leer, 1979). In this paper he constructed the first high-order, total variation diminishing (TVD) scheme, in which he obtained second-order spatial accuracy.
The idea is to replace the piecewise constant approximation of Godunov's scheme by reconstructed states, derived from cell-averaged states obtained from the previous time-step. For each cell, slope limited, reconstructed left and right states are obtained and used to calculate fluxes at the cell boundaries (edges). These fluxes can, in turn, be used as input to a Riemann solver, following which the solutions are averaged and used to advance the solution in time. Alternatively, the fluxes can be used in Riemann-solver-free schemes, which are basically Rusanov-like schemes.
Linear reconstruction
We will consider the fundamentals of the MUSCL scheme by considering the following simple first-order, scalar, 1D system, which is assumed to have a wave propagating in the positive direction,

$$\frac{\partial u}{\partial t} + \frac{\partial F(u)}{\partial x} = 0,$$

where $u$ represents a state variable and $F(u)$ represents a flux variable.

The basic scheme of Godunov uses piecewise constant approximations for each cell, and results in a first-order upwind discretisation of the above problem with cell centres indexed as $i$. A semi-discrete scheme can be defined as follows,

$$\frac{\mathrm{d} u_i}{\mathrm{d} t} + \frac{1}{\Delta x} \left[ F(u_i) - F(u_{i-1}) \right] = 0.$$
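A minimal sketch of this baseline scheme for the linear flux F(u) = au with a > 0 (grid size, CFL number and step count are arbitrary choices); run on a step wave, it exhibits the smearing discussed next:

```python
import numpy as np

nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
u = np.where(np.arange(nx) * dx < 0.3, 1.0, 0.0)   # step-wave initial data

for _ in range(100):                                # forward-Euler time stepping
    flux = a * u                                    # F(u) = a u
    u -= dt / dx * (flux - np.roll(flux, 1))        # first-order upwind, periodic

print(u.round(2)[50:70])                            # the step is now smeared
```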
This basic scheme is not able to handle shocks or sharp discontinuities as they tend to become smeared. An example of this effect is shown in the diagram opposite, which illustrates a 1D advective equation with a step wave propagating to the right. The simulation was carried out with a mesh of 200 cells and used a 4th order Run |
https://en.wikipedia.org/wiki/Automotive%20navigation%20system | An automotive navigation system is part of the automobile controls or a third party add-on used to find direction in an automobile. It typically uses a satellite navigation device to get its position data which is then correlated to a position on a road. When directions are needed routing can be calculated. On the fly traffic information (road closures, congestion) can be used to adjust the route.
Dead reckoning using distance data from sensors attached to the drivetrain, an accelerometer, a gyroscope, and a magnetometer can be used for greater reliability, as GNSS signal loss and/or multipath can occur due to urban canyons or tunnels.
Mathematically, automotive navigation is based on the shortest path problem, within graph theory, which examines how to identify the path that best meets some criteria (shortest, cheapest, fastest, etc.) between two points in a large network.
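As a small illustration of that formulation, the sketch below runs Dijkstra's algorithm, one standard shortest-path method, over a made-up weighted road graph:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted adjacency dict."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

roads = {"A": {"B": 4, "C": 2}, "B": {"D": 5}, "C": {"B": 1, "D": 8}, "D": {}}
print(dijkstra(roads, "A"))                # {'A': 0, 'B': 3, 'C': 2, 'D': 8}
```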
Automotive navigation systems are crucial for the development of self-driving cars.
History
Automotive navigation systems represent a convergence of a number of diverse technologies, many of which have been available for many years, but were too costly or inaccessible. Limitations such as batteries, display, and processing power had to be overcome before the product became commercially viable.
1961: Hidetsugu Yagi designed a wireless-based navigation system. This design was still primitive and intended for military use.
1966: General Motors Research (GMR) was working on a non-satellite-based navigation and assistance system called DAIR (Driver Aid, Information & Routing). After initial tests GM found that it was not a scalable or practical way to provide navigation assistance. Decades later, however, the concept would be reborn as OnStar (founded 1996).
1973: Japan's Ministry of International Trade and Industry (MITI) and Fuji Heavy Industries sponsored CATC (Comprehensive Automobile Traffic Control), a Japanese research project on automobile navigation systems.
1979: MITI established JSK (A |
https://en.wikipedia.org/wiki/Quadruple-precision%20floating-point%20format | In computing, quadruple precision (or quad precision) is a binary floating-point–based computer number format that occupies 16 bytes (128 bits) with precision at least twice the 53-bit double precision.
This 128-bit quadruple precision is designed not only for applications requiring results in higher than double precision, but also, as a primary function, to allow the computation of double precision results more reliably and accurately by minimising overflow and round-off errors in intermediate calculations and scratch variables. William Kahan, primary architect of the original IEEE 754 floating-point standard noted, "For now the 10-byte Extended format is a tolerable compromise between the value of extra-precise arithmetic and the price of implementing it to run fast; very soon two more bytes of precision will become tolerable, and ultimately a 16-byte format ... That kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed."
In IEEE 754-2008 the 128-bit base-2 format is officially referred to as binary128.
IEEE 754 quadruple-precision binary floating-point format: binary128
The IEEE 754 standard specifies a binary128 as having:
Sign bit: 1 bit
Exponent width: 15 bits
Significand precision: 113 bits (112 explicitly stored)
This gives from 33 to 36 significant decimal digits precision. If a decimal string with at most 33 significant digits is converted to the IEEE 754 quadruple-precision format, giving a normal number, and then converted back to a decimal string with the same number of digits, the final result should match the original string. If an IEEE 754 quadruple-precision number is converted to a decimal string with at least 36 significant digits, and then converted back to quadruple-precision representation, the final result must match the original number.
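These digit counts follow directly from the field widths. A small Python computation (a sketch that derives binary128 parameters; CPython floats themselves remain binary64):

```python
import math

exp_bits, frac_bits = 15, 112                 # binary128 field widths
precision = frac_bits + 1                     # 113 bits with the implicit lead bit
print(precision * math.log10(2))              # ~34.02 significant decimal digits
print(2 ** (exp_bits - 1) - 1)                # exponent bias: 16383
print(2.0 ** -frac_bits)                      # spacing at 1.0, ~1.93e-34
```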
The format is written with an implicit lead bit with value 1 unless the exponent is stored with all zeros. Thus only 1 |
https://en.wikipedia.org/wiki/Spectral%20abscissa | In mathematics, the spectral abscissa of a matrix or a bounded linear operator is the greatest real part of the matrix's spectrum (its set of eigenvalues). It is sometimes denoted . As a transformation , the spectral abscissa maps a square matrix onto its largest real eigenvalue.
Matrices
Let $\lambda_1, \ldots, \lambda_s$ be the (real or complex) eigenvalues of a matrix $A \in \mathbb{C}^{n \times n}$. Then its spectral abscissa is defined as

$$\alpha(A) = \max_i \{ \operatorname{Re}(\lambda_i) \}.$$

In stability theory, a continuous system represented by matrix $A$ is said to be stable if all real parts of its eigenvalues are negative, i.e. $\alpha(A) < 0$. Analogously, in control theory, the solution to the differential equation $\dot{x} = Ax$ is stable under the same condition $\alpha(A) < 0$.
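A direct numerical sketch with NumPy (the matrix is an arbitrary example):

```python
import numpy as np

def spectral_abscissa(A):
    """Greatest real part among the eigenvalues of a square matrix."""
    return np.max(np.real(np.linalg.eigvals(A)))

A = np.array([[-1.0, 5.0],
              [ 0.0, -2.0]])
print(spectral_abscissa(A))    # -1.0 < 0, so the system x' = Ax is stable
```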
See also
Spectral radius |
https://en.wikipedia.org/wiki/Alien%20language%20in%20science%20fiction | A formal description of an alien language in science fiction may have been pioneered by Percy Greg's Martian language (he called it "Martial") in his 1880 novel Across the Zodiac, although already the 17th century book The Man in the Moone describes the language of the Lunars, consisting "not so much of words and letters as tunes and strange sounds", which is in turn predated by other invented languages in fictional societies, e.g., in Thomas More's Utopia.
Understanding alien languages
As the science fiction genre developed, so did the use of the literary trope of alien languages.
Some science-fiction works operate on the premise that alien languages can be easily learned if one has a competent understanding of the nature of languages in general. For example, the protagonist of C. S. Lewis's novel Out of the Silent Planet is able to use his training in historical linguistics to decipher the language spoken on Mars.
Others work on the premise that languages with similarities can be partially understood by different species or could not be understood at all.
Stanislaw Lem's novel His Master's Voice describes an effort by scientists to decode, translate and understand an extraterrestrial transmission. The novel critically approaches humanity's intelligence and intentions in deciphering and truly comprehending a message from outer space.
The 2014 novel Lamikorda by D. R. Merrill not only deals with differences in verbal communication, but gestures and other "body language", pointing out the inextricability of language with cultural and social norms.
A number of long-running franchises have taken the concept of an alien language beyond that of a scripting device and have developed languages of their own.
Examples include the Klingon language of the Star Trek universe (a fully developed constructed language created by Marc Okrand)
The Zentradi language from the Macross Japanese science-fiction anime series
The DC Comics, Kryptonese (for which there exists an a |
https://en.wikipedia.org/wiki/Selective%20breeding | Selective breeding (also called artificial selection) is the process by which humans use animal breeding and plant breeding to selectively develop particular phenotypic traits (characteristics) by choosing which typically animal or plant males and females will sexually reproduce and have offspring together. Domesticated animals are known as breeds, normally bred by a professional breeder, while domesticated plants are known as varieties, cultigens, cultivars, or breeds. Two purebred animals of different breeds produce a crossbreed, and crossbred plants are called hybrids. Flowers, vegetables and fruit-trees may be bred by amateurs and commercial or non-commercial professionals: major crops are usually the provenance of the professionals.
In animal breeding artificial selection is often combined with techniques such as inbreeding, linebreeding, and outcrossing. In plant breeding, similar methods are used. Charles Darwin discussed how selective breeding had been successful in producing change over time in his 1859 book, On the Origin of Species. Its first chapter discusses selective breeding and domestication of such animals as pigeons, cats, cattle, and dogs. Darwin used artificial selection as an analogy to propose and explain the theory of natural selection but distinguished the latter from the former as a separate process that is non-directed.
The deliberate exploitation of selective breeding to produce desired results has become very common in agriculture and experimental biology.
Selective breeding can be unintentional, for example, resulting from the process of human cultivation; and it may also produce unintended – desirable or undesirable – results. For example, in some grains, an increase in seed size may have resulted from certain ploughing practices rather than from the intentional selection of larger seeds. Most likely, there has been an interdependence between natural and artificial factors that have resulted in plant domestication.
History
Selective |
https://en.wikipedia.org/wiki/Email%20sender%20accreditation | Sender accreditation is a third-party process of verifying email senders and requiring them to adhere to certain accredited usage guidelines in exchange for being listed in a trusted listing that Internet Service Providers (ISPs) reference to allow certain emails to bypass email filters.
Overview
As email usage explodes, so does its abuse. In reaction to abuse such as spam and a more vicious, illegal variation known as phishing, most ISPs have enabled a block list feature to allow users to block specific email senders. Most ISPs have also partnered with spam filtering companies to improve email acceptance, handling, and delivery decisions. Ultimately, their goal is to block unwanted and suspicious types of emails that are either unrecognized or display characteristics of spam variants.
Accreditation Lists
These lists use similar technology as block lists to reinforce the original goal of spam filtering companies and ISPs - to improve the accuracy and relevance of email acceptance, handling, and delivery decisions. These lists are intended to help ensure email delivery from legitimate bulk and commercial email senders, and prevent them from being erroneously blocked as "spam".
See also
Certified email
Authenticated Email
Anti-spam techniques (email)
email filtering
Accreditation Resources
ISIPP
Email |
https://en.wikipedia.org/wiki/COLD-PCR | COLD-PCR (co-amplification at lower denaturation temperature PCR) is a modified polymerase chain reaction (PCR) protocol that enriches variant alleles from a mixture of wildtype and mutation-containing DNA. The ability to preferentially amplify and identify minority alleles and low-level somatic DNA mutations in the presence of excess wildtype alleles is useful for the detection of mutations. Detection of mutations is important in the case of early cancer detection from tissue biopsies and body fluids such as blood plasma or serum, assessment of residual disease after surgery or chemotherapy, disease staging and molecular profiling for prognosis or tailoring therapy to individual patients, and monitoring of therapy outcome and cancer remission or relapse. Common PCR will amplify both the major (wildtype) and minor (mutant) alleles with the same efficiency, occluding the ability to easily detect the presence of low-level mutations. The capacity to detect a mutation in a mixture of variant/wildtype DNA is valuable because this mixture of variant DNAs can occur when provided with a heterogeneous sample – as is often the case with cancer biopsies. Currently, traditional PCR is used in tandem with a number of different downstream assays for genotyping or the detection of somatic mutations. These can include the use of amplified DNA for RFLP analysis, MALDI-TOF (matrix-assisted laser-desorption–time-of-flight) genotyping, or direct sequencing for detection of mutations by Sanger sequencing or pyrosequencing. Replacing traditional PCR with COLD-PCR for these downstream assays will increase the reliability in detecting mutations from mixed samples, including tumors and body fluids.
Method overviews
The underlying principle of COLD-PCR is that single nucleotide mismatches will slightly alter the melting temperature (Tm) of the double-stranded DNA. Depending on the sequence context and position of the mismatch, Tm changes of 0.2–1.5 °C (0.36–2.7 °F) are common for sequences |
https://en.wikipedia.org/wiki/Arc%20measurement%20of%20Delambre%20and%20M%C3%A9chain | The arc measurement of Delambre and Méchain was a geodetic survey carried out by Jean-Baptiste Delambre and Pierre Méchain in 1792–1798 to measure an arc section of the Paris meridian between Dunkirk and Barcelona. This arc measurement served as the basis for the original definition of the metre.
In 1791, the French Academy of Sciences chose to define the metre as one ten-millionth of the distance from the equator to the North or South Pole. This replaced the earlier definition based on the period of a pendulum, because the force of Earth's gravity varies slightly over the surface of the Earth, which affects the period of a pendulum. To establish a universally accepted foundation for the definition of the metre, more accurate measurements of a meridian were needed. The French Academy of Sciences commissioned an expedition led by Jean Baptiste Joseph Delambre and Pierre Méchain, lasting from 1792 to 1799, which attempted to accurately measure the distance between a belfry in Dunkerque and Montjuïc castle in Barcelona to estimate the length of the meridian arc through Dunkerque. This portion of the meridian, assumed to be the same length as the Paris meridian, was to serve as the basis for the length of the quarter meridian connecting the North Pole with the Equator. The problem with this approach is that the exact shape of the Earth is not a simple mathematical shape, such as a sphere or oblate spheroid, at the level of precision required for defining a standard of length. The irregular and particular shape of the Earth smoothed to sea level is represented by a mathematical model called a geoid, which literally means "Earth-shaped". Despite these issues, in 1793 France adopted this definition of the metre as its official unit of length based on provisional results from this expedition. However, it was later determined that the first prototype metre bar was short by about 200 micrometres because of miscalculation of the flattening of the Earth, making the prototype |
https://en.wikipedia.org/wiki/Marshall%E2%80%93Olkin%20exponential%20distribution | In applied statistics, the Marshall–Olkin exponential distribution is any member of a certain family of continuous multivariate probability distributions with positive-valued components. It was introduced by Albert W. Marshall and Ingram Olkin.
One of its main uses is in reliability theory, where the Marshall–Olkin copula models the dependence between random variables subjected to external shocks.
Definition
Let $\{Z_S\}$, indexed by the nonempty subsets $S$ of $\{1, \dots, b\}$, be a set of independent, exponentially distributed random variables, where $Z_S$ has mean $1/\lambda_S$. Let
$X_i = \min\{Z_S : i \in S\}, \qquad i = 1, \dots, b.$
The joint distribution of $(X_1, \dots, X_b)$ is called the Marshall–Olkin exponential distribution with parameters $\{\lambda_S\}$.
Concrete example
Suppose b = 3. Then there are seven nonempty subsets of { 1, ..., b } = { 1, 2, 3 }; hence seven different exponential random variables:
Then we have: |
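As a hedged illustration of this construction, the Python sketch below simulates one draw from the b = 3 case; the unit shock rates are arbitrary placeholders:

import itertools, random

b = 3
subsets = [S for r in range(1, b + 1)
           for S in itertools.combinations(range(1, b + 1), r)]   # 7 subsets
rates = {S: 1.0 for S in subsets}       # placeholder shock rates lambda_S

def sample():
    # One independent exponential "shock" Z_S per nonempty subset S;
    # component i fails at the first shock whose subset contains i.
    Z = {S: random.expovariate(rates[S]) for S in subsets}
    return [min(Z[S] for S in subsets if i in S) for i in range(1, b + 1)]

print(sample())   # one draw of (X1, X2, X3)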
https://en.wikipedia.org/wiki/Huygens%20principle%20of%20double%20refraction | Huygens principle of double refraction, named after Dutch physicist Christiaan Huygens, explains the phenomenon of double refraction observed in uniaxial anisotropic material such as calcite. When unpolarized light propagates in such materials (along a direction different from the optical axis), it splits into two different rays, known as ordinary and extraordinary rays. The principle states that every point on the wavefront of birefringent material produces two types of wavefronts or wavelets: spherical wavefronts and ellipsoidal wavefronts. These secondary wavelets, originating from different points, interact and interfere with each other. As a result, the new wavefront is formed by the superposition of these wavelets.
History
The systematic exploration of light polarization began during the 17th century. In 1669, Bartholin made an observation of double refraction in a calcite crystal and documented it in a published work in 1670. Later, in 1690, Huygens identified polarization as a characteristic of light and provided a demonstration using two identical blocks of calcite placed in succession. Each crystal divided an incoming ray of light into two, which Huygens referred to as "regular" and "irregular" (in modern terminology: ordinary and extraordinary). However, if the two crystals were aligned in the same orientation, no further division of the light occurred.
Huygens–Fresnel principle
While the Huygens' principle of double refraction explains the phenomenon of double refraction in an optically anisotropic medium, the Huygens–Fresnel principle pertains to the propagation of waves in an optically isotropic medium. According to the Huygens–Fresnel principle, each point on a wavefront can be considered a secondary point source of waves, so a new wavefront is formed after the secondary wavelets have traveled for a period equal to one vibration cycle. This new wavefront can be described as an envelope or tangent surface to these secondary wavelets. Understan |
https://en.wikipedia.org/wiki/Agaricus%20lilaceps | Agaricus lilaceps, also known as the cypress agaricus or the giant cypress agaricus, is a species of mushroom. It is among the largest edible Agaricus species in California. Aside from its size, Agaricus lilaceps is characterized by a robust stature, with a stipe that is often club-shaped.
Description
The cap of the mushroom is 8–20 cm broad, convex, and expands to nearly plane. As it ages, the disc sometimes becomes depressed. The margin is at first incurved, although it decurves at maturity. The surface of the cap is at first pallid to cream-buff, especially when developing below ground, but soon becomes appressed-fibrillose to squamose. Its colour ranges over brown, hazel-brown, dull chestnut-brown, and occasionally lilac-brown, darkening with age. At times, the surface develops orange-brown, rufescent areas. The context is thick, very firm, and white, and slowly turns vinaceous when cut or bruised. The odor is that of a typical mushroom, and the taste is mild.
The gills of Agaricus lilaceps are free, close, moderately broad, and dingy-pink when young. When bruised, they slowly turn reddish-brown, and they become dark chocolate-brown at maturity.
The stipe is 9–19 cm long, 3–5 cm thick, and equal to clavate. The core of the stem is stuffed, while the surface is dry and white with scattered fibrils at the apex; the base discolors dingy brownish-red to ochraceous. The stipe can be smooth to patchy-fibrillose below. The partial veil is white, membranous, thick, and elastic; its upper surface is wrinkled, while the lower surface is more or less smooth, occasionally cracking and forming patches, and sometimes yellowing in age or when bruised. The veil forms a superior, pendulous annulus at maturity. The stipe gradually becomes blackish from adhering spores.
The spores are 5–6.5 by 4–5 μm, elliptical, and smooth. The spore print is dark-brown.
Habitat
Agaricus lilaceps are scattered or clustered under Monterey Cypres |
https://en.wikipedia.org/wiki/Protected%20polymorphism | In population genetics, a protected polymorphism is a mechanism that maintains multiple alleles at a certain locus. In detail, each of the several alleles follows certain dynamics: when an allele is high in frequency (p → 1), it will decrease in frequency in the future, thereby avoiding fixation in the population; conversely, when an allele is low in frequency (p → 0), it will increase in frequency in the future, avoiding extinction and maintaining polymorphism at the locus. |
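One simple mechanism that can produce such dynamics is negative frequency-dependent selection. The Python sketch below (a minimal toy model with an invented fitness scheme, not a general treatment) shows an initially rare allele being pushed toward an interior equilibrium instead of being lost or fixed:

def step(p, s=0.5):
    # Fitness falls as an allele becomes common (rare-allele advantage).
    w_A = 1 + s * (1 - 2 * p)       # fitness of allele A at frequency p
    w_a = 1 + s * (2 * p - 1)       # fitness of the alternative allele
    return p * w_A / (p * w_A + (1 - p) * w_a)

p = 0.01                            # allele A starts rare...
for _ in range(100):
    p = step(p)
print(round(p, 3))                  # ...and settles near the protected value 0.5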
https://en.wikipedia.org/wiki/Morchella%20tibetica | Morchella tibetica is a species of fungus in the family Morchellaceae. Described as new to science in 1987, it is found in Tibet, where it grows in deciduous woodland. |
https://en.wikipedia.org/wiki/Viber | Viber, or Rakuten Viber, is a cross-platform voice over IP (VoIP) and instant messaging (IM) software application owned by Japanese multinational company Rakuten, provided as freeware for the Google Android, iOS, Microsoft Windows, Apple macOS and Linux platforms. Users are registered and identified through a cellular telephone number, although the service is accessible on desktop platforms without needing mobile connectivity. In addition to instant messaging it allows users to exchange media such as images and video records, and also provides a paid international landline and mobile calling service called Viber Out. As of 2018, there are over a billion registered users on the network.
The software was developed in 2010 by Cyprus-based Viber Media, which was bought by Rakuten in 2014. Since 2017, its corporate name has been Rakuten Viber. It is based in Cyprus with offices in London, Manila, Moscow, Paris, San Francisco, Singapore, and Tokyo.
History
Founding (2010)
Viber Media was founded in Tel Aviv, Israel, in 2010 by Talmon Marco and Igor Magazinnik, who are friends from the Israel Defense Forces where they were chief information officers. Marco and Magazinnik are also co-founders of the P2P media and file-sharing client iMesh. The company was run from Israel, and was registered in Cyprus. Sani Maroli and Ofer Smocha soon joined the company as well. Marco commented that Viber allows instant calling and synchronization with contacts because the ID is the user's cell number.
Early monetisation (2013)
In its first two years of availability, Viber did not generate revenues. It began doing so in 2013, via user payments for Viber Out voice calling and the Viber graphical messaging "sticker market". The company was originally funded by individual investors, described by Marco as "friends and family". They invested $20 million in the company, which had 120 employees.
On 24 July 2013, Viber's support system was defaced by the Syrian Electronic Army. According to V |
https://en.wikipedia.org/wiki/Attribute%20domain | In computing, the attribute domain is the set of values allowed in an attribute.
For example:
Rooms in hotel (1-300)
Age (1-99)
Married (yes or no)
Nationality (Nepalese, Indian, American, or British)
Colors (Red, Yellow, Green)
For the relational model it is a requirement that each part of a tuple be atomic. The consequence is that each value in the tuple must be of some basic type, like a string or an integer. For the elementary type to be atomic it cannot be broken into more pieces. The domain is thus an elementary type, and the attribute domain is the domain to which a given attribute (an abstraction belonging to or characteristic of an entity) belongs.
For example, in SQL, one can create their own domain for an attribute with the command
CREATE DOMAIN SSN_TYPE AS CHAR(9);
The above command says: "Create a datatype SSN_TYPE that is of character type with size 9". |
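Outside SQL, the same notion can be sketched in general-purpose code. The Python fragment below (purely illustrative; the dictionary and function names are invented) models the example domains listed earlier as value sets and checks membership:

DOMAINS = {
    "rooms":       range(1, 301),                 # rooms in hotel, 1-300
    "age":         range(1, 100),                 # age, 1-99
    "married":     {"yes", "no"},
    "nationality": {"Nepalese", "Indian", "American", "British"},
    "color":       {"Red", "Yellow", "Green"},
}

def in_domain(attribute, value):
    """True iff the value is allowed by the attribute's domain."""
    return value in DOMAINS[attribute]

assert in_domain("age", 42)
assert not in_domain("married", "maybe")          # outside the yes/no domain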
https://en.wikipedia.org/wiki/Everglades%20virus | Everglades virus (EVEV) is an alphavirus included in the Venezuelan equine encephalitis virus complex. The virus circulates among rodents and vector mosquitoes and sometimes infects humans, causing a febrile illness with occasional neurological manifestations. Although it is said to be rare in humans, this is debated because of possible underdiagnosis, as well as the possibility that the virus is an unrecognized cause of other illnesses. The virus is named after the Everglades, a region of subtropical wetlands in southern Florida. The virus is endemic to the U.S. state of Florida, where its geographic range mirrors that of the mosquito species Culex cedecei. The hispid cotton rat and the cotton mouse are considered important reservoir hosts of Everglades virus. Most clinical cases of infection occur in and around the city of Miami. The abundance of clinical cases in certain parts of Florida stems from factors such as population density and proximity to the hosts and their ecosystem.
Signs and symptoms
Symptoms of infection include:
Enlarged, tender lymph nodes
Fever
Headache
Malaise
Myalgia
Pharyngitis
Transmission
The virus is transmitted by the bite of infected mosquitoes of the genus Culex, specifically Culex cedecei. |
https://en.wikipedia.org/wiki/British%20Society%20for%20Research%20on%20Ageing | The British Society for Research on Ageing (BSRA) is a scientific society (registered charity no. 1174127) which promotes research to understand the causes and effects of the ageing process. The BSRA encourages publication and public understanding of ageing research and holds an annual scientific meeting. Many notable scientists with an interest in ageing are either past or current members of the organisation, which has exerted a marked influence on ageing research within the United Kingdom and internationally.
Activities
Rationale
According to the earliest rules of the British Society for Research on Ageing (1954):
However, in 1956 the Annual General Meeting of the society revised the rules such that:
Since 1979 the objectives of the society have been as follows:
through research, to increase knowledge of the processes, causes and effects of ageing, and, as indicated, of means for counteracting these, both in human beings and in other organisms
to publish the results of all such research
to further public education therein
Thus, the Society seeks to improve understanding of the fundamental biology of ageing, as well as to educate the public regarding the scientific developments taking place in the field of modern gerontology. More recently the society has begun to directly fund research into the biology of ageing, including funding of £54,750 for a three-year PhD studentship at the University of Liverpool's Institute of Ageing and Chronic Disease.
Scientific Meetings
The society currently organises an annual scientific meeting and contributes to the activities of other organisations with similar goals on an ad hoc basis. Sample scientific meetings include:
50th Annual Scientific Meeting (2000) Stem cells, stress and senescence
51st Annual Scientific Meeting (2001) A Meeting of Minds (joint meeting between the BSRA and Research Into Ageing)
53rd Annual Scientific Meeting (2003) Nutrition and Healthy Ageing
54th Annual Scientific Meeti |
https://en.wikipedia.org/wiki/Behrend%20function | In algebraic geometry, the Behrend function of a scheme X, introduced by Kai Behrend, is a constructible function
$\nu_X \colon X \to \mathbb{Z}$
such that if X is a quasi-projective proper moduli scheme carrying a symmetric obstruction theory, then the weighted Euler characteristic
$\chi(X, \nu_X) = \sum_{n \in \mathbb{Z}} n \, \chi\big(\nu_X^{-1}(n)\big)$
is the degree of the virtual fundamental class
$[X]^{\mathrm{vir}} \in A_0(X)$
of X, which is an element of the zeroth Chow group of X. Modulo some solvable technical difficulties (e.g., what is the Chow group of a stack?), the definition extends to moduli stacks such as the moduli stack of stable sheaves (the Donaldson–Thomas theory) or that of stable maps (the Gromov–Witten theory). |
https://en.wikipedia.org/wiki/Mariel%20V%C3%A1zquez | Mariel Vázquez (born ) is a Mexican mathematical biologist who specializes in the topology of DNA. She is a professor at the University of California, Davis, jointly affiliated with the departments of mathematics and of microbiology and molecular genetics.
Education
Vázquez received her Bachelor of Science in Mathematics from the National Autonomous University of Mexico in 1995. She received her Ph.D. in mathematics from Florida State University in 2000.
Her dissertation was entitled Tangle Analysis of Site-specific Recombination: Gin and Xer Systems and her advisor was De Witt Sumners.
Career
Vázquez was a postdoctoral fellow at the University of California, Berkeley from 2000 to 2005, where she researched mathematical and biophysical models of DNA repair in human cells with Rainer Sachs as part of the mathematical radiobiology group.
She was a faculty member in the mathematics department at San Francisco State University from 2005 to 2014.
In 2014, she joined the faculty at the University of California, Davis as a CAMPOS scholar.
Awards and honors
In 2011, Vázquez received a National Science Foundation CAREER Award to research topological mechanisms of DNA unlinking.
In 2012, she was the first San Francisco State University faculty member to receive the Presidential Early Career Award for Scientists and Engineers.
She received a grant for computer analysis of DNA unknotting from the National Institutes of Health in 2013.
In 2016, she was chosen for the Blackwell-Tapia prize, which is awarded every other year to a mathematician who has made significant research contributions in their field, and who has worked to address the problem of under-representation of minority groups in mathematics.
She was selected for the inaugural class of Association for Women in Mathematics fellows in 2017. She was elected a Fellow of the American Mathematical Society in the 2020 class "for contributions in research and outreach at the interface of topology and molecular biology, |
https://en.wikipedia.org/wiki/American%20Bryological%20and%20Lichenological%20Society | The American Bryological and Lichenological Society is an organization devoted to the scientific study of all aspects of the biology of bryophytes and lichen-forming fungi and is one of the nation's oldest botanical organizations. It was originally known as the Sullivant Moss Society, named after William Starling Sullivant. The Society publishes a quarterly journal distributed worldwide, The Bryologist, which includes articles on all aspects of the biology of mosses, hornworts, liverworts and lichens. The Society also publishes the quarterly journal Evansia, which is intended for both amateurs and professionals in bryology and lichenology and is focused on North America.
History
The Society was founded in 1898, and was first known as the Sullivant Moss Chapter. It was founded by Elizabeth Gertrude Britton and Abel Joel Grout as a chapter of the Agassiz Association. The organization was established soon after the first publication of The Bryologist, which evolved from a serial started by Grout in collaboration with Willard Nelson Clute. There were 34 founding members, including Britton, Clute, Grout and Annie Morrill Smith. Smith was a central figure in the organization in the early years, contributing much time, energy, and money. She was editor or associate editor of The Bryologist for ten years. In 1899, the chapter ended their affiliation with the Agassiz Association and was renamed the Sullivant Moss Society.
The Lichen Department was established within the Society in 1902. Carolyn Wilson Harris led the department initially, with George Knox Merrill taking over from 1905 to 1916.
The Society maintains an active exchange program. The Moss Exchange was started by Inez M. Haring in 1935.
In 1970, William Louis Culberson oversaw the change of the name of the organization to the American Bryological and Lichenological Society.
Other notable members
André Aptroot, lichenologist
Margaret Sibella Brown, bryologist
Lucy Mary Cavanagh, bryologist
Cora Huidekoper C |
https://en.wikipedia.org/wiki/Douglas%20Clements | Douglas H. Clements is an American scholar in the field of early mathematics education. Previously a preschool and kindergarten teacher, his research centers on the learning and teaching of early mathematics, computer applications for mathematics teaching, and scaling up successful educational interventions. Clements has contributed to the writing of educational standards including the Common Core State Standards, the NCTM's Principles and Standards for School Mathematics and the NCTM's 2006 Curriculum Focal Points for Prekindergarten through Grade 8 Mathematics.
As of 2021, he is Distinguished University Professor and the Kennedy Endowed Chair in Early Childhood Learning at the University of Denver and the co-director of the Marsico Institute for Early Learning. He was previously a SUNY Distinguished Professor at the University at Buffalo.
Subitizing
Clements is notable for reviving interest in the importance of perceptual and conceptual subitizing in early childhood mathematics education. Perceptual subitizing is the ability to instantly recognize the number of objects in a small group, without counting. Conceptual subitizing is the ability to see a whole quantity as groups of smaller quantities (for example, seeing eight as two groups of four). When learning to count, young children use subitizing to develop their understanding of cardinality. They also use their conceptual subitizing and pattern recognition skills to develop their understanding of arithmetic and number sense.
Building Blocks and Learning Trajectories
Together with Julie Sarama, Clements developed the Building Blocks curriculum and the Learning Trajectories approach to early mathematics education. Learning trajectories consist of a learning goal, a developmental path along which children develop to reach that goal, and a set of activities matched to each level in that learning path. Clements has evaluated this approach in randomized controlled trials and shown it to have a positive impact on |
https://en.wikipedia.org/wiki/Equation%20of%20state%20%28cosmology%29 | In cosmology, the equation of state of a perfect fluid is characterized by a dimensionless number $w$, equal to the ratio of its pressure $p$ to its energy density $\rho$:
$w \equiv \frac{p}{\rho}.$
It is closely related to the thermodynamic equation of state and ideal gas law.
The equation
The perfect gas equation of state may be written as
$p = \rho_m R T = \rho_m C^2,$
where $\rho_m$ is the mass density, $R$ is the particular gas constant, $T$ is the temperature and $C = \sqrt{RT}$ is a characteristic thermal speed of the molecules. Thus
$w = \frac{p}{\rho} \approx \frac{\rho_m C^2}{\rho_m c^2} = \frac{C^2}{c^2} \approx 0,$
where $c$ is the speed of light, $\rho \approx \rho_m c^2$, and $w \approx 0$ for a "cold" gas.
FLRW equations and the equation of state
The equation of state may be used in Friedmann–Lemaître–Robertson–Walker (FLRW) equations to describe the evolution of an isotropic universe filled with a perfect fluid. If $a$ is the scale factor then
$\rho \propto a^{-3(1+w)}.$
If the fluid is the dominant form of matter in a flat universe, then (for $w \neq -1$)
$a \propto t^{\frac{2}{3(1+w)}},$
where $t$ is the proper time.
In general the Friedmann acceleration equation is
$3\frac{\ddot{a}}{a} = \Lambda - 4 \pi G (\rho + 3p),$
where $\Lambda$ is the cosmological constant, $G$ is Newton's constant, and $\ddot{a}$ is the second proper-time derivative of the scale factor.
If we define (what might be called "effective") energy density and pressure as
$\rho' \equiv \rho + \frac{\Lambda}{8 \pi G}$
and
$p' \equiv p - \frac{\Lambda}{8 \pi G},$
the acceleration equation may be written as
$3\frac{\ddot{a}}{a} = -4 \pi G \left(\rho' + 3p'\right).$
Non-relativistic particles
The equation of state for ordinary non-relativistic 'matter' (e.g. cold dust) is $w = 0$, which means that its energy density decreases as $\rho \propto V^{-1} \propto a^{-3}$, where $V$ is a volume. In an expanding universe, the total energy of non-relativistic matter remains constant, with its density decreasing as the volume increases.
Ultra-relativistic particles
The equation of state for ultra-relativistic 'radiation' (including neutrinos, and in the very early universe other particles that later became non-relativistic) is $w = 1/3$, which means that its energy density decreases as $\rho \propto V^{-4/3} \propto a^{-4}$. In an expanding universe, the energy density of radiation decreases more quickly than the volume expansion, because its wavelength is red-shifted.
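A quick numeric check of the scaling relations above, assuming nothing beyond the formula $\rho \propto a^{-3(1+w)}$ itself:

def rho(a, w, rho0=1.0):
    """Energy density at scale factor a, normalised so rho = rho0 at a = 1."""
    return rho0 * a ** (-3 * (1 + w))

for w, label in ((0.0, "matter"), (1 / 3, "radiation")):
    print(f"{label}: rho(a=2)/rho(a=1) = {rho(2, w):.4f}")
# matter dilutes with the volume (factor 1/8); radiation also redshifts (1/16)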
Acceleration of cosmic inflation
Cosmic inflation and the accelerated expansion of the universe can be characterized by the e |
https://en.wikipedia.org/wiki/Cap%20formation | When molecules on the surface of a motile eukaryotic cell are crosslinked, they are moved to one end of the cell to form a "cap". This phenomenon, the process of which is called cap formation, was discovered in 1971 on lymphocytes and is a property of amoebae and all locomotory animal cells except sperm. The crosslinking is most easily achieved using a polyvalent antibody to a surface antigen on the cell. Cap formation can be visualised by attaching a fluorophore, such as fluorescein, to the antibody.
Steps
The antibody is bound to the cell. If the antibody is non-crosslinking (such as a Fab antibody fragment), the bound antibody is uniformly distributed. This can be done at 0 °C, room temperature, or 37 °C.
If the antibody is crosslinking and bound to the cells at 0 °C, the distribution of antibodies has a patchy appearance. These “patches” are two-dimensional precipitates of antigen-antibody complex and are quite analogous to the three-dimensional precipitates that form in solution.
If cells with patches are warmed up, the patches move to one end of the cell to form a cap. In lymphocytes, this capping process takes about 5 minutes. If carried out on cells attached to a substratum, the cap forms at the rear of the moving cell.
Capping only occurs on motile cells and is therefore believed to reflect an intrinsic property of how cells move. It is an energy-dependent process and in lymphocytes is partially inhibited by cytochalasin B (which disrupts microfilaments) but unaffected by colchicine (which disrupts microtubules). However, a combination of these drugs eliminates capping. A key feature of capping is that only those molecules that are crosslinked cap; others do not.
Cap formation is now seen as closely related to the carbon particle experiments of Abercrombie. In this case, crawling fibroblasts were held in a medium containing small (~1 micrometre in size) carbon particles. On occasion, these particles attached to the front leading edge of these cells: When |
https://en.wikipedia.org/wiki/Haplotype%2035 | In human genetics, Haplotype 35, also called ht35 or the Armenian Modal Haplotype, is a Y chromosome haplotype of Y-STR microsatellite variations, associated with the Haplogroup R1b. It is characterized by DYS393=12 (as opposed to the Atlantic Modal Haplotype, another R1b haplotype, which is characterized by DYS393=13). The members of this haplotype are found in high numbers in Anatolia and Armenia, with smaller numbers throughout Central Asia, the Middle East, the Balkans, the Caucasus Mountains, and in Jewish populations. They are also present in Britain in areas that were found to have a high concentration of Haplogroup J, suggesting they arrived together, perhaps through Roman soldiers.
See also
Modal haplotype
Haplotype
Haplogroup
Haplogroup R1b
List of Y-STR markers
External links
Haplogroup R1b (Haplotype 35)
Human Y-DNA modal haplotypes |
https://en.wikipedia.org/wiki/Bitcasa | Bitcasa, Inc. was an American cloud storage company founded in 2011 in St. Louis, Missouri. The company was later based in Mountain View, California until it shut down in 2017.
Bitcasa provided client software for Microsoft Windows, OS X, Android and web browsers. An iOS client was pending Apple approval. Its former product, Infinite Drive, once provided centralized storage that included unlimited capacity, client-side encryption, media streaming, file versioning and backups, and multi-platform mobile access. In 2013 Bitcasa moved to a tiered storage model, offering from 1TB for $99/year up to Infinite for $999/year. In October 2014, Bitcasa announced the discontinuation of Infinite Drive; for $999/year, users would get 10TB of storage. Infinite Drive users would be required to migrate to one of the new pricing plans or delete their account. In May 2016, Bitcasa discontinued offering cloud storage for consumers, stating that it would focus on its business products.
History
The company started after an idea was a finalist at the TechCrunch Disrupt conference in September 2011. In 2012 Tony Lee was recruited as vice president of engineering and Frank Meehan joined the company's board of directors. In June 2012 Bitcasa closed $9 million of investment. Investors included: CrunchFund, Pelion Venture Partners, Horizons Ventures, Andreessen Horowitz, Samsung Ventures and First Round Capital.
In January 2017, CEO Brian Taptich announced that Bitcasa had been acquired by Intel. An Intel spokesperson later clarified that Intel had not acquired Bitcasa.
Products and services
Bitcasa provided client software for web browsers, OS X, Microsoft Windows, Linux and a mobile app for Android. Windows versions include XP, Vista, Windows 7 and Windows 8.
Bitcasa products provide centralized streaming storage so that all devices have simultaneous and real-time access to the same files. Files uploaded from one device are instantly available on all devices. Bitcasa does not requ |
https://en.wikipedia.org/wiki/Joy%20Tivy | Joy Tivy FRSE FRSGS FIB (1924–1995) was a 20th century Irish physical geographer at the University of Glasgow. She specialised in biogeography and has been credited with having helped raise the profile of biogeography as a distinct sub-discipline of geography. She published over 40 papers, books and reports, and she was often asked to advise government agencies and other organisations. She was a strong advocate of the importance of field studies for providing essential skills for geography graduates. Her capacity as a teacher was as highly regarded as her research; she was known to be enthusiastic and engaging to a wide range of audiences, and a medal has been created by the Royal Scottish Geographical Society in honour of her commitment to geographical education and teaching.
Life
Joy Tivy was born in Carlow, Ireland on 24 August 1924.
She commenced studies at University College Dublin in 1942, where she studied geography as her primary subject with botany and geology as her secondary areas. She excelled as an undergraduate, most notably scoring highest in highly competitive exams in 1944, which granted her the status of Scholar. She graduated with first class honours in 1946 and, after a brief period of teaching at the University of Leeds, accepted a position at the University of Edinburgh where she completed her doctorate. Her PhD thesis was entitled A study of the effect of physical factors on the vegetation of hill grazings in selected areas of southern Scotland. In 1956 she moved to the University of Glasgow, where she stayed for the rest of her career (she retired in 1989). In 1976 she became the second woman to be awarded a professorship at the University of Glasgow, and she was head of the Department of Geography and Topographic Science.
In 1984 she was elected a Fellow of the Royal Society of Edinburgh. Her proposers were John Lenihan, William Whigham Fletcher, Donald Michie, S. G. Checkland, Lord Cameron, and Wreford Watson. She was also elected a Fell |
https://en.wikipedia.org/wiki/Quantum%20chaos | Quantum chaos is a branch of physics which studies how chaotic classical dynamical systems can be described in terms of quantum theory. The primary question that quantum chaos seeks to answer is: "What is the relationship between quantum mechanics and classical chaos?" The correspondence principle states that classical mechanics is the classical limit of quantum mechanics, specifically in the limit as the ratio of Planck's constant to the action of the system tends to zero. If this is true, then there must be quantum mechanisms underlying classical chaos (although this may not be a fruitful way of examining classical chaos). If quantum mechanics does not demonstrate an exponential sensitivity to initial conditions, how can exponential sensitivity to initial conditions arise in classical chaos, which must be the correspondence principle limit of quantum mechanics?
In seeking to address the basic question of quantum chaos, several approaches have been employed:
Development of methods for solving quantum problems where the perturbation cannot be considered small in perturbation theory and where quantum numbers are large.
Correlating statistical descriptions of eigenvalues (energy levels) with the classical behavior of the same Hamiltonian (system).
Study of probability distribution of individual eigenstates (see scars and Quantum ergodicity).
Semiclassical methods such as periodic-orbit theory connecting the classical trajectories of the dynamical system with quantum features.
Direct application of the correspondence principle.
History
During the first half of the twentieth century, chaotic behavior in mechanics was recognized (as in the three-body problem in celestial mechanics), but not well understood. The foundations of modern quantum mechanics were laid in that period, essentially leaving aside the issue of the quantum-classical correspondence in systems whose classical limit exhibits chaos.
Approaches
Questions related to the correspondence principle a |
https://en.wikipedia.org/wiki/Welwitschiaceae | Welwitschiaceae is a family of plants of the order Gnetales with one living species, Welwitschia mirabilis, found in southwestern Africa. Three fossil genera have been recovered from the Crato Formation – late Aptian (Lower Cretaceous) strata located in the Araripe Basin in northeastern Brazil, with one of these also being known from the early Late Cretaceous (Cenomanian-Turonian) Akrabou Formation of Morocco.
Taxonomy
German naturalist Friedrich Markgraf coined the name Welwitschiaceae in 1926, which appeared in Die Natürlichen Pflanzenfamilien.
Most recent systems place the Welwitschiaceae in the gymnosperm order Gnetales. This order is most closely related to the order Pinales, which includes Araucariaceae - Araucarians, Cupressaceae - Cypress Family, Pinaceae - Pine Family, Podocarpaceae - Podocarps, Sciadopityaceae - Koyamaki Family (the sole member Sciadopitys verticillata - Koyamaki), and Taxaceae - Yew Family. Genetic analyses indicate that the Gnetales arose from within the conifer group, and any morphological similarities between angiosperms and Gnetales have evolved separately (Chaw S.-M., C. L. Parkinson, Y. Cheng, T. M. Vincent and J. D. Palmer (2000), "Seed plant phylogeny inferred from all three plant genomes: Monophyly of extant gymnosperms and origin of Gnetales from conifers", Proceedings of the National Academy of Sciences of the United States of America 97:4086–4091). The ancestors of the extant gymnosperm orders—Gnetales, Coniferales, Cycadales and Ginkgoales—arose during the Late Paleozoic, and became the dominant component of the Late Permian and Mesozoic flora.
Living species
The family contains a single genus and single extant species, Welwitschia mirabilis, which lives in the Kaokoveld Desert of Angola and Namibia in southwestern Africa.
Fossil species
Fossil evidence indicates that members of the Welwitschiaceae were present in South America during the Early Cretaceous (Mesozoic era). Priscowelwitschia austroamericana (initially named |
https://en.wikipedia.org/wiki/Evert%20Willem%20Beth | Evert Willem Beth (7 July 1908 – 12 April 1964) was a Dutch philosopher and logician, whose work principally concerned the foundations of mathematics. He was a member of the Significs Group.
Biography
Beth was born in Almelo, a small town in the eastern Netherlands. His father had studied mathematics and physics at the University of Amsterdam, where he had been awarded a PhD. Evert Beth studied the same subjects at Utrecht University, but then also studied philosophy and psychology. His 1935 PhD was in philosophy.
In 1946, he became professor of logic and the foundations of mathematics in Amsterdam. Apart from two brief interruptions – a stint in 1951 as a research assistant to Alfred Tarski, and in 1957 as a visiting professor at Johns Hopkins University – he held the post in Amsterdam continuously until his death in 1964. His was the first academic post in his country in logic and the foundations of mathematics, and during this time he contributed actively to international cooperation in establishing logic as an academic discipline.
In 1953 he became member of the Royal Netherlands Academy of Arts and Sciences.
He died in Amsterdam.
Contributions to logic
Beth definability theorem
The Beth definability theorem states that for first-order logic a property (or function or constant) is implicitly definable if and only if it is explicitly definable. Further explanation is provided under Beth definability.
Semantic tableaux
Beth's most famous contribution to formal logic is semantic tableaux, which are decision procedures for propositional logic and first-order logic. It is a semantic method—like Wittgenstein's truth tables or J. Alan Robinson's resolution—as opposed to the proof of theorems in a formal system, such as the axiomatic systems employed by Frege, Russell and Whitehead, and Hilbert, or even Gentzen's natural deduction. Semantic tableaux are an effective decision procedure for propositional logic, whereas they are only semi-effective for first |
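As a minimal sketch of the tableau method for the propositional case (the tuple-based formula encoding below is an invention of this example, not Beth's notation), a branch closes when it contains a literal and its negation, and a formula is valid exactly when the tableau for its negation closes on every branch:

def is_literal(f):
    return f[0] == 'var' or (f[0] == 'not' and f[1][0] == 'var')

def closed(branch):
    lits = {f for f in branch if is_literal(f)}
    return any(('not', f) in lits for f in lits if f[0] == 'var')

def satisfiable(branch):
    """True iff some fully expanded branch of the tableau stays open."""
    branch = list(branch)
    for f in branch:
        if is_literal(f):
            continue
        rest = [g for g in branch if g != f]
        if f[0] == 'and':       # alpha rule: extend the branch
            return satisfiable(rest + [f[1], f[2]])
        if f[0] == 'or':        # beta rule: split into two branches
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if f[0] == 'not':       # push negations inward
            g = f[1]
            if g[0] == 'not':
                return satisfiable(rest + [g[1]])
            if g[0] == 'and':
                return satisfiable(rest + [('or', ('not', g[1]), ('not', g[2]))])
            if g[0] == 'or':
                return satisfiable(rest + [('and', ('not', g[1]), ('not', g[2]))])
    return not closed(branch)

p = ('var', 'p')
print(satisfiable([('not', ('or', p, ('not', p)))]))   # False: "p or not p" is valid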
https://en.wikipedia.org/wiki/Maintenance%20mode | In the world of software development, maintenance mode refers to a point in a computer program's life when it has reached all of its goals and is generally considered to be "complete" and bug-free. The term can also refer to the point in a software product's evolution when it is no longer competitive with other products or current with regard to the technology environment it operates within. In both cases, continued development is deemed unnecessary or ill-advised, but occasional bug fixes and security patches are still issued, hence the term maintenance mode. Maintenance mode often transitions to abandonware.
Sometimes, when a popular free software project undergoes a major overhaul, the pre-overhaul version is kept active and put into maintenance mode because it will still be widely used in production for the foreseeable future. Project forks can also spawn from programs that go into maintenance mode too soon or have enough developer support for a more advanced version. A good example of this is the vi editor, which was in maintenance mode and forked into Vi IMproved. The Vim fork has many useful features that vi does not, such as syntax highlighting and the ability to have multiple open buffers.
See also
Steady state
Software maintenance |
https://en.wikipedia.org/wiki/Qt%20Creator | Qt Creator is a cross-platform C++, JavaScript, Python and QML integrated development environment (IDE) which simplifies GUI application development. It is part of the SDK for the Qt GUI application development framework and uses the Qt API, which encapsulates host OS GUI function calls. It includes a visual debugger and an integrated WYSIWYG GUI layout and forms designer. The editor has features such as syntax highlighting and autocompletion. Qt Creator uses the C++ compiler from the GNU Compiler Collection on Linux. On Windows it can use MinGW or MSVC with the default install and can also use Microsoft Console Debugger when compiled from source code. Clang is also supported.
History
Development of what would eventually become Qt Creator had begun by 2007 or earlier under transitional names Workbench and later Project Greenhouse. It debuted during the later part of the Qt 4 era, starting with the release of Qt Creator, version 1.0 in March 2009 and subsequently bundled with Qt 4.5 in SDK 2009.3.
This was at a time when the standalone Qt Designer application was still the widget layout tool of choice for developers. There is no indication that Creator had layout capability at this stage. The record is somewhat muddied on this point (perhaps due to changes in ownership or the emphasis on Qt Quick), but the integration of Qt Designer under Qt Creator is first mentioned at least as early as Qt 4.7 (ca. late 2011). In the Qt 5 era, it is simply stated that "[Qt Designer's] functionality is now included as part of [sic] Qt Creator IDE."
Projects
Qt Creator includes a project manager that can use a variety of project formats such as .pro, CMake, Autotools and others. A project file can contain information such as what files are included into the project, custom build steps and settings for running the applications.
Editors
Qt Creator includes a code editor and integrates Qt Designer for designing and building graphical user interfaces (GUIs) from Qt widgets.
The code |
https://en.wikipedia.org/wiki/Thermal%20velocity | Thermal velocity or thermal speed is a typical velocity of the thermal motion of particles that make up a gas, liquid, etc. Thus, indirectly, thermal velocity is a measure of temperature. Technically speaking, it is a measure of the width of the peak in the Maxwell–Boltzmann particle velocity distribution. Note that in the strictest sense thermal velocity is not a velocity, since velocity usually describes a vector rather than simply a scalar speed.
Since the thermal velocity is only a "typical" velocity, a number of different definitions can be and are used.
Taking $k_{\mathrm{B}}$ to be the Boltzmann constant, $T$ the absolute temperature, and $m$ the mass of a particle, we can write the different thermal velocities:
In one dimension
If $v_{\mathrm{th}}$ is defined as the root mean square of the velocity in any one dimension (i.e. any single direction), then
$v_{\mathrm{th}} = \sqrt{\frac{k_{\mathrm{B}} T}{m}}.$
If $v_{\mathrm{th}}$ is defined as the mean of the magnitude of the velocity in any one dimension (i.e. any single direction), then
$v_{\mathrm{th}} = \sqrt{\frac{2 k_{\mathrm{B}} T}{\pi m}}.$
In three dimensions
If $v_{\mathrm{th}}$ is defined as the most probable speed, then
$v_{\mathrm{th}} = \sqrt{\frac{2 k_{\mathrm{B}} T}{m}}.$
If $v_{\mathrm{th}}$ is defined as the root mean square of the total velocity, then
$v_{\mathrm{th}} = \sqrt{\frac{3 k_{\mathrm{B}} T}{m}}.$
If $v_{\mathrm{th}}$ is defined as the mean of the magnitude of the velocity of the atoms or molecules, then
$v_{\mathrm{th}} = \sqrt{\frac{8 k_{\mathrm{B}} T}{\pi m}}.$
All of these definitions lie in the range
$\sqrt{\tfrac{2}{\pi}} \, \sqrt{\tfrac{k_{\mathrm{B}} T}{m}} \;\le\; v_{\mathrm{th}} \;\le\; \sqrt{3} \, \sqrt{\tfrac{k_{\mathrm{B}} T}{m}},$
i.e. roughly 0.80 to 1.73 times $\sqrt{k_{\mathrm{B}} T / m}$.
Thermal velocity at room temperature
At 20 °C (293.15 kelvins), the mean thermal velocity of common gases in three dimensions is: |
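For illustration, the three-dimensional definitions above can be evaluated numerically. The short Python sketch below uses nitrogen (N2, about 28.0 u) as an example gas and reproduces the familiar mean speed of roughly 470 m/s at 20 °C:

import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
amu = 1.66053907e-27                # atomic mass unit, kg

def thermal_speeds(T, m):
    """The common three-dimensional thermal-speed definitions, in m/s."""
    return {
        "most probable":    math.sqrt(2 * k_B * T / m),
        "mean speed":       math.sqrt(8 * k_B * T / (math.pi * m)),
        "root mean square": math.sqrt(3 * k_B * T / m),
    }

for name, v in thermal_speeds(293.15, 28.0 * amu).items():
    print(f"{name}: {v:.0f} m/s")   # N2 at 20 degrees C; mean speed ~470 m/s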
https://en.wikipedia.org/wiki/Call%20of%20Duty%3A%20Elite | Call of Duty: Elite was an online service created by the Activision subsidiary Beachhead Studios for the multiplayer portion of the first-person shooter video game series Call of Duty. The service featured lifetime statistics across multiple games as well as a multitude of social-networking options. The service previously had a premium subscription option during Call of Duty: Modern Warfare 3; however, following the release of Call of Duty: Black Ops II, the service was made free. On February 28, 2014, at approximately 10:00 a.m. (PST), Activision shut down the Call of Duty: Elite website in favor of its mobile products.
Call of Duty: Modern Warfare 3
While a free version was available, the subscription-based portion of Elite included exclusive premium features such as monthly downloadable content, daily competitions with virtual and real-life prizes, the ability to level up players' clans, pro analysis and strategies, Elite TV, and more.
Call of Duty: Black Ops II
The service previously had a premium subscription option during Call of Duty: Modern Warfare 3; however, it was made free following the release of Call of Duty: Black Ops II.
Elite in Black Ops II offers advanced player performance statistics, a clan management system, leaderboards for the zombie god mode, digital video entertainment, and social integration, mostly features that required a premium subscription in Modern Warfare 3. Call of Duty: Black Ops II downloadable content was released in standard DLC packs available for a nominal fee.
History
It was announced initially by The Wall Street Journal and was showcased at E3 2011 by Activision. The official in-depth reveal took place at Call of Duty: XP in September 2011.
The public beta was released on July 14, 2011 on the Xbox 360 exclusively for Black Ops. Invites for the PlayStation 3 version began being sent out on September 17, 2011. Call of Duty: Elite officially launched on November 8, 2011 to coincide with the release of Modern Warfare 3. |
https://en.wikipedia.org/wiki/Geodesic%20grid | A geodesic grid is a spatial grid based on a geodesic polyhedron or Goldberg polyhedron.
History
The earliest use of the (icosahedral) geodesic grid in geophysical modeling dates back to 1968 and the work by Sadourny, Arakawa and Mintz, and by Williamson. Later work expanded on this base.
Construction
A geodesic grid is a global Earth reference that uses triangular tiles based on the subdivision of a polyhedron (usually the icosahedron, and usually a Class I subdivision) to subdivide the surface of the Earth. Such a grid does not have a straightforward relationship to latitude and longitude, but conforms to many of the main criteria for a statistically valid discrete global grid. Primarily, the cells' area and shape are generally similar, especially near the poles where many other spatial grids have singularities or heavy distortion. The popular Quaternary Triangular Mesh (QTM) falls into this category.
Geodesic grids may use the dual polyhedron of the geodesic polyhedron, which is the Goldberg polyhedron. Goldberg polyhedra are made up of hexagons and (if based on the icosahedron) 12 pentagons. One implementation that uses an icosahedron as the base polyhedron, hexagonal cells, and the Snyder equal-area projection is known as the Icosahedron Snyder Equal Area (ISEA) grid.
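A minimal sketch of the Class I construction in Python: starting from an icosahedron (the vertex and face tables below are the standard icosahedron layout), each subdivision step splits every triangular face into four and pushes the new vertices out onto the unit sphere:

import numpy as np

def icosahedron():
    phi = (1 + 5 ** 0.5) / 2
    v = [(-1, phi, 0), (1, phi, 0), (-1, -phi, 0), (1, -phi, 0),
         (0, -1, phi), (0, 1, phi), (0, -1, -phi), (0, 1, -phi),
         (phi, 0, -1), (phi, 0, 1), (-phi, 0, -1), (-phi, 0, 1)]
    f = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
         (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
         (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
         (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return [np.array(p) / np.linalg.norm(p) for p in v], f

def subdivide(verts, faces):
    """One Class I step: each triangle becomes four; vertices stay on the sphere."""
    verts, cache, out = list(verts), {}, []
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = verts[i] + verts[j]
            verts.append(m / np.linalg.norm(m))   # project onto the unit sphere
            cache[key] = len(verts) - 1
        return cache[key]
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        out += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, out

verts, faces = icosahedron()
for _ in range(3):
    verts, faces = subdivide(verts, faces)
print(len(verts), "vertices,", len(faces), "nearly equal triangular cells")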
Applications
In biodiversity science, geodesic grids are a global extension of local discrete grids that are staked out in field studies to ensure appropriate statistical sampling and larger multi-use grids deployed at regional and national levels to develop an aggregated understanding of biodiversity. These grids translate environmental and ecological monitoring data from multiple spatial and temporal scales into assessments of current ecological condition and forecasts of risks to our natural resources. A geodesic grid allows local to global assimilation of ecologically significant information at its own level of granularity.
When modeling the weather, ocean circulation, or the climate, parti |
https://en.wikipedia.org/wiki/Alfalfa%20pests | Alfalfa pests, pests specifically linked to alfalfa by name, may refer to:
Insects
Blue alfalfa aphid (Acyrthosiphon kondoi)
Alfalfa bug (Piezodorus guildinii) a stink bug
Alfalfa plant bug (Adelphocoris lineolatus)
Alfalfa butterfly and alfalfa caterpillar (Colias eurytheme) a butterfly
Alfalfa looper (Autographa californica) a moth
Alfalfa moth (Cydia medicaginis) a moth
Alfalfa leaf tier (Dichomeris acuminata) a moth that rolls alfalfa leaves
Alfalfa webworm (Loxostege commixtalis) a moth
Alfalfa webworm (Loxostege cereralis) a moth
Alfalfa weevil (Hypera postica)
Other
Alfalfa cyst nematode (Heterodera medicaginis)
Alfalfa dodder (Cuscuta approximata) a parasitic plant
Large-seeded alfalfa dodder (Cuscuta campestris) a parasitic plant
See also
List of alfalfa diseases
Alfalfa leafcutter bee (Megachile rotundata) which pollinates alfalfa
Plant pathogens and diseases
Medicago |
https://en.wikipedia.org/wiki/Cisco%20Security%20Agent | Cisco Security Agent (CSA) was an endpoint intrusion prevention system software system made originally by Okena (formerly named StormWatch Agent), which was bought by Cisco Systems in 2003.
The software is rule-based, and it examines system activities and network traffic, determining which behaviours are normal and which may indicate an attack. CSA was offered as a replacement for Cisco IDS Host Sensor, which was announced end-of-life on 21 February 2003. This end-of-life action resulted from Cisco's acquisition of Okena, Inc., and the Cisco Security Agent product line based on the Okena technology would replace the Cisco IDS Host Sensor product line from Entercept.
As a result of this end-of-life action, Cisco offered a no-cost, one-for-one product replacement/migration program for all Cisco IDS Host Sensor customers to the new Cisco Security Agent product line. The intent of this program was to support existing IDS Host Sensor customers who choose to migrate to the new Cisco Security Agent product line.
All Cisco IDS Host Sensor customers were eligible for this migration program, whether or not the customer had purchased a Cisco Software Application Support (SAS) service contract for their Cisco IDS Host Sensor products.
CSA uses a two or three-tier client-server architecture. The Management Center 'MC' (or Management Console) contains the program logic. An MS SQL database backend is used to store alerts and configuration information. The MC and SQL database may be co-resident on the same system.
The agent is installed on the desktops and/or servers to be protected and communicates with the Management Center, sending logged events to the Management Center and receiving updates on rules when they occur.
A Network World article dated 17 December 2009 stated "Cisco hinted that it will end-of-life both CSA and MARS" (full article linked below).
On 11 June 2010, Cisco announced the end-of-life and end-of-sale of CSA. Cisco did not offer any replacement products.
|
https://en.wikipedia.org/wiki/Joseph%20Sgro | Joseph A. Sgro (born in San Diego, California) is an American mathematician, neurologist / neurophysiologist, and an engineering technologist / entrepreneur in the field of frame grabbers, high-speed cameras, smart cameras, image processors, computer vision, and machine vision and learning technologies.
Sgro began his career as an academic researcher in advanced mathematics and logic. He received an AB in Mathematics in 1970 from UCLA followed by an MA in mathematics in 1973 and a PhD in mathematics in 1975 from the University of Wisconsin, where he studied mathematical logic under H. Jerome Keisler who along with Jon Barwise and Kenneth Kunen formed his doctoral committee.
After serving as an instructor and postdoctoral fellow at Yale and also holding a membership at the Institute for Advanced Study in Princeton, New Jersey, Sgro returned to school to study neurology, and received his M.D. in 1980 from the Ph.D. to M.D. Program of the Leonard M. Miller School of Medicine at the University of Miami, followed by an internal medicine internship at UNC Memorial Hospital, residency in neurology, a fellowship, and faculty position in clinical neurophysiology at the Neurological Institute of New York.
As an outgrowth of his work in neurophysiology, while still working as a post-doctoral fellow and an assistant professor of neurology, Sgro founded Alacron, Inc. (formerly Corteks, Inc. until 1990) in 1985 to manufacture technologies relevant to his neurological research. In 1989 he commercialized this technology and began developing array processors, frame grabbers, and vision processors, and most recently supported advances in BSI and superlattice (delta) sensor doping technology. Extending his work in machine vision technology, in 2002 Sgro founded FastVision, LLC, a maker of smart cameras, as a subsidiary of Alacron, Inc. In 2016, FastVision, LLC was incorporated into Alacron, Inc.
Mathematical research
During his first year as a PhD candidate at the University |
https://en.wikipedia.org/wiki/Microtentacle | Microtentacles are microtubule-based membrane protrusions that occur in detached cells. They were discovered by scientists studying metastatic breast cancer cells at the University of Maryland, Baltimore.
These novel structures are distinct from classical actin-based extensions of adherent cells, persist for days in breast tumor lines that are resistant to apoptosis, and aid in reattachment to matrix or cell monolayers.
The formation of microtentacles (McTNs) in detached or circulating tumor cells may promote seeding of bloodborne metastatic disease. |
https://en.wikipedia.org/wiki/Google%20Scholar | Google Scholar is a freely accessible web search engine that indexes the full text or metadata of scholarly literature across an array of publishing formats and disciplines. Released in beta in November 2004, the Google Scholar index includes peer-reviewed online academic journals and books, conference papers, theses and dissertations, preprints, abstracts, technical reports, and other scholarly literature, including court opinions and patents.
Google Scholar uses a web crawler, or web robot, to identify files for inclusion in the search results. For content to be indexed in Google Scholar, it must meet certain specified criteria. An earlier statistical estimate published in PLOS One, using a mark-and-recapture method, estimated that Google Scholar covered approximately 79–90% of all articles published in English, or roughly 100 million documents. This estimate also determined how many documents were freely available on the internet. Google Scholar has been criticized for not vetting journals and for including predatory journals in its index.
The University of Michigan Library and other libraries whose collections Google scanned for Google Books and Google Scholar retained copies of the scans and have used them to create the HathiTrust Digital Library.
History
Google Scholar arose out of a discussion between Alex Verstak and Anurag Acharya, both of whom were then working on building Google's main web index. Their goal was to "make the world's problem solvers 10% more efficient" by allowing easier and more accurate access to scientific knowledge. This goal is reflected in the Google Scholar's advertising slogan "Stand on the shoulders of giants", which was taken from an idea attributed to Bernard of Chartres, quoted by Isaac Newton, and is a nod to the scholars who have contributed to their fields over the centuries, providing the foundation for new intellectual achievements. One of the original sources for the texts in Google Scholar is the University of Michigan's print collection.
|
https://en.wikipedia.org/wiki/Diamond-like%20carbon | Diamond-like carbon (DLC) is a class of amorphous carbon material that displays some of the typical properties of diamond. DLC is usually applied as coatings to other materials that could benefit from such properties.
DLC exists in seven different forms. All seven contain significant amounts of sp3 hybridized carbon atoms. The reason that there are different types is that even diamond can be found in two crystalline polytypes. The more common one uses a cubic lattice, while the less common one, lonsdaleite, has a hexagonal lattice. By mixing these polytypes at the nanoscale, DLC coatings can be made that at the same time are amorphous, flexible, and yet purely sp3 bonded "diamond". The hardest, strongest, and slickest is tetrahedral amorphous carbon (ta-C). Ta-C can be considered to be the "pure" form of DLC, since it consists almost entirely of sp3 bonded carbon atoms. Fillers such as hydrogen, graphitic sp2 carbon, and metals are used in the other 6 forms to reduce production expenses or to impart other desirable properties.
The various forms of DLC can be applied to almost any material that is compatible with a vacuum environment.
History
In 2006, the market for outsourced DLC coatings was estimated as about €30,000,000 in the European Union.
In 2011, researchers at Stanford University announced a super-hard amorphous diamond created under conditions of ultrahigh pressure. The material lacks the crystalline structure of diamond but has the light weight characteristic of carbon.
In 2021, Chinese researchers announced AM-III, a super-hard, fullerene-based form of amorphous carbon. It is also a semiconductor with a bandgap range of 1.5 to 2.2 eV. The material demonstrated a hardness of 113 GPa on the Vickers hardness test, versus diamond's roughly 70 to 100 GPa. It was hard enough to scratch the surface of a diamond.
Distinction from natural and synthetic diamond
Naturally occurring diamond is almost always found in the crystalline form with a purely cubic orientati |
https://en.wikipedia.org/wiki/Exchange%20spring%20media | Exchange spring media (also exchange coupled composite media or ECC) is a magnetic storage technology for hard disk drives that allows the storage density in magnetic recording to be increased. The idea, proposed in 2004 by Suess et al., is that the recording media consists of exchange coupled soft and hard magnetic layers. Exchange spring media allows good writability due to the write-assist nature of the soft layer. Hence, hard magnetic layers such as FePt, CoCrPt-alloys or hard magnetic multilayer structures can be written with conventional write heads. Due to the high anisotropy these grains are thermally stable even for small grain sizes. Small grain sizes are required for high density recording. The introduction of the soft layer does not decrease the thermal stability of the entire structure if the hard layer is sufficiently thick. The required thickness of the hard layer for best thermal stability is the exchange length of the hard layer material.
The first experimental realization of exchange spring media was done on Co-PdSiO multilayers as the hard layer which was coupled via a PdSi interlayer to a FeSiO soft layer.
Besides the improved writeability, another advantage of exchange spring media is, that the switching field distribution of the grains, which has to be as small as possible to allow for high storage densities, can be decreased. This effect was predicted theoretically and experimentally verified on Co/Pd multilayers as hard layer coupled to Co/Ni multilayers as soft layer.
In commercial hard disks, exchange spring media has been used since about 2007.
See also
Heat-assisted magnetic recording — Another technology to improve the writeability of high coercive materials such as FePt
Patterned media
Shingled magnetic recording |
https://en.wikipedia.org/wiki/Polymerase%20cycling%20assembly | Polymerase cycling assembly (or PCA, also known as Assembly PCR) is a method for the assembly of large DNA oligonucleotides from shorter fragments. The process uses the same technology as PCR, but takes advantage of DNA hybridization and annealing as well as DNA polymerase to amplify a complete sequence of DNA in a precise order based on the single stranded oligonucleotides used in the process. It thus allows for the production of synthetic genes and even entire synthetic genomes.
PCA principles
Much like how primers are designed such that there is a forward primer and a reverse primer capable of allowing DNA polymerase to fill the entire template sequence, PCA uses the same technology but with multiple oligonucleotides. While in PCR the customary size of oligonucleotides used is 18 base pairs, in PCA lengths of up to 50 are used to ensure uniqueness and correct hybridization.
Each oligonucleotide is designed to be either part of the top or bottom strand of the target sequence. As well as the basic requirement of having to be able to tile the entire target sequence, these oligonucleotides must also have the usual properties of similar melting temperatures, hairpin free, and not too GC rich to avoid the same complications as PCR.
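A toy design pass under these constraints might look like the following Python sketch (the 50-nt length, 25-nt overlap, GC bounds and target sequence are illustrative assumptions; real designs would also check melting temperatures and hairpins):

OLIGO, STEP = 50, 25            # 50-nt oligos, 25-nt overlap with each neighbour

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def design_oligos(target):
    oligos = []
    for n, i in enumerate(range(0, len(target) - OLIGO + 1, STEP)):
        chunk = target[i:i + OLIGO]
        # Even-numbered oligos lie on the top strand, odd-numbered ones on the
        # bottom strand, so neighbours can anneal across their 25-nt overlaps.
        oligos.append(chunk if n % 2 == 0 else revcomp(chunk))
    return oligos

def gc_fraction(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

target = "ATGGCTAGTC" * 25      # hypothetical 250-bp target sequence
for oligo in design_oligos(target):
    assert 0.2 <= gc_fraction(oligo) <= 0.8   # crude GC-content screen
print(len(design_oligos(target)), "oligonucleotides")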
During the polymerase cycles, the oligonucleotides anneal to complementary fragments and then are filled in by polymerase. Each cycle thus increases the length of various fragments randomly depending on which oligonucleotides find each other. It is critical that there is complementarity between all the fragments in some way or a final complete sequence will not be produced as polymerase requires a template to follow.
After this initial construction phase, additional primers encompassing both ends are added to perform a regular PCR reaction, amplifying the target sequence away from all the shorter incomplete fragments. A gel purification can then be used to identify and isolate the complete sequence.
Typical reaction
A typical reacti |
https://en.wikipedia.org/wiki/Conservation%20of%20mass | In physics and chemistry, the law of conservation of mass or principle of mass conservation states that for any system closed to all transfers of matter and energy, the mass of the system must remain constant over time, since mass can neither be added to nor removed from such a system. Therefore, the quantity of mass is conserved over time.
The law implies that mass can neither be created nor destroyed, although it may be rearranged in space, or the entities associated with it may be changed in form. For example, in chemical reactions, the mass of the chemical components before the reaction is equal to the mass of the components after the reaction. Thus, during any chemical reaction and low-energy thermodynamic processes in an isolated system, the total mass of the reactants, or starting materials, must be equal to the mass of the products.
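As a worked arithmetic example of this balance, the small Python check below compares reactant and product masses for the combustion of methane, CH4 + 2 O2 → CO2 + 2 H2O, using approximate molar masses:

M = {"C": 12.011, "H": 1.008, "O": 15.999}   # approximate molar masses, g/mol

def molar_mass(counts):
    # counts maps an element symbol to the number of atoms in the molecule
    return sum(M[el] * n for el, n in counts.items())

reactants = molar_mass({"C": 1, "H": 4}) + 2 * molar_mass({"O": 2})
products  = molar_mass({"C": 1, "O": 2}) + 2 * molar_mass({"H": 2, "O": 1})
assert abs(reactants - products) < 1e-9      # both sides: about 80.04 g/mol
print(f"in: {reactants:.3f} g/mol, out: {products:.3f} g/mol")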
The concept of mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. Historically, mass conservation in chemical reactions was primarily demonstrated in the 17th century and finally confirmed by Antoine Lavoisier in the late 18th century. The formulation of this law was of crucial importance in the progress from alchemy to the modern natural science of chemistry.
In reality, the conservation of mass only holds approximately and is considered part of a series of assumptions in classical mechanics. The law has to be modified to comply with the laws of quantum mechanics and special relativity under the principle of mass–energy equivalence, which states that energy and mass form one conserved quantity. For very energetic systems, conservation of mass alone is shown not to hold, as is the case in nuclear reactions and particle–antiparticle annihilation in particle physics.
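A small worked example shows why the relativistic correction is invisible in everyday chemistry. Using the roughly 890 kJ released per mole of methane burned (an approximate textbook value; the sketch is illustrative), the mass equivalent of the released energy is about one part in ten billion of the reacting mass.

```python
# E = m * c^2 implies delta_m = delta_E / c^2: the mass carried away
# by the energy released in an ordinary chemical reaction is tiny.

C = 299_792_458.0          # speed of light, m/s
delta_E = 890e3            # J released per mole of CH4 burned (approx.)
delta_m = delta_E / C**2   # kg of mass equivalent of the released energy

print(delta_m)             # ~9.9e-12 kg
print(delta_m / 0.080)     # fraction of the ~80 g of reactants, ~1.2e-10
```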
Mass is also not generally conserved in open systems. Such is the case when various forms of energy and matter are allowed into, or out of, the system. However, unless radioactivity or nuclear r |
https://en.wikipedia.org/wiki/Non-linear%20preferential%20attachment | In network science, preferential attachment means that nodes of a network tend to connect to those nodes which have more links. If the network is growing and new nodes tend to connect to existing ones with probability linear in the degree of the existing nodes, then preferential attachment leads to a scale-free network. If this probability is sub-linear, the network's degree distribution is stretched exponential and hubs are much smaller than in a scale-free network. If it is super-linear, almost all nodes are connected to a few hubs. According to Kunegis, Blattner, and Moser, several online networks follow a non-linear preferential attachment model: communication networks and online contact networks are sub-linear, while interaction networks are super-linear. The co-author network among scientists also shows signs of sub-linear preferential attachment.
Types of preferential attachment
For simplicity it can be assumed that the probability with which a new node connects to an existing one follows a power function of the existing node's degree k:

π(k) ∝ k^α,

where α > 0. This is a good approximation for many real networks, such as the Internet, the citation network, and the actor network. If α = 1 the preferential attachment is linear; if α < 1 it is sub-linear, and if α > 1 it is super-linear.
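The three regimes can be illustrated with a minimal growth simulation. This sketch is not from the article: grow_network is a hypothetical name, and the one-edge-per-new-node growth rule and small network sizes are simplifying assumptions to keep the pure-Python code fast.

```python
# Grow a network where each new node attaches to one existing node
# chosen with probability proportional to degree**alpha.

import random

def grow_network(n_nodes: int, alpha: float, seed: int = 0):
    """Return the degree sequence of the grown network."""
    rng = random.Random(seed)
    degree = [1, 1]                      # two initial connected nodes
    while len(degree) < n_nodes:
        weights = [k ** alpha for k in degree]
        target = rng.choices(range(len(degree)), weights=weights)[0]
        degree[target] += 1              # chosen node gains a link
        degree.append(1)                 # newcomer arrives with one link
    return degree

# Sub-linear, linear, and super-linear attachment: the largest hub
# grows dramatically with alpha.
for alpha in (0.5, 1.0, 1.5):
    print(alpha, max(grow_network(2_000, alpha)))
```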
In measuring preferential attachment from real networks, the log-linear functional form k^α above can be relaxed to a free-form function: that is, π(k) can be measured for each k without any assumption about the functional form of π(k). This approach is more flexible and allows non-log-linear preferential attachment to be discovered in real networks.
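The sketch below illustrates this free-form measurement on a simulated network in the spirit of empirical kernel-measurement studies; measure_kernel, hits, and exposure are hypothetical names, and the estimate is only approximate because the normalization of the attachment probability varies as the network grows. Each arriving link is credited to the degree its target had at that moment, normalized by how long nodes spent at that degree.

```python
# Estimate the attachment kernel pi(k) from a simulated growing network:
# hits[k] / exposure[k] approximates the relative attachment rate at
# degree k, with no functional form assumed.

import random
from collections import Counter

def measure_kernel(n_nodes: int, alpha: float, seed: int = 0):
    rng = random.Random(seed)
    degree = [1, 1]
    hits = Counter()        # new links received by nodes while at degree k
    exposure = Counter()    # node-steps spent at degree k
    while len(degree) < n_nodes:
        for k in degree:
            exposure[k] += 1
        weights = [k ** alpha for k in degree]
        target = rng.choices(range(len(degree)), weights=weights)[0]
        hits[degree[target]] += 1
        degree[target] += 1
        degree.append(1)
    return {k: hits[k] / exposure[k] for k in sorted(hits)}

# For alpha = 1 the measured kernel should grow roughly linearly in k:
kernel = measure_kernel(2_000, alpha=1.0)
print([(k, round(v, 5)) for k, v in list(kernel.items())[:5]])
```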
Sub-linear preferential attachment
In this case new nodes still tend to connect to nodes of higher degree, but the effect is weaker than under linear preferential attachment. There are fewer hubs, and they are smaller than in a scale-free network. The size o
https://en.wikipedia.org/wiki/De%20novo%20gene%20birth | De novo gene birth is the process by which new genes evolve from DNA sequences that were ancestrally non-genic. De novo genes represent a subset of novel genes, and may be protein-coding or instead act as RNA genes. The processes that govern de novo gene birth are not well understood, although several models exist that describe possible mechanisms by which de novo gene birth may occur.
Although de novo gene birth may have occurred at any point in an organism's evolutionary history, ancient de novo gene birth events are difficult to detect. Most studies of de novo genes to date have thus focused on young genes, typically taxonomically restricted genes (TRGs) that are present in a single species or lineage, including so-called orphan genes, defined as genes that lack any identifiable homolog. It is important to note, however, that not all orphan genes arise de novo, and instead may emerge through fairly well characterized mechanisms such as gene duplication (including retroposition) or horizontal gene transfer followed by sequence divergence, or by gene fission/fusion.
Although de novo gene birth was once viewed as a highly unlikely occurrence, several unequivocal examples have now been described, and some researchers speculate that de novo gene birth could play a major role in evolutionary innovation.
History
As early as the 1930s, J. B. S. Haldane and others suggested that copies of existing genes may lead to new genes with novel functions. In 1970, Susumu Ohno published the seminal text Evolution by Gene Duplication. For some time subsequently, the consensus view was that virtually all genes were derived from ancestral genes, with François Jacob famously remarking in a 1977 essay that "the probability that a functional protein would appear de novo by random association of amino acids is practically zero."
In the same year, however, Pierre-Paul Grassé coined the term "overprinting" to describe the emergence of genes through the expression of alternative open re |