source | text |
|---|---|
https://en.wikipedia.org/wiki/Global%20Ocean%20Observing%20System | The Global Ocean Observing System (GOOS) is a global system for sustained observations of the ocean comprising the oceanographic component of the Global Earth Observation System of Systems (GEOSS). GOOS is administered by the Intergovernmental Oceanographic Commission (IOC), and joins the Global Climate Observing System (GCOS) and the Global Terrestrial Observing System (GTOS) as fundamental building blocks of the GEOSS.
GOOS is a platform for:
International cooperation for sustained observations of the oceans.
Generation of oceanographic products and services.
Interaction between research, operational, and user communities.
GOOS serves oceanographic researchers, coastal managers, parties to international conventions, national meteorological and oceanographic agencies, hydrographic offices, marine and coastal industries, policymakers, and the interested general public.
GOOS is sponsored by the IOC, UNEP, WMO, and ICSU. It is implemented by member states via their government agencies, navies and oceanographic research institutions working together in a wide range of thematic panels and regional alliances.
The GOOS Scientific Steering Committee provides guidance, while Scientific and Technical Panels evaluate Essential Ocean Variable observation systems. The secretariat director from 2004 to 2011 was Keith Alverson; from 2011 to 2022 it was Albert Fischer.
Essential ocean variables
Essential Ocean Variables are a collection of ocean properties selected to provide the best, most cost-effective suite of data for quantifying key ocean processes. They are selected based on their relevance, feasibility, and cost-effectiveness. They fall into four categories: physics, biogeochemistry, ecosystems, and cross-disciplinary. Their consistent usage is promoted by programmes such as GOOS and the Southern Ocean Observing System (SOOS). The EOVs are:
Physics
Sea state
Ocean surface stress
Sea ice
Sea surface height
Sea surface temperature
Subsurface temperature |
https://en.wikipedia.org/wiki/CoNTub | CoNTub is a software project written in Java which runs on Windows, Mac OS X, Linux and Unix operating systems through any Java-enabled web browser. It is the first implementation of an algorithm for generating 3D structures of arbitrary carbon nanotube connections by means of the placement of non-hexagonal (pentagonal or heptagonal) rings, also referred to as defects or disclinations.
The software is a set of tools dedicated to the construction of complex carbon nanotube structures for use in computational chemistry. CoNTub 1.0[1] was the first implementation for building these complex structures and included nanotube heterojunctions, while CoNTub 2.0[2] is mainly devoted to three-nanotube junctions. Its aim is to help in the design and research about new nanotube-based devices. CoNTub is based on the strip algebra, and is able to find the unique structure for connecting two specific and arbitrary carbon nanotubes and many of the possible three-tube junctions.
CoNTub generates the geometry of various types of nanotube junctions, i.e., nanotube heterojunctions and three-nanotube junctions, including also single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs).
Although the current version of CoNTub is v2.0, it does not supersede v1.0, as v2.0 is currently dedicated only to three-nanotube junctions, although the incorporation of v1.0 functionality into v2.0 is planned. Nanotube heterojunctions can be generated only with v1.0.
CoNTub v1.0 is organized in five tabbed panels[1], the first three being dedicated to structure generation, the fourth to the output in PDB format, and the fifth containing a short help section.
CoNTub v2.0 has undergone a major redesign: the panes have been removed and a conventional menu bar has been added, where the type of structure to be generated can be chosen. Although the menu item for heterojunction generation appears in the menu, it is disabled, so NTHJs can only be generated with v1.0 |
https://en.wikipedia.org/wiki/Microtubule%20nucleation | In cell biology, microtubule nucleation is the event that initiates de novo formation of microtubules (MTs). These filaments of the cytoskeleton typically form through polymerization of α- and β-tubulin dimers, the basic building blocks of the microtubule, which initially interact to nucleate a seed from which the filament elongates.
Microtubule nucleation occurs spontaneously in vitro, with solutions of purified tubulin giving rise to full-length polymers. The tubulin dimers that make up the polymers have an intrinsic capacity to self-aggregate and assemble into cylindrical tubes, provided there is an adequate supply of GTP. The kinetic barriers of such a process, however, mean that the rate at which microtubules spontaneously nucleate is relatively low.
Role of γ-tubulin and the γ-tubulin ring complex (γ-TuRC)
In vivo, cells get around this kinetic barrier by using various proteins to aid microtubule nucleation. The primary pathway by which microtubule nucleation is assisted requires the action of a third type of tubulin, γ-tubulin, which is distinct from the α and β subunits that compose the microtubules themselves. The γ-tubulin combines with several other associated proteins to form a conical structure known as the γ-tubulin ring complex (γ-TuRC). This complex, with its 13-fold symmetry, acts as a scaffold or template for α/β tubulin dimers during the nucleation process—speeding up the assembly of the ring of 13 protofilaments that make up the growing microtubule. The γ-TuRC also acts as a cap of the (−) end while the microtubule continues growth from its (+) end. This cap provides both stability and protection to the microtubule (-) end from enzymes that could lead to its depolymerization, while also inhibiting (-) end growth.
MT Nucleation from Microtubule Organizing Centers (MTOCs)
The γ-TuRC is typically found as the core functional unit in a microtubule organizing center (MTOC), such as the centrosome in some animal cells or the spindle pole bodies i |
https://en.wikipedia.org/wiki/Membrane%20lipid | Membrane lipids are a group of compounds (structurally similar to fats and oils) which form the lipid bilayer of the cell membrane. The three major classes of membrane lipids are phospholipids, glycolipids, and cholesterol. Lipids are amphiphilic: they have one end that is soluble in water ('polar') and another that is soluble in fat ('nonpolar'). By forming a double layer with the polar ends pointing outwards and the nonpolar ends pointing inwards, membrane lipids can form a 'lipid bilayer' which keeps the watery interior of the cell separate from the watery exterior. The arrangements of lipids and various proteins, acting as receptors and channel pores in the membrane, control the entry and exit of other molecules and ions as part of the cell's metabolism. To perform their physiological functions, membrane proteins can rotate and diffuse laterally in the two-dimensional expanse of the lipid bilayer, facilitated by a shell of lipids closely attached to the protein surface, called the annular lipid shell.
Biological roles
The bilayer formed by membrane lipids serves as a containment unit of a living cell. Membrane lipids also form a matrix in which membrane proteins reside. Historically, lipids were thought to serve a merely structural role. In fact, lipids have many functional roles: they serve as regulatory agents in cell growth and adhesion, they participate in the biosynthesis of other biomolecules, and they can increase the catalytic activity of enzymes.
Non-bilayer-forming lipids like monogalactosyl diglyceride (MGDG) predominate among the bulk lipids in thylakoid membranes; when hydrated alone, MGDG forms a reverse hexagonal cylindrical phase. However, in combination with the other lipids and the carotenoids/chlorophylls of thylakoid membranes, it too conforms to a lipid bilayer.
Major classes
Phospholipids
Phospholipids and glycolipids consist of two long, nonpolar (hydrophobic) hydrocarbon chains linked to a hydrophilic head group.
The heads of |
https://en.wikipedia.org/wiki/Rusi%20Taleyarkhan | Rusi P. Taleyarkhan is a nuclear engineer and former academic fraudster who has been a faculty member in the Department of Nuclear Engineering at Purdue University since 2003. Prior to that, he was on staff at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. He obtained his Bachelor of Technology degree in mechanical engineering from the Indian Institute of Technology, Madras in 1977 and MS and PhD (Nuclear Engineering and Science) degrees from Rensselaer Polytechnic Institute (RPI) in 1978 and 1982 respectively. He also holds an MBA (Business Administration) from RPI.
In 2008, he was judged guilty of research misconduct for "falsification of the research record" by a Purdue review board.
Sonofusion work and controversy
In 2002, while a senior scientist at ORNL, Taleyarkhan published a paper on fusion achieved by bombarding a container of liquid solvent with strong ultrasonic vibrations, a process known as sonofusion or bubble fusion. In theory, the vibrations collapsed gas bubbles in the solvent, heating them to temperatures high enough to fuse hydrogen atoms and release energy. Following his move from Oak Ridge to Purdue in 2003, Taleyarkhan published additional papers about his research in this area.
Numerous other scientists, however, were unable to replicate Taleyarkhan's work. Failed replication attempts were published in Physical Review Letters by groups from the University of Göttingen, UCLA, and the University of Illinois, and by former colleagues at Oak Ridge National Laboratory, as well as in a study funded by the Office of Naval Research at the University of California.
Taleyarkhan's results were reportedly repeated by Edward Forringer of LeTourneau University in Taleyarkhan's own labs at Purdue in November 2006. Purdue decided at that time not to further investigate the initial narrowly defined charges of misconduct against Taleyarkhan made by other members of the Purdue Faculty.
The Chronicle of Higher Education, however, has noted some problems with the verificat |
https://en.wikipedia.org/wiki/Relativistic%20electron%20beam | Relativistic electron beams are streams of electrons moving at relativistic speeds. They serve as the lasing medium in free-electron lasers intended for atmospheric research at institutions such as the Pan-oceanic Environmental and Atmospheric Research Laboratory (PEARL) at the University of Hawaii and NASA.
It has been suggested that relativistic electron beams could be used to heat and accelerate the reaction mass in electrical rocket engines that Dr. Robert W. Bussard called quiet electric-discharge engines (QEDs). |
https://en.wikipedia.org/wiki/Beatty%20sequence | In mathematics, a Beatty sequence (or homogeneous Beatty sequence) is the sequence of integers found by taking the floor of the positive multiples of a positive irrational number. Beatty sequences are named after Samuel Beatty, who wrote about them in 1926.
Rayleigh's theorem, named after Lord Rayleigh, states that the complement of a Beatty sequence, consisting of the positive integers that are not in the sequence, is itself a Beatty sequence generated by a different irrational number.
Beatty sequences can also be used to generate Sturmian words.
Definition
Any irrational number $r$ that is greater than one generates the Beatty sequence
$\mathcal{B}_r = \big(\lfloor r \rfloor, \lfloor 2r \rfloor, \lfloor 3r \rfloor, \ldots\big)$
The two irrational numbers $r$ and $s = r/(r-1)$ naturally satisfy the equation $\frac{1}{r} + \frac{1}{s} = 1$.
The two Beatty sequences $\mathcal{B}_r$ and $\mathcal{B}_s$ that they generate form a pair of complementary Beatty sequences. Here, "complementary" means that every positive integer belongs to exactly one of these two sequences.
Examples
When $r$ is the golden ratio $\varphi = (1+\sqrt{5})/2 \approx 1.618$, the complementary Beatty sequence is generated by $s = \varphi + 1 = \varphi^2 \approx 2.618$. In this case, the sequence $(\lfloor n\varphi \rfloor)$, known as the lower Wythoff sequence, is
1, 3, 4, 6, 8, 9, 11, 12, 14, 16, 17, 19, 21, ...
and the complementary sequence $(\lfloor n\varphi^2 \rfloor)$, the upper Wythoff sequence, is
2, 5, 7, 10, 13, 15, 18, 20, 23, 26, 28, 31, ...
These sequences define the optimal strategy for Wythoff's game, and are used in the definition of the Wythoff array.
As another example, for the square root of 2, $r = \sqrt{2}$ and $s = 2 + \sqrt{2}$. In this case, the sequences are
$\lfloor n\sqrt{2} \rfloor$ = 1, 2, 4, 5, 7, 8, 9, 11, 12, 14, ... and
$\lfloor n(2+\sqrt{2}) \rfloor$ = 3, 6, 10, 13, 17, 20, 23, 27, ...
For $r = \pi$ and $s = \pi/(\pi-1) \approx 1.467$, the sequences are
$\lfloor n\pi \rfloor$ = 3, 6, 9, 12, 15, 18, 21, 25, ... and
$\lfloor n\pi/(\pi-1) \rfloor$ = 1, 2, 4, 5, 7, 8, 10, 11, 13, ...
Any number in the first sequence is absent in the second, and vice versa.
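A minimal Python sketch of these definitions (the chosen $r$ and the partition check are illustrative), generating a complementary pair and verifying Rayleigh's theorem numerically:

```python
import math

def beatty(r, n):
    """First n terms of the Beatty sequence floor(r), floor(2r), ..., for r > 1."""
    return [math.floor(k * r) for k in range(1, n + 1)]

# Complementary pair: r and s = r/(r-1) satisfy 1/r + 1/s = 1.
r = math.sqrt(2)
s = r / (r - 1)                       # equals 2 + sqrt(2)
merged = sorted(beatty(r, 20) + beatty(s, 20))
# Every positive integer up to 28 appears exactly once across the two sequences.
print(merged[:28] == list(range(1, 29)))  # True
```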
History
Beatty sequences got their name from the problem posed in The American Mathematical Monthly by Samuel Beatty in 1926. It is probably one of the most often cited problems ever posed in the Monthly. However, even earlier, in 1894 such sequences were briefly mentioned by Lord Rayleigh in the second edition of his book The Theory of Sound.
Rayleigh theorem
Rayleigh's theorem (also known as Beatty's theorem) states that given an irrational number $r > 1$, there exists $s > 1$ so that the Beatty sequences $\mathcal{B}_r$ and $\mathcal{B}_s$ partition the set of positive integers: each positive integer belongs to exactly one of the two sequences. |
https://en.wikipedia.org/wiki/QPPB | The QoS Policy Propagation via BGP, often abbreviated to QPPB, is a mechanism that allows propagation of quality of service (QoS) policy and classification by the sending party based on access lists, community lists, and autonomous system paths in the Border Gateway Protocol (BGP), thus helping to classify based on destination instead of source address.
See also
Computer network
Traffic engineering (telecommunications)
External links
ASR9000/XR: Implementing QOS policy propagation for BGP (QPPB)
Internet architecture |
https://en.wikipedia.org/wiki/Home%20server | A home server is a computing server located in a private computing residence providing services to other devices inside or outside the household through a home network or the Internet. Such services may include file and printer serving, media center serving, home automation control, web serving (on the network or Internet), web caching, file sharing and synchronization, video surveillance and digital video recorder, calendar and contact sharing and synchronization, account authentication, and backup services.
Because of the relatively low number of computers on a typical home network, a home server commonly does not require significant computing power. Home servers can be implemented do-it-yourself style with a re-purposed, older computer, or a plug computer; pre-configured commercial home server appliances are also available. An uninterruptible power supply is sometimes used in case of power outages that can possibly corrupt data.
Services provided by home servers
Administration and configuration
Home servers often run headless, and can be administered remotely through a command shell, or graphically through a remote desktop system such as RDP, VNC, Webmin, Apple Remote Desktop, or many others.
Some home server operating systems (such as Windows Home Server) include a consumer-focused graphical user interface (GUI) for setup and configuration that is available on home computers on the home network (and remotely over the Internet via remote access). Others simply enable users to use native operating system tools for configuration.
Centralized storage
Home servers often act as network-attached storage (NAS) providing the major benefit that all users' files can be centrally and securely stored, with flexible permissions applied to them. Such files can be easily accessed from any other system on the network, provided the correct credentials are supplied. This also applies to shared printers.
Such files can also be shared over the Internet to be accessible from a |
https://en.wikipedia.org/wiki/Acyl-CoA | Acyl-CoA is a group of coenzymes that metabolize fatty acids. Acyl-CoAs are susceptible to beta oxidation, forming, ultimately, acetyl-CoA. The acetyl-CoA enters the citric acid cycle, eventually forming several equivalents of ATP. In this way, fats are converted to ATP, the universal biochemical energy carrier.
Functions
Fatty acid activation
Fats are broken down by conversion to acyl-CoA. This conversion is one response to high energy demands such as exercise.
The oxidative degradation of fatty acids is a two-step process, catalyzed by acyl-CoA synthetase. Fatty acids are converted to their acyl phosphate, the precursor to acyl-CoA. The latter conversion is mediated by acyl-CoA synthase:
acyl-P + HS-CoA → acyl-S-CoA + Pi + H+
Three types of acyl-CoA synthases are employed, depending on the chain length of the fatty acid. For example, the substrates for medium chain acyl-CoA synthase are 4-11 carbon fatty acids. The enzyme acyl-CoA thioesterase hydrolyzes the acyl-CoA to form a free fatty acid and coenzyme A.
Beta Oxidation of Acyl-CoA
The second step of fatty acid degradation is beta oxidation. Beta oxidation occurs in mitochondria. After formation in the cytosol, acyl-CoA is transported into the mitochondria, the locus of beta oxidation. Transport of acyl-CoA into the mitochondria requires carnitine palmitoyltransferase 1 (CPT1), which converts acyl-CoA into acylcarnitine, which gets transported into the mitochondrial matrix. Once in the matrix, acylcarnitine is converted back to acyl-CoA by CPT2. Beta oxidation can begin once acyl-CoA is in the mitochondrial matrix.
Beta oxidation of acyl-CoA occurs in four steps.
1. Acyl-CoA dehydrogenase catalyzes dehydrogenation of the acyl-CoA, creating a double bond between the alpha and beta carbons. FAD is the hydrogen acceptor, yielding FADH2.
2. Enoyl-CoA hydratase catalyzes the addition of water across the newly formed double bond to make an alcohol.
3. 3-hydroxyacyl-CoA dehydrogenase oxi |
https://en.wikipedia.org/wiki/The%20Stone%20%28video%20game%29 | The Stone is an online game developed by web company Abject Modernity Internet Creations Inc. in 1995. The mystery game was created in 1996 but launched as a consumer product in 1997. People had to buy a physical stone containing the login credentials to the website, which was unheard of at the time. In 1999, The Stone was profiled by Forbes magazine.
"Stoners", a film about The Stone, was released by Rod Bruinooge and Scott Jaworski in September 2004. It covered the activities of the internet/online gaming community that emerged around The Stone. Pink Floyd provided the soundtrack to the film, with all music taken from The Division Bell Album.
Gameplay
Puzzles of The Stone are located in a place called The Immediate. There are a total of 216 Stone puzzles, grouped into 6 categories, each category having 6 different levels of difficulty. Once all the puzzles are solved, The Stone is said to unlock an ancient mystery called the Enigma. The secret of The Stone is kept by the Stonekeepers.
A player of The Stone is often referred to as a stoner. When trying to solve a certain Stone puzzle, a stoner may go to a place called The Commons and ask for a nudge (i.e., a hint) from other stoners who have already solved that particular puzzle. Once a stoner has solved all available Stone puzzles, he or she is allowed into the Sisyphus Lodge.
Championship tournament
The Stone Championship Tournament, also known as the Final Six Tournament, began September 30, 2005. The first Stone player to solve all of the six final puzzles would be crowned Champion of The Stone and would immediately be granted the status of Stonekeeper. The tournament was won August 11, 2007 by the Stone players "Gary_" and "cinnabar." The solution to the Enigma was discovered August 22, 2007 by Stone player "grissy".
Subsequent to the completion of the Final Six Tournament, the tournament puzzles were opened to all the remaining Stone players as the Final Six Redux, featuring slightly different answers |
https://en.wikipedia.org/wiki/Poshlib | Posh is a software framework used in cross-platform software development. It was created by Brian Hook. It is BSD licensed and, as of its most recent release, is at version 1.3.002.
The Posh software framework provides a header file and an optional C source file.
Posh does not provide alternatives where a host platform does not offer a feature, but informs through preprocessor macros what is supported and what is not. It sets macros to assist in compiling with various compilers (such as GCC, MSVC and OpenWatcom), and different host endiannesses. In its simplest form, only a single header file is required. In the optional C source file, there are functions for byte swapping and in-memory serialisation/deserialisation.
Brian Hook also created SAL (Simple Audio Library) that utilises Posh. Both are featured in his book "Write Portable Code". Posh is also used in Ferret and Vega Strike.
See also
libslack
Simple DirectMedia Layer (SDL) |
https://en.wikipedia.org/wiki/Department%20of%20Biochemistry%2C%20University%20of%20Oxford | The Department of Biochemistry of Oxford University is located in the Science Area in Oxford, England. It is one of the largest biochemistry departments in Europe. The Biochemistry Department is part of the University of Oxford's Medical Sciences Division, the largest of the university's four academic divisions, which has been ranked first in the world for biomedicine.
History
The Department of Biochemistry at Oxford University began as the physiological chemistry section of the Physiology Department, and acquired its own separate department and building in the 1920s. In 1920, Benjamin Moore was elected to the position of the Whitley Professor of Biochemistry, the newly established Chair of Biochemistry at Oxford University. He was followed by Rudolph Peters in 1923, and an endowment of £75,000 was soon granted by the Rockefeller Foundation for the construction of a new departmental building, purchase of its equipment, and its maintenance. The Biochemistry Department building opened in 1927.
In 1954, Hans Krebs was appointed the Whitley Chair of Biochemistry, and his appointment brought greater prominence to the department. He brought with him the Medical Research Council unit established to conduct research on cell metabolism. In 1955, a second professorship in the department, the Iveagh Chair of Microbiology, was established with funding from Guinness and the sub-department of Microbiology created, with Donald Woods its first holder. The eight-storey Hans Krebs Building was constructed in 1964 with funds from the Rockefeller Foundation. Krebs was succeeded by Rodney Porter in 1967. Genetics was brought into the Biochemistry Department when Walter Bodmer was appointed the first Professor of Genetics in 1970. The Laboratory of Molecular Biophysics, first established in the Zoology Department with support from Krebs and also linked to the Physical Chemistry Laboratory of the Chemistry Department, became part of the Biochemistry Department. It moved into the |
https://en.wikipedia.org/wiki/Isotomic%20conjugate | In geometry, the isotomic conjugate of a point $P$ with respect to a triangle $ABC$ is another point, defined in a specific way from $P$ and $ABC$: If the base points of the lines $PA$, $PB$, $PC$ on the sides opposite $A$, $B$, $C$ are reflected about the midpoints of their respective sides, the resulting lines intersect at the isotomic conjugate of $P$.
Construction
We assume that $P$ is not collinear with any two vertices of $\triangle ABC$. Let $A'$, $B'$, $C'$ be the points in which the lines $AP$, $BP$, $CP$ meet sidelines $BC$, $CA$, $AB$ (extended if necessary). Reflecting $A'$, $B'$, $C'$ in the midpoints of sides $BC$, $CA$, $AB$ will give points $A''$, $B''$, $C''$ respectively. The isotomic lines $AA''$, $BB''$, $CC''$ joining these new points to the vertices meet at a point (which can be proved using Ceva's theorem), the isotomic conjugate of $P$.
Coordinates
If the trilinears for $P$ are $p : q : r$, then the trilinears for the isotomic conjugate of $P$ are
$a^{-2}p^{-1} : b^{-2}q^{-1} : c^{-2}r^{-1},$
where $a$, $b$, $c$ are the side lengths opposite vertices $A$, $B$, $C$ respectively.
Properties
The isotomic conjugate of the centroid of triangle is the centroid itself.
The isotomic conjugate of the symmedian point is the third Brocard point, and the isotomic conjugate of the Gergonne point is the Nagel point.
Isotomic conjugates of lines are circumconics, and conversely, isotomic conjugates of circumconics are lines. (This property holds for isogonal conjugates as well.)
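In barycentric coordinates the same operation is coordinate-wise reciprocation: the isotomic conjugate of $x : y : z$ is $x^{-1} : y^{-1} : z^{-1}$ (equivalently $yz : zx : xy$). A minimal Python sketch, with the centroid as a check against the property above:

```python
def isotomic_conjugate(x, y, z):
    """Isotomic conjugate in barycentric coordinates; the point must not
    lie on a sideline of the triangle (no coordinate may be zero)."""
    return (1.0 / x, 1.0 / y, 1.0 / z)

# The centroid 1 : 1 : 1 maps to itself, as stated in the properties above.
print(isotomic_conjugate(1.0, 1.0, 1.0))  # (1.0, 1.0, 1.0)
```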
See also
Isogonal conjugate
Triangle center |
https://en.wikipedia.org/wiki/G%C3%B6mb%C3%B6c | The gömböc is the first known physical example of a class of convex three-dimensional homogeneous bodies, called mono-monostatic, which, when resting on a flat surface have just one stable and one unstable point of equilibrium. The existence of this class was conjectured by the Russian mathematician Vladimir Arnold in 1995 and proven in 2006 by the Hungarian scientists Gábor Domokos and Péter Várkonyi by constructing at first a mathematical example and subsequently a physical example. Mono-monostatic shapes exist in countless varieties, most of which are close to a sphere, with a stringent shape tolerance (about one part in a thousand).
The gömböc is the first mono-monostatic shape which has been constructed physically. It has a sharpened top, as shown in the photo. Its shape helped to explain the body structure of some tortoises in relation to their ability to return to an equilibrium position after being placed upside down. Copies of the gömböc have been donated to institutions and museums, and the largest one was presented at the World Expo 2010 in Shanghai, China.
Name
If analyzed quantitatively in terms of flatness and thickness, the discovered mono-monostatic bodies are the most sphere-like, apart from the sphere itself. Because of this, the first physical example was named gömböc, a diminutive form of gömb ("sphere" in Hungarian).
History
In geometry, a body with a single stable resting position is called monostatic, and the term mono-monostatic has been coined to describe a body which additionally has only one unstable point of balance. (The previously known monostatic polyhedron does not qualify, as it has several unstable equilibria.) A sphere weighted so that its center of mass is shifted from the geometrical center is mono-monostatic. However, it is inhomogeneous; that is, its material density varies across its body. Another example of an inhomogeneous mono-monostatic body is the Comeback Kid, Weeble or roly-poly toy (see left figure). At equilibri |
https://en.wikipedia.org/wiki/Sieving%20coefficient | In mass transfer, the sieving coefficient is a measure of equilibration between the concentrations of two mass transfer streams. It is defined as the mean pre- and post-contact concentration of the mass receiving stream divided by the pre- and post-contact concentration of the mass donating stream.
$S = \frac{C_r}{C_d}$
where
$S$ is the sieving coefficient,
$C_r$ is the mean concentration in the mass receiving stream, and
$C_d$ is the mean concentration in the mass donating stream.
A sieving coefficient of unity implies that the concentrations of the receiving and donating stream equilibrate, i.e. the out-flow concentrations (post-mass transfer) of the mass donating and receiving stream are equal to one another. Systems with a sieving coefficient greater than one require an external energy source, as they would otherwise violate the laws of thermodynamics.
Sieving coefficients less than one represent a mass transfer process where the concentrations have not equilibrated.
Contact time between mass streams is important to consider in mass transfer, as it affects the sieving coefficient.
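A minimal Python sketch of the definition above; the stream concentrations are invented, consistent-unit values for illustration:

```python
def sieving_coefficient(c_recv_in, c_recv_out, c_don_in, c_don_out):
    """Mean receiving-stream concentration divided by mean donating-stream concentration."""
    c_r = 0.5 * (c_recv_in + c_recv_out)
    c_d = 0.5 * (c_don_in + c_don_out)
    return c_r / c_d

# Receiving stream rises from 0 to 4; donating stream falls from 10 to 6.
print(sieving_coefficient(0.0, 4.0, 10.0, 6.0))  # 0.25 -> far from equilibration
```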
In kidney
In renal physiology, the glomerular sieving coefficient (GSC) can be expressed as:
sieving coefficient = clearance / ultrafiltration rate
See also
Heat exchanger
Condenser pinch point
Sieve |
https://en.wikipedia.org/wiki/List%20of%20ATSC%20standards | Below are the published ATSC standards for ATSC digital television service, issued by the Advanced Television Systems Committee.
A/49: Ghost Canceling Reference Signal for NTSC (for adjacent-channel interference or co-channel interference with analog NTSC stations nearby)
A/52B: audio data compression (Dolby AC-3 and E-AC-3)
A/53E: "ATSC Digital Television Standard" (the primary document governing the standard)
A/55: "Program Guide for Digital Television" (now deprecated in favor of A/65 PSIP)
A/56: "System Information for Digital Television" (now deprecated in favor of A/65 PSIP)
A/57A: "Content Identification and Labeling for ATSC Transport" (for assigning a unique digital number to each episode of each TV show, to assist DVRs)
A/63: "Standard for Coding 25/50 Hz Video" (for use with PAL and SECAM-originated programming)
A/64A "Transmission Measurement and Compliance for Digital Television"
A/65C: "Program and System Information Protocol for Terrestrial Broadcast and Cable" (PSIP includes virtual channels, electronic program guides, and content ratings)
A/68: "PSIP Standard for Taiwan" (defines use of Chinese characters via Unicode 3.0)
A/69: recommended practices for implementing PSIP at a TV station
A/70A: "Conditional Access System for Terrestrial Broadcast"
A/71: "ATSC Parameterized Services Standard"
A/72: "Video System Characteristics of AVC in the ATSC Digital Television System" (implementing H.264/MPEG-4 as well as MVC for 3D television)
A/76: "Programming Metadata Communication Protocol" (XML-based PMCP maintains PSIP metadata through a TV station's airchain)
A/79: "Conversion of ATSC Signals for Distribution to NTSC Viewers" (recommended practice, issued February 2009)
A/80: "Modulation and Coding Requirements for Digital TV (DTV) Applications Over Satellite" (ATSC-S)
A/81: "Direct-to-Home Satellite Broadcast Standard" (not yet implemented by any services)
A/82: "Automatic Transmitter Power Control (ATPC) Data Return Link (DRL) Standard"
A/85: "Techniqu |
https://en.wikipedia.org/wiki/Marine%20larval%20ecology | Marine larval ecology is the study of the factors influencing dispersing larvae, which many marine invertebrates and fishes have. Marine animals with a larva typically release many larvae into the water column, where the larvae develop before metamorphosing into adults.
Marine larvae can disperse over long distances, although determining the actual distance is challenging, because of their size and the lack of a good tracking method. Knowing dispersal distances is important for managing fisheries, effectively designing marine reserves, and controlling invasive species.
Theories on the evolution of a biphasic life history
Larval dispersal is one of the most important topics in marine ecology today. Many marine invertebrates and many fishes have a biphasic life cycle with a pelagic larva or pelagic eggs that can be transported over long distances, and a demersal or benthic adult. There are several theories behind why these organisms have evolved this biphasic life history:
Larvae use different food sources than adults, which decreases competition between life stages.
Pelagic larvae can disperse large distances, colonize new territory, and move away from habitats that have become overcrowded or otherwise unsuitable.
A long pelagic larval phase can help a species to break its parasite cycles.
Pelagic larvae avoid benthic predators.
Dispersing as pelagic larvae can be risky. For example, while larvae do avoid benthic predators, they are still exposed to pelagic predators in the water column.
Larval development strategies
Marine larvae develop via one of three strategies: Direct, lecithotrophic, or planktotrophic. Each strategy has risks of predation and the difficulty of finding a good settlement site.
Direct developing larvae look like the adult. They typically have very low dispersal potential, and are known as "crawl-away larvae" because they crawl away from their egg after hatching. Some species of frogs and snails hatch this way.
Lecithotrophic larvae hav |
https://en.wikipedia.org/wiki/Supertramp%20%28ecology%29 | In ecology, a supertramp species is any type of animal which follows the "supertramp" strategy of high dispersion among many different habitats, towards none of which it is particularly specialized. Supertramp species are typically the first to arrive in newly available habitats, such as volcanic islands and freshly deforested land; they can have profoundly negative effects on more highly specialized flora and fauna, both directly through predation and indirectly through competition for resources.
The name was coined by Jared Diamond in 1974, as an allusion to both the itinerant lifestyle of the tramp, and the then-popular band Supertramp. Although Diamond originally applied the term only to birds, the term has since been applied to insects and reptiles as well, among others; any species which can migrate can be a supertramp.
In an evolutionary context, the supertramp may represent the first stage of the taxon cycle.
See also
Assembly rules |
https://en.wikipedia.org/wiki/Carry%20operator | The carry operator, symbolized by the ¢ sign, is an abstraction of the operation of determining whether a portion of an adder network generates or propagates a carry. It is defined as follows:
$(g, p) \;¢\; (g', p') = \big(g \lor (p \land g'),\; p \land p'\big)$
where $(g, p)$ are the generate and propagate signals of the more significant portion and $(g', p')$ those of the less significant portion. The operator is associative but not commutative.
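As a hedged sketch (the (generate, propagate) encoding, bit order, and helper names are illustrative conventions, not part of the definition above), the operator in Python, with a scan that recovers the carries of a binary addition:

```python
def carry_op(hi, lo):
    """Carry operator on (generate, propagate) pairs; `hi` is the more
    significant block. Associativity makes it suit parallel prefix adders."""
    g1, p1 = hi
    g2, p2 = lo
    return (g1 or (p1 and g2), p1 and p2)

def carries(a, b, width):
    """Carry out of each bit position of a + b, via a left fold of carry_op."""
    out, acc = [], None
    for i in range(width):
        ai, bi = (a >> i) & 1, (b >> i) & 1
        gp = (bool(ai & bi), bool(ai ^ bi))     # generate, propagate for bit i
        acc = gp if acc is None else carry_op(gp, acc)
        out.append(acc[0])                      # carry out of bits 0..i
    return out

print(carries(0b1011, 0b0110, 4))  # [False, True, True, True]
```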
External links
http://www.aoki.ecei.tohoku.ac.jp/arith/mg/algorithm.html
Computer arithmetic |
https://en.wikipedia.org/wiki/Probabilistic%20design | Probabilistic design is a discipline within engineering design. It deals primarily with the consideration of the effects of random variability upon the performance of an engineering system during the design phase. Typically, these effects are related to quality and reliability. Thus, probabilistic design is a tool that is mostly used in areas that are concerned with quality and reliability. For example, product design, quality control, systems engineering, machine design, civil engineering (particularly useful in limit state design) and manufacturing. It differs from the classical approach to design by assuming a small probability of failure instead of using the safety factor.
Designer's perspective
When using a probabilistic approach to design, the designer no longer thinks of each variable as a single value or number. Instead, each variable is viewed as a probability distribution. From this perspective, probabilistic design predicts the flow of variability (or distributions) through a system. By considering this flow, a designer can make adjustments to reduce the flow of random variability, and improve quality. Proponents of the approach contend that many quality problems can be predicted and rectified during the early design stages and at a much reduced cost.
The objective of probabilistic design
Typically, the goal of probabilistic design is to identify the design that will exhibit the smallest effects of random variability. This could be the one design option out of several that is found to be most robust. Alternatively, it could be the only design option available, but with the optimum combination of input variables and parameters. This second approach is sometimes referred to as robustification, parameter design, or design for Six Sigma.
Methods used
Essentially, probabilistic design focuses upon the prediction of the effects of random variability. Some methods that are used to predict the random variability of an output include:
the Monte Carlo method (i |
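As an illustration of the Monte Carlo method mentioned in the list above, a minimal Python sketch that propagates input distributions through a toy model; the bending-stress formula, the distributions, and the strength limit are all assumptions made for the example:

```python
import random

random.seed(1)

def bending_stress(moment, width, depth):
    # Maximum bending stress of a rectangular section: sigma = 6*M / (b*h^2).
    return 6.0 * moment / (width * depth ** 2)

# Each input is a distribution rather than a single number.
samples = []
for _ in range(100_000):
    moment = random.gauss(1000.0, 100.0)   # N*m, assumed mean and spread
    width = random.gauss(0.05, 0.002)      # m
    depth = random.gauss(0.10, 0.002)      # m
    samples.append(bending_stress(moment, width, depth))

allowable = 15e6  # Pa, assumed strength limit
p_fail = sum(s > allowable for s in samples) / len(samples)
print(f"estimated probability of failure: {p_fail:.4f}")
```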
https://en.wikipedia.org/wiki/Mobilome | The mobilome is the entire set of mobile genetic elements in a genome. Mobilomes are found in eukaryotes, prokaryotes, and viruses. The compositions of mobilomes differ among lineages of life, with transposable elements being the major mobile elements in eukaryotes, and plasmids and prophages being the major types in prokaryotes. Virophages contribute to the viral mobilome.
Mobilome in eukaryotes
Transposable elements are elements that can move about or propagate within the genome, and are the major constituents of the eukaryotic mobilome. Transposable elements can be regarded as genetic parasites because they exploit the host cell's transcription and translation mechanisms to extract and insert themselves in different parts of the genome, regardless of the phenotypic effect on the host.
Eukaryotic transposable elements were first discovered in maize (Zea mays) in which kernels showed a dotted color pattern. Barbara McClintock described the maize Ac/Ds system in which the Ac locus promotes the excision of the Ds locus from the genome, and excised Ds elements can mutate genes responsible for pigment production by inserting into their coding regions.
Other examples of transposable elements include: yeast (Saccharomyces cerevisiae) Ty elements, retrotransposons which encode a reverse transcriptase to convert their mRNA transcripts into DNA that can then insert into other parts of the genome; and fruit fly (Drosophila melanogaster) P-elements, which insert randomly into the genome to cause mutations in germ line cells, but not in somatic cells.
Mobilome in prokaryotes
Plasmids were discovered in the 1940s as genetic materials outside of bacterial chromosomes. Prophages are genomes of bacteriophages (a type of virus) that are inserted into bacterial chromosomes; prophages can then be spread to other bacteria through the lytic cycle and lysogenic cycle of viral replication.
While transposable elements are also found in prokaryotic genomes, the most common mobile |
https://en.wikipedia.org/wiki/Well-pointed%20category | In category theory, a category $\mathcal{C}$ with a terminal object $1$ is well-pointed if for every pair of arrows $f, g \colon A \to B$ such that $f \neq g$, there is an arrow $p \colon 1 \to A$ such that $f \circ p \neq g \circ p$. (The arrows $1 \to A$ are called the global elements or points of $A$; a well-pointed category is thus one that has "enough points" to distinguish non-equal arrows.)
See also
Pointed category |
https://en.wikipedia.org/wiki/Thermal%20effusivity | In thermodynamics, a material's thermal effusivity, also known as thermal responsivity, is a measure of its ability to exchange thermal energy with its surroundings. It is defined as the square root of the product of the material's thermal conductivity ($k$) and its volumetric heat capacity ($\rho c_p$), $e = \sqrt{k \rho c_p}$, or as the ratio of thermal conductivity to the square root of thermal diffusivity ($e = k/\sqrt{\alpha}$).
The SI units for thermal effusivity are $\mathrm{W \cdot s^{1/2} \cdot m^{-2} \cdot K^{-1}}$, or, equivalently, $\mathrm{J \cdot m^{-2} \cdot K^{-1} \cdot s^{-1/2}}$.
Thermal effusivity is a good approximation for the material's thermal inertia for a semi-infinite rigid body where heat transfer is dominated by the diffusive process of conduction only.
Thermal effusivity is a parameter that emerges upon applying solutions of the heat equation to heat flow through a thin surface-like region. It becomes particularly useful when the region is selected adjacent to a material's actual surface. Knowing the effusivity and equilibrium temperature of each of two material bodies then enables an estimate of their interface temperature when placed into thermal contact.
If $T_1$ and $T_2$ are the temperatures of the two bodies, then upon contact, the temperature of the contact interface (assumed to be a smooth surface) becomes
$T_m = \frac{e_1 T_1 + e_2 T_2}{e_1 + e_2}.$
Specialty sensors have also been developed based on this relationship to measure effusivity.
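A minimal Python sketch of this estimate; the effusivity figures below are rough order-of-magnitude values assumed for illustration:

```python
def contact_temperature(T1, e1, T2, e2):
    """Interface temperature of two semi-infinite bodies brought into
    contact, weighted by their thermal effusivities."""
    return (e1 * T1 + e2 * T2) / (e1 + e2)

# Approximate effusivities in W*s^0.5/(m^2*K): skin ~1000, aluminium ~24000, wood ~400.
print(contact_temperature(37.0, 1000.0, 20.0, 24000.0))  # ~20.7 C: metal feels cold
print(contact_temperature(37.0, 1000.0, 20.0, 400.0))    # ~32.1 C: wood feels warm
```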
Thermal effusivity and thermal diffusivity are related quantities; respectively a product versus a ratio of a material's fundamental transport and storage properties. The diffusivity appears explicitly in the heat equation, which is an energy conservation equation, and measures the speed at which thermal equilibrium can be reached by a body. By contrast a body's effusivity (also sometimes called inertia, accumulation, responsiveness etc.) is its ability to resist a temperature change when subjected to a time-periodic, or similarly perturbative, forcing function.
Applications
Temperature at a contact surface
If two semi-infinite bodies initially at temperatures $T_1$ and $T_2$ are brought in perfect ther |
https://en.wikipedia.org/wiki/Algebraic%20differential%20equation | In mathematics, an algebraic differential equation is a differential equation that can be expressed by means of differential algebra. There are several such notions, according to the concept of differential algebra used.
The intention is to include equations formed by means of differential operators, in which the coefficients are rational functions of the variables (e.g. the hypergeometric equation). Algebraic differential equations are widely used in computer algebra and number theory.
A simple concept is that of a polynomial vector field, in other words a vector field expressed with respect to a standard co-ordinate basis as the first partial derivatives with polynomial coefficients. This is a type of first-order algebraic differential operator.
Formulations
Derivations D can be used as algebraic analogues of the formal part of differential calculus, so that algebraic differential equations make sense in commutative rings.
The theory of differential fields was set up to express differential Galois theory in algebraic terms.
The Weyl algebra W of differential operators with polynomial coefficients can be considered; certain modules M can be used to express differential equations, according to the presentation of M.
The concept of Koszul connection is something that transcribes easily into algebraic geometry, giving an algebraic analogue of the way systems of differential equations are geometrically represented by vector bundles with connections.
The concept of jet can be described in purely algebraic terms, as was done in part of Grothendieck's EGA project.
The theory of D-modules is a global theory of linear differential equations, and has been developed to include substantive results in the algebraic theory (including a Riemann-Hilbert correspondence for higher dimensions).
Algebraic solutions
It is usually not the case that the general solution of an algebraic differential equation is an algebraic function: solving equations typically produces novel transc |
https://en.wikipedia.org/wiki/Boyle%20temperature | The Boyle temperature is formally defined as the temperature for which the second virial coefficient, $B_2(T)$, becomes zero.
It is at this temperature that the attractive forces and the repulsive forces acting on the gas particles balance out
$p = RT\left(\frac{1}{V_m} + \frac{B_2(T)}{V_m^2} + \frac{B_3(T)}{V_m^3} + \cdots\right)$
This is the virial equation of state and describes a real gas.
Since higher order virial coefficients are generally much smaller than the second coefficient, the gas tends to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature (or when the remaining virial corrections are minimized).
In any case, when the pressure is low, the second virial coefficient is the only relevant one, because the remaining terms are of higher order in the pressure. Also, at the Boyle temperature the dip in a $pV$ diagram flattens toward a straight line over a range of pressures. We then have
$\frac{\partial Z}{\partial p} = 0 \quad \text{as } p \to 0,$
where $Z = \frac{p V_m}{RT}$ is the compressibility factor.
Expanding the van der Waals equation in $\frac{1}{V_m}$ one finds that $T_B = \frac{a}{Rb}$.
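A short numeric sketch of this van der Waals estimate in Python, using approximate literature constants for nitrogen (the values are illustrative; the van der Waals model only roughly matches the measured Boyle temperature of N2, about 327 K):

```python
R = 0.083145      # gas constant, L*bar/(mol*K)
a_N2 = 1.370      # van der Waals a for N2, bar*L^2/mol^2 (approximate)
b_N2 = 0.0387     # van der Waals b for N2, L/mol (approximate)

# T_B = a / (R*b), from expanding the van der Waals equation as above.
print(a_N2 / (R * b_N2))  # ~426 K
```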
See also
Virial equation of state
Temperature
Thermodynamics |
https://en.wikipedia.org/wiki/Flux-corrected%20transport | Flux-corrected transport (FCT) is a conservative shock-capturing scheme for solving Euler equations and other hyperbolic equations which occur in gas dynamics, aerodynamics, and magnetohydrodynamics. It is especially useful for solving problems involving shock or contact discontinuities. An FCT algorithm consists of two stages, a transport stage and a flux-corrected anti-diffusion stage. The numerical errors introduced in the first stage (i.e., the transport stage) are corrected in the anti-diffusion stage. |
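As a rough, self-contained illustration of these two stages (a simplified textbook-style sketch, not code from any particular FCT implementation), a Boris-Book-style step in Python for 1-D linear advection with periodic boundaries:

```python
import numpy as np

def fct_step(u, c):
    """One FCT step for u_t + a*u_x = 0 with Courant number c = a*dt/dx, 0 < c < 1.
    Stage 1 transports with a diffusive low-order (donor-cell) scheme; stage 2
    applies limited anti-diffusive fluxes that undo the numerical diffusion
    without creating new extrema."""
    utd = u - c * (u - np.roll(u, 1))          # transport stage (upwind)
    delta = np.roll(utd, -1) - utd             # delta[i] = utd[i+1] - utd[i]
    f = 0.5 * c * (1.0 - c) * delta            # raw anti-diffusive flux at i+1/2
    s = np.sign(f)                             # flux-correct (limit) the fluxes:
    f = s * np.maximum(0.0, np.minimum(np.abs(f),
          np.minimum(s * np.roll(delta, -1), s * np.roll(delta, 1))))
    return utd - (f - np.roll(f, 1))           # conservative update

# Advect a square pulse once around a periodic domain; total mass is conserved.
u = np.where((np.arange(100) > 40) & (np.arange(100) < 60), 1.0, 0.0)
for _ in range(200):                           # 200 steps at c = 0.5 = one period
    u = fct_step(u, 0.5)
print(round(float(u.sum()), 6))                # 19.0
```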
https://en.wikipedia.org/wiki/Kama%20%28food%29 | Kama (in Estonian), talkkuna (in Finnish), tolokno (in Russian: толокно), or talqan (in Turkic languages) is a traditional finely milled flour mixture from Estonian, Finnish, Russian, and Turkic cuisines. The kama or talkkuna powder is a mixture of roasted barley, rye, oat and pea flour. The oat flour may be completely replaced by wheat flour, or kibbled black beans may be added to the mixture. In Finland talkkuna is made by first steaming grains, then grinding them up and finally roasting them into talkkuna.
"Historically kama was a non-perishable, easy-to-carry food that could be quickly fashioned into a stomach-filling snack by rolling it into butter or lard; it did not require baking, as it was already roasted".
Nowadays it is used for making some desserts. It is mostly enjoyed for breakfast mixed with milk, buttermilk or kefir as mush. It is frequently sweetened with sugar and especially with blueberry, more rarely with other fruits or honey or served unsweetened. It is also used for milk or sour desserts, together with the forest berries typical in Estonia and Finland.
Kama can be bought as a souvenir in Estonia, where it is a distinctive national food.
A similar product is skrädmjöl, a flour consisting exclusively of roasted oats which is traditionally made in the Swedish province of Värmland. It was brought there by Forest Finns.
In Turkic languages, it is called talqan. It is made of coarse or finely milled flour from roasted barley or wheat. It is common in the cuisine of Altay people, Nogays, Bashkirs, Kazakhs, Tatars, Tuvans, Uzbeks, Khakas.
See also
Gofio
Misutgaru
Rubaboo
Tsampa |
https://en.wikipedia.org/wiki/Bottleneck%20%28network%29 | In a communication network, sometimes a max-min fairness of the network is desired, usually opposed to the basic first-come first-served policy. With max-min fairness, data flow between any two nodes is maximized, but only at the cost of more or equally expensive data flows. To put it another way, in case of network congestion any data flow is only impacted by smaller or equal flows.
In such context, a bottleneck link for a given data flow is a link that is fully utilized (is saturated) and, of all the flows sharing this link, the given data flow achieves the maximum data rate network-wide. Note that this definition is substantially different from a common meaning of a bottleneck. Also note that this definition does not forbid a single link from being a bottleneck for multiple flows.
A data rate allocation is max-min fair if and only if a data flow between any two nodes has at least one bottleneck link.
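The definitions above suggest the standard progressive-filling construction of a max-min fair allocation; a hedged Python sketch (the link/flow encoding is illustrative):

```python
def max_min_fair(links, flows):
    """links: dict of link id -> capacity; flows: list of sets of link ids.
    Rates of all unfrozen flows grow together until some link saturates;
    that saturated link is then the bottleneck of the flows it freezes."""
    rate = [0.0] * len(flows)
    active = set(range(len(flows)))
    cap = dict(links)
    while active:
        # Largest equal increment every active flow can take on its tightest link.
        inc = min(cap[l] / sum(1 for f in active if l in flows[f])
                  for f in active for l in flows[f])
        for f in active:
            rate[f] += inc
        for l in cap:
            cap[l] -= inc * sum(1 for f in active if l in flows[f])
        saturated = {l for l, c in cap.items() if c <= 1e-12}
        active = {f for f in active if not (flows[f] & saturated)}
    return rate

# Flows 0 and 1 share link "a" (capacity 1); flow 2 has link "b" to itself.
print(max_min_fair({"a": 1.0, "b": 2.0}, [{"a"}, {"a"}, {"b"}]))  # [0.5, 0.5, 2.0]
```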
See also
Fairness measure
Max-min fairness |
https://en.wikipedia.org/wiki/Generation%20time | In population biology and demography, generation time is the average time between two consecutive generations in the lineages of a population. In human populations, generation time typically has ranged from 20 to 30 years, with wide variation based on gender and society. Historians sometimes use this to date events, by converting generations into years to obtain rough estimates of time.
Definitions and corresponding formulas
The existing definitions of generation time fall into two categories: those that treat generation time as a renewal time of the population, and those that focus on the distance between individuals of one generation and the next. Below are the three most commonly used definitions:
The time it takes for the population to grow by a factor of its net reproductive rate
The net reproductive rate $R_0$ is the number of offspring an individual is expected to produce during its lifetime (a net reproductive rate of 1 means that the population is at its demographic equilibrium). This definition envisions the generation time as a renewal time of the population. It justifies the very simple definition used in microbiology ("the time it takes for the population to double", or doubling time), since one can consider that during the exponential phase of bacterial growth mortality is very low and as a result a bacterium is expected to be replaced by two bacteria in the next generation (the mother cell and the daughter cell). If the population dynamic is exponential with a growth rate $r$, that is,
$n(t) = n(0)\, e^{rt},$
where $n(t)$ is the size of the population at time $t$, then this measure of the generation time is given by
$T = \frac{\ln R_0}{r}.$
Indeed, $T$ is such that $n(T) = R_0\, n(0)$, i.e. $e^{rT} = R_0$.
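A minimal Python sketch of this renewal-time formula; the $R_0$ and $r$ values are invented for illustration:

```python
import math

def generation_time(R0, r):
    """Renewal-time definition T = ln(R0) / r: the time an exponentially
    growing population (rate r) needs to grow by a factor of R0."""
    return math.log(R0) / r

# Doubling-time special case: R0 = 2 with r = 0.05 per year gives ~13.9 years.
print(generation_time(2.0, 0.05))
```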
The average difference in age between parent and offspring
This definition is a measure of the distance between generations rather than a renewal time of the population. Since many demographic models are female-based (that is, they only take females into account), this definition is often expressed as a mother-daughter distance (the "ave |
https://en.wikipedia.org/wiki/Multiple%20EM%20for%20Motif%20Elicitation | Multiple Expectation maximizations for Motif Elicitation (MEME) is a tool for discovering motifs in a group of related DNA or protein sequences.
A motif is a sequence pattern that occurs repeatedly in a group of related protein or DNA sequences and is often associated with some biological function. MEME represents motifs as position-dependent letter-probability matrices which describe the probability of each possible letter at each position in the pattern. Individual MEME motifs do not contain gaps. Patterns with variable-length gaps are split by MEME into two or more separate motifs.
MEME takes as input a group of DNA or protein sequences (the training set) and outputs as many motifs as requested. It uses statistical modeling techniques to automatically choose the best width, number of occurrences, and description for each motif.
MEME is the first of a collection of tools for analyzing motifs called the MEME suite.
Definition
The MEME algorithm could be understood from two different perspectives. From a biological point of view, MEME identifies and characterizes shared motifs in a set of unaligned sequences. From the computer science aspect, MEME finds a set of non-overlapping, approximately matching substrings given a starting set of strings.
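For illustration, a small Python sketch of the position-dependent letter-probability representation described above, estimated from a few invented, gap-free motif occurrences (the sites and pseudocount are assumptions made for the example; this is not MEME's EM estimation procedure itself):

```python
from collections import Counter

sites = ["TATAAT", "TATGAT", "TACAAT", "TATATT"]   # toy aligned occurrences
alphabet = "ACGT"

def letter_probability_matrix(sites, alphabet, pseudocount=0.5):
    """One probability distribution over letters per motif position."""
    width = len(sites[0])
    total = len(sites) + pseudocount * len(alphabet)
    matrix = []
    for pos in range(width):
        counts = Counter(site[pos] for site in sites)
        matrix.append({a: (counts[a] + pseudocount) / total for a in alphabet})
    return matrix

for pos, probs in enumerate(letter_probability_matrix(sites, alphabet)):
    print(pos, {a: round(p, 2) for a, p in probs.items()})
```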
Use
MEME can be used to find similar biological functions and structures in different sequences. It is necessary to take into account that sequence variation can be significant and that motifs are sometimes very small. It is also useful to take into account that the binding sites for proteins are very specific. This makes it easier to reduce wet-lab experiments (saving cost and time). Indeed, to better discover motifs that are relevant from a biological point of view, it is necessary to carefully choose: the best width of motifs, the number of occurrences in each sequence, and the composition of each motif.
Algorithm components
The algorithm uses several types of well known functions:
Expectation maximization (EM).
EM base |
https://en.wikipedia.org/wiki/Standard%20map | The standard map (also known as the Chirikov–Taylor map or as the Chirikov standard map) is an area-preserving chaotic map from a square with side $2\pi$ onto itself. It is constructed by a Poincaré's surface of section of the kicked rotator, and is defined by:
$p_{n+1} = p_n + K \sin(\theta_n)$
$\theta_{n+1} = \theta_n + p_{n+1}$
where $p_n$ and $\theta_n$ are taken modulo $2\pi$.
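A minimal Python sketch of iterating the map (the initial condition and parameter values are illustrative):

```python
import math

def standard_map(theta, p, K, n):
    """Iterate the Chirikov standard map n times; theta and p are taken mod 2*pi."""
    two_pi = 2.0 * math.pi
    for _ in range(n):
        p = (p + K * math.sin(theta)) % two_pi
        theta = (theta + p) % two_pi
    return theta, p

# Small K gives regular orbits; chaos spreads through phase space near K ~ 1.
print(standard_map(1.0, 0.5, K=0.9, n=1000))
```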
The properties of chaos of the standard map were established by Boris Chirikov in 1969.
Physical model
This map describes the Poincaré's surface of section of the motion of a simple mechanical system known as the kicked rotator. The kicked rotator consists of a stick that is free of the gravitational force, which can rotate frictionlessly in a plane around an axis located in one of its tips, and which is periodically kicked on the other tip.
The standard map is a surface of section applied by a stroboscopic projection on the variables of the kicked rotator. The variables and respectively determine the angular position of the stick and its angular momentum after the n-th kick. The constant K measures the intensity of the kicks on the kicked rotator.
The kicked rotator approximates systems studied in the fields of mechanics of particles, accelerator physics, plasma physics, and solid state physics. For example, circular particle accelerators accelerate particles by applying periodic kicks, as they circulate in the beam tube. Thus, the structure of the beam can be approximated by the kicked rotor. However, this map is interesting from a fundamental point of view in physics and mathematics because it is a very simple model of a conservative system that displays Hamiltonian chaos. It is therefore useful to study the development of chaos in this kind of system.
Main properties
For $K = 0$ the map is linear and only periodic and quasiperiodic orbits are possible. When plotted in phase space (the θ–p plane), periodic orbits appear as closed curves, and quasiperiodic orbits as necklaces of closed curves whose centers lie in another larger closed curve. Which type of orbit is observed depends |
https://en.wikipedia.org/wiki/Fauna%20of%20Great%20Britain | The island of Great Britain, along with the rest of the archipelago known as the British Isles, has a largely temperate climate. It contains a relatively small fraction of the world's wildlife. The biota was severely diminished in the last ice age, and shortly (in geological terms) thereafter was separated from the continent by the English Channel's formation. Since then, humans have hunted the most dangerous forms (the wolf, the brown bear and the wild boar) to extinction, though domesticated forms such as the dog and the pig remain. The wild boar has subsequently been reintroduced as a meat animal.
Overview
In most of Great Britain there is a temperate climate, with high levels of precipitation and medium levels of sunlight. Further northwards, the climate becomes colder and coniferous forests appear, replacing the largely deciduous forests of the south. There are a few variations in the generally temperate British climate, with some areas of subarctic conditions, such as the Scottish Highlands and Teesdale, and even sub-tropical in the Isles of Scilly. Plants have to cope with seasonal changes across the British Isles, such as in levels of sunlight, rainfall and temperature, as well as the risk of snow and frost during the winter.
Since the mid 18th century, Great Britain has gone through industrialisation and increasing urbanisation. A DEFRA study from 2006 suggested that 100 species became extinct in the UK during the 20th century: about 100 times the background extinction rate. This has had a major impact on indigenous animal populations. Song birds in particular are becoming scarcer, and habitat loss has affected larger mammalian species. Some species have however adapted to the expanding urban environment, particularly the red fox, which is the most successful urban mammal after the brown rat, and other creatures such as common wood pigeon.
Invertebrates
Molluscs
There are 220 species of non-marine molluscs that have been recorded as living in the |
https://en.wikipedia.org/wiki/BT%20Smart%20Hub | The BT Smart Hub (formerly BT Home Hub) is a family of wireless residential gateway router modems distributed by BT for use with their own products and services and those of wholesale resellers (i.e. LLUs) but not with other Internet services. Since version 5, Home/Smart Hubs have supported the faster Wi-Fi 802.11ac standard, in addition to the 802.11b/g/n standards. All models of the Home Hub prior to Home Hub 3 support VoIP Internet telephony via BT's Broadband Talk service, and are compatible with DECT telephone handsets. Since the Home Hub 4, all models have been dual band (i.e. both 2.4 GHz and 5 GHz).
The BT Home Hub works with the now defunct BT Fusion service and with the BT Vision video on demand service. The BT Home Hub 1.0, 1.5 and 2.0 devices connect to the Internet using a standard ADSL connection. The BT Home Hub 3 and 4 models support PPPoA for ADSL and PPPoE for VDSL2, in conjunction with an Openreach-provided VDSL2 modem to support BT's FTTC network (BT Infinity). Version 5 of the Home Hub, released in August 2013, includes a VDSL2 modem for fibre-optic connections. New firmware is pushed out to Home Hubs connected to the Internet automatically by BT.
The Home Hub 5 was followed on 20 June 2016 by the Smart Hub, a further development of the Home Hub, internally referred to as "Home Hub 6". It has more WiFi antennas than its predecessor. It supports Wave 2 802.11ac WiFi, found on review to be 50% faster than non-Wave 2. The Smart Hub was subsequently replaced with the Smart Hub 2 (Home Hub 6DX).
History
Prior to release of the Home Hub (2004–2005), BT offered a product based on the 2Wire 1800HG, and manufactured by 2Wire. This was described as the "BT Wireless Hub 1800HG", or in some documentation as the "BT Wireless Home Hub 1800". This provided one USB connection, four Ethernet ports and Wi-Fi 802.11b or 802.11g wireless connection. A total of ten devices in any combination of these was supported.
The Home Hub 3B was manufactured by Huawei and also supports A |
https://en.wikipedia.org/wiki/Charge%20density%20wave | A charge density wave (CDW) is an ordered quantum fluid of electrons in a linear chain compound or layered crystal. The electrons within a CDW form a standing wave pattern and sometimes collectively carry an electric current. The electrons in such a CDW, like those in a superconductor, can flow through a linear chain compound en masse, in a highly correlated fashion. Unlike a superconductor, however, the electric CDW current often flows in a jerky fashion, much like water dripping from a faucet due to its electrostatic properties. In a CDW, the combined effects of pinning (due to impurities) and electrostatic interactions (due to the net electric charges of any CDW kinks) likely play critical roles in the CDW current's jerky behavior, as discussed in sections 4 & 5 below.
Most CDW's in metallic crystals form due to the wave-like nature of electrons – a manifestation of quantum mechanical wave–particle duality – causing the electronic charge density to become spatially modulated, i.e., to form periodic "bumps" in charge. This standing wave affects each electronic wave function, and is created by combining electron states, or wavefunctions, of opposite momenta. The effect is somewhat analogous to the standing wave in a guitar string, which can be viewed as the combination of two interfering, traveling waves moving in opposite directions (see interference (wave propagation)).
The CDW in electronic charge is accompanied by a periodic distortion – essentially a superlattice – of the atomic lattice. The metallic crystals look like thin shiny ribbons (e.g., quasi-1-D NbSe3 crystals) or shiny flat sheets (e.g., quasi-2-D, 1T-TaS2 crystals). The CDW's existence was first predicted in the 1930s by Rudolf Peierls. He argued that a 1-D metal would be unstable to the formation of energy gaps at the Fermi wavevectors ±kF, which reduce the energies of the filled electronic states at ±kF as compared to their original Fermi energy EF. The temperature below which such gaps form |
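Schematically (a standard textbook relation added here for illustration, not text from the article), the modulated electron density and the CDW wavelength set by the Fermi wavevector $k_F$ can be written as:
% Illustrative textbook form of the CDW charge modulation at wavevector 2 k_F:
\rho(x) = \rho_0 + \rho_1 \cos\!\left(2 k_F x + \phi\right),
\qquad
\lambda_{\mathrm{CDW}} = \frac{2\pi}{2 k_F} = \frac{\pi}{k_F}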
https://en.wikipedia.org/wiki/Marvell%20Software%20Solutions%20Israel | Marvell Software Solutions Israel, known as RADLAN Computer Communications Limited before 2007, is a wholly owned subsidiary of Marvell Technology Group, that specializes in local area network (LAN) technologies.
History
The company was founded in 1998 as a spin-off from RND, which was founded by brothers Yehuda and Zohar Zisapel. RND was also the product of a spin-off, from the Zisapel brothers' RAD Group. Eventually, RND was split into two companies - Radware and RADLAN.
In February 2003, the integrated circuit (IC) designer Marvell Technology Group closed the deal to acquire RADLAN Computer Communications for $49.7 million in cash and shares.
California-based Marvell said it would incorporate its mixed-signal ICs with RADLAN's networking infrastructure drivers, interfaces and software modules to make improved networking communications products like routers. Currently, Marvell's product lineup includes read channels (which convert analog data from a magnetic disk into digital data for computing), preamplifiers, and Ethernet switch controllers and transceivers.
In May 2007 Radlan was officially renamed Marvell Software Solutions Israel (MSSI), to complete the integration into Marvell.
The company is located in the Petah Tikva technology park, Ezorim.
Yuval Cohen replaced Jacob Zankel as chief executive in late 2006.
Technology
RADLAN's core technology, Open and Portable Embedded Networking System (OpENS), provided IP-routed core software coupled with customizable management application, development environment and testing tools.
RADLAN's product lines are divided into three areas of development: Intelligent Intranet Switching; Intranet Accelerator Engines; Intelligent Network Services.
See also
Economy of Israel |
https://en.wikipedia.org/wiki/Copiotroph | A copiotroph is an organism found in environments rich in nutrients, particularly carbon. They are the opposite to oligotrophs, which survive in much lower carbon concentrations.
Copiotrophic organisms tend to grow in high organic substrate conditions; for example, copiotrophic organisms grow in sewage lagoons. They grow in organic substrate concentrations up to 100x higher than oligotrophs. Because of this preference for high substrate concentrations, copiotrophs are often found in nutrient-rich waters near coastlines or estuaries.
Classification and Identification
The bacterial phyla can be differentiated into copiotrophic and oligotrophic categories, a division that corresponds to and structures the functions of soil bacterial communities.
Interaction with other organisms
The balance between copiotrophic and oligotrophic bacteria depends on the concentration of C compounds in the soil. If the soil has large amounts of organic C, it favors the copiotrophic bacteria.
Ecology
Copiotrophic bacteria are a key component in the soil C cycle. They are most important during the period of the year when vegetation is photosynthetically active and exudes large amounts of simple C compounds like sugars, amino acids, and organic acids. Copiotrophic bacteria are also found within marine life.
Lifestyle
Copiotrophs have a higher Michaelis-Menten constant than oligotrophs. This constant is directly correlated with environmental substrate preference. In these high-resource environments, copiotrophs exhibit a “feast-and-famine” lifestyle. They rapidly use the available nutrients in the environment, resulting in nutrient depletion, which then forces them to starve. This is possible through increasing their growth rate with nutrient uptake. However, when nutrients in the environment are depleted, copiotrophs struggle to survive for long periods of time. Copiotrophs do not have the ability to respond to starvation. It is hypothesized that this may be a lost trait. Another possibility is that microbes |
https://en.wikipedia.org/wiki/Lax%E2%80%93Wendroff%20method | The Lax–Wendroff method, named after Peter Lax and Burton Wendroff, is a numerical method for the solution of hyperbolic partial differential equations, based on finite differences. It is second-order accurate in both space and time. This method is an example of explicit time integration where the function that defines the governing equation is evaluated at the current time.
Definition
Suppose one has an equation of the following form:
$\frac{\partial u(x,t)}{\partial t} + \frac{\partial f(u(x,t))}{\partial x} = 0$
where $x$ and $t$ are independent variables, and the initial state, $u(x, 0)$, is given.
Linear case
In the linear case, where $f(u) = Au$, and $A$ is a constant,
$u_i^{n+1} = u_i^n - \frac{\Delta t}{2\Delta x} A\left(u_{i+1}^n - u_{i-1}^n\right) + \frac{\Delta t^2}{2\Delta x^2} A^2\left(u_{i+1}^n - 2u_i^n + u_{i-1}^n\right)$
Here $i$ refers to the spatial ($x$) dimension and $n$ refers to the time ($t$) dimension.
This linear scheme can be extended to the general non-linear case in different ways. One of them is letting $A(u) = f'(u) = \frac{\partial f}{\partial u}$
Non-linear case
The conservative form of Lax–Wendroff for a general non-linear equation is then:
$u_i^{n+1} = u_i^n - \frac{\Delta t}{2\Delta x}\left[f(u_{i+1}^n) - f(u_{i-1}^n)\right] + \frac{\Delta t^2}{2\Delta x^2}\left[A_{i+1/2}\left(f(u_{i+1}^n) - f(u_i^n)\right) - A_{i-1/2}\left(f(u_i^n) - f(u_{i-1}^n)\right)\right]$
where $A_{i\pm 1/2}$ is the Jacobian matrix evaluated at $\tfrac{1}{2}\left(u_i^n + u_{i\pm 1}^n\right)$.
Jacobian free methods
To avoid the Jacobian evaluation, use a two-step procedure.
Richtmyer method
What follows is the Richtmyer two-step Lax–Wendroff method. The first step in the Richtmyer two-step Lax–Wendroff method calculates values for $u(x,t)$ at half time steps, $t^{n+1/2}$, and half grid points, $x_{i+1/2}$. In the second step, values at $t^{n+1}$ are calculated using the data for $t^n$ and $t^{n+1/2}$.
First (Lax) step:
$u_{i+1/2}^{n+1/2} = \frac{1}{2}\left(u_{i+1}^n + u_i^n\right) - \frac{\Delta t}{2\Delta x}\left(f(u_{i+1}^n) - f(u_i^n)\right)$
Second step:
$u_i^{n+1} = u_i^n - \frac{\Delta t}{\Delta x}\left(f(u_{i+1/2}^{n+1/2}) - f(u_{i-1/2}^{n+1/2})\right)$
MacCormack method
Another method of this same type was proposed by MacCormack. MacCormack's method uses first forward differencing and then backward differencing:
First step:
$u_i^* = u_i^n - \frac{\Delta t}{\Delta x}\left(f(u_{i+1}^n) - f(u_i^n)\right)$
Second step:
$u_i^{n+1} = \frac{1}{2}\left(u_i^n + u_i^*\right) - \frac{\Delta t}{2\Delta x}\left(f(u_i^*) - f(u_{i-1}^*)\right)$
Alternatively,
First step:
$u_i^* = u_i^n - \frac{\Delta t}{\Delta x}\left(f(u_i^n) - f(u_{i-1}^n)\right)$
Second step:
$u_i^{n+1} = \frac{1}{2}\left(u_i^n + u_i^*\right) - \frac{\Delta t}{2\Delta x}\left(f(u_{i+1}^*) - f(u_i^*)\right)$
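To make the two-step procedure concrete, here is a minimal Python sketch (not from the article) of the Richtmyer scheme applied to the linear advection equation with flux f(u) = au on a periodic domain; the grid size, CFL number, and initial profile are arbitrary choices:
import numpy as np

def richtmyer_step(u, f, dt, dx):
    """One Richtmyer two-step Lax-Wendroff update on a periodic grid."""
    up1 = np.roll(u, -1)                                # u_{i+1}
    # First (Lax) step: half-step, half-grid values u_{i+1/2}^{n+1/2}
    u_half = 0.5 * (u + up1) - dt / (2 * dx) * (f(up1) - f(u))
    # Second step: full update from the half-step fluxes f_{i+1/2} - f_{i-1/2}
    return u - dt / dx * (f(u_half) - f(np.roll(u_half, 1)))

a = 1.0                               # advection speed (illustrative)
f = lambda u: a * u                   # flux of the linear advection equation
nx = 200
dx = 1.0 / nx
dt = 0.8 * dx / a                     # CFL number 0.8 < 1 for stability
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.3) ** 2)     # smooth Gaussian initial condition
for _ in range(100):
    u = richtmyer_step(u, f, dt, dx)
The array u_half holds the half-step values, so the two np.roll calls implement the i+1/2 and i-1/2 indexing of the scheme.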
https://en.wikipedia.org/wiki/Electrochromatography | Electrochromatography is a chemical separation technique in analytical chemistry, biochemistry and molecular biology used to resolve and separate mostly large biomolecules such as proteins. It is a combination of size exclusion chromatography (gel filtration chromatography) and gel electrophoresis. These separation mechanisms operate essentially in superposition along the length of a gel filtration column to which an axial electric field gradient has been added. The molecules are separated by size due to the gel filtration mechanism and by electrophoretic mobility due to the gel electrophoresis mechanism. Additionally there are secondary chromatographic solute retention mechanisms.
Capillary electrochromatography
Capillary electrochromatography (CEC) is an electrochromatography technique in which the liquid mobile phase is driven through a capillary containing the chromatographic stationary phase by electroosmosis. It is a combination of high-performance liquid chromatography and capillary electrophoresis. The capillary is packed with an HPLC stationary phase and a high voltage is applied; separation is achieved by electrophoretic migration of the analyte and differential partitioning in the stationary phase.
See also
Chromatography
Protein electrophoresis
Electrofocusing
Two-dimensional gel electrophoresis
Temperature gradient gel electrophoresis |
https://en.wikipedia.org/wiki/List%20of%20quantum-mechanical%20systems%20with%20analytical%20solutions | Much insight in quantum mechanics can be gained from understanding the closed-form solutions to the time-dependent non-relativistic Schrödinger equation. It takes the form
$i\hbar \frac{\partial \Psi(\mathbf{r}, t)}{\partial t} = \hat{H} \Psi(\mathbf{r}, t)$
where $\Psi$ is the wave function of the system, $\hat{H}$ is the Hamiltonian operator, and $t$ is time. Stationary states of this equation are found by solving the time-independent Schrödinger equation,
$\hat{H} \psi = E \psi$
which is an eigenvalue equation. Very often, only numerical solutions to the Schrödinger equation can be found for a given physical system and its associated potential energy. However, there exists a subset of physical systems for which the form of the eigenfunctions and their associated energies, or eigenvalues, can be found. These quantum-mechanical systems with analytical solutions are listed below.
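As a standard worked example (added here for illustration; the system is one of those listed below), the particle in a box of width $L$ has closed-form eigenfunctions and eigenvalues:
% Illustrative worked example: infinite potential well of width L.
% Solving -\hbar^2/(2m) \psi'' = E \psi with \psi(0) = \psi(L) = 0 gives:
\psi_n(x) = \sqrt{\frac{2}{L}} \sin\!\left(\frac{n \pi x}{L}\right),
\qquad
E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2},
\qquad n = 1, 2, 3, \ldots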
Solvable systems
The two-state quantum system (the simplest possible quantum system)
The free particle
The delta potential
The double-well Dirac delta potential
The particle in a box / infinite potential well
The finite potential well
The one-dimensional triangular potential
The particle in a ring or ring wave guide
The particle in a spherically symmetric potential
The quantum harmonic oscillator
The quantum harmonic oscillator with an applied uniform field
The hydrogen atom or hydrogen-like atom e.g. positronium
The hydrogen atom in a spherical cavity with Dirichlet boundary conditions
The particle in a one-dimensional lattice (periodic potential)
The particle in a one-dimensional lattice of finite length
The Morse potential
The Mie potential
The step potential
The linear rigid rotor
The symmetric top
The Hooke's atom
The Spherium atom
Zero range interaction in a harmonic trap
The quantum pendulum
The rectangular potential barrier
The Pöschl–Teller potential
The Inverse square root potential
Multistate Landau–Zener models
The Luttinger liquid (the only exact quantum mechanical solution to a model including interparticle interactions)
See also
List of quantum-mechanical potentials – a list of physically |
https://en.wikipedia.org/wiki/Truncated%20differential%20cryptanalysis | In cryptography, truncated differential cryptanalysis is a generalization of differential cryptanalysis, an attack against block ciphers. Lars Knudsen developed the technique in 1994. Whereas ordinary differential cryptanalysis analyzes the full difference between two texts, the truncated variant considers differences that are only partially determined. That is, the attack makes predictions of only some of the bits instead of the full block. This technique has been applied to SAFER, IDEA, Skipjack, E2, Twofish, Camellia, CRYPTON, and even the stream cipher Salsa20. |
https://en.wikipedia.org/wiki/JGroups | JGroups is a library for reliable one-to-one or one-to-many communication written in the Java language.
It can be used to create groups of processes whose members send messages to each other. JGroups enables developers to create reliable multipoint (multicast) applications where reliability is a deployment issue. JGroups also relieves the application developer from implementing this logic themselves. This saves significant development time and allows for the application to be deployed in different environments without having to change code.
Features
Group creation and deletion. Group members can be spread across LANs or WANs
Joining and leaving of groups
Membership detection and notification about joined/left/crashed members
Detection and removal of crashed members
Sending and receiving of member-to-group messages (point-to-multipoint)
Sending and receiving of member-to-member messages (point-to-point)
Code sample
The code below demonstrates the implementation of a simple command-line chat client using JGroups:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

public class Chat extends ReceiverAdapter {
    private JChannel channel;

    // Channel creation and connect() can fail, so the exception is propagated
    public Chat(String props, String name) throws Exception {
        channel = new JChannel(props)
            .setName(name)
            .setReceiver(this)
            .connect("ChatCluster");
    }

    // Called whenever the cluster membership (view) changes
    public void viewAccepted(View view) {
        System.out.printf("** view: %s\n", view);
    }

    // Called when a message is received from any cluster member
    public void receive(Message msg) {
        System.out.printf("from %s: %s\n", msg.getSource(), msg.getObject());
    }

    private void send(String line) {
        try {
            // A null destination address broadcasts the message to the whole group
            channel.send(new Message(null, line));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
public void run() throws Exception {
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
while (true) {
System.out.print("> ");
System.out.flush();
send(in.readLine().toLowerCase());
}
}
public void end() throws Exception {
chan |
https://en.wikipedia.org/wiki/History%20of%20general-purpose%20CPUs | The history of general-purpose CPUs is a continuation of the earlier history of computing hardware.
1950s: Early designs
In the early 1950s, each computer design was unique. There were no upward-compatible machines or computer architectures with multiple, differing implementations. Programs written for one machine would run on no other kind, even other kinds from the same company. This was not a major drawback then because no large body of software had been developed to run on computers, so starting programming from scratch was not seen as a large barrier.
The design freedom of the time was very important because designers were very constrained by the cost of electronics, and only starting to explore how a computer could best be organized. Some of the basic features introduced during this period included index registers (on the Ferranti Mark 1), a return address saving instruction (UNIVAC I), immediate operands (IBM 704), and detecting invalid operations (IBM 650).
By the end of the 1950s, commercial builders had developed factory-constructed, truck-deliverable computers. The most widely installed computer was the IBM 650, which used drum memory onto which programs were loaded using either paper punched tape or punched cards. Some very high-end machines also included core memory which provided higher speeds. Hard disks were also starting to grow popular.
A computer is an automatic abacus. The type of number system affects the way it works. In the early 1950s, most computers were built for specific numerical processing tasks, and many machines used decimal numbers as their basic number system; that is, the mathematical functions of the machines worked in base-10 instead of base-2 as is common today. These were not merely binary-coded decimal (BCD). Most machines had ten vacuum tubes per digit in each processor register. Some early Soviet computer designers implemented systems based on ternary logic; that is, a bit could have three states: +1, 0, or -1, correspo |
https://en.wikipedia.org/wiki/WokFi | WokFi (a portmanteau derived from blending the words Wok + Wi-Fi) is a slang term for a style of homemade Wi-Fi antenna consisting of a crude parabolic antenna made with a low-cost Asian kitchen wok, spider skimmer or similar household metallic dish. The dish forms a directional antenna which is pointed at the wireless access point antenna, allowing reception of the wireless signal at greater distances than standard omnidirectional Wi-Fi antennas.
Description
WokFi antennas are fabricated out of commonly available concave metal kitchen dishes or dish covers (which need not be perfectly parabolic); Asian woks are favored because they have shapes closest to parabolic. A commercial Wi-Fi antenna, usually a USB Wi-Fi dongle, is suspended in front of the dish, attached by cable to the computer.
The WokFi antenna is considered simpler and cheaper than other home-built antenna projects (such as the popular cantenna), but is a very effective way to boost Wi-Fi connection quality, audit access point coverage, and even quickly establish WLAN viability – perhaps before a more professional setup is installed.
Advantages
A significant advantage is that with a USB modem the RF signal is converted to a conventional digital signal at the antenna. Therefore, by using standard USB extension cables, the antenna can be located at a distance from the computer of five meters or more, with no concerns over microwave signal losses that would occur in an RF coaxial cable feedline of that length used to attach a conventional antenna to the RF input of a computer modem. Chaining active USB repeaters, it is possible to locate the antenna at much greater distances from the computer, which is especially useful when line-of-sight (LOS) obstacles (such as vegetation and walls) require the antenna to be located on a roof, for example. If using mesh reflectors, usually with a grid under 5 mm, the antenna will be lighter and present a smaller wind-load than larger dishes.
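For a rough sense of the achievable gain (an illustrative calculation, not from the article), the standard aperture formula for a parabolic reflector, G = η(πD/λ)², can be evaluated for a wok-sized dish at 2.4 GHz; the diameter and efficiency below are assumed values:
import math

def parabolic_gain_dbi(diameter_m, freq_hz, efficiency=0.5):
    """Idealized parabolic-dish gain: G = efficiency * (pi * D / wavelength)^2."""
    wavelength = 3e8 / freq_hz
    gain = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain)

# A 30 cm wok at 2.4 GHz with 50% aperture efficiency (assumed values);
# a real wok is only roughly parabolic, so actual gain will be lower.
print(round(parabolic_gain_dbi(0.30, 2.4e9), 1), "dBi")  # roughly 14.5 dBi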
Perf |
https://en.wikipedia.org/wiki/Eukaryotic%20initiation%20factor | Eukaryotic initiation factors (eIFs) are proteins or protein complexes involved in the initiation phase of eukaryotic translation. These proteins help stabilize the formation of ribosomal preinitiation complexes around the start codon and are an important input for post-transcription gene regulation. Several initiation factors form a complex with the small 40S ribosomal subunit and Met-tRNAiMet called the 43S preinitiation complex (43S PIC). Additional factors of the eIF4F complex (eIF4A, E, and G) recruit the 43S PIC to the five-prime cap structure of the mRNA, from which the 43S particle scans 5'-->3' along the mRNA to reach an AUG start codon. Recognition of the start codon by the Met-tRNAiMet promotes gated phosphate and eIF1 release to form the 48S preinitiation complex (48S PIC), followed by large 60S ribosomal subunit recruitment to form the 80S ribosome. There exist many more eukaryotic initiation factors than prokaryotic initiation factors, reflecting the greater biological complexity of eukaryotic translation. There are at least twelve eukaryotic initiation factors, composed of many more polypeptides, and these are described below.
eIF1 and eIF1A
eIF1 and eIF1A both bind to the 40S ribosome subunit-mRNA complex. Together they induce an "open" conformation of the mRNA binding channel, which is crucial for scanning, tRNA delivery, and start codon recognition. In particular, eIF1 dissociation from the 40S subunit is considered to be a key step in start codon recognition.
eIF1 and eIF1A are small proteins (13 and 16 kDa, respectively in humans) and are both components of the 43S PIC. eIF1 binds near the ribosomal P-site, while eIF1A binds near the A-site, in a manner similar to the structurally and functionally related bacterial counterparts IF3 and IF1, respectively.
eIF2
eIF2 is the main protein complex responsible for delivering the initiator tRNA to the P-site of the preinitiation complex, as a ternary complex containing Met-tRNAiMet and GTP (the |
https://en.wikipedia.org/wiki/Bacterial%20initiation%20factor | A bacterial initiation factor (IF) is a protein that stabilizes the initiation complex for polypeptide translation.
Translation initiation is essential to protein synthesis and regulates mRNA translation fidelity and efficiency in bacteria. The 30S ribosomal subunit, initiator tRNA, and mRNA form an initiation complex for elongation. This complex process requires three essential protein factors in bacteria – IF1, IF2, and IF3. These factors bind to the 30S subunit and promote correct initiation codon selection on the mRNA. IF1, the smallest factor at 8.2 kDa, blocks elongator tRNA binding at the A-site. IF2 is the major component that transports initiator tRNA to the P-site. IF3 checks P-site codon-anticodon pairing and rejects incorrect initiation complexes.
The orderly mechanism of initiation starts with IF3 attaching to the 30S subunit and changing its shape. IF1 joins next, followed by mRNA binding and start codon-P-site interaction. IF2 enters with the initiator tRNA and places it on the start codon. GTP hydrolysis by IF2 releases it and IF3, enabling 50S subunit joining. The coordinated binding and activities of IF1, IF2, and IF3 are essential for rapid and precise translation initiation in bacteria. They facilitate start codon selection and assemble an active, protein-synthesis-ready 70S ribosome.
IF1
Bacterial initiation factor 1 associates with the 30S ribosomal subunit in the A site and prevents an aminoacyl-tRNA from entering. It modulates IF2 binding to the ribosome by increasing its affinity. It may also prevent the 50S subunit from binding, stopping the formation of the 70S subunit. It also contains a β-domain fold common for nucleic acid-binding proteins. It is a homolog of eIF1A. Initiation factor IF-1 is the smallest translation factor at only 8.2kDa. Beyond blocking the A-site, it affects the dynamics of ribosome association and dissociation. IF-1 enhances dissociation with IF-3, likely by inducing conformational changes in the 30S subun |
https://en.wikipedia.org/wiki/Jacobi%20theta%20functions%20%28notational%20variations%29 | There are a number of notational systems for the Jacobi theta functions. The notations given in the Wikipedia article define the original function
$\vartheta(z; \tau) = \sum_{n=-\infty}^{\infty} \exp(\pi i n^2 \tau + 2\pi i n z)$
which is equivalent to
$\vartheta(z; \tau) = \sum_{n=-\infty}^{\infty} q^{n^2} \eta^{n}$
where $q = e^{\pi i \tau}$ and $\eta = e^{2\pi i z}$.
However, a similar notation is defined somewhat differently in Whittaker and Watson, p. 487:
This notation is attributed to "Hermite, H.J.S. Smith and some other mathematicians". They also define
This is a factor of i off from the definition of as defined in the Wikipedia article. These definitions can be made at least proportional by x = za, but other definitions cannot. Whittaker and Watson, Abramowitz and Stegun, and Gradshteyn and Ryzhik all follow Tannery and Molk, in which
Note that there is no factor of π in the argument as in the previous definitions.
Whittaker and Watson refer to still other definitions of $\vartheta$. The warning in Abramowitz and Stegun, "There is a bewildering variety of notations...in consulting books caution should be exercised," may be viewed as an understatement. In any expression, an occurrence of $\vartheta$ should not be assumed to have any particular definition. It is incumbent upon the author to state what definition of $\vartheta$ is intended.
https://en.wikipedia.org/wiki/Suppression%20subtractive%20hybridization | Subtractive hybridization is a technology that allows for PCR-based amplification of only cDNA fragments that differ between a control (driver) and experimental transcriptome. cDNA is produced from mRNA. Differences in relative abundance of transcripts are highlighted, as are genetic differences between species. The technique relies on the removal of dsDNA formed by hybridization between a control and test sample, thus eliminating cDNAs or gDNAs of similar abundance, and retaining differentially expressed, or variable in sequence, transcripts or genomic sequences.
Suppression subtractive hybridization has also been successfully used to identify strain- or species-specific DNA sequences in a variety of bacteria including Vibrio species (Metagenomics).
See also
Representational difference analysis
External links
Overview at evrogen.com
Biotechnology |
https://en.wikipedia.org/wiki/Launch%20lug | Launch lugs are small cylinders attached to the sides of most model rockets, into which the launch rod is placed prior to a launch. They are generally made of either plastic or thin cardboard to minimize additional mass.
Use
The sole purpose of a launch lug is to provide stability for a model rocket prior to and during liftoff by forcing the rocket to remain parallel to the launch rod during the first seconds of flight, before significant velocities are reached and enough momentum is built up to maintain stability. At higher velocities, the fins act as the rocket's primary stabilizing devices.
Launch lugs remain attached to rockets throughout flight, and the aerodynamic drag can lead to lower flight altitudes. An alternate way to stabilize a model rocket, and eliminate a launch lug, is to use a tower launcher. The tower launcher has rails which guide the rocket like a launch rod would, until the rocket reaches a velocity where its fins stabilize it for flight.
Position
In smaller rockets, one launch lug is generally considered enough, and is attached at the joint between one of the rocket's fins and the main rocket body. In larger, heavier model rockets, a second launch lug is generally added closer to the nose cone and lined up with the first, to provide additional support. The diameter of a launch lug generally closely matches that of the launch rod, although it is very slightly larger to minimize friction during the precarious first moments of flight. Length varies, ranging from less than a half-inch in smaller rockets to a few inches or longer in larger ones. |
https://en.wikipedia.org/wiki/TGF%20beta%202 | Transforming growth factor-beta 2 (TGF-β2) is a secreted protein known as a cytokine that performs many cellular functions and has a vital role during embryonic development (alternative names: Glioblastoma-derived T-cell suppressor factor, G-TSF, BSC-1 cell growth inhibitor, Polyergin, Cetermin). It is an extracellular glycosylated protein. It is known to suppress the effects of interleukin dependent T-cell tumors. There are two named isoforms of this protein, created by alternative splicing of the same gene (i.e., TGFB2).
Further reading
Proteins
TGFβ domain |
https://en.wikipedia.org/wiki/ID-MM7 | ID-MM7 is a protocol developed and promoted by the Liberty Alliance, driven by major mobile operators such as Vodafone and Telefónica Móviles, to standardize identity-based web services interfaces to mobile messaging.
The ID-MM7 specification adds significant value to existing web services. MM7 has long been used by operators for relaying MMS and SMS traffic. ID-MM7 enables an entirely new business model wherein the content providers know their subscribers only pseudonymously - providing the capability to thwart spam, identity theft and fraud.
Known implementations of the protocol include:
Symlabs Federated Identity Platform ID-Messaging
Computer access control protocols
Mobile telecommunications standards
Mobile web |
https://en.wikipedia.org/wiki/Borden%20Base%20Line | The Borden Base Line is a historic survey line (7.42 miles long) running north/south through Hatfield and South Deerfield, Massachusetts. It was completed in 1831. It was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1981.
The baseline measurement was the first project of its kind undertaken in America, and essential for Massachusetts' pioneering Trigonometrical Survey, performed under chief engineer Robert Treat Paine. Its careful measurement was critical since the accuracy of the whole triangulation network depended on it.
The baseline was measured with greater accuracy than previously possible by using a new measuring device invented by Simeon Borden, which employed a bi-metallic measuring instrument to provide constant readings despite temperature variations. His apparatus was enclosed in a tube and employed with four compound microscopes.
Borden was a highly competent engineer whose ability was widely recognized. Indeed, the entire project became generally known as the Borden Survey. He measured the baseline with a nominal accuracy of better than one part in 5 million. As Professor A. D. Butterfield has written, "The work performed and results obtained far surpassed in magnitude and attainment of any previous work of this kind in America."
It appears that the north end of the baseline lies just south of the intersection of today's Route 116 and Route 5 in South Deerfield, Massachusetts. According to the Valley Historians, the south end is still marked by a copper plug set into a boulder, located in the back yard of the house at 30 Bridge Street, Hatfield, Massachusetts.
See also
Surveying
External links
American Society of Civil Engineers -Borden baseline 1831 landmark |
https://en.wikipedia.org/wiki/Canonical%20units | A canonical unit is a unit of measurement agreed upon as default in a certain context.
In astrodynamics
In astrodynamics, canonical units are defined in terms of some important object’s orbit that serves as a reference. In this system, a reference mass, for example the Sun’s, is designated as 1 “canonical mass unit” and the mean distance from the orbiting object to the reference object is considered the “canonical distance unit”.
Canonical units are useful when the precise distances and masses of objects in space are not available. Moreover, by designating the mass of some chosen central or primary object to be “1 canonical mass unit” and the mean distance of the reference object to another object in question to be “1 canonical distance unit”, many calculations can be simplified.
Overview
The Canonical Distance Unit is defined to be the mean radius of the reference orbit.
The Canonical Time Unit is defined by the gravitational parameter $\mu$:
$\mu = GM$
where
$G$ is the gravitational constant
$M$ is the mass of the central reference body
In canonical units, the gravitational parameter is given by:
$\mu = 1\ \frac{\mathrm{DU}^3}{\mathrm{TU}^2}$
Any triplet of numbers $\mu$, $\mathrm{DU}$, and $\mathrm{TU}$ that satisfies the equation above is a “canonical” set.
The quantity of the time unit (TU) can be solved in another unit system (e.g. the metric system) if the mass and radius of the central body have been determined. Using the above equation and applying dimensional analysis, set the two equations expressing $\mu$ equal to each other:
$\mu = GM = \frac{\mathrm{DU}^3}{\mathrm{TU}^2}$
The time unit (TU) can be converted to another unit system for a more useful quantitative solution using the following equation:
$\mathrm{TU} = \sqrt{\frac{\mathrm{DU}^3}{GM}}$
For Earth-orbiting satellites, approximate unit conversions are as follows:
1 DU = 6378.1 km = 20,925,524.97 ft
1 DU/TU = 7.90538 km/s = 25,936.29 ft/sec
1 TU = 806.80415 s
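A small Python sketch (added for illustration; the function and variable names are mine, and the Earth values are standard reference numbers) showing how the canonical time unit follows from the chosen distance unit:
import math

# Standard reference values for Earth (assumed, not from the article's table):
MU_EARTH = 398600.4418   # gravitational parameter GM of Earth, km^3/s^2
DU_KM = 6378.1           # canonical distance unit = Earth's equatorial radius, km

# Requiring mu = 1 DU^3/TU^2 gives TU = sqrt(DU^3 / mu)
tu_seconds = math.sqrt(DU_KM ** 3 / MU_EARTH)
speed_km_s = DU_KM / tu_seconds       # 1 DU/TU expressed in km/s

print(f"1 TU    = {tu_seconds:.5f} s")     # ~806.8 s
print(f"1 DU/TU = {speed_km_s:.5f} km/s")  # ~7.905 km/s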
Astronomical Unit
The astronomical unit (AU) is the canonical distance unit for the orbit around the Sun of the combined Earth-Moon system (based on the formerly best-known value). The corresponding time unit is the (sidereal) year, and th |
https://en.wikipedia.org/wiki/Math%20Curse | Math Curse is a children's picture book written by Jon Scieszka and illustrated by Lane Smith. Published in 1995 through Viking Press, the book tells the story of a student who is cursed by the manner in which mathematics is connected to everyday life. In 2009, a film based on the book was released by Weston Woods Studios, Inc.
Plot summary
The nameless student begins with a seemingly innocent statement by her math teacher: "you know, almost everything in life can be considered a math problem." The next morning, the hero finds herself thinking of the time she needs to get up along the lines of algebra. Next comes the mathematical school of probability, followed by charts and statistics. As the narrator slowly turns into a "math zombie", everything in her life is transformed into a problem. A class treat of cupcakes becomes a study in fractions, while a trip to the store turns into a problem of money. Finally, she is left painstakingly calculating how many minutes of "math madness" will be in her life now that she is a "mathematical lunatic." Her sister asks her what her problem is, and she responds, "365 days x 24 hours x 60 minutes." Finally, she collapses on her bed, and dreams that she is trapped in a blackboard-room covered in math problems. Armed with only a piece of chalk, she must escape and she manages to do just that by breaking the chalk in half, because "two halves make a whole." She escapes through this "whole", and awakens the next morning with the ability to solve any problem. Her curse is broken...until the next day, when her science teacher mentions that in life, everything can be viewed as a science experiment.
Math problems
The book is full of actual math problems (and some rather unrelated questions, such as "What does this inkblot look like?"). Readers can try to solve the problems and check their answers, which are located on the back cover of the book.
Adapted for the stage
The book was also adapted for the stage by Heath Corson and Kathl |
https://en.wikipedia.org/wiki/Sarcalumenin | Sarcalumenin is a protein that in humans is encoded by the SRL gene.
Sarcalumenin is a calcium-binding protein that can be found in the sarcoplasmic reticulum of striated muscle. Sarcalumenin is partially responsible for calcium buffering in the lumen of the sarcoplasmic reticulum and assists calcium pump proteins. Additionally, sarcalumenin is necessary for keeping a normal sinus rhythm during both aerobic and anaerobic exercise activity. Sarcalumenin is a calcium-binding glycoprotein composed of 473 acidic amino acids with a molecular weight of 160 kDa. Together with other luminal calcium buffer proteins, sarcalumenin plays an important role in regulation of calcium uptake and release during excitation-contraction coupling (ECC) in muscle fibers.
https://en.wikipedia.org/wiki/Reverse-delete%20algorithm | The reverse-delete algorithm is an algorithm in graph theory used to obtain a minimum spanning tree from a given connected, edge-weighted graph. It first appeared in Kruskal (1956), but it should not be confused with Kruskal's algorithm, which appears in the same paper. If the graph is disconnected, this algorithm will find a minimum spanning tree for each disconnected part of the graph. The set of these minimum spanning trees is called a minimum spanning forest, which contains every vertex in the graph.
This algorithm is a greedy algorithm, making the locally best choice at each step. It is the reverse of Kruskal's algorithm, which is another greedy algorithm to find a minimum spanning tree. Kruskal’s algorithm starts with an empty graph and adds edges while the Reverse-Delete algorithm starts with the original graph and deletes edges from it. The algorithm works as follows:
Start with graph G, which contains a list of edges E.
Go through E in decreasing order of edge weights.
For each edge, check if deleting the edge will further disconnect the graph.
Perform any deletion that does not lead to additional disconnection.
Pseudocode
function ReverseDelete(edges[] E) is
    sort E in decreasing order of edge weights
    Define an index i ← 0
    while i < size(E) do
        Define edge ← E[i]
        delete edge from the graph
        if graph is not connected then
            restore edge to the graph
        i ← i + 1
    return the edges remaining in the graph
In the above the graph is the set of edges E with each edge containing a weight and connected vertices v1 and v2.
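A minimal Python sketch of the procedure above (my own illustration; it uses a simple DFS connectivity check rather than the dynamic-connectivity structures assumed by the running-time bound below):
from collections import defaultdict

def connected(vertices, edges):
    """DFS check that every vertex is reachable from an arbitrary start vertex."""
    adj = defaultdict(list)
    for u, v, _ in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u])
    return seen == vertices

def reverse_delete(vertices, edges):
    """edges are (u, v, weight) triples; returns the edges of a minimum spanning tree."""
    kept = sorted(edges, key=lambda e: e[2], reverse=True)
    for edge in list(kept):                # visit edges from heaviest to lightest
        kept.remove(edge)                  # tentatively delete the edge
        if not connected(vertices, kept):  # deletion disconnected the graph,
            kept.append(edge)              # so the edge belongs to the MST
    return kept

# Example: a square 1-2-3-4 with one diagonal; the heavy redundant edges go first.
V = {1, 2, 3, 4}
E = [(1, 2, 1), (2, 3, 2), (3, 4, 3), (4, 1, 4), (1, 3, 5)]
print(reverse_delete(V, E))  # [(3, 4, 3), (2, 3, 2), (1, 2, 1)]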
Example
In the following example green edges are being evaluated by the algorithm and red edges have been deleted.
Running time
The algorithm can be shown to run in O(E log V (log log V)³) time (using big-O notation), where E is the number of edges and V is the number of vertices. This bound is achieved as follows:
Sorting the edges by weight using a comparison sort takes O(E log E) time, which can be simplified to O(E log V) using the fac |
https://en.wikipedia.org/wiki/Polyanhydride | Polyanhydrides are a class of biodegradable polymers characterized by anhydride bonds that connect repeat units of the polymer backbone chain. Their main application is in the medical device and pharmaceutical industry. In vivo, polyanhydrides degrade into non-toxic diacid monomers that can be metabolized and eliminated from the body. Owing to their safe degradation products, polyanhydrides are considered to be biocompatible.
Applications
The characteristic anhydride bonds in polyanhydrides are water-labile (the polymer chain breaks apart at the anhydride bond). This results in two carboxylic acid groups which are easily metabolized and biocompatible.
Biodegradable polymers, such as polyanhydrides, are capable of releasing physically entrapped or encapsulated drugs by well-defined kinetics and are a growing area of medical research. Polyanhydrides have been investigated as an important material for the short-term release of drugs or bioactive agents. The rapid degradation and limited mechanical properties of polyanhydrides render them ideal as controlled drug delivery devices.
One example, Gliadel, is a device in clinical use for the treatment of brain cancer. This product is made of a polyanhydride wafer containing a chemotherapeutic agent. After removal of a cancerous brain tumor, the wafer is inserted into the brain releasing a chemotherapy agent at a controlled rate proportional to the degradation rate of the polymer. The localized treatment of chemotherapy protects the immune system from high levels of radiation.
Other applications of polyanhydrides include the use of unsaturated polyanhydrides in bone replacement, as well as polyanhydride copolymers as vehicles for vaccine delivery.
Classes
There are three main classes of polyanhydrides: aliphatic, unsaturated, and aromatic. These classes are determined by examining their R groups (the chemistry of the molecule between the anhydride bonds).
Aliphatic polyanhydrides consist of R groups |
https://en.wikipedia.org/wiki/RNA%20activation | RNA activation (RNAa) is a small RNA-guided and Argonaute (Ago)-dependent gene regulation phenomenon in which promoter-targeted short double-stranded RNAs (dsRNAs) induce target gene expression at the transcriptional/epigenetic level. RNAa was first reported in a 2006 PNAS paper by Li et al. who also coined the term "RNAa" as a contrast to RNA interference (RNAi) to describe such gene activation phenomenon. dsRNAs that trigger RNAa have been termed small activating RNA (saRNA). Since the initial discovery of RNAa in human cells, many other groups have made similar observations in different mammalian species including human, non-human primates, rat and mice, plant and C. elegans, suggesting that RNAa is an evolutionarily conserved mechanism of gene regulation.
RNAa can be generally classified into two categories: exogenous and endogenous. Exogenous RNAa is triggered by artificially designed saRNAs which target non-coding sequences such as the promoter and the 3’ terminus of a gene; these saRNAs can be chemically synthesized or expressed as short hairpin RNA (shRNA). In endogenous RNAa, by contrast, upregulation of gene expression is guided by naturally occurring endogenous small RNAs, such as miRNA in mammalian cells and C. elegans, and 22G RNA in C. elegans.
Mechanism
The molecular mechanism of RNAa is not fully understood. Similar to RNAi, it has been shown that mammalian RNAa requires members of the Ago clade of Argonaute proteins, particularly Ago2, but possesses kinetics distinct from RNAi. In contrast to RNAi, promoter-targeted saRNAs induce prolonged activation of gene expression associated with epigenetic changes. It is currently suggested that saRNAs are first loaded and processed by an Ago protein to form an Ago-RNA complex which is then guided by the RNA to its promoter target. The target can be a non-coding transcript overlapping the promoter or the chromosomal DNA. The RNA-loaded Ago then recruits other proteins such as RHA, also known as nuclear D |
https://en.wikipedia.org/wiki/Manual%20testing | Compare with Test automation.
Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user whereby they use most of the application's features to ensure correct behaviour. To guarantee completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases.
Overview
A key step in the process is testing the software for correct behavior prior to release to end users.
For small-scale engineering efforts (including prototypes), ad hoc testing may be sufficient. With this informal approach, the tester does not follow any rigorous testing procedure and simply performs testing without planning or documentation. Conversely, exploratory testing, which involves simultaneous learning, test design and test execution, explores the user interface of the application using as many of its features as possible, using information gained in prior tests to intuitively derive additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester, because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal approach is to gain an intuitive insight into how it feels to use the application.
Large scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and generally involves the following steps.
Choose a high level test plan where a general methodology is chosen, and resources such as people, computers, and software licenses are identified and acquired.
Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes.
Assign the test cases to testers, who manually follow the steps and record the results.
Author a test report, det |
https://en.wikipedia.org/wiki/Hybrid%20ternary%20code | In telecommunications, the hybrid (H-) ternary line code is a line code that operates on a hybrid principle combining the binary non-return-to-zero-level (NRZL) and the polar return-to-zero (RZ) codes.
The H-ternary code has three levels for signal representation; these are positive (+), zero (0), and negative (−). These three levels are represented by three states. The state of the line code could be in any one of these three states. A transition takes place to the next state as a result of a binary input 1 or 0 and the encoder's present output state. The encoding procedure is as follows.
In general, the encoder outputs + level for a binary 1 input and a − level for a binary 0 input.
However, if this would result in the same output level as the previous bit time, a 0 level is output instead.
Initially, when the first bit arrives at the encoder input, the encoder's present output state is assumed to be the 0 level.
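A small Python sketch of these three rules (my own illustration; '+', '0' and '-' denote the three line levels):
def h_ternary_encode(bits):
    """Encode a bit string into H-ternary levels (+, 0, -)."""
    out = []
    prev = '0'  # the encoder starts in the 0 state
    for b in bits:
        level = '+' if b == '1' else '-'   # rule 1: 1 -> +, 0 -> -
        if level == prev:                  # rule 2: never repeat the previous level;
            level = '0'                    #         output the 0 level instead
        out.append(level)
        prev = level
    return ''.join(out)

print(h_ternary_encode("1100101"))  # prints '+0-0+-+'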
The new line-coding scheme violates the encoding rule of NRZ-L when a sequence of 1s or 0s arrives and hence, it overcomes some of their deficiencies. During the violation period for a run of 1s or 0s, it operates on the same encoding rule of the polar RZ but with pulse occupancy of full period.
NRZ-L and polar RZ codes have deficiencies compared to the proposed H-ternary encoding scheme. NRZ-L code lacks sufficient timing information when the binary signal remains at one level during a long run of either 1s or 0s. This has a direct influence on synchronising the receiver clock with that of the transmitter and, as a result, has an impact on the detection of the received digital signal.
The H-ternary code has also timing superiority compared to similar ternary codes. Other ternary line code such as alternate mark inversion (AMI) also lacks the timing information when a run of zeros needs to be transmitted. This drawback is partly overcome by its modified version the high density bipolar with three zeros substitution (HDB3).
On the other hand, the new code has a smaller bandwidt |
https://en.wikipedia.org/wiki/Coincidence%20counting%20%28physics%29 | In quantum physics, coincidence counting is used in experiments testing particle non-locality and quantum entanglement. In these experiments two or more particles are created from the same initial packet of energy, inexorably linking/entangling their physical properties. Separate particle detectors measure the quantum states of each particle and send the resulting signal to a coincidence counter. In any experiment studying entanglement, the entangled particles are vastly outnumbered by non-entangled particles which are also detected; patternless noise that drowns out the entangled signal. In a two detector system, a coincidence counter alleviates this problem by only recording detection signals that strike both detectors simultaneously (or more accurately, recording only signals that arrive at both detectors and correlate to the same emission time). This ensures that the data represents only entangled particles.
However, since no detector/counter circuit has infinitely precise temporal resolution (due both to limitations in the electronics and the laws of the Universe itself), detections must be sorted into time bins (detection windows equivalent to the temporal resolution of the system). Detections in the same bin appear to occur at the same time because their individual detection times cannot be resolved any further. Thus in a two detector system, two unrelated, non-entangled particles may randomly strike both detectors, get sorted into the same time bin, and create a false-coincidence that adds noise to the signal. This limits coincidence counters to improving the signal-to-noise ratio to the extent that the quantum behavior can be studied, without removing the noise completely.
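To illustrate the false-coincidence problem quantitatively (an illustrative Monte Carlo, not from the article): for two uncorrelated detectors with singles rates R1 and R2 and a time-bin width τ, the expected accidental coincidence rate is approximately R1·R2·τ. The rates, window, and observation time below are assumed values:
import numpy as np

rng = np.random.default_rng(0)
T = 100.0            # total observation time in seconds (assumed)
r1, r2 = 3e4, 3e4    # uncorrelated singles rates, counts/s (assumed)
tau = 1e-9           # coincidence window / time-bin width in seconds (assumed)

# Sort independent, unrelated detection times into time bins of width tau;
# any bin occupied in both detectors registers as one false coincidence.
bins1 = np.unique((rng.uniform(0, T, int(r1 * T)) / tau).astype(np.int64))
bins2 = np.unique((rng.uniform(0, T, int(r2 * T)) / tau).astype(np.int64))
accidentals = np.intersect1d(bins1, bins2).size

print("measured :", accidentals / T, "accidental coincidences/s")
print("predicted:", r1 * r2 * tau, "accidental coincidences/s (R1*R2*tau)")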
Every experiment to date that has been used to calculate Bell's inequalities, perform a quantum eraser, or conduct any experiment utilizing quantum entanglement as an information channel has only been possible through the use of coincidence counters. This unavoidably prevents superl |
https://en.wikipedia.org/wiki/Spongivore | A spongivore is an animal anatomically and physiologically adapted to eating animals of the phylum Porifera, commonly called sea sponges, for the main component of its diet. As a result of their diet, spongivore animals like the hawksbill turtle have developed a sharp, narrow, bird-like beak that allows them to reach within crevices on the reef to obtain sponges.
Examples
The hawksbill turtle is one of the few animals known to feed primarily on sponges. It is the only known spongivorous reptile. Sponges of various select species constitute up to 95% of the diets of Caribbean hawksbill turtle populations.
Pomacanthus imperator, the emperor angelfish; Lactophrys bicaudalis, the spotted trunkfish; and Stephanolepis hispidus, the planehead filefish are known spongivorous coral reef fish. The rock beauty Holocanthus tricolor is also spongivorous, with sponges making up 96% of their diet.
Certain species of nudibranchs are known to feed selectively on specific species of sponges.
Attacks and counter-attacks
Spongivore offense
The many defenses displayed by sponges mean that spongivores need to learn skills to overcome these defenses to obtain their food. These skills allow spongivores to increase their feeding on and use of sponges. Spongivores have three primary strategies for dealing with sponge defenses: choosing sponges by colour, tolerating secondary metabolites, and brain development for memory.
Colour guides which sponge a spongivore chooses to eat. A spongivore bites a small sample of a sponge and, if unharmed, continues eating that specific sponge and then moves on to another sponge of the same colour.
Spongivores have adapted to be able to handle the secondary metabolites that sponges have. Therefore, spongivores are able to consume a variety of sponges without getting harmed.
Spongivores also have enough brain development to be able to remember the same species of sponge it has eaten in the |
https://en.wikipedia.org/wiki/Microscopic%20traffic%20flow%20model | Microscopic traffic flow models are a class of scientific models of vehicular traffic dynamics.
In contrast to macroscopic models, microscopic traffic flow models simulate single vehicle-driver units, so the dynamic variables of the models represent microscopic properties like the position and velocity of single vehicles.
Car-following models
Also known as time-continuous models, all car-following models have in common that they are defined by ordinary differential equations describing the complete dynamics of the vehicles' positions $x_\alpha$ and velocities $v_\alpha$. It is assumed that the input stimuli of the drivers are restricted to their own velocity $v_\alpha$, the net distance (bumper-to-bumper distance) $s_\alpha = x_{\alpha-1} - x_\alpha - l_{\alpha-1}$ to the leading vehicle $\alpha-1$ (where $l_{\alpha-1}$ denotes the vehicle length), and the velocity $v_{\alpha-1}$ of the leading vehicle. The equation of motion of each vehicle is characterized by an acceleration function that depends on those input stimuli:
$\ddot{x}_\alpha(t) = \dot{v}_\alpha(t) = F\left(v_\alpha(t), s_\alpha(t), v_{\alpha-1}(t)\right)$
In general, the driving behavior of a single driver-vehicle unit $\alpha$ might not merely depend on the immediate leader $\alpha-1$ but on the vehicles in front. The equation of motion in this more generalized form reads:
$\dot{v}_\alpha(t) = f\left(x_\alpha, v_\alpha, x_{\alpha-1}, v_{\alpha-1}, \ldots\right)$
Examples of car-following models
Optimal velocity model (OVM)
Velocity difference model (VDIFF)
Wiedemann model (1974)
Gipps' model (Gipps, 1981)
Intelligent driver model (IDM, 1999)
DNN based anticipatory driving model (DDS, 2021)
Cellular automaton models
Cellular automaton (CA) models use integer variables to describe the dynamical properties of the system. The road is divided into sections of a certain length $\Delta x$ and the time is discretized to steps of $\Delta t$. Each road section can either be occupied by a vehicle or empty and the dynamics are given by update rules of the form:
$v_\alpha^{t+1} = f\left(s_\alpha^t, v_\alpha^t, v_{\alpha-1}^t, \ldots\right)$
$x_\alpha^{t+1} = x_\alpha^t + v_\alpha^{t+1}$
(the simulation time $t$ is measured in units of $\Delta t$ and the vehicle positions $x_\alpha$ in units of $\Delta x$).
The time scale is typically given by the reaction time of a human driver, $\Delta t = 1\,\mathrm{s}$. With $\Delta t$ fixed, the length of the road sections determines the granularity of the model. At a complete standstill, the |
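As a concrete illustration of such update rules (a minimal sketch of the well-known Nagel-Schreckenberg cellular automaton; the cell count, car count, speed limit, and dawdling probability are assumed values):
import random

random.seed(1)
N_CELLS, N_CARS, V_MAX, P_DAWDLE = 100, 20, 5, 0.3   # assumed parameters

pos = sorted(random.sample(range(N_CELLS), N_CARS))  # cell index of each car
vel = [0] * N_CARS

for step in range(100):                    # one step ~ one driver reaction time
    for i in range(N_CARS):
        gap = (pos[(i + 1) % N_CARS] - pos[i] - 1) % N_CELLS  # empty cells ahead
        v = min(vel[i] + 1, V_MAX)         # 1. accelerate towards the speed limit
        v = min(v, gap)                    # 2. brake to avoid hitting the leader
        if v > 0 and random.random() < P_DAWDLE:
            v -= 1                         # 3. random dawdling
        vel[i] = v
    pos = [(p + v) % N_CELLS for p, v in zip(pos, vel)]       # 4. move (ring road)

print("mean speed:", sum(vel) / N_CARS, "cells per time step")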
https://en.wikipedia.org/wiki/Quadratically%20constrained%20quadratic%20program | In mathematical optimization, a quadratically constrained quadratic program (QCQP) is an optimization problem in which both the objective function and the constraints are quadratic functions. It has the form
$\begin{align} \text{minimize} \quad & \tfrac{1}{2} x^\mathrm{T} P_0 x + q_0^\mathrm{T} x \\ \text{subject to} \quad & \tfrac{1}{2} x^\mathrm{T} P_i x + q_i^\mathrm{T} x + r_i \leq 0 \quad \text{for } i = 1, \ldots, m, \\ & Ax = b, \end{align}$
where P0, …, Pm are n-by-n matrices and x ∈ Rn is the optimization variable.
If P0, …, Pm are all positive semidefinite, then the problem is convex. If these matrices are neither positive nor negative semidefinite, the problem is non-convex. If P1, … ,Pm are all zero, then the constraints are in fact linear and the problem is a quadratic program.
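As an illustration (not from the article), a small convex QCQP can be modeled and solved with the cvxpy library; the random positive semidefinite data below is assumed for the sketch:
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 3

def random_psd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T  # A A^T is always positive semidefinite

P0, q0 = random_psd(n), rng.standard_normal(n)
P1, q1, r1 = random_psd(n), rng.standard_normal(n), -1.0

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, P0) + q0 @ x)
constraints = [0.5 * cp.quad_form(x, P1) + q1 @ x + r1 <= 0]
problem = cp.Problem(objective, constraints)
problem.solve()  # convex because P0 and P1 are positive semidefinite

print("optimal value:", problem.value)
print("optimal x:", x.value)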
Hardness
Solving the general case is an NP-hard problem. To see this, note that the two constraints x1(x1 − 1) ≤ 0 and x1(x1 − 1) ≥ 0 are equivalent to the constraint x1(x1 − 1) = 0, which is in turn equivalent to the constraint x1 ∈ {0, 1}. Hence, any 0–1 integer program (in which all variables have to be either 0 or 1) can be formulated as a quadratically constrained quadratic program. Since 0–1 integer programming is NP-hard in general, QCQP is also NP-hard.
Relaxation
There are two main relaxations of QCQP: using semidefinite programming (SDP), and using the reformulation-linearization technique (RLT). For some classes of QCQP problems (precisely, QCQPs with zero diagonal elements in the data matrices), second-order cone programming (SOCP) and linear programming (LP) relaxations providing the same objective value as the SDP relaxation are available.
Nonconvex QCQPs with non-positive off-diagonal elements can be exactly solved by the SDP or SOCP relaxations, and there are polynomial-time-checkable sufficient conditions for SDP relaxations of general QCQPs to be exact. Moreover, it was shown that a class of random general QCQPs has exact semidefinite relaxations with high probability as long as the number of constraints grows no faster than a fixed polynomial in the number of variables.
Semidefinite programming
When P0, …, Pm are all positive-definite matrices, the problem is convex and can be readily solved |
https://en.wikipedia.org/wiki/Ratio%20distribution | A ratio distribution (also known as a quotient distribution) is a probability distribution constructed as the distribution of the ratio of random variables having two other known distributions.
Given two (usually independent) random variables X and Y, the distribution of the random variable Z that is formed as the ratio Z = X/Y is a ratio distribution.
An example is the Cauchy distribution (also called the normal ratio distribution), which comes about as the ratio of two normally distributed variables with zero mean.
Two other distributions often used in test-statistics are also ratio distributions:
the t-distribution arises from a Gaussian random variable divided by an independent chi-distributed random variable,
while the F-distribution originates from the ratio of two independent chi-squared distributed random variables.
More general ratio distributions have been considered in the literature.
Often the ratio distributions are heavy-tailed, and it may be difficult to work with such distributions and develop an associated statistical test.
A method based on the median has been suggested as a "work-around".
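A quick Monte Carlo check (added for illustration, not from the article) of the normal-ratio fact above: a standard Cauchy variable satisfies P(|Z| <= 1) = 1/2, so the empirical fraction for the ratio of two independent standard normals should be near 0.5. The seed and sample size are arbitrary:
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
z = x / y  # ratio of two independent zero-mean normals

# For a standard Cauchy, P(|Z| <= 1) = (arctan(1) - arctan(-1)) / pi = 0.5
print("empirical P(|Z| <= 1):", np.mean(np.abs(z) <= 1))  # ~0.5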
Algebra of random variables
The ratio is one type of algebra for random variables:
Related to the ratio distribution are the product distribution, sum distribution and difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios.
Many of these distributions are described in Melvin D. Springer's book from 1979 The Algebra of Random Variables.
The algebraic rules known with ordinary numbers do not apply for the algebra of random variables.
For example, if a product is C = AB and a ratio is D=C/A it does not necessarily mean that the distributions of D and B are the same.
Indeed, a peculiar effect is seen for the Cauchy distribution: The product and the ratio of two independent Cauchy distributions (with the same scale parameter and the location parameter set to zero) will give the same distribution.
This be |
https://en.wikipedia.org/wiki/MDMX | The MDMX (MIPS Digital Media eXtension), also known as MaDMaX, is an extension to the MIPS architecture released in October 1996 at the Microprocessor Forum.
History
MDMX was developed to accelerate multimedia applications that were becoming more popular and common in the 1990s on RISC and CISC systems.
Functionality
MDMX defines a new set of thirty-two 64-bit registers called media registers, which are mapped onto the existing floating-point registers to save hardware; and a 192-bit extended product accumulator.
The media registers hold two new data types: octo byte (OB) and quad half (QH), which contain eight byte (8-bit) and four halfword (16-bit) integers, respectively.
Variants of existing instructions operate on these data types, performing saturating arithmetic, logical, shift, compare and align operations.
MDMX also introduced 19 instructions for permutation, manipulating bytes in registers, performing arithmetic with the accumulator, and accumulator access. |
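The saturating arithmetic mentioned above clamps results at the representable limits instead of wrapping around. The following plain-Python sketch (an illustration of the concept, not actual MDMX assembly or intrinsics) shows a saturating add across the eight unsigned byte lanes of an octo byte value:
def saturating_add_u8(a, b):
    """Add two 8-lane vectors of unsigned bytes, clamping at 255 instead of wrapping."""
    return [min(x + y, 255) for x, y in zip(a, b)]

a = [10, 200, 255, 0, 100, 150, 30, 250]
b = [20, 100, 1, 5, 200, 150, 40, 10]
print(saturating_add_u8(a, b))  # [30, 255, 255, 5, 255, 255, 70, 255]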
https://en.wikipedia.org/wiki/Japanese%20wolf | The Japanese wolf (, , or , [see below]; Canis lupus hodophilax), also known as the Honshū wolf, is an extinct subspecies of the gray wolf that was once endemic to the islands of Honshū, Shikoku and Kyūshū in the Japanese archipelago.
It was one of two subspecies that were once found in the Japanese archipelago, the other being the Hokkaido wolf. Phylogenetic evidence indicates that the Japanese wolf was the last surviving wild member of the Pleistocene wolf lineage (in contrast to the Hokkaido wolf, which belonged to the lineage of the modern-day gray wolf), and may have been the closest wild relative of the domestic dog. Many dog breeds originating from Japan also have Japanese wolf DNA from past hybridization.
Despite long being revered in Japan, the introduction of rabies and canine distemper to Japan led to the decimation of the population, and policies enacted during the Meiji Restoration led to the persecution and eventual total extermination of the subspecies by the early 20th century. Well-documented observations of similar canids have been made throughout the 20th and 21st centuries, and have been suggested to be surviving wolves. However, due to environmental and behavioral factors, doubts persist over their identity.
Etymology
C. hodophilax's binomial name derives from the Greek hodos (path) and phylax (guardian), in reference to Okuri-inu from Japanese folklore, which portrayed wolves or weasels as the protectors of travelers.
There had been numerous other aliases referring to Japanese wolf, and the name ōkami (wolf) is derived from the Old Japanese öpö-kamï, meaning either "great-spirit" where wild animals were associated with the mountain spirit Yama-no-kami in the Shinto religion, or "big dog", or "big bite" (ōkami or ōkame), and "big mouth"; Ōkuchi-no-Makami (Japanese) was an old and deified alias for Japanese wolf where it was both worshipped and feared, and it meant "a true god with big-mouth" based on several theories; either referring to wolf's |
https://en.wikipedia.org/wiki/Surf%20zone | As ocean surface waves approach shore, they get taller and break, forming the foamy, bubbly surface called surf. The region of breaking waves defines the surf zone, or breaker zone. After breaking in the surf zone, the waves (now reduced in height) continue to move in, and they run up onto the sloping front of the beach, forming an uprush of water called swash. The water then runs back again as backwash. The nearshore zone where wave water comes onto the beach is the surf zone. The water in the surf zone is shallow, which causes the waves to be unstable.
Animal life
The animals that often are found living in the surf zone are crabs, clams, and snails. Surf clams and mole crabs are two species that stand out as inhabitants of the surf zone. Both of these animals are very fast burrowers. The surf clam, also known as the variable coquina, is a filter feeder that uses its gills to filter microalgae, tiny zooplankton, and small particulates out of seawater. The mole crab is a suspension feeder that eats by capturing zooplankton with its antennae. All of these creatures burrow down into the sand to escape from being pulled into the ocean from the tides and waves. They also burrow themselves in the sand to protect themselves from predators. The surf zone is full of nutrients, oxygen, and sunlight which leaves the zone very productive with animal life.
Rip currents
The surf zone can contain dangerous rip currents: strong local currents which flow offshore and pose a threat to swimmers. Rip-current outlooks use the following set of qualifications:
Low-risk rip currents: Wind and/or wave conditions are not expected to support the development of rip currents; however, rip currents can sometimes occur, especially in the vicinity of jetties and piers. Know how to swim and heed the advice of lifeguards.
Moderate-risk rip currents: Wind and/or wave conditions support stronger or more frequent rip currents. Only experienced surf swimmers should enter the |
https://en.wikipedia.org/wiki/Mandat%20International | Mandat International, also known as the International Cooperation Foundation, is an international non-governmental organization based in Geneva, Switzerland with consultative status with the United Nations Economic and Social Council, the UNDPI, and the United Nations Conference on Trade and Development.
History
Mandat International was established in 1995.
Description
Mandat International is a member of the Internet of Things International Forum and organizes the annual IoT Week event. |
https://en.wikipedia.org/wiki/Dental%20radiography | Dental radiographs, commonly known as X-rays, are radiographs used to diagnose hidden dental structures, malignant or benign masses, bone loss, and cavities.
A radiographic image is formed by a controlled burst of X-ray radiation which penetrates oral structures at different levels, depending on varying anatomical densities, before striking the film or sensor. Teeth appear lighter because less radiation penetrates them to reach the film. Dental caries, infections and other changes in the bone density, and the periodontal ligament, appear darker because X-rays readily penetrate these less dense structures. Dental restorations (fillings, crowns) may appear lighter or darker, depending on the density of the material.
The dosage of X-ray radiation received by a dental patient is typically small (around 0.150 mSv for a full mouth series), equivalent to a few days' worth of background environmental radiation exposure, or similar to the dose received during a cross-country airplane flight (concentrated into one short burst aimed at a small area). Incidental exposure is further reduced by the use of a lead shield, lead apron, sometimes with a lead thyroid collar. Technician exposure is reduced by stepping out of the room, or behind adequate shielding material, when the X-ray source is activated.
Once photographic film has been exposed to X-ray radiation, it needs to be developed, traditionally using a process where the film is exposed to a series of chemicals in a dark room, as the films are sensitive to normal light. This can be a time-consuming process, and incorrect exposures or mistakes in the development process can necessitate retakes, exposing the patient to additional radiation. Digital X-rays, which replace the film with an electronic sensor, address some of these issues, and are becoming widely used in dentistry as the technology evolves. They may require less radiation and are processed much more quickly than conventional radiographic films, often instantly vi |
https://en.wikipedia.org/wiki/Algebraic%20cycle | In mathematics, an algebraic cycle on an algebraic variety V is a formal linear combination of subvarieties of V. These are the part of the algebraic topology of V that is directly accessible by algebraic methods. Understanding the algebraic cycles on a variety can give profound insights into the structure of the variety.
The most trivial case is codimension zero cycles, which are linear combinations of the irreducible components of the variety. The first non-trivial case is of codimension one subvarieties, called divisors. The earliest work on algebraic cycles focused on the case of divisors, particularly divisors on algebraic curves. Divisors on algebraic curves are formal linear combinations of points on the curve. Classical work on algebraic curves related these to intrinsic data, such as the regular differentials on a compact Riemann surface, and to extrinsic properties, such as embeddings of the curve into projective space.
While divisors on higher-dimensional varieties continue to play an important role in determining the structure of the variety, on varieties of dimension two or more there are also higher codimension cycles to consider. The behavior of these cycles is strikingly different from that of divisors. For example, every curve has a constant N such that every divisor of degree zero is linearly equivalent to a difference of two effective divisors of degree at most N. David Mumford proved that, on a smooth complete complex algebraic surface S with positive geometric genus, the analogous statement for the group of rational equivalence classes of codimension two cycles in S is false. The hypothesis that the geometric genus is positive essentially means (by the Lefschetz theorem on (1,1)-classes) that the cohomology group H2(S) contains transcendental information, and in effect Mumford's theorem implies that, despite having a purely algebraic definition, the group of codimension two cycle classes shares transcendental information with H2(S). Mumford's theorem has since been greatly general |
https://en.wikipedia.org/wiki/Dragon%20Knight%20II | Dragon Knight II (ドラゴンナイトII) is a fantasy-themed eroge role-playing video game in the Dragon Knight franchise that was originally developed and published by ELF Corporation in Japan in 1990–1991 as the first sequel to the original Dragon Knight game from 1989. The game is an erotic dungeon crawler in which the young warrior Takeru fights to lift a witch's curse that has turned girls into monsters.
Following the commercial and critical success of Dragon Knight II, ELF followed up with Dragon Knight III / Knights of Xentar in 1991. A censored remake of Dragon Knight II was published by NEC Avenue in 1992.
Gameplay
Dragon Knight II is available only in Japanese. Its gameplay system has not changed much since the first Dragon Knight game: it is still a standard dungeon crawler with a first-person perspective and 2D graphics. The player spends most of the time navigating dungeon-like mazes and fighting enemies. As progress is made, the mazes become more complicated, but as in the first game the player is aided by a mini-map with grid coordinates. The player can also visit shops and converse with non-hostile NPCs.
The game starts with just one player character, Takeru, but two other characters join up later on. The game's battle system has also undergone minor changes. It still features turn-based battles that are mostly randomly generated, but the fights are better balanced than in the first game. The player can attack, defend, and use spells and items to deal with various types of female enemies (berserker, banshee, catgirl, centaur, elf, harpy, ninja, mummy, werewolf, and so on), who are fought only one at a time. These enemies are actually girls who have been transformed into monsters, and whenever the player character fights off one of them, the subdued enemy loses her clothing. Later, when the enemies revert to their normal selves, they gratefully offer to have sex with the protagonist in a cutscene (ce |
https://en.wikipedia.org/wiki/PAM%20%28cooking%20oil%29 | PAM is a cooking spray currently owned and distributed by ConAgra Foods. Its main ingredient is canola oil.
PAM was introduced in 1959 by Leon Rubin who, with Arthur Meyerhoff, started PAM Products, Inc. to market the spray. The name PAM is an acronym for Product of Arthur Meyerhoff. In 1971, Gibraltar Industries merged with American Home Products (now Wyeth) and became part of the Boyle-Midway portfolio. When Reckitt & Colman (now Reckitt Benckiser) acquired Boyle-Midway from American Home Products in 1990, PAM became part of the American Home Foods subsidiary. In 1996, AHF was acquired from American Home Products by Hicks, Muse, Tate & Furst and C. Dean Metropoulos & Company, becoming International Home Foods, which in turn was acquired by ConAgra in 2000.
PAM is marketed in various flavors, such as butter and olive oil, meant to impart the flavor of cooking with those ingredients. Flavors such as lemon or garlic are also offered. PAM also markets high-temperature sprays formulated for use when grilling, and one containing flour suitable for dry-cooking as in baking. PAM is marketed as a nominally zero-calorie alternative to other oils used as lubricants in cooking methods such as sautéing or baking (US regulations allow food products to claim to be zero-calorie if they contain fewer than 5 calories per Reference Amount Customarily Consumed and per labeled serving; the labeled serving is a brief spray of only 0.3 g, containing about 2 calories). Similar sprays are offered by other manufacturers. |
https://en.wikipedia.org/wiki/Mendelian%20error | A Mendelian error in the genetic analysis of a species describes an allele in an individual which could not have been received from either of its biological parents by Mendelian inheritance. Inheritance is defined by a set of related individuals who have the same or similar phenotypes for a locus of a particular gene. A Mendelian error means that the very structure of the inheritance as defined by analysis of the parental genes is incorrect: one parent of one individual is not actually the parent indicated; therefore the assumption is that the parental information is incorrect.
Possible explanations for Mendelian errors are genotyping errors, erroneous assignment of the individuals as relatives, or de novo mutations. A Mendelian error is established by demonstrating the existence of a trait which is inconsistent with every possible combination of genotypes compatible with the individual. This method of determination requires pedigree checking, and establishing a contradiction between phenotype and pedigree is an NP-complete problem. Genetic inconsistencies which do not correspond to this definition are non-Mendelian errors.
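As a minimal illustration of the consistency rule for a single biallelic locus (a hypothetical helper, not from any particular genetics package): a child's genotype is Mendelian-consistent only if one allele can come from the mother and the other from the father.

```python
def mendelian_consistent(child, mother, father):
    # Genotypes are pairs of alleles, e.g. ('A', 'G').
    # Consistency: one child allele obtainable from each parent.
    a, b = child
    return (a in mother and b in father) or (a in father and b in mother)

# A child ('A', 'A') with parents ('G', 'G') and ('A', 'G') is a Mendelian error,
# since the ('G', 'G') parent cannot supply an 'A' allele:
print(mendelian_consistent(('A', 'A'), ('G', 'G'), ('A', 'G')))  # False
```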
Statistical genetics analysis is used to detect these errors and to assess whether an individual carries a disease linked to a single gene. Examples of such single-gene diseases in humans are Huntington's disease and Marfan syndrome.
See also
Gregor Mendel
SNP genotyping
Footnotes
Mendelian error detection in complex pedigree using weighted constraint satisfaction techniques |
https://en.wikipedia.org/wiki/Pyriform%20sinus | The pyriform sinus (also piriform recess, piriform sinus, piriform fossa, or smuggler's fossa) is a small recess on either side of the laryngeal inlet. It is bounded medially by the aryepiglottic fold, and laterally by the thyroid cartilage and thyrohyoid membrane. The fossae are involved in speech.
Etymology
The term "pyriform," which means "pear-shaped," is also sometimes spelled "piriform".
The term smuggler's fossa comes from its use for smuggling small items.
Anatomy
Relations
Deep to the mucous membrane of the pyriform fossa lie the recurrent laryngeal nerve as well as the internal laryngeal nerve, a branch of the superior laryngeal nerve. The internal laryngeal nerve supplies sensation to the area, and it may become damaged if the mucous membrane is inadvertently punctured. The pyriform sinus is a subsite of the hypopharynx. This distinction is important for head and neck cancer staging and treatment.
Clinical significance
This sinus is a common place for food particles to become trapped; if foreign material becomes lodged in the piriform fossa of an infant, it may be retrieved nonsurgically. If the area is injured (e.g., by a fish bone), it can give the sensation of food stuck in the subject's throat.
Remnants of the pharyngeal pouches III and IV may extend to the piriform sinus as sinus tracts which are sometimes imprecisely called "fistulas". This can result in acute infectious thyroiditis, which is more common on the left side of the neck. |
https://en.wikipedia.org/wiki/Plica%20semilunaris%20of%20the%20fauces | The plica semilunaris is the thin upper part of the fold of mucous membrane in the supratonsillar fossa that reaches across between the two arches. A separate fold is called the plica triangularis which runs inferoposteriorly from the posterior surface of the palatoglossal arch to cover the inferior portion of the tonsil. |
https://en.wikipedia.org/wiki/Tank%20blanketing | Tank blanketing, also called gas sealing or tank padding, is the process of applying a gas to the empty space in a storage container. The term storage container here refers to any container that is used to store products, regardless of its size. Though tank blanketing is used for a variety of reasons, it typically involves using a buffer gas to protect products inside the storage container. A few of the benefits of blanketing include a longer life of the product in the container, reduced hazards, and longer equipment life cycles.
Methods
In 1970, Appalachian Controls Environmental (ACE) introduced the world's first tank blanketing valve. There are now many ready-made systems available for purchase from a variety of process equipment companies, and it is also possible to assemble a custom system from separate components. Regardless of which method is used, the basic requirements are the same: there must be a way of allowing the blanketing gas into the system, and a way to vent the gas should the pressure get too high.
Since ACE introduced its valve, many companies have engineered their own versions. Though many of the products available vary in features and applicability, the fundamental design is the same. When the pressure inside the container drops below a set point, a valve opens and allows the blanketing gas to enter. Once the pressure reaches the set point, the valve closes. As a safety feature, many systems include a pressure vent that opens when the pressure inside exceeds a maximum pressure set point. This helps to prevent the container from rupturing due to high pressure. Since most blanketing gas sources will provide gas at a much higher than desired pressure, a blanketing system will also use a pressure-reducing valve to decrease the inlet pressure to the tank.
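A minimal sketch of that valve logic (hypothetical function and thresholds; real systems implement this mechanically in the valves themselves and add deadbands to avoid chattering):

```python
def blanket_valve_state(pressure, setpoint, vent_limit):
    # All pressures share one unit, e.g. inches of water column.
    inlet_open = pressure < setpoint    # admit blanketing gas when pressure sags
    vent_open = pressure > vent_limit   # relieve pressure before the tank ruptures
    return inlet_open, vent_open

print(blanket_valve_state(1.0, 2.0, 6.0))  # (True, False): tank is taking in gas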
Although it varies from application to application, blanketing systems usually operate at a slightly higher than atmospheric pressure (a few inches of water colu |
https://en.wikipedia.org/wiki/Model-specific%20register | A model-specific register (MSR) is any of various control registers in the x86 system architecture used for debugging, program execution tracing, computer performance monitoring, and toggling certain CPU features.
History
With the introduction of the 80386 processor, Intel began introducing "experimental" features that would not necessarily be present in future versions of the processor. The first of these were two "test registers" (TR6 and TR7) that enabled testing of the processor's translation lookaside buffer (TLB); a special variant of the MOV instruction allowed moving to and from the test registers. Three additional test registers followed in the 80486 (TR3–TR5) that enabled testing of the processor's caches for code and data. None of these five registers were implemented in the subsequent Pentium processor; the special variant of MOV generated an invalid opcode exception.
With the introduction of the Pentium processor, Intel provided a pair of instructions (RDMSR and WRMSR) to access current and future "model-specific registers", as well as the CPUID instruction to determine which features are present on a particular model. Many of these registers have proven useful enough to be retained. Intel has classified these as architectural model-specific registers and has committed to their inclusion in future product lines.
Using MSRs
Reading and writing to these registers is handled by the rdmsr and wrmsr instructions, respectively. As these are privileged instructions, they can be executed only by the operating system. The Linux msr kernel module creates a pseudo file "/dev/cpu/x/msr" (with a unique x for each processor or processor core). A user with permission to read and/or write to this file can use the file I/O API to access these registers. The msr-tools package provides a reference implementation.
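As an illustration of that file interface (a sketch assuming the msr module is loaded and the caller has read permission on the device file; the MSR index is used as the file offset):

```python
import os
import struct

def read_msr(msr_index, cpu=0):
    # Each 64-bit MSR is read as 8 bytes at the offset equal to its index.
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
    try:
        return struct.unpack("<Q", os.pread(fd, 8, msr_index))[0]
    finally:
        os.close(fd)

print(hex(read_msr(0x10)))  # 0x10 is IA32_TIME_STAMP_COUNTER on x86
```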
Documentation regarding which MSRs a certain processor implementation supports is usually found in the processor documentation of the CPU vendor. Examples for rather well |
https://en.wikipedia.org/wiki/Tonsillar%20fossa | The tonsillar fossa (or tonsillar sinus) is a space delineated by the triangular fold (plica triangularis) of the palatoglossal and palatopharyngeal arches within the lateral wall of the oral cavity.
In many cases, however, this sinus is obliterated by its walls becoming adherent to the palatine tonsils. |
https://en.wikipedia.org/wiki/Kaplansky%20density%20theorem | In the theory of von Neumann algebras, the Kaplansky density theorem, due to Irving Kaplansky, is a fundamental approximation theorem. The importance and ubiquity of this technical tool led Gert Pedersen to comment in one of his books that,
The density theorem is Kaplansky's great gift to mankind. It can be used every day, and twice on Sundays.
Formal statement
Let K− denote the strong-operator closure of a set K in B(H), the set of bounded operators on the Hilbert space H, and let (K)1 denote the intersection of K with the unit ball of B(H).
Kaplansky density theorem. If A is a self-adjoint algebra of operators in B(H), then each element in the unit ball of the strong-operator closure of A is in the strong-operator closure of the unit ball of A. In other words, ((A)1)− = (A−)1. If h is a self-adjoint operator in (A−)1, then h is in the strong-operator closure of the set of self-adjoint operators in (A)1.
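In the notation defined above, the first assertion can be typeset as follows (a restatement of the line above, with the bar denoting strong-operator closure):

```latex
\[
  \overline{(A)_1} \;=\; \bigl(\overline{A}\bigr)_1
\]
% i.e. the unit ball of A is strong-operator dense in the unit ball of the closure of A.
```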
The Kaplansky density theorem can be used to formulate some approximations with respect to the strong operator topology.
1) If h is a positive operator in (A−)1, then h is in the strong-operator closure of the set of self-adjoint operators in (A+)1, where A+ denotes the set of positive operators in A.
2) If A is a C*-algebra acting on the Hilbert space H and u is a unitary operator in A−, then u is in the strong-operator closure of the set of unitary operators in A.
In the density theorem and 1) above, the results also hold if one considers a ball of radius r > 0, instead of the unit ball.
Proof
The standard proof uses the fact that a bounded continuous real-valued function f is strong-operator continuous. In other words, for a net {aα} of self-adjoint operators in A, the continuous functional calculus a → f(a) satisfies f(aα) → f(a) in the strong operator topology. This shows that the self-adjoint part of the unit ball in A− can be approximated strongly by self-adjoint elements in A. A matrix computation in M2(A) considering the self-adjoint operator with entries 0 on the diagonal and a and a* at the ot |
https://en.wikipedia.org/wiki/Hyoglossal%20membrane | The hyoglossal membrane is a strong fibrous lamina, which connects the under surface of the root of the tongue to the body of the hyoid bone. It is characterized by a posterior widening of the lingual septum.
This membrane receives, in front, some of the fibers of the Genioglossi. Inferior fibers are attached to the hyoglossal membrane and to the upper anterior midline of the body of the hyoid bone. |
https://en.wikipedia.org/wiki/Blittable%20types | Blittable types are data types in the Microsoft .NET Framework that have an identical presentation in memory for both managed and unmanaged code. Understanding the difference between blittable and non-blittable types can aid in using COM Interop or P/Invoke, two techniques for interoperability in .NET applications.
Origin
A memory copy operation is sometimes referred to as block transfer, shortened to bit blit (and dedicated hardware to make such a transfer is called a blitter). Blittable is a .NET-specific term expressing whether it is legal to copy an object using a block transfer.
Interoperability overview
Interoperability involves bidirectional sharing of data and methods between unmanaged code and managed .NET code. .NET provides two ways of interoperating between the two: COM Interop and P/Invoke. Though the methodology is different, in both cases marshalling (conversion between representations of data, formats for calling functions and formats for returning values) must take place. COM Interop deals with this conversion between managed code and COM objects, whereas P/Invoke handles interactions between managed code and Win32 code. The concept of blittable and non-blittable data types applies to both, specifically to the problem of converting data between managed and unmanaged memory. This marshalling is performed by the interop marshaller, which is invoked automatically by the CLR when needed.
Blittable types defined
A blittable type is a data type that does not require special attention from the interop marshaler because by default it has a common representation in managed and unmanaged memory. By pinning the data in memory, the garbage collector will be prevented from moving it, allowing it to be shared in-place with the unmanaged application. This means that both managed and unmanaged code will alter the memory locations of these types in a consistent manner, and much less effort is required by the marshaler to maintain data integrity. The following are so |
https://en.wikipedia.org/wiki/Scattering%20length | The scattering length in quantum mechanics describes low-energy scattering. For potentials that decay faster than 1/r³ as r → ∞, it is defined as the following low-energy limit: lim_{k→0} k cot δ(k) = −1/a,
where a is the scattering length, k is the wave number, and δ(k) is the phase shift of the outgoing spherical wave. The elastic cross section, σ_e, at low energies is determined solely by the scattering length: lim_{k→0} σ_e = 4πa².
General concept
When a slow particle scatters off a short ranged scatterer (e.g. an impurity in a solid or a heavy particle) it cannot resolve the structure of the object since its de Broglie wavelength is very long. The idea is that then it should not be important what precise potential one scatters off, but only how the potential looks at long length scales. The formal way to solve this problem is to do a partial wave expansion (somewhat analogous to the multipole expansion in classical electrodynamics), where one expands in the angular momentum components of the outgoing wave. At very low energy the incoming particle does not see any structure, therefore to lowest order one has only a spherical outgoing wave, called the s-wave in analogy with the atomic orbital at angular momentum quantum number l=0. At higher energies one also needs to consider p and d-wave (l=1,2) scattering and so on.
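In formulas (a standard identity of partial-wave theory, not taken verbatim from this article), the total elastic cross section decomposes over angular momentum channels, and at low energy only the l = 0 term survives:

```latex
\[
  \sigma = \frac{4\pi}{k^{2}} \sum_{l=0}^{\infty} (2l+1)\,\sin^{2}\delta_l(k)
  \;\approx\; \frac{4\pi}{k^{2}}\,\sin^{2}\delta_0(k)
  \;\xrightarrow[k \to 0]{}\; 4\pi a^{2}.
\]
```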
The idea of describing low energy properties in terms of a few parameters and symmetries is very powerful, and is also behind the concept of renormalization.
The concept of the scattering length can also be extended to potentials that decay slower than as . A famous example, relevant for proton-proton scattering, is the Coulomb-modified scattering length.
Example
As an example of how to compute the s-wave (i.e. angular momentum l = 0) scattering length for a given potential, we look at the infinitely repulsive spherical potential well of radius r0 in 3 dimensions. The radial Schrödinger equation (l = 0) outside of the well is just the same as for a free particle: −(ħ²/2m) u''(r) = E u(r),
where the hard core potential requires that the wave fu |
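The truncated computation finishes in the standard way (a sketch, with r0 the hard-core radius and k = √(2mE)/ħ): the free solution must vanish at the edge of the core, which fixes the phase shift and hence the scattering length.

```latex
\[
  u(r) = A\sin\bigl(kr + \delta_0(k)\bigr), \qquad
  u(r_0) = 0 \;\Rightarrow\; \delta_0(k) = -k r_0,
\]
\[
  a_s = -\lim_{k \to 0} \frac{\tan\delta_0(k)}{k} = r_0 .
\]
```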
https://en.wikipedia.org/wiki/Spectrum%20of%20theistic%20probability | Popularized by Richard Dawkins in The God Delusion, the spectrum of theistic probability is a way of categorizing one's belief regarding the probability of the existence of a deity.
Atheism, theism, and agnosticism
J. J. C. Smart argues that the distinction between atheism and agnosticism is unclear, and many people who have passionately described themselves as agnostics were in fact atheists. He writes that this mischaracterization is based on an unreasonable philosophical skepticism that would not allow us to make any claims to knowledge about the world. He proposes instead the following analysis:
Let us consider the appropriateness or otherwise of someone (call him 'Philo') describing himself as a theist, atheist or agnostic. I would suggest that if Philo estimates the various plausibilities to be such that on the evidence before him the probability of theism comes out near to one he should describe himself as a theist and if it comes out near zero he should call himself an atheist, and if it comes out somewhere in the middle he should call himself an agnostic. There are no strict rules about this classification because the borderlines are vague. If need be, like a middle-aged man who is not sure whether to call himself bald or not bald, he should explain himself more fully.
Dawkins' formulation
In The God Delusion, Richard Dawkins posits that "the existence of God is a scientific hypothesis like any other." He goes on to propose a continuous "spectrum of probabilities" between two extremes of opposite certainty, which can be represented by seven "milestones". Dawkins suggests definitive statements to summarize one's place along the spectrum of theistic probability. These "milestones" are:
Strong theist. 100% probability of God. In the words of Carl Jung: "I do not believe, I know."
De facto theist. Very high probability but short of 100%. "I don't know for certain, but I strongly believe in God and live my life on the assumption that he is there."
Leanin |
https://en.wikipedia.org/wiki/Evolutionary%20physiology | Evolutionary physiology is the study of the biological evolution of physiological structures and processes; that is, the manner in which the functional characteristics of individuals in a population of organisms have responded to natural selection across multiple generations during the history of the population. It is a sub-discipline of both physiology and evolutionary biology. Practitioners in the field come from a variety of backgrounds, including physiology, evolutionary biology, ecology, and genetics.
Accordingly, the range of phenotypes studied by evolutionary physiologists is broad, including life history, behavior, whole-organism performance, functional morphology, biomechanics, anatomy, classical physiology, endocrinology, biochemistry, and molecular evolution. The field is closely related to comparative physiology and environmental physiology, and its findings are a major concern of evolutionary medicine. One definition that has been offered is "the study of the physiological basis of fitness, namely, correlated evolution (including constraints and trade-offs) of physiological form and function associated with the environment, diet, homeostasis, energy management, longevity, and mortality and life history characteristics".
History
As the name implies, evolutionary physiology is the product of what were at one time two distinct scientific disciplines. According to Garland and Carter, evolutionary physiology arose in the late 1970s, following debates concerning the metabolic and thermoregulatory status of dinosaurs (see physiology of dinosaurs) and mammal-like reptiles.
This period was followed by attempts in the early 1980s to integrate quantitative genetics into evolutionary biology, which had spillover effects on other fields, such as behavioral ecology and ecophysiology. In the mid- to late 1980s, phylogenetic comparative methods started to become popular in many fields, including physiological ecology and comparative physiology. A 1987 volume titled |
https://en.wikipedia.org/wiki/Nucleic%20acid%20quantitation | In molecular biology, quantitation of nucleic acids is commonly performed to determine the average concentrations of DNA or RNA present in a mixture, as well as their purity. Reactions that use nucleic acids often require particular amounts and purity for optimum performance. To date, there are two main approaches used by scientists to quantitate, or establish the concentration, of nucleic acids (such as DNA or RNA) in a solution. These are spectrophotometric quantification and UV fluorescence tagging in the presence of a DNA dye.
Spectrophotometric analysis
One of the most commonly used practices to quantitate DNA or RNA is the use of spectrophotometric analysis using a spectrophotometer. A spectrophotometer is able to determine the average concentrations of the nucleic acids DNA or RNA present in a mixture, as well as their purity.
Spectrophotometric analysis is based on the principle that nucleic acids absorb ultraviolet light in a specific pattern. In the case of DNA and RNA, a sample is exposed to ultraviolet light at a wavelength of 260 nanometres (nm) and a photo-detector measures the light that passes through the sample. Some of the ultraviolet light will pass through and some will be absorbed by the DNA / RNA. The more light absorbed by the sample, the higher the nucleic acid concentration in the sample. The resulting effect is that less light will strike the photodetector, and this will produce a higher optical density (OD).
Using the Beer–Lambert law it is possible to relate the amount of light absorbed to the concentration of the absorbing molecule. At a wavelength of 260 nm, the average extinction coefficient for double-stranded DNA is 0.020 (μg/ml)−1 cm−1, for single-stranded DNA it is 0.027 (μg/ml)−1 cm−1, for single-stranded RNA it is 0.025 (μg/ml)−1 cm−1 and for short single-stranded oligonucleotides it is dependent on the length and base composition. Thus, an Absorbance (A) of 1 corresponds to a concentration of 50 μg/ml for double-stranded DNA. Th |
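A small worked example of that conversion (an illustrative helper, using the average extinction coefficients quoted above and a 1 cm path length):

```python
# Average extinction coefficients at 260 nm, in (ug/mL)^-1 cm^-1.
EXTINCTION = {"dsDNA": 0.020, "ssDNA": 0.027, "ssRNA": 0.025}

def concentration_ug_per_ml(a260, kind="dsDNA", path_cm=1.0):
    # Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l)
    return a260 / (EXTINCTION[kind] * path_cm)

print(concentration_ug_per_ml(1.0))  # 50.0 ug/mL of double-stranded DNA
```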
https://en.wikipedia.org/wiki/Pokagon%20Interpretive%20Center | The interpretive center located in Pokagon State Park, Angola, Indiana, contains animals and displays about Pokagon and its surrounding areas. It is staffed by full-time and part-time naturalists. The Interpretive Center is the start of some interpretive hikes and the adjacent auditorium is the site of some programs.
External links
Pokagon Interpretive Center |
https://en.wikipedia.org/wiki/Internet%20Routing%20Registry | An Internet Routing Registry (IRR) is a database of Internet route objects for determining and sharing route and related information used for configuring routers, with a view to avoiding problematic issues between Internet service providers.
The Internet routing registry works by providing an interlinked hierarchy of objects designed to facilitate the organization of IP routing between organizations, and also to provide data in an appropriate format for automatic programming of routers. Network engineers from participating organizations are authorized to modify the Routing Policy Specification Language (RPSL) objects, in the registry, for their own networks. Then, any network engineer, or member of the public, is able to query the route registry for particular information of interest.
Relevant objects
AUT-NUM
INET6NUM
ROUTE
INETNUM
ROUTE6
AS-SET
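For illustration, a ROUTE object in RPSL might look like the following sketch (all values are placeholders drawn from documentation ranges, not a real registry entry):

```
route:   192.0.2.0/24
descr:   Example network (documentation prefix)
origin:  AS64500
mnt-by:  EXAMPLE-MNT
source:  RIPE
```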
Status of implementation
In some RIR regions, objects such as AUT-NUM (which represents an autonomous system) are often populated only when the record is first created by the RIR; as long as nobody reports a problem, the records remain in their original, potentially unreliable state. Most large ASNs nevertheless publish valid information about their resources in objects such as AS-SET. Because peering configuration is highly automated, inaccurate registry data can be very harmful to the ASNs involved.
See also
Resource Public Key Infrastructure
Autonomous system (Internet) |
https://en.wikipedia.org/wiki/Mental%20tubercle | The mandibular symphysis divides below and encloses a triangular eminence, the mental protuberance, the base of which is depressed in the center but raised on either side to form the mental tubercle. The two mental tubercles along with the medial mental protuberance are collectively called the mental trigone. |
https://en.wikipedia.org/wiki/Jugular%20lymph%20trunk | The jugular trunk is a lymphatic vessel in the neck. It is formed by vessels that emerge from the superior deep cervical lymph nodes and unite with the efferents of the inferior deep cervical lymph nodes.
On the right side, this trunk ends in the junction of the internal jugular and subclavian veins, called the venous angle. On the left side it joins the thoracic duct. |
https://en.wikipedia.org/wiki/Preauricular%20deep%20parotid%20lymph%20nodes | The preauricular deep parotid lymph nodes (anterior auricular glands or preauricular glands), from one to three in number, lie immediately in front of the tragus.
Their afferents drain multiple surfaces, most of which are lateral in origin. A specific example would be the lateral portions of the eye's bulbar and palpebral conjunctiva as well as the skin adjacent to the ear within the temporal region. The efferents of these nodes pass to the superior deep cervical glands.
The preauricular nodes will present with marked swelling in viral conjunctivitis. |
https://en.wikipedia.org/wiki/Diphyllobothriasis | Diphyllobothriasis is the infection caused by tapeworms of the genus Diphyllobothrium (commonly D. latum and D. nihonkaiense).
Diphyllobothriasis mostly occurs in regions where raw fish is regularly consumed; those who consume raw fish are at risk of infection. The infection is often asymptomatic and usually presents only with mild symptoms, which may include gastrointestinal complaints, weight loss, and fatigue. Rarely, vitamin B12 deficiency (possibly leading to anaemia) and gastrointestinal obstructions may occur. Infection may be long-lasting in absence of treatment. Diphyllobothriasis is generally diagnosed by looking for eggs or tapeworm segments in passed stool. Treatment with antiparasitic medications is straightforward, effective, and safe.
Signs and symptoms
Symptoms of parasite infection by raw fish: Clonorchis sinensis (a trematode/fluke), Anisakis (a nematode/roundworm), and Diphyllobothrium (a cestode/tapeworm) all cause gastrointestinal, but otherwise distinct, symptoms.
Most infections (~80%) are asymptomatic. Infections may be long-lasting, persisting for many years or decades (up to 25 years) if untreated.
Symptoms (when present) are generally mild. Manifestations may include abdominal pain and discomfort, diarrhea, vomiting, constipation, weight loss, and fatigue.
Additional symptoms have been reported/described, including dyspepsia, abdominal distension (commonly as presenting complaint), headache, myalgia, and dizziness.
Complications
While the infection is generally mild, complications may occur. Complications are predicated on parasite burden, and are generally related to vitamin B12 deficiency and related health conditions.
Vitamin B1 |
https://en.wikipedia.org/wiki/Biocultural%20diversity | Biocultural diversity is defined by Luisa Maffi, co-founder and director of Terralingua, as "the diversity of life in all its manifestations: biological, cultural, and linguistic — which are interrelated (and possibly coevolved) within a complex socio-ecological adaptive system." "The diversity of life is made up not only of the diversity of plants and animal species, habitats and ecosystems found on the planet, but also of the diversity of human cultures and languages." Research has linked biocultural diversity to the resilience of social-ecological systems. Certain geographic areas have been positively correlated with high levels of biocultural diversity, including those of low latitudes, higher rainfalls, higher temperatures, coastlines, and high altitudes. A negative correlation is found with areas of high latitudes, plains, and drier climates. Positive correlations can also be found between biological diversity and linguistic diversity, illustrated in the overlap between the distribution of plant-diverse and language-diverse zones. Social factors, such as modes of subsistence, have also been found to affect biocultural diversity.
Measuring biocultural diversity
Biocultural diversity can be quantified using QCUs (quantum co-evolution units), and can be monitored through time to quantify biocultural evolution (a form of coevolution). This methodology can be used to study the role that biocultural diversity plays in the resilience of social-ecological systems. It can also be applied on a landscape scale to identify critical cultural habitat for Indigenous peoples.
Linguistic diversity
Cultural traditions are passed down through language, making language an important factor in the existence of biocultural diversity. There has been a decline in the number of languages globally. The Linguistic Diversity Index has recorded that between 1970 and 2005, the number of languages spoken globally has decreased by 20%. This decline has been especially observed in indigenou |
https://en.wikipedia.org/wiki/Acinetobacter%20baumannii | Acinetobacter baumannii is a typically short, almost round, rod-shaped (coccobacillus) Gram-negative bacterium. It is named after the bacteriologist Paul Baumann. It can be an opportunistic pathogen in humans, affecting people with compromised immune systems, and is becoming increasingly important as a hospital-derived (nosocomial) infection. While other species of the genus Acinetobacter are often found in soil samples (leading to the common misconception that A. baumannii is a soil organism, too), it is almost exclusively isolated from hospital environments. Although occasionally it has been found in environmental soil and water samples, its natural habitat is still not known.
Bacteria of this genus lack flagella, whip-like structures many bacteria use for locomotion, but exhibit twitching or swarming motility. This may be due to the activity of type IV pili, pole-like structures that can be extended and retracted. Motility in A. baumannii may also be due to the excretion of exopolysaccharide, creating a film of high-molecular-weight sugar chains behind the bacterium to move forward. Clinical microbiologists typically differentiate members of the genus Acinetobacter from other Moraxellaceae by performing an oxidase test, as Acinetobacter spp. are the only members of the Moraxellaceae to lack cytochrome c oxidases.
A. baumannii is part of the ACB complex (A. baumannii, A. calcoaceticus, and Acinetobacter genomic species 13TU). It is difficult to determine the specific species of members of the ACB complex and they comprise the most clinically relevant members of the genus. A. baumannii has also been identified as an ESKAPE pathogen (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter species), a group of pathogens with a high rate of antibiotic resistance that are responsible for the majority of nosocomial infections.
Colloquially, A. baumannii is referred to as "Iraqibacter" due t |
https://en.wikipedia.org/wiki/Computer%20audition | Computer audition (CA) or machine listening is the general field of study of algorithms and systems for audio interpretation by machines. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, describes these systems as "software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents."
Inspired by models of human audition, CA deals with questions of representation, transduction, grouping, use of musical knowledge and general sound semantics for the purpose of performing intelligent operations on audio and music signals by the computer. Technically this requires a combination of methods from the fields of signal processing, auditory modelling, music perception and cognition, pattern recognition, and machine learning, as well as more traditional methods of artificial intelligence for musical knowledge representation.
Applications
Like computer vision versus image processing, computer audition versus audio engineering deals with understanding of audio rather than processing. It also differs from problems of speech understanding by machine since it deals with general audio signals, such as natural sounds and musical recordings.
Applications of computer audition are widely varying, and include search for sounds, genre recognition, acoustic monitoring, music transcription, score following, audio texture, music improvisation, emotion in audio and so on.
Related disciplines
Computer Audition overlaps with the following disciplines:
Music information retrieval: methods for search and analysis of similarity between music signals.
Auditory scene analysis: understanding and description of audio sources and events.
Computational musi |
https://en.wikipedia.org/wiki/Kinematic%20pair | In classical mechanics, a kinematic pair is a connection between two physical objects that imposes constraints on their relative movement (kinematics). German engineer Franz Reuleaux introduced the kinematic pair as a new approach to the study of machines that provided an advance over the motion of elements consisting of simple machines.
Description
Kinematics is the branch of classical mechanics which describes the motion of points, bodies (objects) and systems of bodies (groups of objects) without consideration of the causes of motion. Kinematics as a field of study is often referred to as the "geometry of motion". For further detail, see Kinematics.
Hartenberg & Denavit present the definition of a kinematic pair:
In the matter of connections between rigid bodies, Reuleaux recognized two kinds; he called them higher and lower pairs (of elements). With higher pairs, the two elements are in contact at a point or along a line, as in a ball bearing or disk cam and follower; the relative motions of coincident points are dissimilar. Lower pairs are those for which area contact may be visualized, as in pin connections, crossheads, ball-and-socket joints and some others; the relative motion of coincident points of the elements, and hence of their links, are similar, and an exchange of elements from one link to the other does not alter the relative motion of the parts as it would with higher pairs.
In kinematics, the two connected physical objects, forming a kinematic pair, are called 'rigid bodies'. In studies of mechanisms, manipulators or robots, the two objects are typically called 'links'.
Lower pair
A lower pair is an ideal joint that constrains contact between a surface in the moving body and a corresponding surface in the fixed body. A lower pair is one in which there occurs surface or area contact between two members, e.g. a nut and screw, or a universal joint used to connect two propeller shafts.
Cases of lower joints:
A revolute R joint, or hinged joint, requires a l |
https://en.wikipedia.org/wiki/European%20Green%20Belt | The European Green Belt initiative is a grassroots movement for nature conservation and sustainable development along the corridor of the former Iron Curtain. The term refers to both an environmental initiative as well as the area it concerns. The initiative is carried out under the patronage of the International Union for Conservation of Nature and formerly Mikhail Gorbachev. It is the aim of the initiative to create the backbone of an ecological network that runs from the Barents to the Black and Adriatic Seas.
The European Green Belt as an area follows the route of the former Iron Curtain and connects national parks, nature parks, biosphere reserves and transboundary protected areas as well as non-protected valuable habitats along or across the (former) borders.
Background
In 1970, satellite pictures showed a dark green belt of old-growth forest on the Finnish-Russian border. In the early 1980s, biologists discovered that the inner German border zone between Bavaria in the west and Thuringia in the east was a refuge for several rare bird species that had disappeared from the intensively used areas covering most of Central Europe. The explanation for this observation was that negative human impact on the environment is smaller in such border zones, which are commonly closed to public access, and thus wildlife is minimally affected by human activities.
After the end of the Cold War in 1991, the strict border regimes were abandoned and the border zones gradually opened, starting with the German reunification in 1990 and continuing with the step-by-step integration of new member states into the Schengen Treaty as part of the enlargement process of the European Union. At the same time, large military facilities such as training grounds and military research establishments in or close to the border zones were closed down. For most cases, it was unclear whom these lands belonged to and thus what the fate of the valuable landscapes would be. Against this background, the |
https://en.wikipedia.org/wiki/Kinematic%20chain | In mechanical engineering, a kinematic chain is an assembly of rigid bodies connected by joints to provide constrained motion that is the mathematical model for a mechanical system. As the word chain suggests, the rigid bodies, or links, are constrained by their connections to other links. An example is the simple open chain formed by links connected in series, like the usual chain, which is the kinematic model for a typical robot manipulator.
Mathematical models of the connections, or joints, between two links are termed kinematic pairs. Kinematic pairs model the hinged and sliding joints fundamental to robotics, often called lower pairs and the surface contact joints critical to cams and gearing, called higher pairs. These joints are generally modeled as holonomic constraints. A kinematic diagram is a schematic of the mechanical system that shows the kinematic chain.
The modern use of kinematic chains includes compliance that arises from flexure joints in precision mechanisms, link compliance in compliant mechanisms and micro-electro-mechanical systems, and cable compliance in cable robotic and tensegrity systems.
Mobility formula
The degrees of freedom, or mobility, of a kinematic chain is the number of parameters that define the configuration of the chain.
A system of n rigid bodies moving in space has 6n degrees of freedom measured relative to a fixed frame. This frame is included in the count of bodies, so that mobility does not depend on the choice of the link that forms the fixed frame. This means the degree-of-freedom of this system is M = 6(N − 1), where N = n + 1 is the number of moving bodies plus the fixed body.
Joints that connect bodies impose constraints. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 − f. In the case of a hinge or slider, which are one-degree-of-freedom joints, we have f = 1 and therefore c = 6 − 1 = 5.
The |
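A minimal sketch of this mobility count (a hypothetical helper implementing the formula reconstructed above, in the Kutzbach style):

```python
def mobility(n_links, joint_freedoms):
    # n_links counts the moving links plus the fixed frame (N above);
    # each joint with freedom f removes c = 6 - f degrees of freedom.
    return 6 * (n_links - 1) - sum(6 - f for f in joint_freedoms)

# A serial arm with six one-DOF hinges, six moving links and the fixed base:
print(mobility(7, [1] * 6))  # 6 degrees of freedom
```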
https://en.wikipedia.org/wiki/Phased-array%20optics | Phased-array optics is the technology of controlling the phase and amplitude of light waves transmitting, reflecting, or captured (received) by a two-dimensional surface using adjustable surface elements. An optical phased array (OPA) is the optical analog of a radio-wave phased array. By dynamically controlling the optical properties of a surface on a microscopic scale, it is possible to steer the direction of light beams (in an OPA transmitter), or the view direction of sensors (in an OPA receiver), without any moving parts. Phased-array beam steering is used for optical switching and multiplexing in optoelectronic devices and for aiming laser beams on a macroscopic scale.
Complicated patterns of phase variation can be used to produce diffractive optical elements, such as dynamic virtual lenses, for beam focusing or splitting in addition to aiming. Dynamic phase variation can also produce real-time holograms. Devices permitting detailed addressable phase control over two dimensions are a type of spatial light modulator (SLM).
Transmitter
An optical phased-array transmitter includes a light source (laser), power splitters, phase shifters, and an array of radiating elements. The output light of the laser source is split into several branches using a power splitter tree. Each branch is then fed to a tunable phase shifter. The phase-shifted light is input to a radiating element (a nanophotonic antenna) that couples the light into free space. Light radiated by the elements is combined in the far-field and forms the far-field pattern of the array. By adjusting the relative phase shift between the elements, a beam can be formed and steered.
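The steering relationship can be illustrated numerically (a sketch of an idealized uniform linear array; real OPA transmitters add element radiation patterns and amplitude weights):

```python
import numpy as np

N, d = 16, 0.5                                 # 16 elements, pitch in wavelengths
theta0 = np.deg2rad(20.0)                      # desired beam direction
n = np.arange(N)
phases = -2 * np.pi * d * n * np.sin(theta0)   # progressive phase shifts per element

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
# Far-field array factor: sum of element contributions with the steering phases.
af = np.exp(1j * (2 * np.pi * d * np.outer(np.sin(theta), n) + phases)).sum(axis=1)
print(f"beam peak at {np.rad2deg(theta[np.argmax(np.abs(af))]):.1f} deg")  # ~20.0
```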
Receiver
In an optical phased-array receiver, the incident light (usually coherent light) on a surface is captured by a collection of nanophotonic antennas that are placed on a 1D or 2D array. The light received by each element is phase-shifted and amplitude-weighted on a chip. These signals are then added together in the optic |
https://en.wikipedia.org/wiki/Armature%20%28computer%20animation%29 | An armature is a kinematic chain used in computer animation to simulate the motions of virtual human or animal characters. In the context of animation, the inverse kinematics of the armature is the most relevant computational algorithm.
There are two types of digital armatures: Keyframing (stop-motion) armatures and real-time (puppeteering) armatures. Keyframing armatures were initially developed to assist in animating digital characters without basing the movement on a live performance. The animator poses a device manually for each keyframe, while the character in the animation is set up with a mechanical structure equivalent to the armature. The device is connected to the animation software through a driver program and each move is recorded for a particular frame in time. Real-time armatures are similar, but they are puppeteered by one or more people and captured in real time.
See also
Linkages
Skeletal animation |