50,395,663
https://en.wikipedia.org/wiki/Retail%20outsourcing
Retail outsourcing is a form of business process outsourcing practiced in the retail, logistics and restaurant businesses. It involves contracting the operations and responsibilities of transportation, cooking, customer servicing or settlement-of-accounts business processes to a third-party service provider. Such a provider bears the ultimate responsibility for the proper execution of the designated functions at stores, warehouses or catering facilities. The service supplier carries out internal (back office) and front office duties, including recruiting, training and legalizing human resources, tracking their job conduct, reporting and solving problems, and performing daily production tasks. Retail outsourcing is typically contracted for operational segments requiring a large unskilled workforce, in such spheres as shipment, cash desk processing, counter servicing, bread baking, meat and fish cutting, cleaning, dish washing, packing, boxing or bundling. The concept of retail outsourcing was conceived in Russia and had fully matured by the end of the 2000s. Currently, retail outsourcing is practiced by all major Russian retailers and accounts for more than 25% of their total business operations. Even the abolition of contingent labor (2016) failed to impact these services significantly: it put a ban on specific segments (primarily cash desk processing) but left other parts almost intact. References Business process Business terms Information technology management Outsourcing
Retail outsourcing
Technology
274
10,356,246
https://en.wikipedia.org/wiki/Standard%20atomic%20weight
The standard atomic weight of a chemical element (symbol Ar°(E) for element "E") is the weighted arithmetic mean of the relative isotopic masses of all isotopes of that element weighted by each isotope's abundance on Earth. For example, isotope 63Cu (Ar = 62.929) constitutes 69% of the copper on Earth, the rest being 65Cu (Ar = 64.927), so Ar°(Cu) = 0.69 × 62.929 + 0.31 × 64.927 = 63.55. Because relative isotopic masses are dimensionless quantities, this weighted mean is also dimensionless. It can be converted into a measure of mass (with the dimension of mass) by multiplying it with the dalton, also known as the atomic mass constant. Among the various variants of the notion of atomic weight (Ar, also known as relative atomic mass) used by scientists, the standard atomic weight is the most common and practical. The standard atomic weight of each chemical element is determined and published by the Commission on Isotopic Abundances and Atomic Weights (CIAAW) of the International Union of Pure and Applied Chemistry (IUPAC) based on natural, stable, terrestrial sources of the element. The definition specifies the use of samples from many representative sources from the Earth, so that the value can widely be used as the atomic weight for substances as they are encountered in reality—for example, in pharmaceuticals and scientific research. Non-standardized atomic weights of an element are specific to sources and samples, such as the atomic weight of carbon in a particular bone from a particular archaeological site. Standard atomic weight averages such values to the range of atomic weights that a chemist might expect to derive from many random samples from Earth. This range is the rationale for the interval notation given for some standard atomic weight values. Of the 118 known chemical elements, 80 have stable isotopes and 84 have this Earth-environment based value. Typically, such a value is quoted with an uncertainty; for example, helium: Ar°(He) = 4.002602(2). The "(2)" indicates the uncertainty in the last digit shown, to read 4.002602 ± 0.000002. IUPAC also publishes abridged values, rounded to five significant figures. For helium, the abridged value is 4.0026. For fourteen elements the samples diverge on this value, because their sample sources have had a different decay history. For example, thallium (Tl) in sedimentary rocks has a different isotopic composition than in igneous rocks and volcanic gases. For these elements, the standard atomic weight is noted as an interval: Ar°(Tl) = [204.38, 204.39]. With such an interval, for less demanding situations, IUPAC also publishes a conventional value. For thallium, the conventional value is 204.38. Definition The standard atomic weight is a special value of the relative atomic mass. It is defined as the "recommended values" of relative atomic masses of sources in the local environment of the Earth's crust and atmosphere as determined by the IUPAC Commission on Isotopic Abundances and Atomic Weights (CIAAW). In general, values from different sources are subject to natural variation due to a different radioactive history of sources. Thus, standard atomic weights are an expectation range of atomic weights from a range of samples or sources. By limiting the sources to terrestrial origin only, the CIAAW-determined values have less variance, and are a more precise value for relative atomic masses (atomic weights) actually found and used in worldly materials. The CIAAW-published values are used and sometimes lawfully required in mass calculations. The values have an uncertainty (noted in brackets), or are an expectation interval (see the thallium example above). 
This uncertainty reflects natural variability in isotopic distribution for an element, rather than uncertainty in measurement (which is much smaller with quality instruments). Although there is an attempt to cover the range of variability on Earth with standard atomic weight figures, there are known cases of mineral samples which contain elements with atomic weights that are outliers from the standard atomic weight range. For synthetic elements the isotope formed depends on the means of synthesis, so the concept of natural isotope abundance has no meaning. Therefore, for synthetic elements the total nucleon count of the most stable isotope (i.e., the isotope with the longest half-life) is listed in brackets, in place of the standard atomic weight. When the term "atomic weight" is used in chemistry, usually it is the more specific standard atomic weight that is implied. It is standard atomic weights that are used in periodic tables and many standard references in ordinary terrestrial chemistry. Lithium represents a unique case where the natural abundances of the isotopes have in some cases been found to have been perturbed by human isotopic separation activities to the point of affecting the uncertainty in its standard atomic weight, even in samples obtained from natural sources, such as rivers. Terrestrial definition An example of why "conventional terrestrial sources" must be specified in giving standard atomic weight values is the element argon. Between locations in the Solar System, the atomic weight of argon varies as much as 10%, due to extreme variance in isotopic composition. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope. Such locations include the planets Mercury and Mars, and the moon Titan. On Earth, the ratios of the three isotopes 36Ar : 38Ar : 40Ar are approximately 5 : 1 : 1600, giving terrestrial argon a standard atomic weight of 39.948(1). However, such is not the case in the rest of the universe. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. The atomic weight of argon in the Sun and most of the universe, therefore, would be only approximately 36.3. Causes of uncertainty on Earth Famously, the published atomic weight value comes with an uncertainty. This uncertainty (and related: precision) follows from its definition, the source being "terrestrial and stable". Systematic causes for uncertainty are: Measurement limits. A physical measurement is never final: there is always more detail to be found and read. This applies to every single, pure isotope found. For example, today the mass of the main natural fluorine isotope (fluorine-19) can be measured to an accuracy of eleven decimal places, but a still more precise measurement system could become available, producing more decimals. Imperfect mixtures of isotopes. In the samples taken and measured, the mix (relative abundance) of those isotopes may vary. For example, copper: while in general its two isotopes make up 69.15% and 30.85% of all copper found, the natural sample being measured can have had an incomplete 'stirring', so the percentages differ. The precision is improved by measuring more samples, of course, but this cause of uncertainty remains. 
(Example: lead samples vary so much that the value cannot be noted more precisely than four figures: 207.2.) Earthly sources with a different history. A source is the greater area being researched, for example 'ocean water' or 'volcanic rock' (as opposed to a 'sample': the single heap of material being investigated). It appears that some elements have a different isotopic mix per source. For example, thallium in igneous rock has more of the lighter isotopes, while in sedimentary rock it has more of the heavy isotopes. There is no Earthly mean number. These elements show the interval notation: Ar°(Tl) = [204.38, 204.39]. For practical reasons, a simplified 'conventional' number is published too (for Tl: 204.38). These three causes of uncertainty are cumulative; the published value is a result of all of them. Determination of relative atomic mass Modern relative atomic masses (a term specific to a given element sample) are calculated from measured values of atomic mass (for each nuclide) and isotopic composition of a sample. Highly accurate atomic masses are available for virtually all non-radioactive nuclides, but isotopic compositions are both harder to measure to high precision and more subject to variation between samples. For this reason, the relative atomic masses of the 22 mononuclidic elements (which are the same as the isotopic masses for each of the single naturally occurring nuclides of these elements) are known to especially high accuracy. The calculation is exemplified for silicon, whose relative atomic mass is especially important in metrology. Silicon exists in nature as a mixture of three isotopes: 28Si, 29Si and 30Si. The atomic masses of these nuclides are known to a precision of one part in 14 billion for 28Si and about one part in one billion for the others. However, the range of natural abundance for the isotopes is such that the standard abundance can only be given to about ±0.001% (see table). The calculation is Ar(Si) = (27.97693 × 0.922297) + (28.97649 × 0.046832) + (29.97377 × 0.030872) = 28.0854 (a short computational sketch of this weighted mean is given at the end of this article). The estimation of the uncertainty is complicated, especially as the sample distribution is not necessarily symmetrical: the IUPAC standard relative atomic masses are quoted with estimated symmetrical uncertainties, and the value for silicon is 28.0855(3). The relative standard uncertainty in this value is 1 × 10−5, or 10 ppm. To further reflect this natural variability, in 2010 IUPAC made the decision to list the relative atomic masses of 10 elements as an interval rather than a fixed number. Naming controversy The use of the name "atomic weight" has attracted a great deal of controversy among scientists. Objectors to the name usually prefer the term "relative atomic mass" (not to be confused with atomic mass). The basic objection is that atomic weight is not a weight, that is, the force exerted on an object in a gravitational field, measured in units of force such as the newton or poundal. 
In reply, supporters of the term "atomic weight" point out (among other arguments) that: the name has been in continuous use for the same quantity since it was first conceptualized in 1808; for most of that time, atomic weights really were measured by weighing (that is, by gravimetric analysis), and the name of a physical quantity should not change simply because the method of its determination has changed; the term "relative atomic mass" should be reserved for the mass of a specific nuclide (or isotope), while "atomic weight" be used for the weighted mean of the atomic masses over all the atoms in the sample; it is not uncommon to have misleading names of physical quantities which are retained for historical reasons, such as electromotive force, which is not a force; resolving power, which is not a power quantity; and molar concentration, which is not a molar quantity (a quantity expressed per unit amount of substance). It could be added that atomic weight is often not truly "atomic" either, as it does not correspond to the property of any individual atom. The same argument could be made against "relative atomic mass" used in this sense. Published values IUPAC publishes one formal value for each stable chemical element, called the standard atomic weight. Any updates are published biennially (in odd-numbered years). In 2015, the atomic weight of ytterbium was updated. As of 2017, 14 atomic weights had been changed, including argon changing from a single number to an interval value. The value published can have an uncertainty, like for neon: 20.1797(6), or can be an interval, like for boron: [10.806, 10.821]. Next to these 84 values, IUPAC also publishes abridged values (up to five digits per number only), and for the fourteen interval values, conventional values (single-number values). The symbol Ar denotes a relative atomic mass, for example from a specific sample. To be specific, the standard atomic weight can be noted as Ar°(E), where (E) is the element symbol. Abridged atomic weight The abridged atomic weight, also published by CIAAW, is derived from the standard atomic weight, reducing the numbers to five digits (five significant figures). The name does not say 'rounded': interval borders are rounded downwards for the first (lowermost) border, and upwards for the upper (uppermost) border. This way, the more precise original interval is fully covered. Examples: Calcium: 40.078(4) → 40.078; Helium: 4.002602(2) → 4.0026; Hydrogen: [1.00784, 1.00811] → [1.0078, 1.0082]. Conventional atomic weight Fourteen chemical elements – hydrogen, lithium, boron, carbon, nitrogen, oxygen, magnesium, silicon, sulfur, chlorine, argon, bromine, thallium, and lead – have a standard atomic weight that is defined not as a single number, but as an interval. For example, hydrogen has Ar°(H) = [1.00784, 1.00811]. This notation states that the various sources on Earth have substantially different isotopic constitutions, and that the uncertainties in all of them are just covered by the two numbers. For these elements, there is not an 'Earth average' constitution, and the 'right' value is not its middle (which would be 1.007975 for hydrogen, with an uncertainty of ±0.000135 that would make it just cover the interval). However, for situations where a less precise value is acceptable, for example in trade, CIAAW has published a single-number conventional atomic weight. For hydrogen, the conventional value is 1.008. A formal short atomic weight By using the abridged value, and the conventional value for the fourteen interval values, a short IUPAC-defined value (5 digits plus uncertainty) can be given for all stable elements. 
In many situations, and in periodic tables, this may be sufficiently detailed. List of atomic weights In the periodic table See also International Union of Pure and Applied Chemistry (IUPAC) Commission on Isotopic Abundances and Atomic Weights (CIAAW) References External links IUPAC Commission on Isotopic Abundances and Atomic Weights Atomic Weights of the Elements 2011 Amount of substance Chemical properties Stoichiometry Periodic table
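The weighted-mean calculation described under "Determination of relative atomic mass" above can be illustrated with a minimal Python sketch (not part of the source article); the isotopic masses and abundances are the silicon values quoted in the text:

```python
# Weighted-mean atomic weight, as described in "Determination of relative
# atomic mass" above. Masses and abundances are the values quoted in the text.
silicon = [
    (27.97693, 0.922297),  # 28Si: relative isotopic mass, terrestrial abundance
    (28.97649, 0.046832),  # 29Si
    (29.97377, 0.030872),  # 30Si
]

atomic_weight = sum(mass * abundance for mass, abundance in silicon)
print(f"Ar(Si) = {atomic_weight:.4f}")  # -> Ar(Si) = 28.0854
```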
Standard atomic weight
Physics,Chemistry,Mathematics
2,895
21,880,823
https://en.wikipedia.org/wiki/Nano-scaffold
Nano-scaffolding or nanoscaffolding is a medical process used to regrow tissue and bone, including limbs and organs. The nano-scaffold is a three-dimensional structure composed of very fine polymer fibers on the nanometer (10−9 m) scale. Developed by the American military, the medical technology uses a microscopic apparatus made of fine polymer fibers called a scaffold. Damaged cells grip the scaffold and begin to rebuild missing bone and tissue through tiny holes in it. As tissue grows, the scaffold is absorbed into the body and disappears completely. Nano-scaffolding has also been used to regrow burned skin. The process cannot grow complex organs like hearts. Historically, research on nano-scaffolds dates back to at least the late 1980s, when Simon showed that electrospinning could be used to produce nano- and submicron-scale polymeric fibrous scaffolds specifically intended for use as in vitro cell and tissue substrates. This early use of electrospun fibrous lattices for cell culture and tissue engineering showed that various cell types would adhere to and proliferate upon polycarbonate fibers. It was noted that, as opposed to the flattened morphology typically seen in 2D culture, cells grown on the electrospun fibers exhibited the more rounded three-dimensional morphology generally observed of tissues in vivo. Mechanism of tissue regeneration using nanoscaffolds Nano-scaffolding is very small, roughly 100 times thinner than a human hair, and is built out of biodegradable fibers. The use of this scaffolding allows more effective use of stem cells and quicker regeneration. Electrospun nanofibers are prepared using microscopic tubes that range between 100 and 200 nanometers in diameter. These entangle with each other in the form of a web as they are produced. Electrospinning allows the construction of these webs to be controlled in terms of the tube's diameter, the web's thickness, and the material being used. Nano-scaffolding is placed into the body at the site where the regeneration process will occur. Once injected, stem cells are added to the scaffolding. Stem cells that are attached to a scaffold are shown to be more successful in adapting to their environment and performing the task of regeneration. Nerve ends in the body will attach to the scaffolding by weaving in between the openings, causing them to act as a bridge to connect severed sections. Over time the scaffolding will dissolve and safely exit the body, leaving healthy nerves in its place. This technology is a combination of stem cell research and nanotechnology. The ability to repair damaged nerves is a great challenge and prize for many researchers, as well as a huge step for the medical field: it would allow doctors to repair nerves damaged in extreme accidents, such as third-degree burns. The technology, however, is still in its infancy and is not yet capable of regenerating complex organs like a heart, although it can already be used to create skin, bone and nails. Nano-scaffolding has been shown to be four to seven times more effective in keeping stem cells alive in the body, which allows them to perform their job more effectively. This technology can be used to save limbs that would otherwise need amputation. Nanoscaffolding provides a large surface area for the material being produced, along with changeable chemical and physical properties, which makes it applicable in many different technological fields. 
Background Tissue engineering Tissue engineering consists of the use of cells, scaffolds, and varying tissue-architecture techniques to restore, replace, and regenerate damaged body tissue. Nano-scaffolds, along with cells and growth factor signals, are utilized in tissue engineering applications. Tissue engineering applications are designed to overcome hurdles associated with allotransplantation, which include unavailable donors, complex surgeries, and postoperative care. In 2015, the tissue engineering global market was estimated at $23 billion, and expected to reach $94.2 billion by 2022. The anticipation of fast growth was due to an increase in bone and joint disorders, with musculoskeletal regenerative medicines comprising 26.4% of the regenerative medicine market. Extracellular Matrix Most human cells within tissues anchor to the solid extracellular matrix (ECM). ECM components vary between different types of body tissues. The ECM acts as a natural "scaffolding". The ECM has five major functions: Provide cellular support and the microenvironment necessary to enable cell growth, migration, and signal response. Provide tissue mechanical properties, such as rigidity and elasticity; these properties vary to provide for specific tissue functions. Provide bioactive regulators to trigger cell responses. Provide a reservoir for cellular growth factors to enhance cell responses. Provide a degradable physical environment to accommodate ECM remodeling in response to developmental, physiological, and pathological inputs during tissue processes. The goal of the nano-scaffold is to mimic the ECM functions to encourage tissue restoration, replacement, and regeneration. Both ECM variations between tissue types and the complexity of the ECM make nano-scaffold mimicry difficult. Nano-Scaffold In order to mimic the ECM, the nano-scaffold follows four main features and functions: Architecture: Must provide empty space for new tissue to form. Nano-scaffold biomaterials must be porous to allow for nutrient transportation to the tissue within the construct. However, despite the porous architecture, the nano-scaffold must be mechanically strong enough to withstand physiological loads. Cyto- and tissue compatibility: Nano-scaffolds must support cell attachment, growth, and differentiation, both in vitro prior to implantation and in vivo after implantation. Bioactivity: Biomaterials within the nano-scaffold must facilitate and regulate cell and tissue activity, as in natural host tissue. Mechanical property: Must provide shape and stability to the damaged tissue. The mechanical properties of the nano-scaffold determine cell differentiation, morphology, and characteristics, due to cells' ability to sense substrate stiffness. Approach There are four major nano-scaffolding approaches: pre-made porous scaffolds for cell seeding, decellularized ECM from allogeneic or xenogeneic tissues for cell seeding, cell sheets with self-secreted ECM, and cell encapsulation in a self-assembled hydrogel matrix. Each approach involves different materials, fabrication methods, and resulting mechanical properties. In addition to these four approaches, metallic nano-particles have been researched to enhance the mechanical properties of nano-scaffolds. Nanofiber electrospinning is another fabrication method for nano-scaffolding. 
Pre-made porous scaffolds for cell seeding A wide array of nano-scaffold biomaterials have been used for pre-made porous scaffolds for cell seeding. These biomaterials can be classified as either natural or synthetic. Natural biomaterials are obtained from natural sources, which include, but are not limited to, ECM from allografts or xenografts, calcium phosphates, and organic polymers such as proteins, polysaccharides, lipids, and polynucleotides. Natural biomaterials increase nano-scaffold biocompatibility, but limit physical and mechanical stability. Natural biomaterials risk a negative immune response in the implantation host due to the allogeneic or xenogeneic source. Synthetic biomaterials can be subclassified as organic or inorganic. Compared to natural biomaterials, synthetic biomaterials are more easily tailored to varying tissue hardness, and are therefore applicable to a wider variety of tissues. Synthetic biomaterials are less biocompatible and result in decreased cell attachment and growth. Surface and bulk properties can be altered within a synthetic biomaterial in an attempt to increase the biocompatibility of a surface. Various fabrication techniques have been employed to fabricate a porous scaffold, such as porogens within biomaterials, solid free-form or rapid prototyping, and utilizing woven or non-woven fibers. To employ porogens, solid biomaterials, or biomaterials dissolved in solvents, are combined with the porogen. Porogens include carbon dioxide, water, and paraffin. Once the biomaterial is fabricated, the porogens are removed with methods such as sublimation, evaporation, and melting, leaving behind a porous scaffold. To fabricate with solid free-form or rapid prototyping, methods such as laser sintering, stereolithography, and 3D printing have been utilized. These methods use light or heat transfer to bond or crosslink the biomaterial being used. Cross-linking provides enhanced material strength. The fabrication technique utilizing woven and non-woven fiber structures provides a porous structure when the fibers are bonded with thermal energy. Electrospinning is utilized via the application of high voltage to a polymer solution. A spinning fiber jet is formed when the electrostatic forces surpass the forces within the polymer solution. The pre-made porous scaffolding method allows for a defined structure formation. With fabrication allowing an intricate structure formation, nano-scaffolds utilizing this method can be tuned to resemble specific tissue ECMs. Decellularized ECM from allogeneic and xenogeneic tissues for cell seeding Decellularized ECM from allogeneic and xenogeneic tissues has been utilized in tissue engineering for heart valves, vessels, nerves, tendons, and ligaments. To utilize the ECM from allogeneic or xenogeneic tissues, the cellular antigens must be removed to avoid an implant-recipient immune response. Decellularization is conducted with a combination of physical, chemical, and enzymatic processes. Freeze-thaw cycles or ionic solutions have been utilized to lyse cell membranes. Trypsin/EDTA treatments are then utilized to separate ECM cellular components. Detergents solubilize and remove cell cytoplasm and nuclei. The decellularized ECM, with growth factors preserved, is utilized as the nano-scaffold. Decellularized ECM nano-scaffolding provides mechanical properties closer to natural values than other methods, due to utilizing a natural ECM structure. 
Cell sheets with self-secreted ECM In the cell sheet approach, cells are utilized to secrete an ECM for scaffolding. Cells are cultured until confluence on a thermo-responsive polymer. Hydrophobicity is thermally regulated repeatedly to detach multiple cell sheet layers. Loading capabilities of this approach are limited due to the use of thin cell sheets. Cell sheets with self-secreted ECM provide a high cell density and tight cell association within the nano-scaffold. Cell encapsulation in a self-assembled hydrogel matrix The hydrogel structure consists of cross-linked hydrophilic polymer chains. A semi-permeable membrane or a homogeneous solid mass encapsulates the cells. Natural and synthetic hydrogels are used to encapsulate the cells. Algae and sodium alginate provide a commonly used source for polysaccharides. Other natural biomaterials utilized include agarose and chitosan. Synthetic biomaterials include poly(ethylene glycol) (PEG) and polyvinyl alcohol (PVA). Prior to initiation, the biomaterials exist as a liquid monomer. The biomaterials are mixed with cells. Once initiated by pH, temperature, ionic strength, or light control, the biomaterials self-assemble into a solid polymer meshwork. Since the cells are mixed in before initiation, this allows for the fabrication of the nano-scaffold construct and cell seeding in one step. This method yields low mechanical strength due to the highly moldable structure of the nano-scaffold, and is not ideal for load-bearing applications. Metallic nano-scaffolds Metallic nanoparticles within polymers increase the mechanical strength and biocompatibility of nano-scaffolds. Copper, gold, iron oxide, platinum, palladium, strontium, titanium, zinc, and their oxides have been utilized in bone tissue regenerative applications. These nano-particles have been incorporated within polymers such as poly(lactic-co-glycolic acid) (PLGA), poly(L-lactic acid) (PLLA), poly(caprolactone) (PCL), collagen, hyaluronic acid, silk, alginate, and fibrin. Copper nanoparticles within nano-scaffolding enhance antioxidant and anti-diabetic activities, and can stimulate angiogenesis, cell migration, and proliferation of endothelial cells. Gold nanoparticles within nano-scaffolding induce osteogenic differentiation due to signal transduction from mechanical stimuli. Platinum nanoparticles and palladium nanoparticles within nano-scaffolding reduce oxidative stresses, which decreases disease progression. Silver nanoparticles within nano-scaffolding are antimicrobial and aid in preventing postoperative pathogenic infections; they have been used to develop microbe-resistant coatings. Titanium nanoparticles within nano-scaffolding are highly porous, which is ideal for cell proliferation. Zinc nanoparticles within nano-scaffolding decrease the number of reactive oxygen species, which are associated with failure of implants due to bacterial infection. Nano-fiber electrospinning Electrospinning systems consist of high-voltage power, material delivery, and fiber collection units. The high voltage produces a charged polymer solution, which exits from the delivery unit in the form of a jet. The jet of polymer solution is elongated, and the solvent either evaporates or solidifies. Fibers are then collected in the collection unit. Flat plates are utilized to collect randomly oriented fibers. Rotors are utilized to rotate the collector to collect aligned fibers. 
Concentric collectors are utilized to collect the fibers in a disc, drum, or cone shape. Compared to random fibers, aligned fibers enhance integrin signaling pathways and possess anisotropic properties similar to ECMs characterized by high degrees of orientation. Fibers may be fabricated from natural and synthetic polymers, including collagen, gelatin, elastin, silk, poly(L-lactic acid) (PLLA), poly(glycolic acid) (PGA), poly(ε-caprolactone) (PCL), and poly(lactic-co-glycolic acid) (PLGA). The morphology of fibers fabricated through electrospinning varies with the solution properties of the polymer, hydrostatic pressure, temperature, and humidity. Nanofiber electrospinning can create loosely connected porous nanofiber mats, which can be fabricated with varying patterns for varying applications. Electrospinning nanofibers limits the three-dimensional capabilities of the nano-scaffold, which decreases cell differentiation and gene expression. Three-dimensional electrospun scaffolds have been created by stacking multiple layers and then seeding cells within the scaffold. Fabrications With new advancements in nanotechnology, there are many methods of fabrication that improve upon the methods previously mentioned. To appropriately emulate the complexity of native tissue and extracellular matrix (ECM) architecture, the adoption of nanotechnology becomes an integral part of scaffold implant production. Airbrush In 1936, Norton patented the first blow-spinning device; more recently, in 2015, research was published describing a device with concentric nozzles in which a polymer solution was injected into a stream of flowing gas in order to form nano-fibers from polymers like polystyrene. These developments led to the technique of airbrushing for nano-scaffold fabrication. Airbrushing is a technique for fiber fabrication that involves two parallel concentric fluid streams: a polymer dissolved in a volatile solvent, and a pressurized gas that flows around the polymer solution, generating fibers that are deposited in the direction of gas flow. This method is favored over electrospinning because it is less expensive and easier to interface. It has the ability to deposit conformal fibers onto both planar and non-planar substrates, with a deposition rate roughly ten times faster than electrospinning. Just like a commercial airbrush, the nanofibrous airbrush technique can be used to "paint" nanofibers onto a more extensive range of targets, with the carrier solvent evaporating quickly before the polymer fibers deposit on the collection surface. Although acute exposure to high concentrations of a solvent such as acetone may be toxic, studies have shown that solution blow spinning (SBS) from acetone directly onto cells did not affect viability, avoiding issues of biocompatibility. A complication of the airbrush technique arises in the formation of fiber mats with local fiber bundles, which is induced by morphological differences in the fibers and crystalline structures. Phase Separation In 1999, researchers pioneered a method for creating nano-fibrous polyester-based scaffolds with high porosity and sub-micron fiber dimensions through the method of phase separation. 
Phase separation, also called phase inversion, is a technique that has been employed to generate porous polymer scaffolds by promoting the separation of a polymeric solution into two phases: a polymer-rich phase and a polymer-poor phase. The polymer solution is driven to separate into phases through cooling or non-solvent exchange, such that the polymer is no longer thermodynamically miscible and forms polymer-rich domains within the solvent. Next, the solvent is extracted and the scaffold is frozen to maintain the structure. Lastly, lyophilization forms a fibrous scaffold with fiber diameters between 50 and 500 nm (nanometers), able to exhibit 98.5% porosity. This method of fabrication is used to create nano-fibrous scaffolds out of aliphatic polyesters. Solvents that are used include THF (which develops the best results), DMF, THF/methanol, THF/acetone, dioxane/methanol, dioxane/H2O, and dioxane/acetone. Phase separation approximates more closely to conventional foams with larger pore sizes, implying that this method would be amenable to cell infiltration, making it favorable for tissue engineering. Phase separation can also lead to smaller pores being produced; however, the fiber diameter is difficult to control, because increasing the initial polymer concentration does not lead to larger fiber diameters in phase-separated scaffolds. This method of fabrication promotes cell growth, proliferation and differentiation, making it suitable for tissues for artificial organs, neural networks, bioreactors, cell sources and drug delivery systems. STEP techniques The "spinneret-based tunable engineered parameters" (STEP) technique has been developed for nanofiber networks with controllable fiber diameters, controllable spacing, and control over the orientation of individual fibers. In this technique, micro/nano fibers are pulled from a pendant solution droplet, allowing a collection of highly aligned fibers of uniform dimensions on the substrate. It promotes control of the dimensions of the fibers deposited in the aligned configurations, thus creating a platform for the investigation of cellular dynamics and cellular adhesion on scaffolds. The technique allows precise spacing and orientation of fibers into planar or non-planar structures using a wide spectrum of polymers. However, there is difficulty in obtaining fibers smaller than 100 nm, and the technique is limited to viscoelastic materials. Nano-fibrous scaffolds created with STEP techniques have the ability to be used for a wide range of applications in tissue engineering. Applications Bone Scaffolds By 2012, over half a million people in the US were receiving bone defect repairs yearly, at an estimated cost of $2.5 billion, a figure that had doubled in recent years. In the US, bone is one of the most transplanted tissues, and the increasing demand for bone grafts and substitutes was estimated at $3.3 billion in revenue. Investments in research into tissue engineering solutions have had a massive market, especially for bone. As a scaffolding tissue, bone is responsible for support, protection, load-bearing and hematopoietic functions. For small defects, human bone has the ability to continuously remodel and rebuild upon itself. However, large-scale defects, inflammations caused by accidents, infections and tumors make it difficult for the bone to heal, requiring external interventions. 
The growing shortage of donors, rejection of transplants, and mechanical failure have made it difficult to find lasting solutions. Advancements in nanotechnology have enabled the application of 3D printing in tissue engineering for the development of bone scaffolds. Bone scaffolds are typically made of porous biodegradable materials that provide mechanical support during the repair and regeneration of damaged or diseased bone. The design of the scaffolds presents a surface that promotes cell attachment, growth, and differentiation, while providing a porous network for tissue growth. For continuous ingrowth of bone tissue, interconnected porosity is important, as it allows nutrients and molecules to be transported to the inner parts of the scaffold to facilitate cell ingrowth and vascularization, as well as waste material removal. The 3D bioprinting method has been used to fabricate more ideal structural scaffolds with better control of pore morphology, pore size and porosity. 3D printing can be essential to bone scaffolds, as it accommodates a high degree of porosity together with high mechanical strength, which is critical for the bone scaffold to perform. Heart Muscle Scaffolds Cardiac muscle, on the other hand, has an elastic modulus of only around 10 MPa, three orders of magnitude smaller than bone. However, it experiences constant cyclic loading as the heart pumps. This means that the scaffold must be both tough and elastic, a property achieved using polymeric materials. Spinal Cord Engineering Spinal cord injury can be seriously detrimental to normal form and function in the human body, often leading to major loss of motor and sensory function that can affect the whole of the body below the injury level. The number of global spinal cord injury cases rose to 27.04 million in 2016, and each patient can cost the economy from $1–5 million for a given case. As a result, there is a significant need for novel solutions to address the issue. Novel biomaterial and tissue engineering strategies have been developed recently to address this need, mainly centering around formulating nanoscaffolds that fill the gap created in the injury site and foster a pro-regenerative environment that helps to facilitate restoration of the spinal cord structure and function. This is achieved through physically connecting the exposed areas in the spinal cord via the scaffold, as well as providing a favorable environment for regenerative cell types such as mesenchymal stem cells and Schwann cells, and promoting axon restoration and remyelination. Olfactory ensheathing cells, stem cells, and other neural progenitor cells play a large part in creating a stimulating environment for regenerative purposes. Both natural and synthetic polymers are used in the synthesis of these nanoscaffolds. For natural polymers, hyaluronic acid and collagen are two of the major candidates used in industry today. Hyaluronic acid is a major component of the extracellular matrix and has variable properties depending on its molecular weight, which is useful in compensating for the properties necessary for a good scaffold. Collagen is also a major component of the extracellular matrix, most importantly in central nervous tissue, where it has good histocompatibility and supports adhesion and growth. References Medical technology Nanomedicine
Nano-scaffold
Materials_science,Biology
5,060
26,666,150
https://en.wikipedia.org/wiki/Committee%20for%20Nuclear%20Responsibility
The Committee for Nuclear Responsibility was formed as a "political and educational organization to disseminate anti-nuclear views and information to the public". The goals of the organization were a moratorium on nuclear power and the commercialization of alternative energy sources. John Gofman founded the Committee for Nuclear Responsibility in 1971, as a small non-profit, public interest association with four Nobel Laureates on its board. These Nobel scientists were Linus Pauling, Harold Urey, George Wald and James D. Watson. Other scientists who were involved included Paul Ehrlich, John Edsall, and Richard E. Bellman. The Board of Directors included Lewis Mumford, Ramsey Clark, Ian McHarg, and Richard Max McCarthy. Actor Jack Lemmon endorsed the goals of the Committee for Nuclear Responsibility. See also Anti-nuclear groups in the United States Anti-nuclear movement in the United States References Anti-nuclear organizations Nuclear history James Watson
Committee for Nuclear Responsibility
Engineering
191
52,810,837
https://en.wikipedia.org/wiki/Maridesulfovibrio%20ferrireducens
Maridesulfovibrio ferrireducens is a psychrotolerant bacterium which has been isolated from permanently cold sediments from fjords in the Svalbard archipelago in Norway. Originally described under Desulfovibrio, it was reassigned to Maridesulfovibrio by Waite et al. in 2020. References External links Type strain of Desulfovibrio ferrireducens at BacDive - the Bacterial Diversity Metadatabase Bacteria described in 2006 Psychrophiles Desulfovibrionales
Maridesulfovibrio ferrireducens
Biology
110
51,801,762
https://en.wikipedia.org/wiki/BIOSTEC
The International Joint Conference on Biomedical Engineering Systems and Technologies - BIOSTEC - is an international joint conference composed of five co-located conferences, each specialized in a different knowledge area: Biomedical Electronics and Devices Bioimaging Bioinformatics Models, Methods and Algorithms Bio-inspired Systems and Signal Processing Health Informatics The joint conference is held annually and is dedicated to the dissemination of new developments in the topics covered by its sub-conferences. BIOSTEC had its first edition in 2008, with the participation of keynote speakers such as Kevin Warwick. Since then, several researchers have been invited to deliver keynotes to the BIOSTEC attendees. Among them: David Rose (MIT Media Lab, United States), Bradley Nelson (ETH Zurich, Switzerland), Edward H. Shortliffe (Arizona State University, United States), José C. Príncipe (University of Florida, United States), Alberto Cliquet Jr (University of São Paulo & University of Campinas, Brazil), Tanja Schultz (University of Bremen, Germany) and Vimla L. Patel (Arizona State University, United States). Besides the invited talks, the BIOSTEC conferences comprise different kinds of sessions, such as poster sessions, technical sessions, tutorials, special sessions, workshops, doctoral consortiums, panels and industrial tracks. The papers presented at the conference are made available in the SCITEPRESS digital library and published in the conference proceedings, and some of the best papers are invited for post-publication with Springer. The 2019 edition of the conference will be held in cooperation with the Swiss Society for Biomedical Engineering, the International Society for Computational Biology, the European Association for Signal Processing, VDE DGBMT, the European Alliance of Medical and Biological Engineering and Science, the Finnish Society for Medical Physics and Medical Engineering and the Société Française de Génie Biologique et Médical. Editions References External links Science and Technology Events Conference website Event management system WikiCfp call for papers Information systems conferences Computer science conferences Academic conferences
BIOSTEC
Technology
412
74,285,940
https://en.wikipedia.org/wiki/Synchronous%20impedance%20curve
The synchronous impedance curve (also short-circuit characteristic, SCC) of a synchronous generator is a plot of the output short-circuit current as a function of the excitation (field) current. The curve is typically plotted alongside the open-circuit saturation curve. The SCC is almost linear, since under short-circuit conditions the magnetic flux in the generator is below the iron saturation levels, and thus the reluctance is almost entirely defined by the fixed reluctance of the air gap. The name "synchronous impedance curve" is due to the fact that in the short-circuit condition all the generated voltage is dropped across the generator's internal synchronous impedance Z_s. The curve is obtained by rotating the generator at the rated RPM with the output terminals shorted, raising the excitation until the output current reaches 100% of the device's rating (higher values are typically not tested to avoid overheating). References Sources Electrical generators
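A standard machine-theory relation, implied but not spelled out in the article above: combining the open-circuit characteristic and the SCC at the same field current gives an estimate of the (unsaturated) synchronous impedance. As a hedged sketch in LaTeX, with E_oc and I_sc denoting the open-circuit voltage and short-circuit current at field current I_f:

```latex
% Hedged sketch (standard textbook relation, not from the source article):
% synchronous impedance estimated from the two characteristics taken at the
% same field current I_f.
Z_s(I_f) = \frac{E_{oc}(I_f)}{I_{sc}(I_f)}
```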
Synchronous impedance curve
Physics,Technology
198
78,606,586
https://en.wikipedia.org/wiki/DitchCarbon
DitchCarbon is a UK-based emissions intelligence company that provides software-as-a-service (SaaS) solutions for measuring and managing Scope 3 carbon emissions, specifically those from suppliers and investments (categories 1, 2 and 15). The company's platform supports businesses in integrating carbon emissions insights into their workflows to make data-driven sustainability decisions. History DitchCarbon was founded in London in 2022 by Marc Munier, an environmentalist and software executive, who was later joined by Cam Pederson and Alex Rudnicki. Product and Services DitchCarbon provides emissions insights using artificial intelligence to identify, extract, and verify disclosed emissions data. The platform combines this data with established frameworks such as the Science Based Targets initiative (SBTi) and the Carbon Disclosure Project (CDP) to offer clients a comprehensive view of their suppliers' carbon footprints. In 2024, DitchCarbon introduced features such as supplier-level spend-based factors for more accurate emissions reporting and a forecasting tool to predict supplier emissions over time. DitchCarbon operates as a remote-first organization, with employees located in the United Kingdom, Spain, Hungary, the United States, Canada, the United Arab Emirates, and Bangladesh. References Companies based in London Software companies of the United Kingdom 2022 establishments in the United Kingdom Software companies established in 2022 Emissions reduction
DitchCarbon
Chemistry
273
10,313,521
https://en.wikipedia.org/wiki/Anomalous%20cancellation
An anomalous cancellation or accidental cancellation is a particular kind of arithmetic procedural error that gives a numerically correct answer. An attempt is made to reduce a fraction by cancelling individual digits in the numerator and denominator. This is not a legitimate operation, and does not in general give a correct answer, but in some rare cases the result is numerically the same as if a correct procedure had been applied. The trivial cases of cancelling trailing zeros, or cases where all of the digits are equal, are ignored. Examples of anomalous cancellations which still produce the correct result include 16/64 = 1/4, 19/95 = 1/5, 26/65 = 2/5, and 49/98 = 4/8 (these and their inverses are all the cases in base 10 with the fraction different from 1 and with two digits). The article by Boas analyzes two-digit cases in bases other than base 10; for example, 32/13 = 2/1 and its inverse are the only solutions in base 4 with two digits. An example of anomalous cancellation with more than two digits is 165/462 = 15/42, and an example with different numbers of digits is 98/392 = 8/32. Elementary properties When the base is prime, no two-digit solutions exist. This can be proven by contradiction: suppose a solution exists in base n. Without loss of generality, we can say that this solution is (a‖b)/(b‖c) = a/c, where the double vertical line indicates digit concatenation, i.e. (an + b)/(bn + c) = a/c. Thus we have c(an + b) = a(bn + c), which rearranges to na(c − b) = c(a − b). But a, b, c < n, as they are digits in base n; yet n divides na(c − b), so n divides c(a − b), which (since n is prime, c < n and |a − b| < n) means that a = b. Therefore, the right hand side is zero, which means the left hand side must also be zero, i.e., c = b, a contradiction by the definition of the problem. (If a = b = c, the fraction equals 1, which is one of the excluded trivial cases.) Another property is that the number of solutions in a base n is odd if and only if n is an even square. This can be proven similarly to the above: suppose that we have a solution (a, b, c). Then, doing the same manipulation, we get na(c − b) = c(a − b). Pairing each such solution with a distinct companion solution almost sets up an involution from the set of solutions to itself; the solutions without a distinct partner, found by substituting back into the equation, turn out to exist precisely when n is the square of an even number, so the total count is odd exactly in that case. The converse of the statement may be proven by noting that these solutions all satisfy the initial requirements. The question in somewhat more generality was studied by Satvik Saha, Sohom Gupta, Sayan Dutta and Sourin Chatterjee. The number of solutions in different bases is listed in OEIS A366412. See also Howler (mathematics) Mathematical joke References Arithmetic
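A minimal Python sketch (not from the source article) that brute-forces the two-digit base-10 cases described above, confirming that 16/64, 19/95, 26/65 and 49/98 (with their inverses) are the only non-trivial examples:

```python
from fractions import Fraction

def two_digit_anomalous(base=10):
    """Find fractions (a‖b)/(b‖c) that 'cancel' the shared digit b
    yet still equal a/c, excluding the trivial all-digits-equal cases."""
    hits = []
    for a in range(1, base):
        for b in range(1, base):        # b >= 1: leading digit, not a trailing zero
            for c in range(1, base):
                if a == b == c:
                    continue            # trivial case, excluded by definition
                numerator = a * base + b    # digits a, b concatenated
                denominator = b * base + c  # digits b, c concatenated
                if Fraction(numerator, denominator) == Fraction(a, c):
                    hits.append((numerator, denominator, a, c))
    return hits

print(two_digit_anomalous())
# -> [(16, 64, 1, 4), (19, 95, 1, 5), (26, 65, 2, 5), (49, 98, 4, 8)]
```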
Anomalous cancellation
Mathematics
558
58,080,546
https://en.wikipedia.org/wiki/Sewer%20Murders
The Sewer Murders or "Sewage Plant Murders" were an unsolved series of murders of male adolescents in the Frankfurt Rhine-Main area during the 1970s and 1980s. Victims The killings took place from 1976 to 1983. The victims were seven boys and male adolescents aged between 11 and 18 from Frankfurt (likely Baseler Platz at the "Tivoli" arcade) or the Offenbach station area, where some of them may have worked as prostitutes and met the culprit. The boys' hands were tied behind their backs with a rope or cord, and they were then killed, apparently by blunt force. For some, however, death presumably occurred by drowning in the sewerage. Due to long submersion in the sewage and, in part, heavy damage to the corpses by screw conveyors, the victims were identified relatively late, and on only one were clear signs of blunt force trauma to the head found. Victim list 7 September 1976: Unidentified male (15–18 years), found in Stangenrod, Giessen. The naked corpse of a young man, wearing only socks, was found near a footpath in a forest between Atzenhain and Lehnheim during the military manoeuvre "Gordian Shield". The body was heavily mummified, with partial skeletonization, after lying there for at least six weeks. A violent skull fracture was found to be the probable cause of death. Since the identity of the decedent could not be clarified, the police assumed that he may have been a foreigner in transit through West Germany. 23 May 1982: Erik (17), Dreieich, Offenbach. Found in an oblique position behind an inflow. The body had significant damage, such as the right thigh being torn off, the pelvis and skull being smashed, and exposed bones. According to the autopsy report, the corpse was in an advanced state of decomposition with extensive adipocere growth. He had probably been lying there for over six months, and the cause of death could no longer be determined. 19 September 1982: Bernd Michel (17–18 years), Darmstadt-Erzhausen. The collecting rake was blocked by a clothed body. Michel was probably still alive when he was thrown into a manhole, and most likely drowned. The identification of the almost unrecognizable corpse was difficult. The young man was around 17 years old and was characterized by a pronounced overbite. He had worked as a prostitute in Frankfurt. 2 July 1983: Markus Hildebrandt (17 years), Darmstadt-Erzhausen. A tattooed body was discovered in the sump of the Dreieich-Buchschlag sewage plant. According to the Offenbach police, the decedent was washed ashore by a sewage pipe. His hands were handcuffed, but there were no other externally visible injuries. The tattoos on the upper arms showed different motifs and the word "Fuck". Hildebrandt came from Hanau and had been involved in the Frankfurt heroin scene since 1981. Hildebrandt, who had spent much of his youth in congregate care, was in an apprenticeship and lived a "restless life" in Frankfurt. He is said to have occasionally engaged in prostitution. He was last seen in January 1983, accompanied by three men, and had allegedly claimed to be travelling to Saarbrücken. 9 September 1983: Fuad Rahou (14 years), Niederrad. The body of the 14-year-old Moroccan boy was found in the Niederrad sewage treatment plant. At first, it was assumed that Rahou had drowned accidentally or inhaled marsh gases. Only later did it become clear that he must have been murdered. Rahou had been reported missing by his parents since 1 September 1983. 11 October 1983: Oliver Tupikas (11 years), Niederrad. The youngest victim was also found in Niederrad's sewage treatment plant. 
He was probably pushed down a manhole after being murdered. Traces of legcuffs were found on the body. Oliver had run away from home and was not seen alive again. 21 June 1989: Daniel Schaub (14 years), Offenbach-Rosenhöhe. Bones and pieces of clothing from the presumably last victim were found in a tributary of the drainage system. The teenager had been missing since 1983. Possible motive The criminal psychologist Rudolf Egg suggested that the suspect might be a single man of about 50 years without family ties or friends. It is possible that the culprit had himself been a victim of sexual abuse and may therefore have developed a disturbed relationship with his own homosexuality or with other same-sex people. His inclinations apparently included sadistic bondage. The suspect likely moved from Giessen to Frankfurt at the end of the 1970s and lived out his fetishes in the local milieu. He was also familiar with the area and was highly mobile. The fact that he threw his victims into the sewerage after violating them probably hints at a deep-rooted hatred. Modus Operandi The first murder is believed to have happened at the site where the body was discovered. Only when he resumed killing may the suspect have figured out that throwing a dead or dying victim down the sewers was a more effective way to get rid of them. The quick disposal of the bodies allowed him to carry out his murders even within the densely populated Frankfurt area, without risk of being caught. The victims were tied up, then the killer abused them and "disposed of them like garbage". The bodies decomposed in the sewers for weeks or even months. The dead usually remained undetected in the sewage system for a long time until they were eventually flushed into the sewage treatment plants, where they often blocked the screw pumps that separate out the solid particles. The advanced decomposition of the bodies made identification and the clarification of the factual circumstances much more difficult for the investigation. One victim, for instance, was identified only 2.5 years after discovery. Investigation Horst Kropp and the "AG 229" were entrusted with the investigation of sexually motivated murders of young people. For some time, a 40-year-old storeman from Offenbach, who had been convicted of multiple sexual offences toward minors and molestation, was the prime suspect. He was known for enticing homeless teens to his summer house in Riederwald, where he performed sadistic sex games with them. He is said to have acted very brutally during these, but bribed his victims with money to keep quiet about what he did to them. Investigators found out that the prime suspect and Markus Hildebrandt had visited the same gay bars in Frankfurt. However, this was not sufficient evidence, and the traces of blood found in the summer house did not match Hildebrandt's. In the home of the suspect, who besides Hildebrandt had known two of the other victims, police secured a gas pistol, several knives, including a butcher knife, and handcuffs. Due to lack of evidence, however, there were no charges. See also List of fugitives from justice who disappeared List of German serial killers List of unsolved murders Literature Stephan Harbort: Mörderisches Profil: Phänomen Serienkiller, Heyne Verlag, 2006, . References External links Film contribution, Kriminalreport Hessen. Abominable murder series in the Rhine-Main area Text contribution, Kriminalreport Hessen. 
Abominable murder series in the Rhine-Main area Mortuary finds in a sewage treatment plant. Aktenzeichen XY, 24 February 1984, from 37:47 1976 murders in Germany 1982 murders in Germany 1983 murders in Germany 1989 murders in Germany Crimes against sex workers Fugitives Serial murders in Germany Sewerage Unidentified serial killers Unsolved murders in Germany Violence against men in Europe
Sewer Murders
Chemistry,Engineering,Environmental_science
1,614
15,040,210
https://en.wikipedia.org/wiki/LuaTeX
LuaTeX is a TeX-based computer typesetting system which started as a version of pdfTeX with a Lua scripting engine embedded. After some experiments it was adopted by the TeX Live distribution as a successor to pdfTeX (itself an extension of ε-TeX, which generates PDFs). Later in the project some functionality of Aleph was included (especially multi-directional typesetting). The project was originally sponsored by the Oriental TeX project, founded by Idris Samawi Hamid, Hans Hagen, and Taco Hoekwater. Objective of the project The main objective of the project is to provide a version of TeX where all internals are accessible from Lua. In the process of opening up TeX, much of the internal code has been rewritten. Instead of hard-coding new features in TeX itself, users (or macro package writers) can write their own extensions. LuaTeX offers support for OpenType fonts with external modules. One of them, written in Lua, is provided by the LuaTeX team, but its support for complex scripts is limited. Since 2020 LuaTeX includes the HarfBuzz engine for correct rendering of complex scripts using OpenType. An alternate approach can be found on GitHub. A related project is MPLib (an extended MetaPost library module), which brings a graphics engine into TeX. The LuaTeX team consists of Luigi Scarso, Taco Hoekwater, Hartmut Henkel and Hans Hagen. Versions The first public beta was launched at TUG 2007 in San Diego. The first formal release was planned for the end of 2009, and the first stable production version was released in 2010. Version 1.00 was released in September 2016 during ConTeXt 2016. Version 1.12 was released for TeX Live 2020. As of 2010, both ConTeXt Mark IV and LaTeX with extra packages (e.g. luaotfload, luamplib, luatexbase, luatextra) make use of new LuaTeX features. (When LuaTeX is used with the LaTeX format, it is sometimes called "LuaLaTeX".) Both are supported in TeX Live 2010 with LuaTeX 0.60, and in LyX. Special support in plain TeX is still under development. Further development takes place as LuaMetaTeX in connection with the ConTeXt project. See also TeX List of TeX extensions Further reading CTAN: LuaTeX Manual Manuel Pégourié-Gonnard: A guide to LuaLaTeX. 5 May 2013. LuaTeX development team: Documentation. October 2021. Official LuaTeX wiki ConTeXt wiki External links LuaTeX official site LuaTeX Wiki References Free PDF software Free TeX software Lua (programming language)-scriptable software TeX
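As a minimal illustration of the Lua access described above (a sketch, not taken from the article), a plain LuaTeX document can run Lua via the \directlua primitive and feed the result back to the typesetter with tex.print:

```tex
% Minimal plain LuaTeX sketch: \directlua executes Lua during the TeX run,
% and tex.print() returns its output to TeX as typesetting input.
% Compile with: luatex example.tex
\directlua{
  local sum = 0
  for i = 1, 10 do sum = sum + i end
  tex.print("The sum 1 + 2 + ... + 10, computed in Lua, is " .. sum .. ".")
}
\bye
```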
LuaTeX
Mathematics
569
8,205,972
https://en.wikipedia.org/wiki/Potamkin%20Prize
The Potamkin Prize for Research in Pick's, Alzheimer's, and Related Diseases was established in 1988 and is sponsored by the American Academy of Neurology. The prize is funded through the philanthropy of the Potamkin Foundation. The prize is awarded for achievements in emerging areas of research in Pick's disease, Alzheimer's disease and other dementias. The award includes a medallion, a $100,000 prize, and a 20-minute lecture at the American Academy of Neurology annual meeting. The prize is named after Luba Potamkin (wife of Victor Potamkin) who, in 1978, was diagnosed with a form of dementia which was identified as Pick's disease, a form of frontotemporal dementia. A website dedicated to the Potamkin Prize was launched in 2020 and included background on the prize, biographies of past winners, and information about applying or being nominated. Awards Source (to 2017): American Academy of Neurology 2024: Francisco Lopera 2023: Maria Luisa Gorno-Tempini 2021: Vladimir Hachinski 2021: Kenneth Kosik, Giovanna Mallucci 2020: J. Paul Taylor 2019: 2018: David Bennett 2017: Claudia Kawas, Kristine Yaffe 2016: Rosa Rademakers, Bryan J. Traynor 2015: Peter Davies, Reisa Sperling 2014: 2013: , , 2012: 2011: , , Eva-Maria Mandelkow 2010: , 2009: , , 2008: , William E. Klunk, and Chester A. Mathis 2007: 2006: Karen Ashe, Karen Duff, and Bradley Hyman 2005: , and 2004: , 2003: David M. Holtzman, 2002: Christian Haass, Bart De Strooper 2001: 2000: Maria Grazia Spillantini, 1999: , , 1998: Michel Goedert, Virginia M.-Y. Lee, John Q. Trojanowski 1997: , , 1996: Rudolph Tanzi, Peter St. George-Hyslop 1995: , Khalid Iqbal, 1994: , Gerard D. Schellenberg 1993: , Alison Goate, John Hardy, Christine Van Broeckhoven 1992: Donald L. Price, 1991: Stanley B. Prusiner 1990: Colin L. Masters, Konrad Beyreuther 1989: Dennis Selkoe, George G. Glenner 1988: See also List of medicine awards List of neuroscience awards References External links Potamkin Prize for Research in Pick's, Alzheimer's, and Related Diseases Potamkin Prize Neuroscience awards Awards established in 1988 American science and technology awards
Potamkin Prize
Technology
534
39,862,678
https://en.wikipedia.org/wiki/West%20Fork%20Furnace
West Fork Furnace is a historic iron furnace and national historic district located near Floyd, Floyd County, Virginia. The district includes structural, landscape and archaeological components of a small and well preserved mid-19th-century iron furnace built about 1853. The components consist of the furnace, retaining wall, staging area, head race, wheel pit, tail race, and East Prong of Furnace Creek. The furnace remained in operation until 1855. It was listed on the National Register of Historic Places in 2009. References Industrial buildings and structures on the National Register of Historic Places in Virginia Historic districts on the National Register of Historic Places in Virginia Buildings and structures in Floyd County, Virginia National Register of Historic Places in Floyd County, Virginia Industrial buildings completed in 1853 Industrial furnaces
West Fork Furnace
Chemistry
151
7,406,806
https://en.wikipedia.org/wiki/Shanon%20Shah
Shanon Shah (born 14 August 1978 in Alor Star, Kedah) is a singer-songwriter, playwright and academic from Malaysia. He released two albums, Dilanda Cinta (2005) and Suara Yang Ku Dengar (2010), on the InterGlobal Music Malaysia independent label. He is noted for his emotive voice and cabaret-style piano playing. Trained as a chemical engineer, Shanon has previously worked as a credit risk analyst, human rights advocate and journalist. In his various writings, he focuses on issues relating to gender, sexuality and Islam. Music In 2003, Shanon won the Mandarin Oriental Fan of the Arts Most Promising Artist Award at the 2nd Annual Boh Cameronian Arts Awards. Two years later, he went on to win the Anugerah Industri Muzik award for best male vocal in an album for Dilanda Cinta. In 2007, he entered the Ikon Malaysia televised competition, which looked for an icon among existing Southeast Asian artistes. The Malaysian level of the competition was ultimately won by Jaclyn Victor. Shanon has also performed as a duo with fellow singer-songwriter Azmyl Yunor and with his backing band the Cintas. Fellow singer-songwriter Ariff Akhir has also performed as part of the Cintas, and produced Shanon's second album, Suara Yang Ku Dengar. Shanon's musical influences include Leonard Cohen, Aimee Mann and Sam Phillips. Theatre and film Shanon Shah is also a playwright. His play Air Con was produced by the Instant Cafe Theatre Company's FIRSTWoRKS programme. The play, directed by Jo Kukathas and Zalfian Fuzi, was performed to critical acclaim, prompting a revival in 2009. One reviewer praised not only the play's take on issues such as hate crimes against transsexuals, homophobic bullying in schools, racism and religious fundamentalism, but also its comedic touches and bilingual dialogue. Shanon has said he is greatly influenced by award-winning Malaysian actor and playwright Jit Murad. Air Con was nominated in nine categories for the 7th BOH Cameronian Arts Awards, winning four awards, including Best Original Script (Bahasa Malaysia). Shanon also co-wrote the screenplay and four original songs for Chris Chong Chan Fui's first full-length feature film Karaoke, which in 2009 was selected for the Directors' Fortnight of the Cannes Film Festival. The songs for Karaoke eventually made it into Suara Yang Ku Dengar. Journalism and writing Shanon Shah was formerly the full-time Columns and Comments Editor at The Nut Graph, a bilingual, independent, Malaysian online news site aiming "to provide space for columnists and reader comments from as broad a political spectrum, and from as many sectors of interest, as possible". He contributed several English-language features, commentaries and interviews on the politics of Islam in Malaysia. His fortnightly Malay-language column, Secubit Garam, often took a light-hearted approach to serious political concerns through the fictional agony aunt Kak Nora. Shanon has also been published in other print anthologies. His 5,000-word essay "The Khutbah Diaries" was published in New Malaysian Essays 2 in 2009. In the same year his essay, "Muslim 2 Muslim", was published in Body 2 Body, an English-language anthology of fiction and non-fiction on sexual diversity in Malaysia. Body 2 Body was published by writer-director Amir Muhammad's publishing company, Matahari Books.
In June 2012, Shanon's essay "Lot's Legacy" was published in the third issue of Critical Muslim (Fear and Loathing), a British "quarterly magazine of ideas and issues showcasing ground-breaking thinking on Islam and what it means to be a Muslim in a rapidly changing, interconnected world". The magazine is co-edited by London-based Muslim scholar and critic Ziauddin Sardar. Current activities In 2010, Shanon was awarded the Chevening Scholarship to pursue his Master of Arts (MA) in Religion in Contemporary Society at King's College London. He completed his MA in 2011 and won the Shelford MA Prize from King's School of Arts and Humanities. He is currently a doctoral candidate at King's College London. Discography Dilanda Cinta (2005) Suara Yang Ku Dengar (2010) Filmography Karaoke (2009) Theatre Air Con (2008) Awards First Prize – "Kisahmu Belum Berakhir" (song: music and lyrics) for Pertandingan Mencipta Lagu Patriotik Alaf Baru, by International College of Music Malaysia, 2001 Mandarin Oriental Fan of the Arts Most Promising Artist Award, 2nd BOH Cameronian Arts Awards, 2003 Best Male Vocal in an Album, for Dilanda Cinta, 13th Anugerah Industri Muzik, 2005 Best Original Script (Bahasa Malaysia), for Air Con, 7th BOH Cameronian Arts Awards, 2008 References 1978 births Living people Alumni of King's College London Chemical engineers Malaysian people of Malay descent Malaysian Muslims Malaysian male singer-songwriters Malaysian singer-songwriters Malaysian television personalities People from Kedah Malay-language singers Malaysian soul singers
Shanon Shah
Chemistry,Engineering
1,062
38,734,545
https://en.wikipedia.org/wiki/HD%20143346
HD 143346 (HR 5955) is a single star in the southern circumpolar constellation of Apus. It is 28.5 minutes west and about 5° north of the yellow giant star Gamma Apodis, which is the second brightest star in the constellation of Apus. This object has an orange hue and is visible to the naked eye as a dim point of light with an apparent visual magnitude of 5.68. It is located at a distance of approximately 286 light years from the Sun based on parallax, and is drifting further away with a radial velocity of . At that distance, the visual brightness of this star is diminished by an extinction of 0.174 due to interstellar dust. The star has an absolute magnitude of 0.95. HD 143346 has a stellar classification of K1.5III CN1, indicating a red giant that has an anomalous overabundance of cyanogen in the spectrum. It is currently on the horizontal branch, generating energy through helium fusion at its core. At present it has 118% of the mass of the Sun but has expanded to 10.6 times the radius of the Sun. The star is radiating 53 times the luminosity of the Sun from its photosphere at an effective temperature of . HD 143346 is a member of the Milky Way's thick disk, but is metal enriched. It spins with a projected rotational velocity lower than . References K-type giants Apus PD-72 01902 143346 078868 5955 Horizontal-branch stars Apodis, 38
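The absolute magnitude quoted above can be cross-checked against the apparent magnitude and distance through the standard distance-modulus relation M = m − 5 log10(d / 10 pc). A minimal Python sketch of that arithmetic, using the article's figures and ignoring the small extinction correction (the parsec conversion factor is standard and not from the article):

import math

LY_PER_PARSEC = 3.2616  # 1 parsec = 3.2616 light years

m = 5.68    # apparent visual magnitude (from the article)
d_ly = 286  # distance in light years (from the article)

d_pc = d_ly / LY_PER_PARSEC
# Distance modulus: M = m - 5 * log10(d / 10 pc)
M = m - 5 * math.log10(d_pc / 10)
print(f"absolute magnitude ~ {M:.2f}")  # ~0.97, consistent with the quoted 0.95 given rounding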
HD 143346
Astronomy
323
38,002,452
https://en.wikipedia.org/wiki/Eta%20Normae
Eta Normae, Latinized from η Normae, is a single star in the southern constellation of Norma. It is visible to the naked eye as a faint, yellow-hued star with an apparent visual magnitude of 4.65. The distance to this star is about 219 light years, based on parallax. The Gamma Normids radiate from a position near this star. This is an aging giant star with a stellar classification of G8III, having exhausted the supply of hydrogen at its core, then swollen and cooled off the main sequence. At present it has a diameter of 11 times that of the Sun. It is a red clump giant, meaning it is on the horizontal branch and is generating energy through core helium fusion. The star has 2.78 times the mass of the Sun and is radiating 72 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 5,052 K. It is a source of X-ray emission. References G-type giants Horizontal-branch stars Norma (constellation) Normae, Eta Durchmusterung objects 143546 078639 5692
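The quoted size, temperature and luminosity can be cross-checked with the Stefan–Boltzmann scaling L/Lsun = (R/Rsun)^2 × (T/Tsun)^4. A small Python sketch, assuming the conventional solar effective temperature of 5,772 K (the other values are from the article; note the diameter ratio quoted above equals the radius ratio):

T_SUN = 5772.0  # nominal solar effective temperature in kelvins (assumed)

R = 11.0    # size in solar units (from the article)
T = 5052.0  # effective temperature in kelvins (from the article)

# Stefan-Boltzmann scaling: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4
L = R**2 * (T / T_SUN)**4
print(f"luminosity ~ {L:.0f} Lsun")  # ~71, close to the quoted 72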
Eta Normae
Astronomy
235
29,882,763
https://en.wikipedia.org/wiki/K%20factor%20%28crude%20oil%20refining%29
The K factor or characterization factor is a systematic way of classifying a crude oil according to its paraffinic, naphthenic, intermediate or aromatic nature. It is defined from the boiling temperature TB expressed in degrees Rankine (°R = 1.8 × T[K]) and the specific gravity d relative to water at 60 °F: K(UOP) = (TB)^(1/3) / d, i.e. the cube root of the boiling point in °R divided by the specific gravity. Values of 12.5 or higher indicate a crude oil of predominantly paraffinic constituents, while values of 10 or lower indicate a crude of more aromatic nature. The K(UOP) is also referred to as the UOP K factor or just UOPK. See also Crude oil assay References External links Pipe fitting friction calculation Pipe Friction Loss Calculations Oil refining Separation processes
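As an illustrative sketch of the formula in Python (the boiling point and specific gravity below are hypothetical values, chosen only to show where a crude falls relative to the 10 and 12.5 thresholds):

def uop_k_factor(boiling_point_kelvin, specific_gravity):
    # Cube root of the boiling point in degrees Rankine, divided by
    # the specific gravity relative to water at 60 degF.
    t_rankine = 1.8 * boiling_point_kelvin
    return t_rankine ** (1.0 / 3.0) / specific_gravity

# Hypothetical crude: mean average boiling point 600 K, specific gravity 0.85
k = uop_k_factor(600.0, 0.85)
print(f"K(UOP) = {k:.2f}")  # ~12.07: intermediate, leaning paraffinic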
K factor (crude oil refining)
Chemistry
144
9,421,365
https://en.wikipedia.org/wiki/Rhombitriheptagonal%20tiling
In geometry, the rhombitriheptagonal tiling is a semiregular tiling of the hyperbolic plane. At each vertex of the tiling there is one triangle and one heptagon, alternating between two squares. The tiling has Schläfli symbol rr{7, 3}. It can be constructed as a rectified triheptagonal tiling, r{7,3}, as well as an expanded heptagonal tiling or expanded order-7 triangular tiling. Dual tiling The dual tiling is called a deltoidal triheptagonal tiling, and consists of congruent kites. It is formed by overlaying an order-3 heptagonal tiling and an order-7 triangular tiling. Related polyhedra and tilings From a Wythoff construction there are eight hyperbolic uniform tilings that can be based on the regular heptagonal tiling. Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms. Symmetry mutations This tiling is topologically related as a part of a sequence of cantellated polyhedra with vertex figure (3.4.n.4), and continues as tilings of the hyperbolic plane. These vertex-transitive figures have (*n32) reflectional symmetry. See also Rhombitrihexagonal tiling Order-3 heptagonal tiling Tilings of regular polygons List of uniform tilings Kagome lattice References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Semiregular tilings
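One way to see why this tiling requires the hyperbolic plane is to total the Euclidean corner angles of the regular polygons meeting at each vertex (one triangle, two squares, one heptagon). A short Python check of this reasoning, using only the standard interior-angle formula (the check itself is not from the article):

def interior_angle(n):
    # Interior angle of a regular Euclidean n-gon, in degrees.
    return (n - 2) * 180.0 / n

# Vertex configuration 3.4.7.4: triangle, square, heptagon, square
total = interior_angle(3) + 2 * interior_angle(4) + interior_angle(7)
print(f"angle sum = {total:.2f} degrees")  # ~368.57, which exceeds 360

Since the sum exceeds 360°, the four polygons cannot close up around a point in the Euclidean plane; in the hyperbolic plane, where corner angles of regular polygons can be made smaller, the configuration fits exactly.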
Rhombitriheptagonal tiling
Physics
416
34,266,683
https://en.wikipedia.org/wiki/Huawei%20IDEOS
The Huawei IDEOS U8150 is an Android smartphone manufactured by Huawei. It is rebranded as S31HW by EMOBILE in Japan. It is also renamed the T-Mobile Comet in the United States. Features The U8150 features a 2.8-inch glass capacitive touchscreen. It also features a 3.16 megapixel fixed-focus camera, Bluetooth connectivity, built-in GPS/AGPS. A microSD slot is located under the phone's battery cover. Physical buttons are the power button, a volume control on the side of the phone, a call start button, a navigation pad, and a call end button. Notes External links Profile at gsmarena.com Huawei Android (operating system) devices Ideos Mobile phones introduced in 2010 Discontinued smartphones
Huawei IDEOS
Technology
173
12,055,540
https://en.wikipedia.org/wiki/Shapley%E2%80%93Shubik%20power%20index
The Shapley–Shubik power index was formulated by Lloyd Shapley and Martin Shubik in 1954 to measure the powers of players in a voting game. The constituents of a voting system, such as legislative bodies, executives, shareholders, individual legislators, and so forth, can be viewed as players in an n-player game. Players with the same preferences form coalitions. Any coalition that has enough votes to pass a bill or elect a candidate is called winning. The power of a coalition (or a player) is measured by the fraction of the possible voting sequences in which that coalition casts the deciding vote, that is, the vote that first guarantees passage or failure. The power index is normalized between 0 and 1. A power of 0 means that a coalition has no effect at all on the outcome of the game; and a power of 1 means a coalition determines the outcome by its vote. Also the sum of the powers of all the players is always equal to 1. There are some algorithms for calculating the power index, e.g., dynamic programming techniques, enumeration methods and Monte Carlo methods. Since Shapley and Shubik published their paper, several axiomatic approaches have been used to mathematically study the Shapley–Shubik power index, with the anonymity axiom, the null player axiom, the efficiency axiom and the transfer axiom being the most widely used. Examples Suppose decisions are made by majority rule in a body consisting of A, B, C, D, who have 3, 2, 1 and 1 votes, respectively. The majority vote threshold is 4. There are 4! = 24 possible orders for these members to vote. For each voting sequence the pivot voter is the one who first raises the cumulative sum to 4 or more. A is pivotal in 12 of the 24 sequences. Therefore, A has an index of power 1/2. The others have an index of power 1/6. Curiously, B has no more power than C and D. When you consider that A's vote determines the outcome unless the others unite against A, it becomes clear that B, C, D play identical roles. This is reflected in the power indices. Suppose now a majority-rule voting body with n + 1 members, in which a single strong member has k votes and the remaining n members have one vote each. In this case the strong member has a power index of k/(n + 1) (unless the strong member's votes alone meet the majority threshold, in which case the power index is simply 1). Note that this is more than the fraction of votes which the strong member commands. Indeed, this strong member holds only a fraction k/(n + k) of the votes. Consider, for instance, a company which has 1000 outstanding shares of voting stock. One large shareholder holds 400 shares, while 600 other shareholders hold 1 share each. This corresponds to n = 600 and k = 400. In this case the power index of the large shareholder is approximately 0.666 (or 66.6%), even though this shareholder holds only 40% of the stock. The remaining 600 shareholders each have a power index of less than 0.0006 (or 0.06%). Thus, the large shareholder holds over 1000 times more voting power than each other shareholder, while holding only 400 times as much stock. The above can be mathematically derived as follows. Note that a majority is reached if at least t(n, k) = floor((n + k)/2) + 1 votes are cast in favor. If k ≥ t(n, k), the strong member clearly holds all the power, since in this case the strong member's votes alone meet the majority threshold. Suppose now that k < t(n, k) and that in a randomly chosen voting sequence, the strong member votes as the rth member. This means that after the first r − 1 members have voted, r − 1 votes have been cast in favor, while after the first r members have voted, r − 1 + k votes have been cast in favor.
The vote of the strong member is pivotal if the former does not meet the majority threshold, while the latter does. That is, r − 1 < t(n, k), and r − 1 + k ≥ t(n, k). We can rewrite this condition as t(n, k) − k ≤ r − 1 < t(n, k). Note that our condition k < t(n, k) ensures that t(n, k) − k ≥ 0 and t(n, k) − 1 ≤ n (i.e., all of the permitted values of r are feasible). Thus, the strong member is the pivotal voter if r − 1 takes on one of the k values from t(n, k) − k up to but not including t(n, k). Since each of the n + 1 possible values of r is associated with the same number of voting sequences, this means that the strong member is the pivotal voter in a fraction k/(n + 1) of the voting sequences. Applications The index has been applied to the analysis of voting in the Council of the European Union. The index has been applied to the analysis of voting in the United Nations Security Council. The UN Security Council is made up of fifteen member states, of which five (the United States of America, Russia, China, France and the United Kingdom) are permanent members of the council. For a motion to pass in the Council, it needs the support of every permanent member and the support of four non permanent members. This is equivalent to a voting body where the five permanent members have eight votes each, the ten other members have one vote each and there is a quota of forty four votes, as then there would be fifty total votes, so you need all five permanent members and then four other votes for a motion to pass. Note that a non-permanent member is pivotal in a permutation if and only if they are in the ninth position to vote and all five permanent members have already voted. Suppose that we have a permutation in which a non-permanent member is pivotal. Then there are three other non-permanent members and five permanent members that have to come before this pivotal member in this permutation. Therefore, there are C(9, 3) = 84 ways of choosing these members, and so 8! × C(9, 3) different orders of the members before the pivotal voter. There would then be 6! ways of ordering the remaining voters after the pivotal voter. As there are a total of 15! permutations of 15 voters, the Shapley–Shubik power index of a non-permanent member is C(9, 3) × 8! × 6! / 15! = 4/2145 ≈ 0.0019. Hence the power index of a permanent member is (1 − 10 × 4/2145)/5 = 421/2145 ≈ 0.1963. Python implementation This is a simple implementation of the above example in Python.

from math import factorial, floor

def normalize(values):
    total = sum(values)
    return [float(v) / total for v in values]

def enumerate_coalitions(n):
    # Yield every subset of {1, ..., n} as a list of offsets.
    if n == 0:
        yield []
    else:
        for coalition in enumerate_coalitions(n - 1):
            yield coalition
            yield coalition + [n]

def power_index(seats, threshold=None):
    if threshold is None:
        threshold = floor(sum(seats) / 2) + 1
    result = [0] * len(seats)
    # For each voter ("pivot"), count the orderings in which the voters
    # before the pivot fall short of the threshold while the pivot's
    # seats push the running total over it.
    for coalition in enumerate_coalitions(len(seats) - 1):
        for pivot in range(len(seats)):
            coalition_seats = sum(seats[(pivot + i) % len(seats)] for i in coalition)
            if (coalition_seats < threshold
                    and threshold <= coalition_seats + seats[pivot]):
                result[pivot] += (factorial(len(coalition))
                                  * factorial(len(seats) - len(coalition) - 1))
    return normalize(result)

print(power_index([3, 2, 1, 1]))  # [0.5, 0.1666..., 0.1666..., 0.1666...]

See also Shapley value Arrow theorem Banzhaf power index References External links Online Power Index Calculator (by Tomomi Matsui) Computer Algorithms for Voting Power Analysis Web-based algorithms for voting power analysis Power Index Calculator Computes various indices for (multiple) weighted voting games online. Includes some examples. Computing Shapley-Shubik power index and Banzhaf power index with Python and R (by Frank Huettner) Game theory Cooperative games Voting theory Lloyd Shapley
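As a usage sketch of the implementation above, the Security Council weighting can be fed in directly (this call is not part of the original code; note that the enumeration is exponential in the number of voters, so it is practical for the 15-member Council but would not scale to the 601-shareholder example):

indices = power_index([8] * 5 + [1] * 10, threshold=44)
print(round(indices[0], 4))   # permanent member: 421/2145 ~ 0.1963
print(round(indices[-1], 5))  # non-permanent member: 4/2145 ~ 0.00186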
Shapley–Shubik power index
Mathematics
1,598
35,038,731
https://en.wikipedia.org/wiki/C19H26N4O2
The molecular formula C19H26N4O2 may refer to: BIMU8, a drug which acts as a selective 5-HT4 receptor agonist ADB-4en-PINACA, a cannabinoid designer drug Sucrononic acid, a guanidine-derived artificial sweetener Molecular formulas
C19H26N4O2
Physics,Chemistry
69
52,472,798
https://en.wikipedia.org/wiki/Nisterime
Nisterime (), also known as 2α-chloro-4,5α-dihydrotestosterone 3-O-(p-nitrophenyl)oxime or as 2α-chloro-5α-androstan-17β-ol-3-one 3-O-(p-nitrophenyl)oxime, is a synthetic anabolic-androgenic steroid (AAS) and a derivative of dihydrotestosterone (DHT) that was never marketed. The C17α acetate ester of nisterime, nisterime acetate (ORF-9326), also exists and was developed as a postcoital contraceptive but was similarly never marketed. See also Istaroxime References Abandoned drugs Secondary alcohols Anabolic–androgenic steroids Androstanes Hormonal contraception 4-Nitrophenyl compounds Organochlorides Steroid oximes Ketoximes
Nisterime
Chemistry
214
6,748,115
https://en.wikipedia.org/wiki/Health%20information%20management
Health information management (HIM) is information management applied to health and health care. It is the practice of analyzing and protecting digital and traditional medical information vital to providing quality patient care. With the widespread computerization of health records, traditional (paper-based) records are being replaced with electronic health records (EHRs). The tools of health informatics and health information technology are continually improving to bring greater efficiency to information management in the health care sector. Health information management professionals plan information systems, develop health policy, and identify current and future information needs. In addition, they may apply the science of informatics to the collection, storage, analysis, use, and transmission of information to meet legal, professional, ethical and administrative records-keeping requirements of health care delivery. They work with clinical, epidemiological, demographic, financial, reference, and coded healthcare data. Health information administrators have been described as playing "a critical role in the delivery of healthcare in the United States through their focus on the collection, maintenance and use of quality data to support the information-intensive and information-reliant healthcare system". History and development of HIM standards in the United States HIM standards began with the establishment of AHIMA Health information management's standards history dates back to the introduction of the American Health Information Management Association, founded in 1928 "when the American College of Surgeons established the Association of Record Librarians of North America (ARLNA) to 'elevate the standards of clinical records in hospitals and other medical institutions.'" In 1938, AHIMA was known as the American Association of Medical Record Librarians (AAMRL) and its members were known as medical record experts or librarians who studied medical record science. The goal was to raise the standards of records keeping in hospitals and other healthcare facilities. The individuals involved in this profession were advocates of the successful management of clinical records to guarantee accuracy and precision. Over time, the organization's name changed to reflect the evolving field of health information management practices, eventually becoming the American Health Information Management Association. The association's current name is meant to cover the wide variety of areas in which health information professionals work today. AHIMA members affect the quality of patient information and patient care at every touch point in the healthcare delivery cycle. They often serve in bridge roles, connecting clinical, operational, and administrative functions. HIMSS establishment in 1961 increased industry knowledge The Healthcare Information and Management Systems Society (HIMSS) was organized in 1961 as the Hospital Management Systems Society (HMSS), an independent, unincorporated, nonprofit, voluntary association of individuals. It was preceded by increasing amounts of management engineering activity in healthcare during the 1950s, when the teachings of Frederick Winslow Taylor and Frank Bunker Gilbreth, Sr. began to attract the attention of health leaders. The HIMSS grew to include chapters, membership categories, publications and conventions, and it continues to grow in different parts of the world via its Europe, Asia Pacific, and Middle Eastern branches.
Accredited HIM educational program development The Commission on Accreditation for Health Informatics and Information Management Education (CAHIIM) defines standards which higher education health information management and technology programs must meet to qualify for accreditation. Students who graduate from an accredited associate, bachelor's or certificate program are qualified to sit for the corresponding certification exam: Registered Health Information Technician (RHIT), which requires graduation from an accredited associate or certificate program, or Registered Health Information Administrator (RHIA), which requires education through an accredited bachelor's or certificate program. Competency requirements are maintained by CAHIIM in their associate degree Entry-Level Competencies and Baccalaureate Degree Entry-Level Competencies definitions. Modern development The World Health Organization (WHO) stated that the proper collection, management and use of information within healthcare systems "will determine the system's effectiveness in detecting health problems, defining priorities, identifying innovative solutions and allocating resources to improve health outcomes". Electronic health records The electronic health record has continually been described as an evolution of health record-keeping. Because it is electronic, this means of record keeping has been both supported and debated in the health professional community and within the public realm. In the United States, 89% of those who responded to a recent poll by The Wall Street Journal described themselves as "Very/Somewhat Confident" in health care providers who used electronic health records, compared to 71% of respondents who said the same of providers who did not use electronic health records. As of 2008, more than fifty percent of the chief information officers polled said that they wanted ambulatory electronic health records in order to have the health information record available to move across each stage of health care. Health information managers are charged with the protection of patient privacy and are responsible for training their employees in the proper handling and usage of the confidential information entrusted to them. With the rise of technology's importance in healthcare, health information managers must remain competent with the use of information databases that generate crucial reports for administrators and physicians. Educational programs The requisites and accreditation processes for health information management education and professional activity vary across jurisdictions. In the United States, the CAHIIM requires continued accreditation for accredited programs in health information management. The current standard is that accreditation may be maintained with periodic site visits, submission of an annual report, informing CAHIIM of adverse changes within the program and paying CAHIIM administrative fees. HIM students may opt to participate in a full-time bridge program called the Joint Bachelor of Science/Masters Program. With this program, students can achieve both the Bachelor of Science in Health Information Management and the Master of Health Services Administration Program (BSHIM/MHSA). The full-time bridge program allows students to achieve both degrees in five years.
Students pursuing the BSHIM/MHSA will be prepared to assume management and executive positions in health-related organizations such as hospitals, managed care organizations, health information system developers and vendors, and pharmaceutical companies, and to bring their knowledge in HIM to these positions. In Canada, graduates of Canadian College of Health Information Management (CCHIM) programs are eligible to write a national certification examination to pursue a profession in HIM. Online program availability There are many programs that are also available online. Online students collaborate with in-class students using internet technology. With online learning, students are allowed to go through the programs at their own pace. Online students are included in class through recorded group lectures that are put online and through discussion boards, and they take part in group projects with in-class students. Some online students are even allowed to attend some classes on campus and take some classes online. The CAHIIM lists accredited online programs on its website. Further education for health information professionals Education is an important aspect of being successful in the world of health information management. Aside from initial credentials, health information professionals may wish to pursue a Master of Health Information Management, Master of Business Administration, Master of Health Administration, or other master's programs in health data management, information technology and systems, and organization and management. Gaining further education advances the health professional's career and qualifies the individual for upper-management positions. Elements Healthcare quality and safety require that the right information be available at the right time to support patient care and health system management decisions. Gaining consensus on essential data content and documentation standards is a necessary prerequisite for high-quality data in the interconnected healthcare system of the future. Continuous quality management of data standards and content is key to ensuring that information is usable and actionable. Records The patient health record is the primary legal record documenting the health care services provided to a person in any aspect of the health care system. The term includes routine clinical or office records, records of care in any health related setting, preventive care, lifestyle evaluation, research protocols and various clinical databases. This repository of information about a single patient is generated by health care professionals as a direct result of interaction with a patient or with individuals who have personal knowledge of the patient. The primary patient record is the record that is used by health care professionals while providing patient care services to review patient data or document their own observations, actions, or instructions. The secondary patient record is a record that is derived from the primary record and contains selected data elements to aid non-clinical persons in supporting, evaluating and advancing patient care. Patient care support refers to administration, regulation, and payment functions. Practices Methods to ensure Data Quality The accuracy of data depends on the manual or computer information system design for collecting, recording, storing, processing, accessing and displaying data as well as the ability and follow-through of the people involved in each phase of these activities.
Everyone involved with documenting or using health information is responsible for its quality. According to AHIMA's Data Quality Management Model, there are four key processes for data: Application: the purpose for which the data are collected. Collection: the processes by which data elements are accumulated. Warehousing: the processes and systems used to store and maintain data and data journals. Analysis: the process of translating data into information utilized for an application. Each aspect is analyzed with 10 different data characteristics: Accuracy: Data are the correct values and are valid. Accessibility: Data items should be easily obtainable and legal to collect. Comprehensiveness: All required data items are included. Ensure that the entire scope of the data is collected and document intentional limitations. Consistency: The value of the data should be reliable and the same across applications. Currency: The data should be up to date. A datum value is up to date if it is current for a specific point in time. It is outdated if it was current at some preceding time yet incorrect at a later time. Definition: Clear definitions should be provided so that current and future data users will know what the data mean. Each data element should have clear meaning and acceptable values. Granularity: The attributes and values of data should be defined at the correct level of detail. Precision: Data values should be just large enough to support the application or process. Relevancy: The data are meaningful to the performance of the process or application for which they are collected. Timeliness: Timeliness is determined by how the data are being used and their context. Health information professionals HIM is a very broad and successful field for health care professionals. There are several career opportunities in Health Information Management and many different traditional and non-traditional settings for an HIM professional to work within. Traditional settings include: managing an HIM medical records department, cancer registry, coding, trauma registry, transcription, quality improvement, release of information, patient admissions, compliance auditor, physician accreditation, utilization review, physician offices and risk management. Non-traditional settings include: consulting firms, government agencies, law firms, insurance companies, correctional facilities, extended care facilities, pharmaceutical research, information technology and medical software companies. Health information managers Professional health information managers manage and construct health information programs to guarantee they accommodate medical, legal, and ethical standards. They play a crucial role in the maintenance, collection, and analysis of data that is received by doctors, nurses, and other healthcare players. In return these healthcare data contributors rely on the information to deliver quality healthcare. Managers must work with a group of information technicians to guarantee that the patient's medical records are accurate and are available when needed. In the United States, health information managers are typically certified as a Registered Health Information Administrator (RHIA) after achieving a bachelor's degree in health informatics or health information management from a school accredited by the Commission on Accreditation for Health Informatics and Information Management Education (CAHIIM) and after passing their respective certification exam.
The Certified Health Informatics Systems Professional (CHISP) certification offered by the American Society of Health Informatics Managers (ASHIM) is intended to credential a working-level IT or clinical professional who is able to support physician adoption of health IT. A CHISP professional needs to possess knowledge of the health care environment, health IT and general IT, as well as soft skills including communication skills. RHIAs usually assume a managerial position that interacts with all levels of an organization that use patient data in decision making and everyday operations. They may work in a broad range of settings that span the continuum of healthcare including office-based physician practices, nursing homes, home health agencies, mental health facilities, and public health agencies. Health information managers may specialize in registry management, data management, and data quality among other areas. Medical records and Health information technicians Medical records (MR) and Health information technicians (HIT) are described as having the following duties according to the U.S. Bureau of Labor Statistics' Occupational Outlook Handbook: assemble patients' health information including medical history, symptoms, examination results, diagnostic tests, treatment methods, and all other healthcare provider services. Technicians organize and manage health information data by ensuring its quality, accuracy, accessibility, and security. They regularly communicate with physicians and other healthcare professionals to clarify diagnoses or to obtain additional information. The International Labour Organization's International Standard Classification of Occupations further notes: "Occupations included in this category require knowledge of medical terminology, legal aspects of health information, health data standards, and computer- or paper-based data management as obtained through formal education and/or prolonged on-the-job training." MRHITs usually work in hospitals. However, they also work in a variety of other healthcare settings, including office-based physician practices, nursing homes, home health agencies, mental health facilities, and public health agencies. Technicians who specialize in coding are called medical coders or coding specialists. In the United States, health information technicians are certified as a Registered Health Information Technician (RHIT) after completing an associate degree in health information technology from a school accredited by the Commission on Accreditation for Health Informatics and Information Management Education (CAHIIM) before they may take their certification exam. See also Clinical documentation improvement Hospital information systems Human resources for health (HRH) information systems Medical classifications SNOMED CT References Health informatics Healthcare in the United States
Health information management
Biology
2,812
2,555,528
https://en.wikipedia.org/wiki/Gas%20to%20liquids
Gas to liquids (GTL) is a refinery process to convert natural gas or other gaseous hydrocarbons into longer-chain hydrocarbons, such as gasoline or diesel fuel. Methane-rich gases are converted into liquid synthetic fuels. Two general strategies exist: (i) direct partial combustion of methane to methanol and (ii) Fischer–Tropsch-like processes that convert carbon monoxide and hydrogen into hydrocarbons. Strategy ii is followed by diverse methods to convert the hydrogen-carbon monoxide mixtures to liquids. Direct partial combustion has been demonstrated in nature but not replicated commercially. Technologies reliant on partial combustion have been commercialized mainly in regions where natural gas is inexpensive. The motivation for GTL is to produce liquid fuels, which are more readily transported than methane. Methane must be cooled below its critical temperature of -82.3 °C in order to be liquified under pressure. Because of the associated cryogenic apparatus, LNG tankers are used for transport. Methanol is a conveniently handled combustible liquid, but its energy density is half of that of gasoline. Fischer–Tropsch process A GtL process may be established via the Fischer–Tropsch process which comprises several chemical reactions that convert a mixture of carbon monoxide (CO) and hydrogen (H2) into long chained hydrocarbons. These hydrocarbons are typically liquid or semi-liquid and ideally have the formula (CnH2n+2). In order to obtain the mixture of CO and H2 required for the Fischer–Tropsch process, methane (main component of natural gas) may be subjected to partial oxidation which yields a raw synthesis gas mixture of mostly carbon dioxide, carbon monoxide, hydrogen gas (and sometimes water and nitrogen). The ratio of carbon monoxide to hydrogen in the raw synthesis gas mixture can be adjusted e.g. using the water gas shift reaction. Removing impurities, particularly nitrogen, carbon dioxide and water, from the raw synthesis gas mixture yields pure synthesis gas (syngas). The pure syngas is routed into the Fischer–Tropsch process, where the syngas reacts over an iron or cobalt catalyst to produce synthetic hydrocarbons, including alcohols. Methane to methanol process Methanol is made from methane (natural gas) in a series of three reactions: Steam reforming CH4 + H2O → CO + 3 H2 ΔrH = +206 kJ mol−1 Water shift reaction CO + H2O → CO2 + H2 ΔrH = -41 kJ mol−1 Synthesis 2 H2 + CO → CH3OH ΔrH = -92 kJ mol−1 The methanol thus formed may be converted to gasoline by the Mobil process and methanol-to-olefins. Methanol to gasoline (MTG) and methanol to olefins In the early 1970s, Mobil developed an alternative procedure in which natural gas is converted to syngas, and then methanol. The methanol reacts in the presence of a zeolite catalyst to form various compounds. In the first step methanol is partially dehydrated to give dimethyl ether: 2 CH3OH → CH3OCH3 + H2O The mixture of dimethyl ether and methanol is then further dehydrated over a zeolite catalyst such as ZSM-5, and in practice is polymerized and hydrogenated to give a gasoline with hydrocarbons of five or more carbon atoms making up 80% of the fuel by weight. The Mobil MTG process is practiced from coal-derived methanol in China by JAMG. A more modern implementation of MTG is the Topsøe improved gasoline synthesis (TiGAS). Methanol can be converted to olefins using zeolite and SAPO-based heterogeneous catalysts. Depending on the catalyst pore size, this process can afford either C2 or C3 products, which are important monomers. 
Methanol to olefins technology is widely used in China in order to produce plastics from coal gasification. It is also discussed as a method to make fossil-free plastics in the future. Syngas to gasoline plus process (STG+) A third gas-to-liquids process builds on the MTG technology by converting natural gas-derived syngas into drop-in gasoline and jet fuel via a thermochemical single-loop process. The STG+ process follows four principal steps in one continuous process loop. This process consists of four fixed-bed reactors in series in which syngas is converted to synthetic fuels. The steps for producing high-octane synthetic gasoline are as follows: Methanol synthesis: Syngas is fed to Reactor 1, the first of four reactors, which converts most of the syngas (CO and H2) to methanol (CH3OH) when passing through the catalyst bed. Dimethyl ether (DME) synthesis: The methanol-rich gas from Reactor 1 is next fed to Reactor 2, the second STG+ reactor. The methanol is exposed to a catalyst and much of it is dehydrated to DME (CH3OCH3). Gasoline synthesis: The Reactor 2 product gas is next fed to Reactor 3, the third reactor containing the catalyst for conversion of DME to hydrocarbons including paraffins (alkanes), aromatics, naphthenes (cycloalkanes) and small amounts of olefins (alkenes), mostly from C6 to C10 (the subscript giving the number of carbon atoms in the hydrocarbon molecule). Gasoline treatment: The fourth reactor provides transalkylation and hydrogenation treatment to the products coming from Reactor 3. The treatment reduces durene (tetramethylbenzene)/isodurene and trimethylbenzene components that have high freezing points and must be minimized in gasoline. As a result, the synthetic gasoline product has high octane and desirable viscometric properties. Separator: Finally, the mixture from Reactor 4 is condensed to obtain gasoline. The non-condensed gas and gasoline are separated in a conventional condenser/separator. Most of the non-condensed gas from the product separator becomes recycled gas and is sent back to the feed stream to Reactor 1, leaving the synthetic gasoline product composed of paraffins, aromatics and naphthenes. Biological gas-to-liquids (Bio-GTL) With methane as the predominant target for GTL, much attention has focused on the three enzymes that process methane. These enzymes support the existence of methanotrophs, microorganisms that metabolize methane as their only source of carbon and energy. Aerobic methanotrophs harbor enzymes that oxygenate methane to methanol. The relevant enzymes are methane monooxygenases, which are found both in soluble and particulate (i.e. membrane-bound) varieties. They catalyze the oxygenation according to the following stoichiometry: CH4 + O2 + NADH + H+ → CH3OH + H2O + NAD+ Anaerobic methanotrophs rely on the bioconversion of methane using the enzymes called methyl-coenzyme M reductases. These organisms effect reverse methanogenesis. Strenuous efforts have been made to elucidate the mechanisms of these methane-converting enzymes, which would enable their catalysis to be replicated in vitro. Biodiesel can be made using the microbes Moorella thermoacetica and Yarrowia lipolytica. This process is known as biological gas-to-liquids. Commercial uses Using gas-to-liquids processes, refineries can convert some of their gaseous waste products (flare gas) into valuable fuel oils, which can be sold as is or blended only with diesel fuel.
The World Bank estimates that the natural gas flared or vented annually is worth approximately $30.6 billion, equivalent to 25% of the United States' gas consumption or 30% of the European Union's annual gas consumption, a resource that GTL could put to use. Gas-to-liquids processes may also be used for the economic extraction of gas deposits in locations where it is not economical to build a pipeline. This process will be increasingly significant as crude oil resources are depleted. Royal Dutch Shell produces a diesel from natural gas in a factory in Bintulu, Malaysia. Another Shell GTL facility is the Pearl GTL plant in Qatar, the world's largest GTL facility. Sasol has recently built the Oryx GTL facility in Ras Laffan Industrial City, Qatar, and together with Uzbekneftegaz and Petronas is building the Uzbekistan GTL plant. Chevron Corporation, in a joint venture with the Nigerian National Petroleum Corporation, is commissioning the Escravos GTL plant in Nigeria, which uses Sasol technology. PetroSA, South Africa's national oil company, owns and operates a 22,000 barrels/day (capacity) GTL plant in Mossel Bay, using Sasol GTL technology. Aspirational and emerging ventures A new generation of GTL technology is being pursued for the conversion of unconventional, remote and problem gas into valuable liquid fuels. GTL plants based on innovative Fischer–Tropsch catalysts have been built by INFRA Technology. Other, mainly U.S., companies include Velocys, ENVIA Energy, Waste Management, NRG Energy, ThyssenKrupp Industrial Solutions, Liberty GTL, Petrobras, Greenway Innovative Energy, Primus Green Energy, Compact GTL, and Petronas. Several of these processes have proven themselves with demonstration flights using their jet fuels. Another proposed solution to stranded gas involves the use of novel FPSOs for offshore conversion of gas to liquids such as methanol, diesel, petrol, synthetic crude, and naphtha. Economics of GTL GTL using natural gas is more economical when there is a wide gap between the prevailing natural gas price and the crude oil price on a barrel of oil equivalent (BOE) basis; a coefficient of 0.1724 results in full oil parity. GTL offers a mechanism for bringing international diesel, gasoline and crude oil prices toward parity with the natural gas price as global natural gas production expands at prices below those of crude oil. When natural gas is converted to liquids at the source, the products are cheaper to export than if the gas were first converted to LNG and then converted to liquid products in the importing country. However, GTL fuels are much more expensive to produce than conventional fuels. See also Biomass to liquid Carbon-neutral fuel Coal to liquid Bibliography Boogaard, P. J., Carrillo, J. C., Roberts, L. G., & Whale, G. F. (2017). Toxicological and ecotoxicological properties of gas-to-liquid (GTL) products. 1. Mammalian toxicology. Critical Reviews in Toxicology, 47(2), 121–144. References Natural gas technology Synthetic fuel technologies Industrial gases
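As a sketch of the parity arithmetic mentioned under Economics of GTL (the prices below are hypothetical, and the reading of the 0.1724 coefficient as converting a crude price in $/barrel into the energy-equivalent gas price in $/MMBtu is my interpretation of the BOE basis):

OIL_PARITY_COEFFICIENT = 0.1724  # ~1/5.8 MMBtu per barrel of oil equivalent

def gas_parity_price(crude_price_per_bbl):
    # Natural gas price ($/MMBtu) at which gas reaches full oil parity.
    return OIL_PARITY_COEFFICIENT * crude_price_per_bbl

crude = 80.0  # hypothetical crude oil price, $/bbl
gas = 3.0     # hypothetical natural gas price, $/MMBtu
print(f"parity gas price: ${gas_parity_price(crude):.2f}/MMBtu")  # $13.79
# The wider the gap between the actual gas price ($3.00 here) and the parity
# price, the more economical GTL conversion becomes.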
Gas to liquids
Chemistry
2,292
8,664,662
https://en.wikipedia.org/wiki/Generalized%20Pochhammer%20symbol
In mathematics, the generalized Pochhammer symbol of parameter α > 0 and partition κ = (κ1, κ2, ..., κm) generalizes the classical Pochhammer symbol (a)k = a(a + 1)⋯(a + k − 1), named after Leo August Pochhammer. It is defined as (a)κ^(α) = ∏ over i = 1..m of ∏ over j = 1..κi of (a − (i − 1)/α + j − 1), which reduces to the classical symbol when the partition has a single part. It is used in multivariate analysis. References Gamma and related functions Factorial and binomial topics
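A small Python sketch of the double product just given (the inputs are arbitrary example values):

def generalized_pochhammer(a, kappa, alpha):
    # (a)_kappa^(alpha) = product over rows i and columns j of
    # (a - (i - 1)/alpha + j - 1)
    result = 1.0
    for i, k_i in enumerate(kappa, start=1):
        for j in range(1, k_i + 1):
            result *= a - (i - 1) / alpha + j - 1
    return result

# With a single-part partition and alpha = 1 this reduces to the classical
# Pochhammer symbol: (3)_2 = 3 * 4 = 12.
print(generalized_pochhammer(3.0, [2], 1.0))     # 12.0
print(generalized_pochhammer(3.0, [2, 1], 2.0))  # 12 * (3 - 1/2) = 30.0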
Generalized Pochhammer symbol
Mathematics
57
20,052,587
https://en.wikipedia.org/wiki/Thermosediminibacterales
Thermosediminibacterales is an order of Gram positive bacteria in the class Clostridia. See also List of Bacteria genera List of bacterial orders References Bacteria orders Clostridia Thermophiles Anaerobes
Thermosediminibacterales
Biology
50
40,956,660
https://en.wikipedia.org/wiki/Beauty%20micrometer
The beauty micrometer, also known as the beauty calibrator, was a device designed in the early 1930s to help in the identification of the areas of a person's face which need to have their appearance reduced or enhanced by make-up. The inventors include famed beautician Max Factor Sr. A 2013 Wired article described the device as "a Clockwork Orange style device" that combines "phrenology, cosmetics and a withering pseudo-scientific analysis". A photograph of Factor, using the device on actress Marjorie Reynolds featured in a 1935 article in science magazine Modern Mechanix and, when republished by The Guardian in 2013, the caption described it as being "a contraption that looks like an instrument of torture". Placed on and around the head and face, the beauty micrometer uses flexible metal strips which align with a person's facial features. The screws holding the strips in place allow for 325 adjustments, enabling the operator to make fine measurements with a precision of one thousandth of an inch. The inventors stated that there are two key measurements that they looked for: the heights of the nose and forehead should be the same, and the eyes should be separated by the width of one eye. When an imperfection is identified, corrective make-up can be applied to enhance or subdue the feature. The company Max Factor claims that the device helped Max Factor, Sr. to better understand the female face. The beauty micrometer was completed in 1932 and was primarily intended for use in the movie industry. When an actor's face is shown on a very large scale their "flaws" are magnified and can become "glaring distortions", according to the Modern Mechanix article. This device was intended to remedy the perceived problem, and the inventors also envisioned it being used in beauty shops. However, it did not become popular and did not gain widespread usage. Only one beauty micrometer is believed to exist. It is featured in a display at the Hollywood Entertainment Museum and came up for auction in 2009, falling significantly short of the $10,000–$20,000 estimate. References Cosmetics Physiological instruments
Beauty micrometer
Technology,Engineering
437
4,056,695
https://en.wikipedia.org/wiki/Binary%20moment%20diagram
A binary moment diagram (BMD) is a generalization of the binary decision diagram (BDD) to linear functions over domains such as booleans (like BDDs), but also to integers or to real numbers. They can deal with Boolean functions with complexity comparable to BDDs, but also some functions that are dealt with very inefficiently in a BDD are handled easily by BMDs, most notably multiplication. The most important property of BMDs is that, as with BDDs, each function has exactly one canonical representation, and many operations can be efficiently performed on these representations. The main features that differentiate BMDs from BDDs are the use of linear instead of pointwise diagrams, and weighted edges. The rules that ensure the canonicity of the representation are: Decisions over variables higher in the ordering may only point to decisions over variables lower in the ordering. No two nodes may be identical (in normalization, all references to one of two identical nodes should be replaced by references to the other). No node may have all decision parts equivalent to 0 (links to such nodes should be replaced by links to their always part). No edge may have weight zero (all such edges should be replaced by direct links to 0). Weights of the edges should be coprime. Without this rule or some equivalent of it, it would be possible for a function to have many representations, for example 2x + 2 could be represented as 2 · (1 + x) or 1 · (2 + 2x). Pointwise and linear decomposition In pointwise decomposition, as in BDDs, at each branch point we store the result of all branches separately. An example of such decomposition for an integer function (2x + y) is: if x = 1, the result is 3 when y = 1 and 2 when y = 0; if x = 0, the result is 1 when y = 1 and 0 when y = 0. In linear decomposition we provide instead a default value and a difference: here the default (x = 0) branch is simply y, and taking the x branch adds the constant difference 2, so the function is stored as y + 2x. It can easily be seen that the latter (linear) representation is much more efficient in the case of additive functions, as when we add many elements the latter representation will have only O(n) elements, while the former (pointwise) has, even with sharing, exponentially many. Edge weights Another extension is using weights for edges. The value of the function at a given node is the sum of the nodes below it (the node under always, and, if the decision variable is set, the decided node), each multiplied by its edge weight. For example, the product of two 3-bit integers, (4x1 + 2x2 + x3) · (4y1 + 2y2 + y3), can be represented as (writing the bits of the first factor as x1, x2, x3 and of the second as y1, y2, y3):
Result node: always 1× value of node 2; if x1, add 4× value of node 4
Node 2: always 1× value of node 3; if x2, add 2× value of node 4
Node 3: always 0; if x3, add 1× value of node 4
Node 4: always 1× value of node 5; if y1, add +4
Node 5: always 1× value of node 6; if y2, add +2
Node 6: always 0; if y3, add +1
Without weighted nodes a much more complex representation would be required:
Result node: always value of node 2; if x1, add value of node 4
Node 2: always value of node 3; if x2, add value of node 7
Node 3: always 0; if x3, add value of node 10
Node 4: always value of node 5; if y1, add +16
Node 5: always value of node 6; if y2, add +8
Node 6: always 0; if y3, add +4
Node 7: always value of node 8; if y1, add +8
Node 8: always value of node 9; if y2, add +4
Node 9: always 0; if y3, add +2
Node 10: always value of node 11; if y1, add +4
Node 11: always value of node 12; if y2, add +2
Node 12: always 0; if y3, add +1
References Graph data structures Formal methods
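To make the linear (moment) decomposition concrete, here is a small Python sketch, not from the article, that splits a function of boolean inputs into its constant and linear moments, f = f|x=0 + x · (f|x=1 − f|x=0):

def moments(f, x_index):
    # Return the constant moment f|x=0 and the linear moment
    # f|x=1 - f|x=0 with respect to the variable at x_index;
    # both are functions of the remaining inputs.
    def constant(bits):
        return f(bits[:x_index] + (0,) + bits[x_index:])
    def linear(bits):
        low = f(bits[:x_index] + (0,) + bits[x_index:])
        high = f(bits[:x_index] + (1,) + bits[x_index:])
        return high - low
    return constant, linear

# The article's example: f(x, y) = 2x + y
f = lambda bits: 2 * bits[0] + bits[1]
const, lin = moments(f, 0)
print(const((1,)), lin((1,)))  # 1 2, i.e. f = y + 2x, matching the text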
Binary moment diagram
Engineering
690
3,473,949
https://en.wikipedia.org/wiki/Repetition%20code
In coding theory, the repetition code is one of the most basic linear error-correcting codes. In order to transmit a message over a noisy channel that may corrupt the transmission in a few places, the idea of the repetition code is to just repeat the message several times. The hope is that the channel corrupts only a minority of these repetitions. This way the receiver will notice that a transmission error occurred since the received data stream is not the repetition of a single message, and moreover, the receiver can recover the original message by looking at the received message in the data stream that occurs most often. Because of the bad error correcting performance coupled with the low code rate (ratio between useful information symbols and actual transmitted symbols), other error correction codes are preferred in most cases. The chief attraction of the repetition code is the ease of implementation. Code parameters In the case of a binary repetition code, there exist two code words - all ones and all zeros - which have a length of n. Therefore, the minimum Hamming distance of the code equals its length n. This gives the repetition code an error-correcting capacity of ⌊(n − 1)/2⌋ (i.e. it will correct up to ⌊(n − 1)/2⌋ errors in any code word). If the length of a binary repetition code is odd, then it is a perfect code. The binary repetition code of length n is equivalent to the (n, 1)-Hamming code. An (n, 1) BCH code is also a repetition code. Example Consider a binary repetition code of length 3. The user wants to transmit the information bits 101. Then the encoding maps each bit either to the all ones or all zeros code word, so we get 111 000 111, which will be transmitted. Let's say three errors corrupt the transmitted bits and the received sequence is 111 010 100. Decoding is usually done by a simple majority decision for each code word. That leads us to 100 as the decoded information bits, because fewer than two errors occurred in the first and second code words, so the majority of the bits are correct. But in the third code word two bits are corrupted, which results in an erroneous information bit, since two errors exceed the error-correcting capacity of one. Applications Despite their poor performance as stand-alone codes, use in Turbo code-like iteratively decoded concatenated coding schemes, such as repeat-accumulate (RA) and accumulate-repeat-accumulate (ARA) codes, allows for surprisingly good error correction performance. Repetition codes are one of the few known codes whose code rate can be automatically adjusted to varying channel capacity, by sending more or less parity information as required to overcome the channel noise, and it is the only such code known for non-erasure channels. Practical adaptive codes for erasure channels have been invented only recently, and are known as fountain codes. Some UARTs, such as the ones used in the FlexRay protocol, use a majority filter to ignore brief noise spikes. This spike-rejection filter can be seen as a kind of repetition decoder. See also Block code Turbo code References Coding theory Error detection and correction
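A minimal Python sketch of the length-3 example above, with encoding by repetition and decoding by per-block majority vote (the function names are my own):

def encode(bits, n=3):
    # Repeat each information bit n times.
    return [b for b in bits for _ in range(n)]

def decode(received, n=3):
    # Majority-vote each block of n received bits.
    blocks = [received[i:i + n] for i in range(0, len(received), n)]
    return [1 if sum(block) > n // 2 else 0 for block in blocks]

codeword = encode([1, 0, 1])            # [1,1,1, 0,0,0, 1,1,1]
received = [1, 1, 1, 0, 1, 0, 1, 0, 0]  # the three channel errors from the text
print(decode(received))                 # [1, 0, 0]: the third block decodes wrongly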
Repetition code
Mathematics,Engineering
633
8,510,311
https://en.wikipedia.org/wiki/Liquid%20Fidelity
Liquid Fidelity is a "microdisplay" technology applied in high-definition televisions. It incorporates Liquid Crystal on Silicon technology capable of producing true 1080p resolution with two million pixels on a single display chip. Components of Liquid Fidelity technology were originally used in 720p HDTVs produced by Uneed Systems of Korea from 2004-2006. Technology Overview Liquid Crystal on Silicon in general is a sophisticated mix of optical and electrical technologies on one chip. The top layer of the chip is liquid crystal material, the bottom layer is an integrated circuit that drives the liquid crystal, and the surface between the layers is highly reflective. The circuit determines how much light passes through the liquid crystal layer, and the reflected light creates an image on a projection screen. LCOS chips with both 720p and 1080p resolution have been developed for HDTVs. Nearly all LCOS chips in mass production have been used in three-chip systems, with one LCOS chip each for red, green and blue light. Sony’s SXRD and JVC’s HD-ILA TVs create images this way. While three-chip systems can produce very good HDTV pictures, they are difficult to align precisely and are expensive. Misalignments can cause visible convergence errors between red, green and blue, particularly along the sides and in the corners of the screen. Liquid Fidelity addresses both the alignment and cost problems. Exclusive technology enables Liquid Fidelity to change its brightness much more quickly than ordinary LCOS chips can. This fast response allows the use of one chip and a color wheel, rather than three chips, so red, green and blue alignment is assured at all areas on the screen. Also, by eliminating two of the three LCOS chips and the additional optical components to support them, Liquid Fidelity HDTVs are generally less expensive to manufacture. Comparison to DLP technology DLP uses MEMS technology, which stands for Micro-Electro-Mechanical Systems. DLP HDTV chips include hundreds of thousands of microscopic mirrors which tilt back and forth, reflecting light which is then projected onto a television screen. While Liquid Fidelity creates an HDTV image by controlling the amount of light reflecting from it, DLP creates an HDTV image by varying the percentage of the time that its mirrors are aimed toward the projection screen. The main advantage of Liquid Fidelity over DLP is that the 1080p Liquid Fidelity chip has over 2 million cells, in an array of 1920 x 1080, for true 1080p pixel resolution. The 1080p DLP chips designed for consumer HDTVs have only half that number of microscopic mirrors, and use yet another mechanism to create 2 pixels from each of those mirrors. By providing a dedicated cell for every pixel, Liquid Fidelity technology provides a sharp, stable picture with smooth, fine texture. References LCoS and 'Liquid Fidelity': A microdisplay application External links MicroDisplay Corporation Liquid Fidelity Liquid Fidelity detail Display technology
Liquid Fidelity
Engineering
593
68,683,770
https://en.wikipedia.org/wiki/Foraminifera%20test
Foraminiferal tests are the tests (or shells) of Foraminifera. Foraminifera (forams for short) are single-celled predatory protists, mostly marine, and usually protected with shells. These shells, often called tests, can be single-chambered or have multiple interconnected chambers; the cellular machinery is contained within the shell. So important is the test to the biology of foraminifera that it provides the scientific name of the group—foraminifera, Latin for "hole bearers", referring to the pores connecting chambers of the shell in the multi-chambered species. Foraminiferal tests are usually made of calcite, a form of calcium carbonate (CaCO₃), but are sometimes made of aragonite, agglutinated sediment particles, chitin, or (rarely) silica. Other foraminifera lack tests altogether. Over 50,000 species are recognized, both living (6,700 - 10,000) and fossil (40,000). They are usually less than 1 mm in size, but some are much larger, the largest species reaching up to 20 cm. Most forams are benthic, but about 40 extant species are planktic. The hard nature of most foraminiferal tests leads to an excellent fossil record, and they are widely researched to infer information about past climates and environments.

Background
Foraminiferal tests serve to protect the organism within. Owing to their generally hard and durable construction (compared to other protists), the tests of foraminifera are a major source of scientific knowledge about the group. Openings in the test that allow the cytoplasm to extend outside are called apertures. The primary aperture, leading to the exterior, takes many different shapes in different species, including but not limited to rounded, crescent-shaped, slit-shaped, hooded, radiate (star-shaped), and dendritic (branching). Some foraminifera have "toothed", flanged, or lipped primary apertures. There may be only one primary aperture or multiple; when multiple are present, they may be clustered or equatorial. In addition to the primary aperture, many foraminifera have supplemental apertures. These may form as relict apertures (past primary apertures from an earlier growth stage) or as unique structures.

Test shape is highly variable among different foraminifera; they may be single-chambered (unilocular) or multi-chambered (multilocular). In multilocular forms, new chambers are added as the organism grows. A wide variety of test morphologies is found in both unilocular and multilocular forms, including spiraled, serial, and milioline, among others. Many foraminifera exhibit dimorphism in their tests, with microspheric and megalospheric individuals. These names should not be taken as referring to the size of the full organism; rather, they refer to the size of the first chamber, or proloculus.

Tests are known as fossils from as far back as the Ediacaran period, and many marine sediments are composed primarily of them. For instance, the limestone that makes up the pyramids of Egypt is composed almost entirely of nummulitic benthic foraminifera. It is estimated that reef foraminifera generate about 43 million tons of calcium carbonate per year. Genetic studies have identified the naked amoeba Reticulomyxa and the peculiar xenophyophores as foraminiferans without tests. A few other amoeboids produce reticulose pseudopodia, and were formerly classified with the forams as the Granuloreticulosa, but this is no longer considered a natural group, and most are now placed among the Cercozoa.
Composition
The form and composition of their tests are the primary means by which forams are identified and classified. Most secrete calcareous tests, composed of calcium carbonate. Calcareous tests may be composed of either aragonite or calcite depending on the species; among those with calcite tests, the test may contain either a high or low fraction of magnesium substitution. The test contains an organic matrix, which can sometimes be recovered from fossil samples. Some studies suggest a high amount of homoplasy in foraminifera, and that neither agglutinated nor calcareous foraminifera form monophyletic groupings.

Soft
In some forams, the tests may be composed of organic material, typically the protein tectin. Tectin walls may have sediment particles loosely adhered onto the surface. The foram Reticulomyxa entirely lacks a test, having only a membranous cell wall. Organic-walled forams have traditionally been grouped as the "allogromiids"; however, genetic studies have found that these do not make up a natural group.

Agglutinated
Other forams have tests made from small pieces of sediment cemented together (agglutinated) by proteins (possibly collagen-related), calcium carbonate, or iron(III) oxide. In the past these forms were grouped together as the single-chambered "astrorhizids" and the multi-chambered textulariids. However, recent genetic studies suggest that the "astrorhizids" do not make up a natural grouping, instead forming a broad base of the foram tree. Textulariid foraminifera, unlike other living members of the Globothalamea, have agglutinated tests; however, the grains in these tests are cemented with a calcite cement. This calcite cement is made up of small (<100 nm) globular nanograins, similar to those in other globothalameans. These tests may also have many pores, another feature uniting them with the Globothalamea.

Agglutinating foraminifera may be selective regarding what particles they incorporate into their shells. Some species prefer certain sizes and types of rock particles; other species are preferential towards certain biological materials. Certain species of foraminifera are known to have preferentially agglutinated coccoliths to form their tests; others preferentially utilise echinoderm plates, diatoms, or even other foraminiferans' tests. The foraminiferan Spiculosiphon preferentially agglutinates silica sponge spicules using an organic cement; it also shows strong selectivity towards shape, utilising elongated spicules on its "stalk" and shortened ones on its "bulb". It is thought to use the spicules both as a means of elevating itself off the seabed and as a way to lengthen the reach of its pseudopodia to capture prey. The agglutinated tests of xenophyophores are the largest of any foraminifera, reaching up to 20 cm in diameter. The name "xenophyophore", meaning "bearer of foreign bodies", refers to this agglutinating habit. Xenophyophores selectively take up sediment grains between 63 and 500 μm, avoiding larger pebbles and finer silts; the type of sediment seems to be a strong factor in which particles are agglutinated, with preferred particles including sulfides, oxides, volcanic glass, and especially the tests of smaller foraminifera. Xenophyophores 1.5 cm in diameter have been recorded completely naked, with no test whatsoever.

Calcareous
Of those foraminifera with calcareous tests, several different structures of calcite crystals are found.

Porcelaneous
Porcelaneous walls are found in the Miliolida.
These consist of high-magnesium calcite organized with an ordered outer and inner calcite lining (the "extrados" and "intrados", respectively) and randomly oriented needle-shaped calcite crystals forming a thick center layer (the "porcelain"). An organic inner lining is also present. The external surface may have a pitted structure, but it is not perforated by holes. "Cornuspirid" miliolids apparently lack any extrados.

Monocrystalline
A "monocrystalline" test structure has traditionally been described for the Spirillinida. However, these tests remain poorly understood and poorly described. Some supposed "monocrystalline" spirillinids have been found to actually have tests consisting of a mosaic of very small crystals when observed with a scanning electron microscope. SEM observation of Patellina sp. suggests that a truly monocrystalline test may indeed be present, with apparent cleavage faces.

Fibre bundles
Lagenid tests consist of "fibre bundles" that can reach tens of micrometres long; each "bundle" is formed from a single calcite crystal, is triangular in cross-section, and has a pore in the centre (thought to be an artefact of test deposition). There is also an internal organic layer, attached to the "cone" structure of the fibre bundles. As the crystalline structure varies significantly from that of other calcareous foraminifera, it is thought to represent a separate evolution of the calcareous test. The exact mineralisation process of lagenids remains unclear.

Hyaline
Rotaliid tests are described as "hyaline". They are formed from low-to-high-magnesium calcite "nanograins" positioned with their c-axes perpendicular to the external surface of the test. Further, these nanograins can have higher-level structure, such as rows, columns, or bundles. The test wall is characteristically bilamellar (two-layered) and perforated throughout with small pores. The outer calcite layer of the test wall is referred to as the "outer lamina" while the inner calcite layer is referred to as the "inner lining"; this should not be confused with the organic inner lining beneath the test. Sandwiched between the outer lamina and the inner lining is the "median layer", a protein layer that separates the two. The median layer is quite variable: in some species it is well defined, while in others it is not sharply delineated. Some genera may contain sediment particles within the median layer.

The now-extinct fusulinids have traditionally been considered unique in having tests of homogeneous microgranular crystals with no preferred orientation and almost no cement. However, a 2017 study found that the supposed microgranular structure was actually the result of diagenetic alteration of the fossils, and that unaltered fusulinid tests instead had a hyaline structure. This suggests that the group is affiliated with the Globothalamea.

Robertinids have aragonitic tests with perforations; these are similar to the tests of rotaliids in that they are formed from nanograins; however, they differ in composition and in having well-organised columnar domains. As the earliest planktonic forams had aragonitic tests, it has been suggested that these may represent a separate evolution of a planktonic lifestyle within the Robertinida, rather than close relatives of the globigerinans. Hyaline aragonitic tests are also present in the Involutinida.

Spicules
The Carterinids, including the genera Carterina and Zaninettia, have a unique crystalline structure of the test which long complicated their classification.
The test in these genera consists of spicules of low-magnesium calcite, bound together with an organic matrix and containing "blebs" of organic matter; this led some researchers to conclude that the test must be agglutinated. However, life studies have failed to find agglutination, and in fact carterinids have been discovered on artificial substrates where sediment particles do not accumulate. A 2014 genetic study found carterinids to be an independent lineage within the Globothalamea, and supported the idea that the spicules are secreted, as spicule shape differed consistently between specimens of Carterina and Zaninettia collected from the same locality (ovoid in Carterina, rounded-rectangular in Zaninettia).

Silicate
One genus, Miliamellus, has a non-perforated test made of opaline silica. It is similar in shape and structure to the porcelaneous tests of typical miliolids; the test consists of an internal and external organic layer, as well as a middle silica layer made of elongate rods. This silica layer is further divided into outer, middle, and inner subunits; the outer and inner subunits are each approximately 0.2 μm thick and consist of subparallel sheets of silica rods with their long axes parallel to the test surface. The middle subunit is approximately 18 μm in thickness and consists of a three-dimensional lattice of silica rods with no organic component in the open space. The ultrastructure differs from that of miliolids in that the rods are over twice as long and twice as thick on average, in that the rods of Miliamellus are hollow rather than solid, and in that the test is made of silica rather than calcite.

Test wall construction
When a secreted test is present, the walls of foraminiferal tests may be either nonlamellar or lamellar. Nonlamellar walls are found in some foraminifera, such as the Carterinida, Spirillinida, and Miliolida. In these forms, the secretion of a new chamber is not associated with any further deposition over previous chambers, so there is no layering of calcite on the test.

In foraminifera with lamellar walls, the deposition of a new chamber is accompanied by the deposition of a layer over previously formed chambers. This layer may cover all previous chambers, or it may cover only some of them. These layers are known as secondary lamellae. Foraminifera with lamellar walls can be further broken down into those with monolamellar walls and those with bilamellar walls. Monolamellar foraminifera secrete test walls which consist of a single layer, while those of bilamellar foraminifera are double-layered with an organic "median layer", sometimes containing sediment particles. In the case of bilamellar foraminifera, the outer layer is referred to as the "outer lamella" whilst the inner layer is referred to as the "inner lining". Monolamellar forams include the Lagenida, while bilamellar forms include the Rotaliida (including the major planktonic subgroup, the Globigerinina).

Bilamellar test walls can be further divided into those with septal flaps (a layer of test wall covering the previously secreted septum) and those lacking septal flaps. Septal flaps are not known to be present in any foraminifera other than those with bilamellar walls. The presence of a septal flap is often, though not always, associated with the presence of an interlocular space. As the name suggests, this is a small space located between chambers; it may be open and form part of the outer surface of the test, or it may be enclosed to form a void.
The layer enclosing the void is formed from different parts of the lamellae in different genera, suggesting an independent evolution of enclosed interlocular spaces in order to strengthen the test. References Foraminifera Microfossils
Foraminifera test
Chemistry
3,248
38,243,004
https://en.wikipedia.org/wiki/Union%20for%20Ethical%20Biotrade
The Union for Ethical BioTrade (UEBT) is a nonprofit association that promotes the "Sourcing with Respect" of ingredients that come from biodiversity. Members commit to gradually ensuring that their sourcing practices promote the conservation of biodiversity, respect traditional knowledge and assure the equitable sharing of benefits all along the supply chain, following the Ethical BioTrade Standard. Members also commit to the UEBT verification system, which includes undergoing independent third-party verification against the Ethical BioTrade Standard, developing a work plan for gradual compliance for all natural ingredients, and committing to continuous improvement once compliance is achieved.

History
UEBT is a membership-based organisation that was created in May 2007 in Geneva, Switzerland. It was conceptualized in response to multiple developments. First of all, the Convention on Biological Diversity (CBD) acknowledged that additional efforts were needed to reach out to the private sector. The CBD recognized the strong link between business and biodiversity, as well as the dependency of industry on biodiversity, and it highlighted the key role the private sector plays in sustainable use and the need for efforts and tools to engage it. In pursuit of the decisions that CBD parties had taken regarding private sector engagement and the use of standards in this engagement, the idea of UEBT was conceived. The creation of the Union for Ethical BioTrade also responded to the need expressed by small and medium-sized enterprises (SMEs) in developing countries for ways to differentiate biodiversity-based products in the market. Finally, UEBT built upon efforts initiated by the BioTrade Initiative of the United Nations Conference on Trade and Development (UNCTAD), which was created to contribute to making biodiversity a strategy for sustainable development. On 8 May 2007, a meeting of the founding members took place and the articles of association were approved. To support the efforts of the Union, the CBD and UEBT signed a Memorandum of Understanding in December 2008, to encourage companies involved in BioTrade to adopt and promote good practices.

Aim
Although the CBD and CSD provide general principles, not much practical guidance is currently available to help business advance on the ethical sourcing of biodiversity. UEBT fills this gap and makes concrete contributions to biodiversity conservation and local sustainable development. UEBT aims to bring together actors committed to Ethical BioTrade, and promotes, facilitates and recognises ethical sourcing of biodiversity in line with the objectives of the CBD. In order to achieve these goals, UEBT supports its members in the membership process and in their work towards implementing the Ethical BioTrade Standard. It also provides technical support, organizes regular conferences and workshops around Access and Benefit Sharing and biodiversity, publishes papers and reports, and advises organisations on ethical sourcing from biodiversity.

Standard
UEBT manages the Ethical BioTrade Standard, which provides a basis for UEBT Trading Members to improve their biodiversity sourcing practices. UEBT members develop biodiversity management systems that further the implementation of the Ethical BioTrade Standard in all their own operations involving natural ingredients, as well as throughout their supply chains. In joining UEBT, a company agrees to comply with the principles of Ethical BioTrade.
This means using practices that promote the sustainable use of natural ingredients, while ensuring that all contributors along the supply chain are paid fair prices and share the benefits derived from the use of biodiversity. The Ethical BioTrade Standard is mainstreamed in the operations of UEBT trading members, including for instance in research, innovation and development. UEBT members undergo regular audits by independent third party verification bodies. UEBT works with the following Verification Bodies: Imaflora, SGS Qualifor, NaturaCert, Control Union Certifications, Ecocert SA, SGS del Peru S.A.C., Soil Association, IBD Certificações (IBD), Biotropico S.A., IMO do Brasil, and Rainforest Alliance. The Ethical BioTrade Standard builds on the seven Principles and Criteria as developed by the UNCTAD BioTrade Initiative. First established in 2007, it was revised in April 2011, following the requirements of the ISEAL Alliance and the World Trade Organization (WTO). These requirements include the need to periodically review the Standard and ensure wide stakeholder involvement during the review process. Biodiversity Barometer Every year the UEBT publishes the Biodiversity Barometer. The Biodiversity Barometer contains the results of awareness studies commissioned by UEBT and provides insights on evolving biodiversity awareness among consumers and how the beauty industry reports on biodiversity. The Barometer is used as one of the indicators for measuring progress towards meeting the Aichi Biodiversity Target 1 in the Biodiversity Indicators Partnership's Aichi Passport. Members The Union for Ethical BioTrade has several types of trading members: brands, producers and processing companies, mainly in the global cosmetics, pharmaceutical and food sector. Current UEBT trading members include Natura, Weleda, Laboratoires Expanscience and Aroma Forest. UEBT also has affiliate members, which currently include the International Finance Corporation (IFC), MEB - Movimento Empresarial Brasileiro pela Biodiversidade, PhytoTrade Africa and Rongead. Funding The Union for Ethical BioTrade is financed by membership fees and contributions from donors. Governance The General Assembly acts as the main governing body of UEBT and meets once a year to elect the Members of the Board. The General Assembly is composed of all UEBT Members. Provisional, Trading and Affiliate Members have the right to vote and elect the Board of Directors, thereby approving the management of the organization. UNCTAD, the CBD and the International Finance Corporation act as observers to the Board. To support the functioning of UEBT, the Board has appointed various committees, including an appeals committee, a membership committee, and a standards committee. References External links UNCTAD BioTrade initiative United Nations Convention on Biological Diversity Biotrade-Wiki Organisations based in Geneva Biodiversity Sustainability organizations Traditional knowledge International environmental organizations Bioethics research organizations
Union for Ethical Biotrade
Biology
1,245
60,912,643
https://en.wikipedia.org/wiki/NGC%20655
NGC 655 is a lenticular galaxy located 400 million light-years away in the constellation Cetus. It was discovered in a sky survey by Ormond Stone on December 12, 1885. On July 23, 2010, supernova SN 2010gp in NGC 655 was announced, having reached magnitude 15.5 on July 22. It was offset by 22″ west and 47″ south of the nucleus. Earlier, in 2000, the type II supernova SN 2000bg had also appeared in this galaxy. See also List of NGC objects (1–1000) References Lenticular galaxies Cetus 655 Astronomical objects discovered in 1885
NGC 655
Astronomy
134
47,900,975
https://en.wikipedia.org/wiki/Penicillium%20vancouverense
Penicillium vancouverense is a species of fungus in the genus Penicillium which was isolated from soil under a maple tree in Vancouver in Canada. References vancouverense Fungi described in 2011 Fungus species
Penicillium vancouverense
Biology
43
452,582
https://en.wikipedia.org/wiki/Arimaa
Arimaa () is a two-player strategy board game that was designed to be playable with a standard chess set and difficult for computers while still being easy to learn and fun to play for humans. It was invented between 1997 and 2002 by Omar Syed, an Indian-American computer engineer trained in artificial intelligence. Syed was inspired by Garry Kasparov's defeat at the hands of the chess computer Deep Blue to design a new game which could be played with a standard chess set, would be difficult for computers to play well, but would have rules simple enough for his then four-year-old son Aamir to understand. ("Arimaa" is "Aamir" spelled backwards plus an initial "a".) Beginning in 2004, the Arimaa community held three annual tournaments: a World Championship (humans only), a Computer Championship (computers only), and the Arimaa Challenge (human vs. computer). After eleven years of human dominance, the 2015 challenge was won decisively by the computer (Sharp, by David Wu). Arimaa has won several awards, including GAMES Magazine 2011 Best Abstract Strategy Game, Creative Child Magazine 2010 Strategy Game of the Year, and the 2010 Parents' Choice Approved Award. It has also been the subject of several research papers.

Rules
Arimaa is played on an 8×8 board with four trap squares. There are six kinds of pieces, ranging from elephant (strongest) to rabbit (weakest). Stronger pieces can push or pull weaker pieces, and stronger pieces freeze adjacent weaker pieces. Pieces can be captured by dislodging them onto a trap square when they have no orthogonally adjacent friendly pieces. The two players, Gold and Silver, each control sixteen pieces. These are, in order from strongest to weakest: one elephant, one camel, two horses, two dogs, two cats, and eight rabbits. These may be represented by the king, queen, rooks, bishops, knights, and pawns respectively when one plays using a chess set.

Objective
The main object of the game is to move a rabbit of one's own color onto the home rank of the opponent, which is known as a goal. Thus Gold wins by moving a gold rabbit to the eighth rank, and Silver wins by moving a silver rabbit to the first rank. However, because it is difficult to usher a rabbit to the goal line while the board is full of pieces, an intermediate objective is to capture opposing pieces by pushing them into the trap squares. The game can also be won by capturing all of the opponent's rabbits (elimination) or by depriving the opponent of legal moves (immobilization). Compared to goal, these are uncommon.

Setup
The game begins with an empty board. Gold places the sixteen gold pieces in any configuration on the first and second ranks. Silver then places the sixteen silver pieces in any configuration on the seventh and eighth ranks. Diagram 1 shows one possible initial placement.

Movement
After the pieces are placed on the board, the players alternate turns, starting with Gold. A turn consists of making one to four steps. With each step a piece may move into an unoccupied square one space left, right, forward, or backward, except that rabbits may not step backward. The steps of a turn may be made by a single piece or distributed among several pieces in any order. A turn must make a net change to the position. Thus one cannot, for example, take one step forward and one step back with the same piece, effectively passing the turn and evading zugzwang. Furthermore, one's turn may not create the same position with the same player to move as has been created twice before.
This rule is similar to the situational super ko rule in the game of Go, which prevents endless loops, and is in contrast to chess where endless loops are considered draws. The prohibitions on passing and repetition make Arimaa a drawless game. Pushing and pulling The second diagram, from the same game as the initial position above, helps illustrate the remaining rules of movement. A player may use two consecutive steps of a turn to dislodge an opposing piece with a stronger friendly piece which is adjacent in one of the four cardinal directions. For example, a player's dog may dislodge an opposing rabbit or cat, but not a dog, horse, camel, or elephant. The stronger piece may pull or push the adjacent weaker piece. When pulling, the stronger piece steps into an empty square, and the square it came from is occupied by the weaker piece. The silver elephant on d5 could step to d4 (or c5 or e5) and pull the gold horse from d6 to d5. When pushing, the weaker piece is moved to an adjacent empty square, and the square it came from is occupied by the stronger piece. The gold elephant on d3 could push the silver rabbit on d2 to e2 and then occupy d2. Note that the rabbit on d2 can't be pushed to d1, c2, or d3, because those squares are not empty. Friendly pieces may not be dislodged. Also, a piece may not push and pull simultaneously. For example, the gold elephant on d3 could not simultaneously push the silver rabbit on d2 to e2 and pull the silver rabbit from c3 to d3. An elephant can never be dislodged, since there is nothing stronger. Freezing A piece which is adjacent in any cardinal direction to a stronger opposing piece is frozen, unless it is also adjacent to a friendly piece. Frozen pieces may not be moved by the owner, but may be dislodged by the opponent. A frozen piece can freeze another still weaker piece. The silver rabbit on a7 is frozen, but the one on d2 is able to move because it is adjacent to a silver piece. Similarly the gold rabbit on b7 is frozen, but the gold cat on c1 is not. The dogs on a6 and b6 do not freeze each other because they are of equal strength. An elephant cannot be frozen, since there is nothing stronger, but an elephant can be blockaded. Capturing A piece which enters a trap square is captured and removed from the game unless there is a friendly piece orthogonally adjacent. Silver could move to capture the gold horse on d6 by pushing it to c6 with the elephant on d5. A piece on a trap square is captured when all adjacent friendly pieces move away. Thus if the silver rabbit on c4 and the silver horse on c2 move away, voluntarily or by being dislodged, the silver rabbit on c3 will be captured. Note that a piece may voluntarily step into a trap square, even if it is thereby captured. Also, the second step of a pulling maneuver is completed even if the piece doing the pulling is captured on the first step. For example, Silver could step the silver rabbit from f4 to g4 (so that it will no longer support pieces at f3), and then step the silver horse from f2 to f3, which captures the horse; the horse's move could still pull the gold rabbit from f1 to f2. Strategy and tactics For beginning insights into good play, see the Arimaa Wikibook. Karl Juhnke, twice Arimaa world champion, has written a book titled Beginning Arimaa which gives an introduction to Arimaa tactics and strategies. Also Jean Daligault, six time Arimaa world champion, wrote Arimaa Strategies and Tactics which is geared towards those who have started playing Arimaa and want to improve their game. 
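As an illustration of the freezing and trap rules described above, here is a minimal Python sketch (the board representation and helper names are our own, not from any official Arimaa software): a piece is frozen when a stronger enemy piece is adjacent and no friendly piece is, and a piece on a trap square is captured when no friendly piece is adjacent.

# Pieces are (owner, rank) with rank 6 = elephant down to 1 = rabbit;
# the board maps (file, rank) squares, 0-indexed, to pieces.

TRAPS = {(2, 2), (5, 2), (2, 5), (5, 5)}  # c3, f3, c6, f6

def neighbors(sq):
    x, y = sq
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 8 and 0 <= y + dy < 8]

def is_frozen(board, sq):
    owner, rank = board[sq]
    adjacent = [board[n] for n in neighbors(sq) if n in board]
    has_friend = any(o == owner for o, _ in adjacent)
    stronger_enemy = any(o != owner and r > rank for o, r in adjacent)
    return stronger_enemy and not has_friend  # friendly support unfreezes

def is_captured(board, sq):
    # A piece on a trap square is captured when no friendly piece is adjacent.
    owner, _ = board[sq]
    return sq in TRAPS and not any(
        n in board and board[n][0] == owner for n in neighbors(sq))

# Hypothetical position: a silver rabbit next to a lone gold horse is frozen.
board = {(0, 6): ("silver", 1), (1, 6): ("gold", 3)}
assert is_frozen(board, (0, 6))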
Annual tournaments
World Championship
Each year since 2004 the Arimaa community has held a World Championship tournament. The tournament is played over the Internet and is open to everyone. Past and current world champion title holders are:
2004 – Frank Heinemann of Germany
2005 – Karl Juhnke of USA
2006 – Till Wiechers of Germany
2007 – Jean Daligault of France
2008 – Karl Juhnke of USA
2009 – Jean Daligault of France
2010 – Jean Daligault of France
2011 – Jean Daligault of France
2012 – Hirohumi Takahashi of Japan
2013 – Jean Daligault of France
2014 – Jean Daligault of France
2015 – Mathew Brown of USA
2016 – Mathew Brown of USA
2017 – Mathew Brown of USA
2018 – Matthew Craven of USA
2019 – Jerome Richmond of Great Britain
2020 – Mathew Brown of USA
2021 – Mathew Brown of USA
2022 – Jerome Richmond of Great Britain
2023 – Mathew Brown of USA

World Computer Championship
Each year from 2004 to 2015 the Arimaa community held a World Computer Championship tournament. The tournament was played over the Internet and was open to everyone. The most recent champion is sharp, developed by David Wu of the USA. Past computer world champion title holders are:
2004 – Bomb developed by David Fotland of USA
2005 – Bomb developed by David Fotland of USA
2006 – Bomb developed by David Fotland of USA
2007 – Bomb developed by David Fotland of USA
2008 – Bomb developed by David Fotland of USA
2009 – clueless developed by Jeff Bacher of Canada
2010 – marwin developed by Mattias Hultgren of Sweden
2011 – sharp developed by David Wu of USA
2012 – marwin developed by Mattias Hultgren of Sweden
2013 – ziltoid developed by Ricardo Barreira of Portugal
2014 – sharp developed by David Wu of USA
2015 – sharp developed by David Wu of USA

Arimaa Challenge
The Arimaa Challenge was a cash prize of around $10,000 that was to have been available annually until 2020 for the first computer program to win the human-versus-computer Arimaa challenge. As part of the conditions of the prize, the computer program had to run on standard, off-the-shelf hardware. The Arimaa Challenge was held twelve times, starting in 2004. Following the second match, Syed changed the format to require the software to win two out of three games against each of three players, to reduce the psychological pressure on individual volunteer defenders. Syed also called for outside sponsorship of the Arimaa Challenge to build a bigger prize fund. In the first five challenge cycles, David Fotland, programmer of Many Faces of Go, won the Arimaa Computer Championship and the right to play for the prize money, only to see his program beaten decisively each year. In 2009 Fotland's program was surpassed by several new programs in the same year, the strongest of which was Clueless by Jeff Bacher. Humanity's margin of dominance over computers appeared to widen each year from 2004 to 2008 as the best human players improved, but the 2009 Arimaa Challenge was more competitive. Clueless became the first bot to win two games of a Challenge match. In 2010, Mattias Hultgren's bot Marwin edged out Clueless in the computer championship. In the Challenge match Marwin became the first bot to win two out of three games against a single human defender, and also the first bot to win three of the nine games overall. In 2011, however, Marwin won only one of the nine games, and won that game only after receiving a material handicap. In 2012 a new challenger, Briareus, became the first program to defeat a top-ten player, sweeping all three games from the fifth-ranked human.
In 2013, however, the humans struck back against Marwin, with the #4 and #6 ranked players each sweeping their games (including one win at a material handicap), and the #31 ranked player winning two of three games. In 2014, the computer bounced back to win two games, albeit no matches. In 2015, Sharp made a substantial leap in playing strength. After having scored 6-6 in twelve games against its top two computer rivals the previous year, Sharp went undefeated in the computer tournaments of 2015, including 13-0 against the second- and third-place finishers. Sharp dominated the pre-Challenge screening against human opponents, winning 27 of 29 games. In the Challenge itself, Sharp clinched victory in each of the three mini-matches by winning the first six games, finishing 7-2 overall and winning the Arimaa Challenge. Wu published a paper describing the algorithm, and most of ICGA Journal Issue 38/1 was dedicated to this topic. The algorithm combined traditional alpha–beta pruning (changing sides every 4 ply) with heuristic functions manually written while analysing human expert games. After DeepMind's AlphaZero mastered Go, Chess, and Shogi simply by playing itself, Omar Syed announced a $10,000 prize for the creation of a similarly self-taught Arimaa bot which could win a 10-game match against Sharp. This has not yet been done. See also Computer Arimaa Game complexity Anti-computer tactics Games inspired by chess Competitions and prizes in artificial intelligence List of world championships in mind sports Notes References Further reading Wikibook: Arimaa Strategy Daligault, Jean (2012). Arimaa Strategies and Tactics. CreateSpace Independent Publishing Platform. External links Official Arimaa website David Fotland's Arimaa program The Arimaa Public License Abstract strategy games Board games Games of mental skill Computer science competitions Game artificial intelligence
Arimaa
Mathematics
2,702
67,699,403
https://en.wikipedia.org/wiki/Enteral%20respiration
Enteral respiration, also referred to as cloacal respiration or intestinal respiration, is a form of respiration in which gas exchange occurs across the epithelia of the enteral system, usually in the caudal cavity (cloaca). It is used by various species as an alternative respiration mechanism in hypoxic environments, as a means to supplement blood oxygen.

Turtles
Some turtles, especially those specialized in diving, are highly reliant on cloacal respiration during dives. They accomplish this by having a pair of accessory air bladders connected to the cloaca which can absorb oxygen from the water.

Other animals
Various fish, as well as polychaete worms and even crabs, are specialized to take advantage of the constant flow of water through the cloacal respiratory tree of sea cucumbers, while simultaneously gaining the protection of living within the sea cucumber itself. At night, many of these species emerge from the anus of the sea cucumber in search of food. The pond loach is able to respond to the periodic drying of its native habitat by burrowing into the mud and exchanging gas through the posterior end of its alimentary canal. Studies have shown that mammals are capable of performing intestinal respiration to a limited degree in a laboratory setting. Mice that were subjected to hypoxic conditions and supplied with oxygen through their intestines survived an average of 18 minutes, compared to 11 minutes in the control group. In 2024, an Ig Nobel Prize in physiology was awarded to a study showing that pigs are capable of this as well. When the intestinal lining was abraded before oxygen was introduced, most of the animals survived for at least 50 minutes. Investigations are planned regarding the effectiveness of the strategy, the safety of this application of perfluorocarbons, and the feasibility of application to humans. It has potential application for people with a respiratory disease or lung damage. See also Cutaneous respiration References Respiration Animal anatomy Digestive system
Enteral respiration
Biology
412
10,197,275
https://en.wikipedia.org/wiki/Dry%20matter
The dry matter or dry weight is a measure of the mass of a completely dried substance.

Analysis of food
The dry matter of plant and animal material consists of all its constituents excluding water. The dry matter of food includes carbohydrates, fats, proteins, vitamins, minerals, and antioxidants (e.g., thiocyanate, anthocyanin, and quercetin). Carbohydrates, fats, and proteins, which provide the energy in foods (measured in kilocalories or kilojoules), make up ninety percent of the dry weight of a diet.

Water composition
Water content in foods varies widely. A large number of foods are more than half water by weight, including boiled oatmeal (84.5%), cooked macaroni (78.4%), boiled eggs (73.2%), boiled rice (72.5%), white meat chicken (70.3%) and sirloin steak (61.9%). Fruits and vegetables are 70 to 95% water. Most meats are on average about 70% water. Breads are approximately 36% water. Some foods have a water content of less than 5%, e.g., peanut butter, crackers, and chocolate cake. The water content of dairy products is quite variable. Butter is 15% water. Cow's milk ranges between 86 and 88% water. Swiss cheese is 37% water. The water content of milk and dairy products varies with the percentage of butterfat, so that whole milk has the lowest percentage of water and skimmed milk has the highest.

Dry matter basis
The nutrient or mineral content of foods, animal feeds or plant tissues is often expressed on a dry matter basis, i.e. as a proportion of the total dry matter in the material. For example, a 138-gram apple contains 84% water (116 g water and 22 g dry matter per apple). The potassium content is 0.72% on a dry matter basis, i.e. 0.72% of the dry matter is potassium. The apple, therefore, contains 158 mg potassium (0.72/100 × 22 g). Dried apple contains the same concentration of potassium on a dry matter basis (0.72%), but is only 32% water (68% dry matter). So 138 g of dried apple contains 93.8 g dry matter and 675 mg potassium (0.72/100 × 93.8 g). When formulating a diet or mixed animal feed, nutrient or mineral concentrations are generally given on a dry matter basis; it is therefore important to consider the moisture content of each constituent when calculating total quantities of the different nutrients supplied.

Fat in dry matter (FDM)
Cheese contains both dry matter and water. The dry matter in cheese contains proteins, butterfat, minerals, and lactose (milk sugar), although little lactose survives fermentation when the cheese is made. A cheese's fat content is expressed as the percentage of fat in the cheese's dry matter (abbreviated FDM or FiDM), which excludes the cheese's water content. For example, if a cheese is 50% water (and, therefore, 50% dry matter) and has 25% fat, its fat content would be 50% fat in dry matter.

Techniques
In the sugar industry the dry matter content is an important parameter used to control the crystallization process, and is often measured on-line by means of microwave density meters.

Animal feed
Dry matter can refer to the dry portion of animal feed. A substance in the feed, such as a nutrient or toxin, can be referred to on a dry matter basis (abbreviated DMB) to show its level in the feed (e.g., ppm). Considering nutrient levels in different feeds on a dry matter basis (rather than an as-is basis) makes comparison easier, because feeds contain different percentages of water. This also allows a comparison between the level of a given nutrient in dry matter and the level needed in an animal's diet.
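The apple arithmetic above generalizes to any sample; the following short Python sketch (the function name is illustrative, not from any standard) converts a nutrient level given on a dry matter basis into the mass contained in a sample of known water content.

# Convert a nutrient concentration on a dry matter basis into mass.
# Water fractions are given as decimals (e.g. 84% water -> 0.84).

def nutrient_mass_mg(sample_g, water_fraction, nutrient_pct_dmb):
    dry_matter_g = sample_g * (1 - water_fraction)
    return dry_matter_g * nutrient_pct_dmb / 100 * 1000  # grams -> mg

# Fresh apple from the text: 138 g, 84% water, potassium 0.72% of dry matter.
assert round(nutrient_mass_mg(138, 0.84, 0.72)) == 159  # text rounds to 158 mg
# Dried apple: same dry matter concentration, only 32% water.
assert round(nutrient_mass_mg(138, 0.32, 0.72)) == 676  # text rounds to 675 mg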
Dry matter intake (DMI) refers to feed intake excluding its water content. The percentage of water is frequently determined by heating the feed on a paper plate in a microwave oven or using the Koster Tester to dry the feed. Ascertaining DMI can be useful for low-energy feeds with a high percentage of water in order to ensure adequate energy intake. Animals eating these kinds of feeds have been shown to consume less dry matter and food energy. A problem called dry matter loss can result from heat generation, as caused by microbial respiration. It decreases the content of nonstructural carbohydrate, protein, and food energy. See also Body water Moisture References Solids Measurement Food analysis
Dry matter
Physics,Chemistry,Materials_science,Mathematics
989
19,161,243
https://en.wikipedia.org/wiki/Global%20Challenge%20Award
The Global Challenge Award is an online science and engineering design program for pre-college students (e.g. middle school through high school) from all over the world. It is an initiative that started as a partnership with the University of Vermont in collaboration with the National Science Foundation, and is currently funded by the MacArthur Foundation Digital Media and Learning program as well as other foundations and corporations. In the program, students have the opportunity to form teams with international counterparts and work towards a solution to mitigate global warming and help envision the future of renewable energy. The program is an online educational environment that uses game-based learning, simulation and Web-based science resources in a global competition. It relies on the personal initiative and creativity of students working in diverse teams. Access to the project via the Web makes it possible for students, parents, homeschooling families, teachers and interested global community members to get involved to help young people with their creative ideas for innovation in new forms of energy, conservation and increased productivity.

History
Founded in 2005 by Craig Deluca and David Rocchio of the Arno Group and Biddle Duke, the publisher of the Stowe Reporter newspaper company in Stowe, Vt., working in close partnership with Domenico Grasso of the University of Vermont College of Engineering and Mathematical Sciences, the program gives international student teams the opportunity to experience the excitement of scientific understanding and engineering design while working on significant human and societal issues – bringing science to life in innovative new applications. The program's mission is to "give students the tools and confidence to solve global problems together." The overarching model for the learning experiences offered worldwide to any student was influenced by The George Lucas Foundation's Big Ideas For Better Schools, the Partnership for 21st Century Schools and game-based learning. The Global Challenge was funded in part by a National Science Foundation award from the Innovative Technology Experiences for Students (ITEST) program, validating the project's design for engaging youth in science, technology, engineering and mathematics learning. Since its founding in 2005, The Global Challenge has reached over 100,000 people worldwide and engaged over 4,000 students from 60 countries in forming teams to solve the challenge. About $200,000 in scholarships, travel, and summer study has been provided to over 200 students from 10 countries.

International connections
The Global Challenge Award is responsible for identifying the high school students who represent the United States in the International Earth Science Olympiad (IESO). Students and teachers traveled to South Korea in 2007 and to the Philippines in 2008. Plans are now underway to form a US-IESO selection process with the support of the American Geological Institute. In addition, the design of the program builds international student teams. Students from over 79 countries participate each year. Top countries by participation, with over 100 students each year, have been the United States, India, China, and South Korea.

Program elements
There are several project areas in the Challenge. Some are designed specifically for teams; others students can work on alone. Students can mix and match projects based on their interest level and time.
They can form a team to compete in one competition and, at the same time, work on individual points.

Global Business Plan
Students build an international team, envision a global solution, create a detailed business plan, and submit it for judging.

Technical Innovation Plan
Students build an international team, envision any kind of technical solution, and explain it to a panel of judges.

Explorations
Students work on their own on science, technology, engineering and mathematics units of study called "challenges."

Green Earth Corps
Students work on their own or with any team, build a home and business auditing service, and earn while they learn and serve.

GCA-350
Students create an awareness event about the need to reach "350 parts per million" of CO2 in the atmosphere.

Each Challenge earns certain points, and in the end, teams with the highest scores win and earn scholarships, travel awards to the Governor's Institute on Engineering in Vermont, cash prizes, and tuition scholarships.

News coverage
The program was the lead story, "Save the World", in Learning & Leading with Technology in November 2007, was covered in the Burlington Free Press in July 2008, and has led to a number of youth-authored articles on Cogito.org, for example Using Nanotechnology for Cost Effective Converters as well as Educating Myself, International Style. See also List of earth sciences awards Notes External links YouTube Introduction Earth sciences awards American education awards Science education
Global Challenge Award
Technology
895
1,815,563
https://en.wikipedia.org/wiki/Rockfall
A rockfall or rock-fall is a quantity or sheet of rock that has fallen freely from a cliff face. The term is also used for the collapse of rock from the roof or walls of mine or quarry workings. "A rockfall is a fragment of rock (a block) detached by sliding, toppling, or falling, that falls along a vertical or sub-vertical cliff, proceeds down slope by bouncing and flying along ballistic trajectories or by rolling on talus or debris slopes." Alternatively, a "rockfall is the natural downward motion of a detached block or series of blocks with a small volume involving free falling, bouncing, rolling, and sliding". The mode of failure differs from that of a rockslide.

Causal mechanisms
Favourable geology and climate are the principal causal mechanisms of rockfall; the relevant factors include the intact condition of the rock mass, discontinuities within the rock mass, weathering susceptibility, ground and surface water, freeze-thaw cycles, root-wedging, and external stresses. A tree blown by the wind, for example, exerts pressure at root level, which can loosen rocks and trigger a fall. The pieces of rock collect at the bottom, creating a talus or scree. Rocks falling from the cliff may dislodge other rocks and set off another mass wasting process, for example an avalanche. A cliff whose geology favours rockfall may be said to be incompetent; one that is better consolidated and does not favour rockfall may be said to be competent. In higher-altitude mountains, rockfalls may be caused by the thawing of rock masses with permafrost; in lower-altitude mountains with warmer climates, rockfalls may instead be caused by weathering enhanced by non-freezing conditions.

Propagation
Assessing the propagation of rockfall is a key issue in defining the best mitigation strategy, as it allows the delineation of run-out zones and the quantification of the rock blocks' kinematic parameters along their way down to the elements at risk. For this purpose, many approaches may be considered. For example, the energy line method allows the rockfall run-out to be estimated expediently. Numerical models simulating the rock block propagation offer a more detailed characterisation of the rockfall propagation kinematics. These simulation tools focus in particular on modelling the rebound of the rock block on the soil. The numerical models provide the rock block passing height and kinetic energy that are necessary for designing passive mitigation structures.

Mitigation
Typically, rockfall events are mitigated in one of two ways: either by passive mitigation or by active mitigation. Passive mitigation is where only the effects of the rockfall event are mitigated; it is generally employed in the deposition or run-out zones, such as through the use of drape nets, rockfall catchment fences, galleries, ditches, embankments, etc. The rockfall still takes place, but an attempt is made to control the outcome. In contrast, active mitigation is carried out in the initiation zone and prevents the rockfall event from ever occurring. Some examples of these measures are rock bolting, slope retention systems, shotcrete, etc. Other active measures might change the geographic or climatic characteristics in the initiation zone, e.g. altering slope geometry, dewatering the slope, revegetation, etc. Design guides for passive measures with respect to block trajectory control have been proposed by several authors.

Effects on trees
The effect of rockfalls on trees can be seen in several ways.
The tree roots may rotate, via the rotational energy of the rockfall. The tree may move via the application of translational energy. Lastly, deformation may occur, either elastic or plastic. Dendrochronology can reveal a past impact through missing tree rings: as the tree rings grow around and close over a gap, the callus tissue can be seen microscopically. A macroscopic section can be used for dating avalanche and rockfall events. See also Avalanche Earthquake Kinetic energy Landslide Potential energy Protection forest Rockslide Slope stability SMR classification Capitólio rockfall References External links Road hazards Landslide types Hazards of outdoor recreation
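As a concrete illustration of the energy line method mentioned under Propagation, the sketch below uses the method's basic geometry: a block released at height H is considered able to reach any point lying below a straight line drawn from the release point at the energy line angle, so on flat ground below the cliff the maximum horizontal run-out is H divided by the tangent of that angle. The Python code and the parameter values are illustrative assumptions, not taken from any specific design guide.

# Energy line method: maximum run-out on flat ground is L = H / tan(angle),
# where H is the release height and angle is the energy line angle
# measured from the horizontal.
import math

def runout_length(release_height_m, energy_line_angle_deg):
    return release_height_m / math.tan(math.radians(energy_line_angle_deg))

# Hypothetical example: 50 m cliff, 35 degree energy line angle.
print(round(runout_length(50.0, 35.0), 1))  # about 71.4 m of run-out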
Rockfall
Technology
865
28,772,649
https://en.wikipedia.org/wiki/Aluminium%20diacetate
Aluminium diacetate, also known as basic aluminium acetate, is a white powder with the chemical formula C4H7AlO5. It is one of a number of aluminium acetates and can be prepared in a reaction of sodium aluminate (NaAlO2) with acetic acid.

Medicinal use
Aluminium diacetate is used as an antiseptic and astringent. It is used topically as a wet dressing, compress, or soak for self-medication to temporarily relieve itching and soothe the skin, particularly on wet or weeping lesions. It relieves skin irritation from many causes, such as insect bites, athlete's foot, urushiol-induced contact dermatitis from plants poisonous to the touch such as poison ivy, oak, or sumac, and skin irritation due to sensitivity to soaps, detergents, cosmetics, or jewellery. It is also used to relieve swelling from bruises. Preparations are also used topically for the relief of a variety of skin conditions such as eczema, diaper rash, acne, and pruritus ani. It is typically used in the form of Burow's solution, a 13% solution of aluminium acetate in water. In the USA, medications containing aluminium acetate are sold under the brand names Domeboro Powder, Gordon's Boro-Packs, and Pedi-Boro Soak Paks. It is sold in gel form under the name TriCalm. Acetic acid/aluminium acetate solution can be used medicinally to treat infections in the outer ear canal. This medication stops the growth of bacteria and fungi and beneficially dries out the ear canal. US preparations for this purpose include Domeboro Otic, Star-Otic, and Borofair.

Mordant
In the dyeing industry, basic aluminium diacetate is used in combination with aluminium triacetate as a mordant for fibres like cotton. References Acetates Aluminium compounds
Aluminium diacetate
Chemistry
398
73,189,382
https://en.wikipedia.org/wiki/Drawing%20tower
A drawing tower produces a fine glass filament by drawing a glass preform. The tip of the preform is heated to melting temperature and then a strand of molten material is pulled downward. Industrial drawing towers range in height from 30 to 45 meters. A drawing tower is used in the production of optical fiber, for example for fiber-optic communication cables. The preform is a multi-layered cylinder typically 20 cm in diameter, and 2 m long. References Fiber optics
Drawing tower
Materials_science,Engineering
97
45,336,978
https://en.wikipedia.org/wiki/Australian%20School%20of%20Petroleum%20and%20Energy%20Resources
The Australian School of Petroleum and Energy Resources (ASPER) is a centre for education, training and research in petroleum and energy resources engineering, geoscience and management at the University of Adelaide in South Australia. ASPER is housed in the purpose-built Santos Petroleum Engineering Building on the University of Adelaide's North Terrace campus. History The Australian School of Petroleum originated from the merger, in 2003, of the National Centre for Petroleum Geology and Geophysics (NCPGG) and the School of Petroleum Engineering and Management (SPEM). In 2020, the School was renamed from the Australian School of Petroleum to the Australian School of Petroleum and Energy Resources to reflect its teaching and research in areas such as the underground storage and use of carbon and hydrogen. The NCPGG was founded as a government and industry-funded Centre of Excellence in 1986. The SPEM was founded in 2000 under an AU $25 million Sponsorship Agreement between the University of Adelaide and Santos Limited. At the time it was believed to be 'the largest single industry sponsorship ever given to a public university in Australia.' School Ranking and Reputation In 2020 the QS World University Rankings included the discipline of Petroleum Engineering for the first time. The ERA (Excellence in Research Australia) is the Australian Government’s attempt to assess research quality and impact. As a School focused largely on a single industry sector, its research outputs do not align with a single ERA field of research, with the majority of ASPER’s outputs allocated to the “Resource Engineering and Extractive Metallurgy” and “Geology” fields of research. In the most recent ERA (2018), the University of Adelaide received 5/5 in both these categories. Engagement with Industry and Government ASPER interacts with industry and government agencies and is sometimes sought out for advice on matters related to petroleum and energy management. ASPER’s industry Advisory Board has 13 members from 10 energy companies (Santos, Beach Energy, Chevron, BHP, Esso Australia, Woodside Energy, Vintage Energy, Strike Energy, Cooper Energy, and Schlumberger), CO2CRC and the South Australian government. The Board advises ASPER on the capabilities they seek in future employees, their training needs and the research needs of the industry.  A large proportion of ASPER research is funded by the industry in the form of consortia, direct contract research or through collaborative Australian Research Council (ARC) Linkage grants. The South Australian State Government has provided support to ASPER and its predecessors, including funding for the South Australian State Chair of Petroleum Geoscience which has been held by Cedric Griffiths (1994-1999), Bruce Ainsworth (2010-2013) and Peter McCabe (2014-2020). The number of research papers co-authored by ASPER staff and industry-based collaborators provides a possible measure for ASPER’s engagement with the industry. ASPER has performed an analysis that determined 29.8% of its publications during the last 3 years have been published with industry co-authors, higher than both the 3.6% of papers published by University of Adelaide academics and the Australia-wide average of 2.3%. Teaching ASPER offers a range of undergraduate and postgraduate coursework and research programs in engineering and geoscience, from which approximately 2100 students have graduated since 2002. References University of Adelaide 2003 establishments in Australia Petroleum engineering schools
Australian School of Petroleum and Energy Resources
Engineering
681
80,630
https://en.wikipedia.org/wiki/Wubi%20method
The Wubizixing input method (), often abbreviated to simply Wubi or Wubi Xing, is a Chinese character input method primarily for inputting simplified Chinese and traditional Chinese text on a computer. Wubi should not be confused with the Wubihua (五笔画) method, which is a different input method that shares the categorization into five types of strokes. The method is also known as Wang Ma (), named after the inventor Wang Yongmin (王永民). There are four Wubi versions that are considered to be standard: Wubi 86, Wubi 98, Wubi 18030 and Wubi New-century (the 3rd-generation version). The latter three can also be used to input traditional Chinese text, albeit in a more limited way. Wubi 86 is the most widely known and used shape-based input method for full letter keyboards in Mainland China. For users who frequently need to input traditional Chinese characters, other input methods such as Cangjie or Zhengma may be better suited to the task, and they are also much more likely to be available on the computer one needs to use. The Wubi method is based on the structure of characters rather than their pronunciation, making it possible to input characters even when the user does not know the pronunciation, as well as not being too closely linked to any particular spoken variety of Chinese. It is also extremely efficient: nearly every character can be written with at most 4 keystrokes, and in practice most characters can be written with fewer. There are reports of experienced typists reaching 160 characters per minute with Wubi. Characters per minute are not directly comparable to English words per minute, but Wubi is nonetheless extremely fast when used by an experienced typist. The main reason for this is that, unlike with traditional phonetic input methods, one does not have to spend time selecting the desired character from a list of homophonic possibilities: virtually all characters have a unique representation. As its name suggests, the keyboard is divided into five regions. The Chinese character 笔 (bǐ), when used in the context of writing Chinese characters, refers to the brush strokes used in Chinese calligraphy. Each region is assigned a certain type of stroke. Region 1: horizontal (一) Region 2: vertical (丨) Region 3: downward right-to-left (丿) Region 4: dot strokes or downward left-to-right strokes (丶) Region 5: hook (乙) As a more complex system, Wubi takes longer to acquire as a skill. Memorization and practice are key factors for proficient usage. To use Wubi, there are multiple input methods available, including Google Input Tools (used by Google Translate) and keyboard options on Mac devices. Wubi sequences can be looked up for specific characters by using online dictionaries. In this article, the following convention will be used: character will always mean Chinese character, whereas letter, key and keystroke will always refer to the keys on the keyboard. How it works Essentially, a character is broken down into components, which are usually (but not always) the same as radicals. These are typed in the order in which they would be written by hand. In order to ensure that extremely complex characters do not require an inordinate number of keystrokes, any character containing more than 4 components is entered by typing the first 3 components written, followed by the last. In this way, each character's data can be entered with no more than 4 keystrokes.
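The truncation rule is mechanical and easy to express in code. The following sketch (Python; illustrative only, not taken from any real Wubi engine) reduces an ordered list of component keys to the at-most-four keystrokes actually typed; the five-component input mirrors the 遗 example given in the Examples section below:

```python
def wubi_keystrokes(component_keys):
    """Reduce an ordered list of component keys to at most 4 keystrokes.
    Characters with more than 4 components are typed as the first
    three components followed by the last one."""
    if len(component_keys) > 4:
        return component_keys[:3] + component_keys[-1:]
    # 4 or fewer components: type them all (characters with fewer than
    # 4 components are then padded with stroke keys; not shown here).
    return component_keys

# 遗 decomposes as k (口), h (丨), g (一), m (贝), p (辶):
print("".join(wubi_keystrokes(list("khgmp"))))  # -> "khgp"
```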
Wubi distributes its characters very evenly, and as such the vast majority of characters are uniquely defined by the 4 keystrokes discussed above. One then types a space to move the character from the input buffer onto the screen. In the event that the 4-letter representation of the character is not unique, one would type a digit to select the relevant character (for example, if two characters have the same representation, typing 1 would select the first, and 2 the second). In most implementations, a space can always be typed and simply acts as a 1 in an ambiguous setting. Intelligent software will try to make sure that the character in the default position is the one desired. Many characters have more than one representation. This is sometimes for ease of use, in case there is more than one obvious way to break down a character. More often though, it is because certain characters have a short representation that is less than 4 letters, as well as a "full" representation. For characters with fewer than 4 components that do not have a short form representation, one types each component and then "fills up" the representation (that is, types enough extra keystrokes to make the representation 4 keystrokes) by manually typing the strokes of the last component, in the order they would be written. If there are too many strokes, one should write as many as possible, but put the last stroke last (this mirrors the component rule for characters with more than 4 components outlined above). Once the algorithm is understood, one can type almost any character with a little practice, even if one has not typed it before. Muscle memory ensures that frequent typists using this method do not have to think about how the characters are actually constructed, just as the vast majority of English typists do not think very much about the spelling of words when they write. Implementation-specific details Many implementations employ further, multiple-word optimizations. Usually, a commonly used digraph (two-character word) in which both characters have short form two-keystroke representations can be combined into a single, four-keystroke representation which generates two characters rather than one. There are also a few 3-character shortcuts, and even one rather longer, politically motivated one. Some examples of these are provided in the examples section below. Another common feature is the use of the 'z' key as a wildcard. The Wubi method was actually designed with this feature in mind; this is why no components are assigned to the z key. Basically, one can type a z when unsure what the component should be, and the input method will help complete it. If one knew, for example, that the character ought to start with "kt", but was unsure what the next component should be, typing "ktz" would produce a list of all characters starting with "kt". In practice though, many input method engines use a tabular lookup method for all table-based input systems, including Wubi. This means that they simply have a large table in memory, associating different characters to their respective representations. The input method then simply becomes a table lookup. In such an implementation, the z key breaks the paradigm and as such is not found in much generalized software (although the Wubi input method commonly found in Chinese Windows implements the feature). For this same reason, the multiple-character optimization described in the previous paragraph is also relatively rare.
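A tabular engine of the sort just described amounts to pattern matching over a code table. This minimal sketch (Python) illustrates the idea, including the 'z' wildcard; the two table entries use the codes this article derives in its Examples section, whereas a real table would hold thousands of entries:

```python
import re

# Toy fragment of a Wubi 86 code table (full code -> characters).
TABLE = {
    "khgp": ["遗"],
    "tffu": ["等"],
}

def lookup(code):
    """Return candidate characters for a (possibly partial) code.
    'z' matches any single key, as described above; shorter codes
    match as prefixes, the way candidates appear while typing."""
    pattern = re.compile("^" + code.replace("z", "."))
    return [char for full, chars in TABLE.items()
            if pattern.match(full) for char in chars]

print(lookup("khgp"))  # ['遗']  exact match
print(lookup("kzgp"))  # ['遗']  'z' wildcard in the second position
print(lookup("tf"))    # ['等']  prefix match while still typing
```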
Some input methods, such as xcin (found on many UNIX-like systems), provide a generic wildcard functionality which can be used in all the table-based input systems, including pinyin and virtually anything else. Xcin uses '*' for auto-complete and '?' for just one letter, following the conventions pioneered in UNIX file globbing. Other implementations have their own conventions. Subdivision of the keyboard The Wubi keyboard assumes a QWERTY-like layout, so users of keyboards implementing a nationalized or alternative layout (such as Dvorak or the French AZERTY) will probably have to do some remapping to make the system usable. Wubi does not position its components arbitrarily: there are far too many of them, and it is only with the introduction of a logical methodology that the system becomes easy to learn. Basically, the keyboard is divided into 5 zones, each representing a stroke. Those five strokes are falling left, falling right, horizontal, vertical, and hook, and the zones that represent them are QWERT, YUIOP, ASDFG, HJKLM, and XCVBN, respectively. These zones are all laid out horizontally, with the exception of M, which is not in line with the rest of the letters in its zone. In a general way, the keyboard can be thought of as divided down the center, between T and Y, G and H, and N and M. The keys in each zone are numbered moving away from this dividing line: so we should actually say that in zone QWERT, T is the first letter, R is the second, and E the third; in zone YUIOP, Y is the first, U is the second, I the third, etc. For XCVBN, N is the first, and so on. In HJKLM, consider M to be the last in the series, even though it does not lie on the line. This is important because components in the first position will have one repetition of the stroke in question (the stroke assigned to the zone in which they belong), those in the second, two, those in the third, three. Those components which are not easily classifiable using this paradigm will be placed on the last letter. Therefore, one would expect 一 to be located on G, and 二 on F, and 三 on D, and indeed, this is the case. Similarly, one would expect 丨 to be located on H, 刂 to be on J, and 川 to be on K. This pattern holds for all the zones. Furthermore, it extends to most radicals that look as though they are made up of three such strokes, even if in fact they might not be at all. An example of this is 中 on K: while it does not have three downward strokes (two only), it appears to have three. Furthermore, it is written by hand by first writing a mouth radical, 口, and then bisecting it with a vertical downward stroke. The mouth radical lies on 'K', so this makes the assignment doubly logical. And the pinyin romanization of 口, kou3, begins with k, another memory aid encoded into the Wubi keyboard. Furthermore, each letter of each zone has one component associated with it, its "main component". These are usually complete characters in their own right (with the exception of X). One can always type this main component by typing the letter it is situated on four times. So, for example, the main component of H is 目, and so one would type it by typing "hhhh". Each letter also has a shortcut character associated with it. In some cases, this character is the same as the component associated with the key in question, and sometimes not. This shortcut character is the character produced when one types just the letter and nothing else; these are all extremely common characters used when typing Chinese.
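The zone-and-position scheme just described is regular enough to compute rather than memorize. A small sketch (Python) recovers, for any component key, its zone's stroke type and its position counting away from the central dividing line, exactly as laid out above:

```python
# Each zone is listed from the centre of the keyboard outward, so the
# key at index 0 is in position 1, index 1 in position 2, and so on
# (M is treated as last in its zone, as noted above).
ZONES = {
    "horizontal":    "gfdsa",
    "vertical":      "hjklm",
    "falling left":  "trewq",
    "falling right": "yuiop",
    "hook":          "nbvcx",
}

def zone_and_position(key):
    for stroke, keys in ZONES.items():
        if key in keys:
            return stroke, keys.index(key) + 1
    raise ValueError(f"{key!r} is not a Wubi component key")

# 一 on G, 二 on F, 三 on D: the position equals the stroke count.
for key in "gfd":
    print(key, zone_and_position(key))
# g ('horizontal', 1)  f ('horizontal', 2)  d ('horizontal', 3)
```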
It is entirely possible that there are a number of components not listed below, either because of oversight, because they are rarely used, or because no simple Unicode representation for the component exists. QWERT zone (falling left) The Q key's main component is 金 and its shortcut character is 我. It is associated with the following components: 金, 钅, 勹, 儿, 夕, as well as the hook at the top of 饣 and 角, the radical 犭 without the lower left-falling stroke (so characters with that radical start with "qt", not just "q"), the criss-cross (such as in the center of 区), the top of 鱼 (i.e., without the horizontal stroke at the bottom), and the three (nearly vertical) "feet" in the bottom right corner of 流. The W key's main component and shortcut character are both 人. It is associated with the following components: 人, 亻, 八, and the top of 癸. While 人 means person, it is often used by Wubi to construct a roof radical, such as in 会, "wfc". 入 is not governed by W, despite looking similar, and while 餐 has a top that looks vaguely like the top of 癸, the two are not the same (indeed, to type 餐, one must physically type out each component on the top). The E key's main component is 月, and its shortcut character is 有. It is associated with the following components: 月, 用, 彡, 乃, the bottom of 衣 (i.e., without 亠), the top of 孚 (i.e., without 子), 豕 (hog), the bottom of 良 (i.e., without the 白), and the bottom of 舟 (i.e., without the little dot on the top). In this case, E's shortcut character does not even begin with a left-falling stroke, but merely prominently features a component belonging to E. 彡 is featured on this key, as E is the third key in the zone (counting from T, see above). A particular distortion that comes up often is the use of E in 且 and in characters containing it: Wubi thinks of this component as 月 + 一. The R key's main component is 白, and its shortcut character is 的. It is associated with the following components: 白, 手, 扌, 斤 (both with and without the T), 牛 (without the vertical downward stroke), and of course the two left-falling strokes 𰀪 that one would expect from the second key in the zone (see above for an explanation). Watch out for varieties of 手 where the central downward hook is replaced by a left-falling stroke, such as in 看. The T key's main component is 禾, and its shortcut character is 和. It is associated with the following components: 禾, 竹, 夂, 攵, 彳, and the top of 乞 (i.e., without the 乙). 竹 may also be found in its smaller form (⺮). 丿 is also found on this key, because T is the first key in the zone (see above). This means that if one is typing a component or character stroke by stroke, they would (generally) use T to represent a left-falling stroke. See the section on disambiguation strokes for more information on exceptions to this rule. YUIOP zone (falling right) This zone might also be called the dot zone, because its pattern of Y: 讠 U: 冫 I: 氵 and O: 灬 is not actually necessarily built up of right falling strokes. In fact, one could argue that the first stroke in 灬 actually falls left. It is called the falling right zone because the keys in this zone, when used to construct a character by stroke (rather than component), all represent right falling strokes for some character configuration (see the section on disambiguation strokes for more information). The Y key's main component is 言, and its shortcut character is 主. It is associated with the following components: 言, 讠, 亠, 亠 with a 口 beneath it, 广, 文, 方, and 丶.
These components all start with a right-falling stroke. Generally, dots in Chinese characters are actually left falling strokes, and so most of the time, the use of T is more appropriate than Y. Of course, if one can write Chinese characters by hand, they should be able to tell which to choose by recalling how it is written. The U key's main component is 立, and its shortcut character is 产. It is associated with the following components: 立, 六, 辛, 门, 疒, 丬, 冫, the "antennae" on the top of 单 (just two strokes: 丷), and the antennae plus a horizontal stroke, as found on the top of 兹. Most of these feature two short diagonal strokes (门 being the obvious exception). This is consistent with U's place as the second letter in the zone (see above for an explanation). The I key's main component is 水, and its shortcut character is 不. It is associated with the following components: 水, 氵, 小, the three strokes on the top of 学, and the three strokes on the top of 当. Additionally, a component which might be described as two 冫, back to back, is associated with this key. The O key's main component is 火, and its shortcut character is 为. It is associated with the following components: 火, 米, 灬, and 业 without the bottom horizontal stroke — this allows construction of characters such as 严. This is the 4th key in the falling right zone: hence the inclusion of 灬. The P key's main component is 之, and its shortcut character is 这. It is associated with the following components: 之, 辶, 廴, 冖, 宀, and 礻. As Wubi components are typed in the order that they would need to be written were one writing by hand, the 辶 and 廴 components are typically typed last. ASDFG zone (horizontal) The A key's shortcut character is 工. The S key's main component is 木, and its shortcut character is 要. The D key's main component is 大, and its shortcut character is 在. The F key's main component is 土, and its shortcut character is 地; the main component 土 (earth) matches the meaning of the shortcut character 地 (earth). The G key's main component is 王, and its shortcut character is 一. HJKLM zone (vertical) The H key's main component is 目, and its shortcut character is 上. The J key's main component is 日, and its shortcut character is 是. The K key's main component is 口, and its shortcut character is 中. The L key's main component is 田, and its shortcut character is 国. The M key's main component is 山, and its shortcut character is 同. XCVBN zone (hook) The X key's main component is 纟, and its shortcut character is 经. The C key's main component is 又, and its shortcut character is 以. The V key's main component is 女, and its shortcut character is 发. The B key's main component is 子, and its shortcut character is 了. The N key's main component is 已, and its shortcut character is 民. Disambiguation strokes For characters whose component codes run out before four keystrokes, a final disambiguation key can be appended: its zone is given by the type of the character's last stroke, and its position within the zone by the character's overall structure (left-right, top-bottom, or enclosed). This resolves collisions between characters that would otherwise share a code. Examples Characters with 4 components or fewer (but no need for strokes) Example 1: 请 Consists of three components: y (讠, radical 10), g (王, radical 89), e (月, radical 118) → 请 Characters with more than four components Example 2: 遗 Consists of five components: k (口), h (丨), g (一), m (贝), p (辶) → khgp → 遗 (it is not necessary to type m) Characters with fewer than 4 components (needing strokes) Example 3a: 文: First you type the key with the symbol on it, which happens to be 'Y'.
Then you type the first component, which is also 'Y' for the 点 stroke, then a 'G' for the 横 stroke, and since you now already have three keystrokes, you type the last stroke, which also happens to be a 捺, arriving at the keycode 'YYGY' for the complete character. Example 3b: 一: The code for this character is 'GGLL'. As before, you type the key for the character first, which is 'G', then the first stroke of that character, which is also a 'G'. Because this is all the necessary information, 'L' is used as a filler until the code reaches 4 letters. Note that '一' is also the shortcut character for 'G' (making it a single keystroke in practice). Example 3c: 广: The code for this character is 'YYGT'. First, you type the key where this character is located, which is 'Y'. Then, you type a 点 stroke, which is also on 'Y'. The next will be the 横 stroke on 'G', and the last will be the 撇, on 'T'. Characters requiring disambiguation strokes Example 4: 等 Consists of three components: t (竹), f (土), f (寸). Disambiguation stroke: the last stroke is 丶 (zone 4) and the character has a top-bottom structure (position 2), giving (4,2) = 'U' → 等. So the character code for 等 is 'TFFU'. Poem A poem was made as a mnemonic for the Wubi keyboard, associating a few characters with each key. The first character is the key's main component, while the rest are components or associated characters. 1986 version 1998 version New-century (3rd-generation) version In media In 2020, the history of Wubi was featured in a Radiolab episode titled "The Wubi Effect". Notes and references External links Full tables of Wubi sequences CJK input methods
Wubi method
Technology
4,374
1,012,800
https://en.wikipedia.org/wiki/Thermal%20design%20power
Thermal Design Power (TDP), also known as thermal design point, is the maximum amount of heat that a computer component (like a CPU, GPU or system on a chip) can generate and that its cooling system is designed to dissipate during normal operation at a non-turbo clock rate (base frequency). Some sources state that the peak power rating for a microprocessor is usually 1.5 times the TDP rating. Calculation The average CPU power (ACP) is the power consumption of central processing units, especially server processors, under "average" daily usage as defined by Advanced Micro Devices (AMD) for use in its line of processors based on the K10 microarchitecture (Opteron 8300 and 2300 series processors). Intel's thermal design power (TDP), used for Pentium and Core 2 processors, measures the power consumption under a high workload; it is numerically somewhat higher than the "average" ACP rating of the same processor. According to AMD, the ACP rating includes the power consumption when running several benchmarks, including TPC-C, SPECcpu2006, SPECjbb2005 and STREAM Benchmark (memory bandwidth), which AMD said is an appropriate method of power consumption measurement for data centers and server-intensive workload environments. AMD said that the ACP and TDP values of the processors will both be stated and do not replace one another. Barcelona and later server processors have the two power figures. The TDP of a CPU has been underestimated in some cases, leading to certain real applications (typically strenuous, such as video encoding or games) causing the CPU to exceed its specified TDP and resulting in overloading the computer's cooling system. In this case, CPUs either cause a system failure (a "therm-trip") or throttle their speed down. Most modern processors will cause a therm-trip only upon a catastrophic cooling failure, such as a fan that has stopped working or an incorrectly mounted heat sink. For example, a laptop's CPU cooling system may be designed for a 20 W TDP, which means that it can dissipate up to 20 watts of heat without exceeding the maximum junction temperature for the laptop's CPU. A cooling system can do this using an active cooling method (e.g. conduction coupled with forced convection), such as a heat sink with a fan, or either of the two passive cooling methods: thermal radiation or conduction. Typically, a combination of these methods is used. Since safety margins and the definition of what constitutes a real application vary among manufacturers, TDP values between different manufacturers cannot be accurately compared (a processor with a TDP of, for example, 100 W will almost certainly use more power at full load than a processor with a fraction of that TDP, and very probably more than a lower-TDP processor from the same manufacturer, but it may or may not use more power than a processor from a different manufacturer with a slightly lower TDP, such as 90 W). Additionally, TDPs are often specified for families of processors, with the low-end models usually using significantly less power than those at the high end of the family. Until around 2006, AMD reported the maximum power draw of its processors as the TDP. Intel changed this practice with the introduction of its Conroe family of processors. Intel calculates a specified chip's TDP according to the amount of power the computer's fan and heatsink need to be able to dissipate while the chip is under sustained load.
Actual power usage can be higher or (much) lower than TDP, but the figure is intended to give guidance to engineers designing cooling solutions for their products. In particular, Intel's measurement also does not fully take into account Intel Turbo Boost due to the default time limits, while AMD's does, because AMD Turbo Core always tries to push for the maximum power. Alternatives TDP specifications for some processors may allow them to work under multiple different power levels, depending on the usage scenario, available cooling capacities and desired power consumption. Technologies that provide such variable TDPs include Intel's configurable TDP (cTDP) and scenario design power (SDP), and AMD's TDP power cap. Configurable TDP (cTDP), also known as programmable TDP or TDP power cap, is an operating mode of later generations of Intel mobile processors and AMD processors that allows adjustments in their TDP values. By modifying the processor behavior and its performance levels, the power consumption of a processor can be changed, altering its TDP at the same time. That way, a processor can operate at higher or lower performance levels, depending on the available cooling capacities and desired power consumption. cTDP typically provides (but is not limited to) three operating modes: Nominal TDP the processor's rated frequency and TDP. cTDP down when a cooler or quieter mode of operation is desired, this mode specifies a lower TDP and lower guaranteed frequency versus the nominal mode. cTDP up when extra cooling is available, this mode specifies a higher TDP and higher guaranteed frequency versus the nominal mode. For example, some of the mobile Haswell processors support cTDP up, cTDP down, or both modes. As another example, some of the AMD Opteron processors and Kaveri APUs can be configured for lower TDP values. IBM's POWER8 processor implements a similar power capping functionality through its embedded on-chip controller (OCC). In 2013, Intel introduced scenario design power (SDP) for some low-power Y-series processors. It is described as "an additional thermal reference point meant to represent thermally relevant device usage in real-world environmental scenarios." As a power rating, SDP is not an additional power state of a processor; it states the average power consumption of a processor using a certain mix of benchmark programs to simulate "real-world" scenarios. Ambiguities of the Thermal Design Power parameter As some authors and users have observed, the Thermal Design Power (TDP) rating is an ambiguous parameter. In fact, different manufacturers define the TDP using different calculation methods and different operating conditions, while keeping these details largely undisclosed (with very few exceptions). This makes it highly problematic (if not impossible) to meaningfully compare similar devices made by different manufacturers based on their TDP, and to optimize the design of a cooling system in terms of both heat management and cost. Thermal Management fundamentals To better understand the problem, we must recall the basic concepts underlying thermal management and computer cooling. Consider the thermal conduction path from the CPU case to the ambient air through a heat sink, with: Pd (Watt) = Thermal power generated by a CPU and to be dissipated into the ambient air through a suitable heat sink. It corresponds to the total power drain from the direct current supply rails of the CPU. Rca (°C/W) = Thermal resistance of the heat sink, between the case of the CPU and the ambient air.
Tc (°C) = Maximum allowed temperature of the CPU's case (ensuring full performance). Ta (°C) = Maximum expected ambient temperature at the inlet of the heat sink fan. All these parameters are linked together by the following equation: Tc = Ta + Rca × Pd. Hence, once we know the thermal power to be dissipated (Pd), the maximum allowed case temperature (Tc) of the CPU and the maximum expected ambient temperature (Ta) of the air entering the cooling fans, we can determine the fundamental characteristic of the required heat sink, i.e. its thermal resistance, as: Rca = (Tc − Ta) / Pd. This equation can be rearranged as Pd = (Tc − Ta) / Rca, wherein Pd can be replaced by the Thermal Design Power (TDP). Note that the heat dissipation path going from the CPU to the ambient air through the printed circuit of the motherboard has a thermal resistance that is orders of magnitude greater than that of the heat sink, and can therefore be neglected in these computations. Issues when dealing with the Thermal Design Power (TDP) Once all the input data are known, the previous formula allows one to choose a CPU heat sink with a suitable thermal resistance Rca between case and ambient air, sufficient to keep the maximum case temperature at or below a predefined value Tc. By contrast, when dealing with the Thermal Design Power (TDP), ambiguities arise because CPU manufacturers usually do not disclose the exact conditions under which this parameter has been defined. The maximum acceptable case temperature Tc needed to obtain the rated performance is usually missing, as well as the corresponding ambient temperature Ta, and, last but not least, details about the specific computational test workload. For instance, one of Intel’s general support pages states briefly that the TDP refers to "the power consumption under the maximum theoretical load". The same page also notes that, starting from the 12th generation of its CPUs, the term Thermal Design Power (TDP) has been replaced with Processor Base Power (PBP). In a support page dedicated to the Core i7-7700 processor, Intel defines the TDP as the maximum amount of heat that a processor can produce when running real-life applications, without specifying what these "real life applications" are. Another example: in a 2011 white paper where the Xeon processors are compared with AMD’s competing devices, Intel defines TDP as the upper point of the thermal profile measured at maximum case temperature, but without specifying what this temperature should be (nor the computing load). It is important to note that all these definitions imply that the CPU is running at the base clock rate (non-turbo). In conclusion: Comparing the TDP between devices of different manufacturers is not very meaningful. The selection of a heat sink may result in overheating (and reduced CPU performance) or overcooling (an oversized, expensive heat sink), depending on whether one chooses too high or too low a case temperature Tc (respectively, a too low or too high ambient temperature Ta), or whether the CPU operates under different computational loads. A possible approach to ensure a long life for a CPU is to ask the manufacturer for the recommended maximum case temperature Tc and then to oversize the cooling system. For instance, a safety margin taking into account some turbo overclocking could consider a thermal power that is 1.5 times the rated TDP. In any case, the lower the silicon junction temperature, the longer the lifespan of the device, according to an acceleration factor very roughly expressed by means of the Arrhenius equation.
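To make the heat-sink sizing concrete, the following sketch (Python) applies the formula above; the 20 W figure echoes the laptop example earlier in this article, while the case and ambient temperatures are illustrative assumptions rather than manufacturer data:

```python
def max_heatsink_resistance(p_d, t_c, t_a):
    """Rca = (Tc - Ta) / Pd: the largest case-to-ambient thermal
    resistance (degC/W) a heat sink may have while keeping the CPU
    case at or below t_c when dissipating p_d watts into air at t_a."""
    return (t_c - t_a) / p_d

# Assumed values: 20 W TDP (the laptop example above), a maximum case
# temperature of 70 degC and worst-case ambient air of 35 degC.
r_ca = max_heatsink_resistance(p_d=20.0, t_c=70.0, t_a=35.0)
print(f"Required heat sink: Rca <= {r_ca:.2f} degC/W")  # 1.75 degC/W
```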
Some disclosed details of AMD’s Thermal Design Power (TDP) In October 2019, the GamersNexus Hardware Guides showed a table with case and ambient temperature values, obtained directly from AMD, describing the TDPs of some Ryzen 5, 7 and 9 CPUs. The formula relating all these parameters, given by AMD, is the usual TDP = (Tc − Ta) / Rca. The declared TDPs of these devices range from 65 W to 105 W; the ambient temperature considered by AMD is +42 °C, and the case temperatures range from +61.8 °C to +69.3 °C, while the case-to-ambient thermal resistances range from 0.189 to 0.420 °C/W. See also Heat generation in integrated circuits Operating temperature Power rating Intel Turbo Boost AMD Turbo Core References External links Details on AMD Bulldozer: Opterons to Feature Configurable TDP, AnandTech, July 15, 2011, by Johan De Gelas and Kristian Vättö Making x86 Run Cool, April 15, 2001, by Paul DeMone Computer engineering Heat transfer
Thermal design power
Physics,Chemistry,Technology,Engineering
2,399
69,680,802
https://en.wikipedia.org/wiki/Recursive%20largest%20first%20algorithm
The Recursive Largest First (RLF) algorithm is a heuristic for the NP-hard graph coloring problem. It was originally proposed by Frank Leighton in 1979. The RLF algorithm assigns colors to a graph’s vertices by constructing each color class one at a time. It does this by identifying a maximal independent set of vertices in the graph, assigning these to the same color, and then removing these vertices from the graph. These actions are repeated on the remaining subgraph until no vertices remain. To form high-quality solutions (solutions using few colors), the RLF algorithm uses specialized heuristic rules to try to identify "good quality" independent sets. These heuristics make the RLF algorithm exact for bipartite, cycle, and wheel graphs. In general, however, the algorithm is approximate and may well return solutions that use more colors than the graph’s chromatic number. Description The algorithm can be described by the following three steps. At the end of this process, the set S gives a partition of the vertices representing a feasible |S|-coloring of the graph G. Let S = {} be an empty solution. Also, let G = (V, E) be the graph we wish to color, comprising a vertex set V and an edge set E. Identify a maximal independent set I ⊆ V. To do this: The first vertex added to I should be the vertex in G that has the largest number of neighbors. Subsequent vertices added to I should be chosen as those that (a) are not currently adjacent to any vertex in I, and (b) have a maximal number of neighbors that are adjacent to vertices in I. Ties in condition (b) can be broken by selecting the vertex with the minimum number of neighbors not in I. Vertices are added to I in this way until it is impossible to add further vertices. Now set S = S ∪ {I} and remove the vertices of I from G. If G still contains vertices, then return to Step 2; otherwise end. Example Consider a wheel graph consisting of a central hub vertex adjacent to every vertex of a surrounding six-vertex cycle. Since this is a wheel graph, it will be optimally colored by RLF. Executing the algorithm results in the vertices being selected and colored in the following order: The hub vertex (color 1) Three mutually nonadjacent vertices of the outer cycle (color 2) The three remaining outer-cycle vertices (color 3) This gives the final three-colored solution S = {S1, S2, S3}. Performance Let n be the number of vertices in the graph and let m be the number of edges. Using big O notation, in his original publication Leighton states the complexity of RLF to be O(n³); however, this can be improved upon. Much of the expense of this algorithm is due to Step 2, where vertex selection is made according to the heuristic rules stated above. Indeed, each time a vertex is selected for addition to the independent set I, information regarding its neighbors needs to be recalculated for each uncolored vertex. These calculations can be performed in O(m) time, meaning that the overall complexity of RLF is O(mn). If the heuristics of Step 2 are replaced with random selection, then the complexity of this algorithm reduces to O(n + m); however, the resultant algorithm will usually return lower-quality solutions compared to those of RLF. It will also now be inexact for bipartite, cycle, and wheel graphs. In an empirical comparison by Lewis in 2021, RLF was shown to produce significantly better vertex colorings than alternative heuristics such as the greedy algorithm and the DSatur algorithm on random graphs. However, runtimes with RLF were also seen to be higher than these alternatives due to its higher overall complexity.
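The three steps above translate directly into code. The following sketch (Python) is a straightforward transcription of the description rather than an optimized implementation; because ties are broken arbitrarily, the exact color classes returned for a given graph may vary between runs:

```python
def rlf(adj):
    """Recursive Largest First coloring.
    adj maps each vertex to the set of its neighbors.
    Returns a list of color classes (sets of vertices)."""
    uncolored = set(adj)
    solution = []
    while uncolored:
        candidates = set(uncolored)  # vertices still eligible for this class
        excluded = set()             # vertices adjacent to the class
        # First vertex: the one with the most neighbors in the subgraph.
        v = max(candidates, key=lambda x: len(adj[x] & candidates))
        color_class = set()
        while True:
            color_class.add(v)
            excluded |= adj[v] & candidates
            candidates -= adj[v] | {v}
            if not candidates:
                break
            # Next vertex: most neighbors adjacent to the class, ties
            # broken by fewest neighbors among the remaining candidates.
            v = max(candidates,
                    key=lambda x: (len(adj[x] & excluded),
                                   -len(adj[x] & candidates)))
        solution.append(color_class)
        uncolored -= color_class
    return solution

# Wheel graph from the example: hub 0 joined to the 6-cycle 1..6.
rim = [1, 2, 3, 4, 5, 6]
adj = {0: set(rim)}
for i, u in enumerate(rim):
    adj[u] = {0, rim[i - 1], rim[(i + 1) % 6]}
print(rlf(adj))  # 3 classes, e.g. [{0}, {1, 3, 5}, {2, 4, 6}]
```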
References External links High-Performance Graph Colouring Algorithms Suite of graph coloring algorithms (implemented in C++) used in the book A Guide to Graph Colouring: Algorithms and Applications (Springer International Publishers, 2021). 1979 in computing Graph algorithms Graph coloring
Recursive largest first algorithm
Mathematics
751
13,590,511
https://en.wikipedia.org/wiki/Mycoplasma%20laboratorium
Mycoplasma laboratorium or Synthia refers to a synthetic bacterial strain. The project to build the new bacterium has evolved since its inception. Initially, the goal was to identify, in the genome of Mycoplasma genitalium, a minimal set of genes required to sustain life, and to rebuild these genes synthetically to create a "new" organism. Mycoplasma genitalium was originally chosen as the basis for this project because at the time it had the smallest number of genes of all organisms analyzed. Later, the focus switched to Mycoplasma mycoides and the project took a more trial-and-error approach. To identify the minimal genes required for life, each of the 482 genes of M. genitalium was individually deleted and the viability of the resulting mutants was tested. This resulted in the identification of a minimal set of 382 genes that theoretically should represent a minimal genome. In 2008 the full set of M. genitalium genes was constructed in the laboratory, with watermarks added to identify the genes as synthetic. However, M. genitalium grows extremely slowly, and M. mycoides was chosen as the new focus to accelerate experiments aimed at determining the set of genes actually needed for growth. In 2010, the complete genome of M. mycoides was successfully synthesized from a computer record and transplanted into an existing cell of Mycoplasma capricolum that had had its DNA removed. It is estimated that the synthetic genome used for this project cost US$40 million and 200 man-years to produce. The new bacterium was able to grow and was named JCVI-syn1.0, or Synthia. After additional experimentation to identify a smaller set of genes that could produce a functional organism, JCVI-syn3.0 was produced, containing 473 genes. 149 of these genes are of unknown function. Since the genome of JCVI-syn3.0 is novel, it is considered the first truly synthetic organism. Minimal genome project The production of Synthia is an effort in synthetic biology at the J. Craig Venter Institute by a team of approximately 20 scientists headed by Nobel laureate Hamilton Smith and including DNA researcher Craig Venter and microbiologist Clyde A. Hutchison III. The overall goal is to reduce a living organism to its essentials and thus understand what is required to build a new organism from scratch. The initial focus was the bacterium M. genitalium, an obligate intracellular parasite whose genome consists of 482 genes comprising 582,970 base pairs, arranged on one circular chromosome (at the time the project began, this was the smallest genome of any known natural organism that could be grown in free culture). They used transposon mutagenesis to identify genes that were not essential for the growth of the organism, resulting in a minimal set of 382 genes. This effort was known as the Minimal Genome Project. Choice of organism Mycoplasma Mycoplasma is a genus of bacteria of the class Mollicutes in the division Mycoplasmatota (formerly Tenericutes), characterised by the lack of a cell wall (making it Gram-negative) due to its parasitic or commensal lifestyle. In molecular biology, the genus has received much attention, both for being a notoriously difficult-to-eradicate contaminant in mammalian cell cultures (it is resistant to beta-lactams and other antibiotics) and for its potential uses as a model organism due to its small genome size. The choice of genus for the Synthia project dates to 2000, when Karl Reich coined the phrase Mycoplasma laboratorium.
Other organisms with small genomes As of 2005, Pelagibacter ubique (an α-proteobacterium of the order Rickettsiales) had the smallest known genome (1,308,759 base pairs) of any free-living organism and is one of the smallest self-replicating cells known. It is possibly the most numerous bacterium in the world (perhaps 10²⁸ individual cells) and, along with other members of the SAR11 clade, is estimated to make up between a quarter and a half of all bacterial or archaeal cells in the ocean. It was identified in 2002 by its rRNA sequence and was fully sequenced in 2005. The species is, however, extremely hard to cultivate and does not reach a high growth density in lab culture. Several newly discovered species have fewer genes than M. genitalium, but are not free-living: many essential genes that are missing in Hodgkinia cicadicola, Sulcia muelleri, Baumannia cicadellinicola (symbionts of cicadas) and Carsonella ruddii (symbiont of the hackberry petiole gall psyllid, Pachypsylla venusta) may be encoded in the host nucleus. The organism with the smallest known set of genes as of 2013 is Nasuia deltocephalinicola, an obligate symbiont. It has only 137 genes and a genome size of 112 kb. Techniques Several laboratory techniques had to be developed or adapted for the project, since it required the synthesis and manipulation of very large pieces of DNA. Bacterial genome transplantation In 2007, Venter's team reported that they had managed to transfer the chromosome of the species Mycoplasma mycoides to Mycoplasma capricolum by: isolating the genome of M. mycoides: gentle lysis of cells trapped in agar—molten agar mixed with cells and left to form a gel—followed by pulse field gel electrophoresis and the band of the correct size (circular, 1.25 Mbp) being isolated; making the recipient cells of M. capricolum competent: growth in rich media followed by starvation in poor media, where the nucleotide starvation results in inhibition of DNA replication and a change of morphology; and polyethylene glycol-mediated transformation of the circular chromosome into the DNA-free cells, followed by selection. The term transformation is used to refer to insertion of a vector into a bacterial cell (by electroporation or heat shock). Here, transplantation is used akin to nuclear transplantation. Bacterial chromosome synthesis In 2008 Venter's group described the production of a synthetic genome, a copy of the M. genitalium G37 sequence L43967, by means of a hierarchical strategy: Synthesis → 1 kbp: The genome sequence was synthesized by Blue Heron in 1,078 cassettes of 1,080 bp with 80 bp overlaps and NotI restriction sites (an inefficient but infrequent cutter). Ligation → 10 kbp: 109 groups of a series of 10 consecutive cassettes were ligated and cloned in E. coli on a plasmid, and the correct permutation was checked by sequencing. Multiplex PCR → 100 kbp: 11 groups of a series of 10 consecutive 10 kbp assemblies (grown in yeast) were joined by multiplex PCR, using a primer pair for each 10 kbp assembly. Isolation and recombination → the secondary assemblies were isolated, joined and transformed into yeast spheroplasts without a vector sequence (present in assembly 811-900). The genome of this 2008 result, M. genitalium JCVI-1.0, is published on GenBank as CP001621.1. It is not to be confused with the later synthetic organisms, labelled JCVI-syn, based on M. mycoides. Synthetic genome In 2010 Venter and colleagues created Mycoplasma mycoides strain JCVI-syn1.0 with a synthetic genome.
Initially the synthetic construct did not work, so to pinpoint the error—which caused a delay of 3 months in the whole project—a series of semi-synthetic constructs were created. The cause of the failure was a single frameshift mutation in DnaA, a replication initiation factor. The purpose of constructing a cell with a synthetic genome was to test the methodology, as a step to creating modified genomes in the future. Using a natural genome as a template minimized the potential sources of failure. Several differences are present in Mycoplasma mycoides JCVI-syn1.0 relative to the reference genome, notably an E. coli transposon IS1 (an infection from the 10 kb stage) and an 85 bp duplication, as well as elements required for propagation in yeast and residues from restriction sites. There has been controversy over whether JCVI-syn1.0 is a true synthetic organism. While the genome was synthesized chemically in many pieces, it was constructed to match the parent genome closely and transplanted into the cytoplasm of a natural cell. DNA alone cannot create a viable cell: proteins and RNAs are needed to read the DNA, and lipid membranes are required to compartmentalize the DNA and cytoplasm. In JCVI-syn1.0 the two species used as donor and recipient are of the same genus, reducing potential problems of mismatches between the proteins in the host cytoplasm and the new genome. Paul Keim (a molecular geneticist at Northern Arizona University in Flagstaff) noted that "there are great challenges ahead before genetic engineers can mix, match, and fully design an organism's genome from scratch". Watermarks A much publicized feature of JCVI-syn1.0 is the presence of watermark sequences. The 4 watermarks (shown in Figure S1 in the supplementary material of the paper) are coded messages written into the DNA, of length 1246, 1081, 1109 and 1222 base pairs respectively. These messages did not use the standard genetic code, in which sequences of 3 DNA bases encode amino acids, but a new code invented for this purpose, which readers were challenged to solve. The content of the watermarks is as follows: Watermark 1: an HTML document which reads in a Web browser as text congratulating the decoder, and instructions on how to email the authors to prove the decoding. Watermark 2: a list of authors and a quote from James Joyce: "To live, to err, to fall, to triumph, to recreate life out of life". Watermark 3: more authors and a quote from Robert Oppenheimer (uncredited): "See things not as they are, but as they might be". Watermark 4: more authors and a quote from Richard Feynman: "What I cannot build, I cannot understand". JCVI-syn3.0 In 2016, the Venter Institute used genes from JCVI-syn1.0 to synthesize a smaller genome they call JCVI-syn3.0, which contains 531,560 base pairs and 473 genes. In 1996, after comparing M. genitalium with another small bacterium, Haemophilus influenzae, Arcady Mushegian and Eugene Koonin had proposed that there might be a common set of 256 genes which could be a minimal set of genes needed for viability. In this new organism, the number of genes can only be pared down to 473, 149 of which have functions that are completely unknown. As of 2022 the unknown set has been narrowed to about 100. In 2019 a complete computational model of all pathways in a Syn3.0 cell was published, representing the first complete in silico model for a living minimal organism.
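The cipher used for the actual watermarks described above was the team's own invention and was left for readers to crack, so the following sketch (Python) is purely illustrative and is not the JCVI code: it shows only the general idea of writing text into DNA, using a naive, hypothetical two-bits-per-base mapping and its inverse:

```python
# Toy scheme, NOT the JCVI cipher: two bits of ASCII per nucleotide.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(text):
    bits = "".join(f"{byte:08b}" for byte in text.encode("ascii"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))

def decode(dna):
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2)
                 for i in range(0, len(bits), 8)).decode("ascii")

seq = encode("SYN")
print(seq)          # CCATCCGCCATG
print(decode(seq))  # SYN
```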
Concerns and controversy Reception On October 6, 2007, Craig Venter announced in an interview with the UK's The Guardian newspaper that the same team had chemically synthesized a modified version of the single chromosome of Mycoplasma genitalium. The synthesized genome had not yet been transplanted into a working cell. The next day, the Canadian bioethics group ETC Group issued a statement through its representative, Pat Mooney, saying Venter's "creation" was "a chassis on which you could build almost anything. It could be a contribution to humanity such as new drugs or a huge threat to humanity such as bio-weapons". Venter commented "We are dealing in big ideas. We are trying to create a new value system for life. When dealing at this scale, you can't expect everybody to be happy." On May 21, 2010, Science reported that the Venter group had successfully synthesized the genome of the bacterium Mycoplasma mycoides from a computer record and transplanted the synthesized genome into the existing cell of a Mycoplasma capricolum bacterium that had had its DNA removed. The "synthetic" bacterium was viable, i.e. capable of replicating. Venter described it as "the first species... to have its parents be a computer". The creation of a new synthetic bacterium, JCVI-syn3.0, was announced in Science on March 25, 2016. It has only 473 genes. Venter called it "the first designer organism in history" and argued that the fact that 149 of the genes required have unknown functions means that "the entire field of biology has been missing a third of what is essential to life". Press coverage The project received a large amount of coverage from the press due to Venter's showmanship, to the degree that Jay Keasling, a pioneering synthetic biologist and founder of Amyris, commented that "The only regulation we need is of my colleague's mouth". Utility Venter has argued that synthetic bacteria are a step towards creating organisms to manufacture hydrogen and biofuels, and also to absorb carbon dioxide and other greenhouse gases. George M. Church, another pioneer in synthetic biology, has expressed the contrasting view that creating a fully synthetic genome is not necessary, since E. coli grows more efficiently than M. genitalium even with all its extra DNA; he commented that synthetic genes have been incorporated into E. coli to perform some of the above tasks. Intellectual property The J. Craig Venter Institute filed patents for the Mycoplasma laboratorium genome (the "minimal bacterial genome") in the U.S. and internationally in 2006. The ETC Group, a Canadian bioethics group, protested on the grounds that the patent was too broad in scope. Similar projects From 2002 to 2010, a team at the Hungarian Academy of Sciences created a strain of Escherichia coli called MDS42, which is now sold by Scarab Genomics of Madison, WI under the name "Clean Genome E.coli", in which 15% of the genome of the parental strain (E. coli K-12 MG1655) was removed to aid molecular biology efficiency, removing IS elements, pseudogenes and phages, and resulting in better maintenance of plasmid-encoded toxic genes, which are often inactivated by transposons. Biochemistry and replication machinery were not altered. References Primary sources Popular press External links J. Craig Venter Institute: Research Groups Artificial life Synthetic biology Mycoplasma
Mycoplasma laboratorium
Engineering,Biology
3,103
24,573,336
https://en.wikipedia.org/wiki/IC%202149
IC 2149 is a planetary nebula in the constellation of Auriga. It is a small, bright planetary nebula with something to offer in telescopes of most sizes. Characteristics Visually it has an apparent magnitude of 10.6 and an apparent size of 12 arcseconds, and as with other objects of its class, a nebular filter may help in observing it. Its distance from the Solar System has been estimated at around 1.1 kiloparsecs. The nebula has a total mass of 0.03 solar masses and is thought to have been produced by a low-mass star. Some authors have proposed that the planetary nebula the Sun will eventually produce will be similar to this one, but smaller. The central star of the planetary nebula is an O-type star with a spectral type of O(H)4f. References External links Planetary nebulae 2149 Auriga
IC 2149
Astronomy
174
47,147,050
https://en.wikipedia.org/wiki/List%20of%20Google%20April%20Fools%27%20Day%20jokes
From 2000 to 2019, Google frequently inserted jokes and hoaxes into its products on April Fools' Day, which takes place on April 1. The company ceased performing April Fools' jokes in 2020 due to the COVID-19 pandemic and has not performed them since. 2000 Google's first April Fools' Day hoax, MentalPlex, invited users to project a mental image of what they wanted to find while staring at an animated GIF. Several humorous error messages were then displayed on the search results page, all listed below: Error 005: Weak or no signal detected. Upgrade transmitter and retry. Error 666: Multiple transmitters detected. Silence voices in your head and try again. Error 05: Brainwaves received in analog. Please re-think in digital. Error 4P: Unclear on whether your search is about money or monkeys. Please try again. Error 445: Searching on this topic is prohibited under international law. Error CKR8: That information is protected under the National Security Act. Error 104: That information was lost with the Martian Lander. Please try again. Error 007: Query is unclear. Try again after removing hat, glasses and shoes. Error 008: Interference detected. Remove aluminum foil and remote control devices. Error: Insufficient conviction. Please clap hands three times, while chanting "I believe" and try again. Error: MentalPlex™ has determined that this is not your final answer. Please try again. An additional error message was included which converted all navigation text to German, but it was scrapped after user complaints. 2002 Google reveals the technology behind its PageRank system: PigeonRank. Google touts the benefits of this cost-effective and efficient means of ranking pages and reassures readers that there is no animal cruelty involved in the process. The article makes many humorous references and puns based on computer terminology and how Google PageRank really works (for example, a chart showing the pigeons' consumption of linseed and flax, represented as the "lin/ax kernel," a pun on the Linux kernel). Pigeon Rank 2004 Fictitious job opportunities for a research center on the Moon, the Google Copernicus Center. The posting also claimed that a new operating system had been created for working at the research center. Google Job Opportunities: Google Copernicus Center is hiring Google also announced Gmail on April 1, offering an unprecedented and unbelievable 1 GB of free space, compared to, for example, Hotmail's 2 MB. The announcement of Gmail was written in the unserious, jokey language normally seen in April Fools' jokes, tricking many into thinking that it was an April Fools' joke. In reality, it was a double fake, in that the announced product was serious. 2005 Google Gulp, a fictitious drink, was announced by Google in 2005. According to the company, this beverage would optimize one's use of the Google search engine by increasing the drinker's intelligence. It was claimed this boost was achieved through real-time analysis of the user's DNA and carefully tailored adjustments to neurotransmitters in the brain (a patented technology termed Auto-Drink; as the "Google Gulp FAQ" suggests, partly through MAO inhibition). The drink was said to come in "four great flavors": Glutamate Grape (glutamic acid), Sugar-Free Radical (free radicals), Beta Carotty (Beta-Carotene), and Sero-Tonic Water (serotonin). This hoax was probably intended as a parody of Google's then invite-only email service, Gmail.
Although ostensibly free, the company claimed the beverage could only be obtained by returning the cap of a Google Gulp bottle to a local grocery store: a Catch-22. In the Google Gulp FAQ, Google replies to the observation, "I mean, isn't this whole invite-only thing kind of bogus?" by saying, "Dude, it's like you've never even heard of viral marketing." Google Gulp Google Gulp FAQ 2006 On April Fools' Day 2006, Google Romance was announced on the main Google search page with the introduction, "Dating is a search problem. Solve it with Google Romance." It pretended to offer a "Soulmate Search" to send users on a "Contextual Date". A parody of online dating, it had a link for those who "generally favor the 'throw enough stuff at the wall' approach to online dating" to "Post multiple profiles with a bulk upload file, you sleaze", in addition to "Post your Google Romance profile". Clicking on either of these gave an error page, which explained that it was an April Fools' joke and included links to previous April Fools' jokes. Google Romance Google Romance FAQ Google Romance Tour 2007 Gmail Paper At about 10:00 pm Pacific time (the time zone of Google's headquarters) on March 30, 2007, Google changed the login page for Gmail to announce a new service called Gmail Paper. The service offered to allow users of Google's free webmail service to add e-mails to a "Paper Archive", which Google would print (on "94% post-consumer organic soybean sputum") and mail via traditional post. The service would be free, supported by bold, red advertisements printed on the back of the printed messages. Image attachments would also be printed on high-quality glossy paper, though MP3 and WAV files would not be printed. The page detailing more information about the service featured photographs of Ian Spiro and Carrie Kemper, employees of Google at the time. Also featured were Gmail Product Marketing Managers Anna-Christina Douglas and Shane Lawrence. Gmail Paper Index Gmail Paper Announcement Gmail Paper Program Policies Google TiSP Google TiSP (short for Toilet Internet Service Provider) was a fictitious free broadband service supposedly released by Google. This service would make use of a standard toilet and sewage lines to provide free Internet connectivity at a speed of 8 Mbit/s (2 Mbit/s upload) (or up to 32 Mbit/s with a paid plan). The user would drop a weighted end of a long, Google-supplied fiber-optic cable in their toilet and flush it. Around 60 minutes later, the end would be recovered and connected to the Internet by a "Plumbing Hardware Dispatcher (PHD)". The user would then connect their end to a Google-supplied wireless router and run the Google-supplied installation media on a Windows XP or Windows Vista computer ("Mac and Linux support coming soon"). Alternatively, a user could request a professional installation, in which Google would deploy nanobots through the plumbing to complete the process. The free service would be supported by "discreet DNA sequencing" of "personal bodily output" to display online ads that relate to culinary preferences and personal health. Google also referenced the Diet Coke-and-Mentos reaction in their FAQ: "If you're still experiencing problems, drop eight mints into the bowl and add a two-liter bottle of diet soda." They also claimed that Enterprise plans would include support in the event of backup problems, brownouts and data wipes.
Google TiSP Google TiSP FAQ Installation page Press Release page Not found page – April Fools' version 2008 Blogger "Google Weblogs (beta)" The Blogger dashboard featured an announcement for Google Weblogs, or "GWeblogs," or "Gblogs," the next revolution in personal publishing. Features include algorithms putting the user's best content at the top of the user's blog (rather than publishing by reverse chronology), automatically populating the blog's sidebar with the most relevant content, posting directly into Google search results for maximum visibility, blog headers refreshed with images from Google's team of artists for anniversaries of a scientific achievement (similar to Google Doodle), and automatic content generation ('Unsure of what to post about? Just click "I'm Feeling Lucky" and we'll "take care" of the rest!'). The announcement was followed by a link to a video tour of the product, which actually led to Tay Zonday's cover of Rick Astley's "Never Gonna Give You Up." Blogger Buzz: The Official Buzz from Blogger at Google: Announcing Google Weblogs (beta) Dajare Google Japan launches Dajare, with the mission of "organizing the world's laughter." gDay Google announced gDay in Australia, a new beta search technology that would search web pages 24 hours before they are created. The name is a play on the phrase "g'day". gDay Gmail Custom Time Gmail's sign-in page and a banner at the top of each Gmail inbox announced a new feature, called Gmail Custom Time, that would allow its users to "pre-date" their messages and choose to have the message appear as "read" or "unread". The new feature used the slogan "Be on time. Every time." Around 7:00 pm EST on March 31, 2008, in both the newer and older versions of Gmail, but not in the basic HTML version, a link appeared in the upper right corner, next to Settings, labeled "New! Gmail Custom Time". The link led to a 404 error until April 1, when it led to the full Gmail Custom Time hoax page. Clicking any of the three links at the bottom of the page brought the user to a page stating that Gmail Custom Time was, in fact, their April Fools' Day joke. Google Book Search Scratch and Sniff Google Book Search has a new section allowing users to "scratch and sniff" certain books. Users are asked to "please place your nose near the monitor and click 'Go'", which then "loads odors". When clicking on "Help", users are redirected to a page in a book that describes the origins of April Fools' Day. Inside Google Book Search Blog: "Google Book Search now smells better" Google Calendar is Feeling Lucky Google added the "I'm Feeling Lucky" button to its calendar feature. When a user tried to create a new event, the user was given the regular option of entering the correct details and hitting "Create Event", and also the new option of "I'm Feeling Lucky", which would set the user up with an evening date with, among others, Matt Damon, Eric Cartman, Tom Cruise, Jessica Alba, Pamela Anderson, Paris Hilton, Angelina Jolie, Britney Spears, Anna Kournikova, Johnny Depp, George W. Bush, or Lois Griffin. Google Manpower Search Google launched Manpower Search (谷歌人肉搜索) in China (google.cn). The feature was presented as being powered by 25 million volunteers who conducted searches around the clock. When the user entered a keyword, volunteers would search for any possible answers in a mass of paper documents as well as online resources. The user was expected to get the search result within 32 seconds. The "search" button would avoid the user's cursor.
Google Saturi Translate Google Korea announced that 'Google Saturi (사투리, Korean dialect) Translate' had been opened on April 1, 2008. When the user tried to use this translator, a message appeared, explaining that it was an April Fools' Day event and was not executable. Google Talk Google announced plans to shorten, on April 22, 2008 (Earth Day), all conversations over Google Talk, thereby reducing the energy required to transmit chats in an effort to reduce carbon output. Google Talk Goes Green Google Wake Up Kit Google launched their "Wake Up Kit" as a calendar notification option. The 'wake up' notification would use several progressively more annoying alerts to wake one up. First it would send an SMS message to the user's phone. If that failed, more coercive means would be used. The kit includes an industrial-sized bucket designed to be connected to the water main for automatic filling, and a bed-flipping device for forceful removal from one's sleeping quarters. Virgle Google announced a joint project with the Virgin Group to establish a permanent human settlement on Mars, named Project Virgle. The announcement included videos on YouTube of Richard Branson (founder of Virgin Group) as well as Larry Page and Sergey Brin (the founders of Google) talking about Virgle. An "application" to join the settlement included questions such as: I am a world-class expert in: After the user submitted the application, the site notified the user either that they were not fit for space, or that their application was fine and "all you have to do is submit your video" [as a response to their video on YouTube]. As a result, an open source Virgle group, OpenVirgle, was established. On the FAQ page, the final question is "Okay, come on – seriously. Is this Virgle thing for real?" The reply links to a page that tells the user it's an April Fools' joke, and then mentions that the user "Dragged us out of our lovely little fantasy world, to crush all our hopes and dreams." Virgle Application Page – Virgle: The Adventure of Many Lifetimes Yogurt Google's Orkut displayed its name as "yogurt". YouTube On April 1, 2008, all featured videos on all international homepages (starting with the British and Australian homepages) of YouTube were redirected to Rick Astley's "Never Gonna Give You Up", causing all users of the website who clicked on featured videos to be rickrolled. This was the first year YouTube participated in Google's April Fools' Day tradition. 2009 Google runs on Microsoft Windows IIS/3.0 google.com.au reported itself as running on IIS/3.0 and google.com on Apache/0.8.4 (on Linux). CADIE The announcement of CADIE, meaning "Cognitive Autoheuristic Distributed-Intelligence Entity", was made by the CADIE Team on 31 March 2009 at 11:59 pm, not on April 1; the corresponding announcement on the Google blog was timestamped 2009/04/01 12:01:00 am. The introduction page and all of the references to CADIE in Google's products were taken down on April 2, replaced with a message stating, "We apologize for the recent disruption(s) to our service(s). Please stand by while order is being restored." However, the technology page describing the technical capabilities of the software remained at: Technical Description When using Google Books or Gmail, a user would come across an announcement dated March 31, 2009, at 11:59:59, declaring a new "Cognitive Autoheuristic Distributed-Intelligence Entity".
CADIE is also mentioned on the gBall FAQ page: "Google's new CADIE technology will interpret the data obtained from each ball to provide useful tips to owners". There was also a link on Google's homepage for CADIE, and a blog entry in Google's official blog. CADIE technology was also used to generate "senryu" (a type of Japanese poem similar to haiku) based on search terms for certain Japanese queries. The Google Search homepage had a link to the CADIE announcement, stating that "For several years now a small research group has been working on some challenging problems in the areas of neural networking, natural language and autonomous problem-solving. Last fall this group achieved a significant breakthrough: a powerful new technique for solving reinforcement learning problems, resulting in the first functional global-scale neuro-evolutionary learning cluster." The page links to the blog below. On mobile devices, a link appeared for Brain Search, which uses CADIE technology to "index your brain". Gmail Users of the Gmail service would notice a new option, named "Gmail Autopilot", in which the service would analyze an email. The page's FAQ section says, "You can adjust tone, typo propensity, and preferred punctuation from the Autopilot tab under Settings." However, a person who logged into their Gmail account and went to the Settings tab would notice that there is no Autopilot tab. The program could supposedly be customized to produce certain types of grammatical or spelling errors, as well as a given complexity and length of sentence. It also had a way of responding to relationship-related messages: if someone spoke aggressively, even in a humorous way, the system would "terminate relationship." gBall Google Australia announced ("New! Get the newest football technology – gBall.") that it was developing a prototype ball with GPS for use in the Australian Football League, billed as a ball that would change how Australian football is played the world over. Apparently, the ball would measure the location, force, and torque of a kick, and "vibrate if player agents or talent scouts want to speak to you". Google claimed that the ball would cost $10, with a cost-per-kick set of payments in addition to the basic fee. Google Analytics A blog post to the Google Analytics Blog investigated the analytics reports of CADIE's activities. Google Maps Google Maps featured a "CADIE's recommended places for humans" link, which led to the "Panda Mapplet" and included several marked locations that, when clicked, displayed a photo and humorous commentary from CADIE. Under Redmond, Washington, a link was listed which would rickroll the viewer. Blogger CADIE's personal blog/homepage Google Chrome with 3D A version of Google Chrome was offered rendering web pages in anaglyph 3D, "powered" by CADIE. A 3D effect was actually possible with this browser, but it only made the window appear to be sunken into the monitor.
Introducing Google Chrome with 3D Google Earth Powered by CADIE Google announced a new Google Earth powered by CADIE, which claimed to allow the user to see ocean terrain imagery from the world's most advanced submarine, explore the deep sea, soar with CADIE in real time, view CADIE's Recommended Summer Vacation, and chat with CADIE, among other options. Google Code The Google Code Search homepage featured LOLCODE examples. CADIE was set to write code by itself based on specified features; however, all that was returned was bad code or witty criticism of the user's request and choice of programming language, recommending the use of INTERCAL. CADIE's source code was supposedly uploaded to Google Code, but she changed her mind and replaced it with a "fun program" consisting of 31 lines of INTERCAL. When executed, this program prints out the message "I do not feel like sharing." Google Book Search CADIE recommended some books on the Google Book Search homepage. Also, when viewing a book, there was a "Generate book report" button. When clicked, it said "Gotcha! It's April Fools' Day! Sorry, but you'll have to actually read the book yourself." Google Docs on Demand Google announced new Google Docs features enhanced by CADIE: adding subliminal messages and images to documents. A person who made a new presentation and looked for the subliminal message and image buttons under the Insert menu would notice they were not there. Google Mobile Google Mobile had a link to "Brain Search". The instructions were to "Put phone to forehead for brain indexing" and "Think your query". When the user clicked "Try Now", a page loaded with "Brain indexing" status. When indexing was complete, a button came up with "search me". By clicking this button, the user was directed to fake search results. There were several possible results: What's the name of that woman by the window? She's my boss's boss, but, oh man, is it Suzanne? Susan? Blanche? Should I order the pizza? I don't remember if it makes me gassy. Wow, cute guy. Should I go up to him? Why is everyone looking at me so strangely? When is Mom's birthday? I should send her a card. Google Knol Knol was updated so that all of the featured articles were about artificial intelligence, with a message from CADIE indicating that this "improvement" was for the good of mankind. HTTP Headers In keeping with the CADIE theme, Google altered the server HTTP header to contain the names of various AI entities, including HAL 9000, WOPR, and GLaDOS. Other server HTTP headers found were IIS/Bob (a reference to Microsoft Bob), IIS/Clippy (a reference to Clippy), IIS/3.0, Netscape iPlanet, Chrome/3.0, Google Operating System (BETA), CERN/3.0 (a reference to CERN HTTPd), Apple (a reference to Apple II), IRIX, MCP, Apache/0.8.4, Conficker, and Skynet. Oil Tanker Data Center During the last minutes of Google's Data Center Efficiency Summit, Urs Hoelzle presented a "special topic": Google had bought an oil tanker, the "M/S Sergey", where Google's data center containers were being submerged in oil tanks to enable extremely high-efficiency cooling. The presentation included slightly customized Wikipedia images from the article Oil tanker, including a retouched photo of the commercial oil tanker AbQaiq and the oil tanker side-view graphic. Even though Google did apply for a US patent to build data centers on cargo ships, and oil cooling is an existing technology, summit attendee James Hamilton believed this topic to be an April Fools' joke.
The ship's name "M/S Sergey" is also likely a pun on Google co-founder Sergey Brin. YouTube upside down On April Fools' Day 2009, watch pages of YouTube appeared upside down. 2010 Google and Topeka, Kansas, Switch Places In early March, the city of Topeka, Kansas, temporarily changed its name to Google in an attempt to capture a spot in Google's new broadband/fiber-optics project. Then, on April 1 (April Fools' Day), Google jokingly announced that it would be changing its name to Topeka, to "honor that moving gesture", and changed its home page to say Topeka in place of the Google logo. Google Books available in Anachrome 3D Google Books introduced a feature which allowed any book to be read in 3D, assuming the viewer had appropriate glasses. It was enabled by clicking the "View in 3D" button in the menu bar above the book. This feature was removed after April 1, but on June 29, 2010, Google announced its restoration. Google also released the latest form of 3D glasses, similar to the pairs one would use today when seeing a film. Store anything on Google Docs Google announced that Google Docs would have the capacity to upload anything, including physical objects like keys and remote controls. The site declared that one could use this to find items like keys using CTRL-F and send objects around the globe by "uploading" and "downloading" them, at the low price of $0.10 per kg. Search results generated in different units Google's search results page displayed the time taken to load the results in units other than seconds. Several of these are pop culture references, as with 1.21 gigawatts, while others refer to slang: at warp X.XX 0.XX centibeats 0.XX centons X.XXe-15 0.0X femtogalactic years 1.21 gigawatts X.XX hertz XX.XX jiffies 0.XX microfortnights 0.XX microweeks 0.XX nanocenturies 11.90 parsecs 0.XXe+43 Planck times 23.00 skidoo 2.00 shakes of a lamb's tail 0.XX times the velocity of an unladen female swallow dhaka time YouTube ASCII video filter The logo of YouTube was overlaid with ASCII text repeating the character "1". The YouTube logo was a reference to some videos having a new quality setting, namely "TEXTp". According to a notice underneath the videos, viewing a video with this quality setting enabled allowed YouTube to save one US dollar ($1) per second on bandwidth costs. The notice also remarked on the source of this new "feature", wishing the reader a happy April Fools' Day. However, in accordance with the announcement, the video quality on many videos could indeed be set to 'TEXTp', and video output was rendered through an ASCII filter. This feature was removed on April 2, 2010. Animal Translator BETA Google placed a link on the main page advertising a new Google Animal Translator service to add to their Language Translator service. Clicking the link took the user to a page advertising an Android app for the translator, with the tagline "Bridging the gap between animals and humans". Google Translate for Animals Once the app was installed on an Android phone, it provided some amusing translations depending on the animal selected. Standard Voicemail Mode for Google Voice Google placed a "New! Standard Voicemail Mode" link on the Google Voice main page. Evil Bit Google added an "evil bit" to their AJAX APIs, to aid in generating an appropriate response to nefarious deeds. If an evildoer was "detected", the code returned with, among other things, "For Great Justice", a quotation from the video game Zero Wing.
Conversely, setting the evil bit to 'false' would return the Google Search results for 'April Fools' encoded in JSON. Wave Wave Notifications Google Wave could be set to notify the user of a change to a Google Wave by having a human being wave at the user. The user could also select the volume of the human notifier from a list of silent, medium, loud and vibrate, and could select which human notifier they wanted, including Ashton Kutcher, Dr. Wave, Grandma, Werner Heisenberg, and Puppy. Clicking on any of the links on the new notifications page redirected the user to a Google help page, alerting them that it was an April Fools' joke, but also that email notifications are possible. Google Annotations Gallery The Google Annotations Gallery ("GAG") was billed as "an exciting new Java open source library that provides a rich set of annotations for developers to express themselves". Disemvoweling on Gmail The English-language home page of Gmail, including its logo, was disemvowelled. A post on the Gmail blog was created to address the issue, claiming that a server error had first made the data centers fail to render the vowel 'a' and then all vowels, and that they were working on the problem. They also claimed to be investigating whether the letter 'y' was impacted. Chrome Sounds (Google Chrome Extension) Google created a new extension, Chrome Sounds, after "months deep in psychoacoustic models, the Whittaker-Nyquist-Kotelnikov-Shannon sampling theorem, Franssen effects, Shepard-Risset Tones, and 11.1 surround sound research". The extension provided audio for actions performed within the Google Chrome web browser, with additional sounds on different countries' localized Google pages. The full list of sounds the extension made could be found by going to the Chrome Tools menu, choosing Extensions, turning on developer mode, and viewing the source of the extension. Google Analytics Goes Back to Hits Google decided that hits really are the only metric for tracking web site usage. Life-sized Picasa Google offered an option which allowed the user to print life-size cardboard cutouts of all of their photos. ReaderAdvantage Program Google announced a reward program for Google Reader, known as ReaderAdvantage, in which it would assign points to users depending on the number of items read on Google Reader. The rewards were different badges. Wingdings in AdSense Wingdings was announced as a new font option for AdSense users. 2011 YouTube A button was added to the video player which, when clicked, would apply a video filter to the video and replace the audio with a recording of Rhapsody Rag, a piece typically played as background music to silent movies in 1911. If subtitles were enabled when watching the video, intertitles would be displayed containing the dialogue. The upload page also featured an option to "send a horse-drawn carriage to me to pick [the video] up". In addition, a few videos were made parodying several viral videos, such as the "Flugelhorn Feline". Gmail Motion A body-gesture-oriented way to send and view mail through Gmail. The "How it Works" section reads: "Gmail Motion uses your computer's built-in webcam and Google's patented spatial tracking technology to detect your movements and translate them into meaningful characters and commands. Movements are designed to be simple and intuitive for people of all skill levels."
An overview video presented by Gmail product manager Paul McDonald explains Gmail Motion's "language of movements that replaces type entirely" while a mime artist performs the full-body Gmail actions. Upon clicking the "Try Gmail Motion" button, the page explains the prank to the user, saying, "Gmail Motion doesn't actually exist. At least not yet..." The page also offers a preview of the features of Google Docs Motion. Gmail Motion Google Docs Motion Google Docs Motion Using Gmail Motion's technology, Google promoted the beta version of Google Docs Motion, which "will introduce a new way to collaborate – using your body" in its Documents, Spreadsheets, Presentations, Drawings, and Document List tools. Autocompleter Job A YouTube video was posted by Google showing a "Google Autocompleter" employee explaining the job. Also, a job opening was featured for an "Autocompleter". Clicking on the "Add to job cart" or "View cart" links led to a Google search for "google April Fools' Day pranks". Autocompleter Job Chromercise Google Chrome launched a new website called "Chromercise", which aimed to increase the strength and dexterity of people's hands while browsing the web faster, also allowing their hands to fit "into sleeker, sexier gloves". On the website, they also gave away free Google Chrome finger sweatbands for a limited time. Japan Due to the large-scale devastation from the 2011 Tōhoku earthquake and tsunami, in lieu of a traditional April Fools' hoax, Google Japan featured many never-before-featured drawings from its 2009 Google Doodle competition, themed "What I Love About Japan" and drawn by Japanese schoolchildren, saying "We promised that only the top prize winners would be featured on Google, but as this is the only day where lies are forgiven, we have obtained the other children's understanding." As a small concession to the usual festivities, the Google Blog mentioned, "This year's April Fools' joke has been postponed until next year. Next year's April Fools' joke has been postponed until the year following that." Google Teleport Google teleport was presented as a service that allows users to travel through time and space in first person. The site is written in Simplified Chinese. Search Searching for "helvetica", "comic sans", or "comic sans ms" temporarily changed the entire webpage's font to Comic Sans. Comic Sans for Everyone Announcement that Comic Sans would become the default font for all Google products. Google also created a Google Chrome extension which changes the font to Comic Sans on all webpages. Google Cow The Google Body homepage appeared as Google Cow, where a cow's body could be examined in 3D. A toggle button switched to human models. Google Maps Google Maps displayed a dragon in Germany's biggest forest, the Pfälzer Wald, as well as a shark in the Netherlands' lake IJsselmeer, east of Amsterdam. When viewed in Earth mode or Google Earth, these could be rendered in 3D. There was also a narwhal in the Thames in London, outside Millbank Tower, and the Loch Ness Monster made an appearance in Loch Ness. A giant red lobster sat atop the Zakim Bridge in Boston, and a pink elephant appeared at "Amphitheatre Parkway, Mountain View, CA". Google Translate for Animals Google UK reportedly offered a version of Google Translate which could be used to talk with animals. AdWords AdWords announced a new format, Google Blimp Ads, that would be flying over major cities starting in May.
Google I/O The announced sessions for the Google I/O conference for software developers were changed to include talks featuring technologies from the late 1990s. Contoso has gone Google On the Google Enterprise Blog, Google announced that Contoso (a fictional company used by Microsoft in its product documentation materials) had switched from Microsoft Office and Microsoft Exchange to Google Apps. The post included references to 2007's TiSP and 2011's Gmail Motion jokes. Meow Me Now On the Google Mobile Blog, Google announced a new mobile-based search option for Android and iOS devices which locates kittens near the user's current location. Blogger The blogging service Blogger announced that it was being acquired by Google, even though it had been part of Google since 2003. 2012 Google Maps 8-bit for NES Google partnered with Square Enix and announced an "NES version" of its Google Maps service, to be released "as soon as possible". It would come in NES and Famicom versions (the Famicom version would feature voice input using the second controller's microphone). In the meantime, Google added a "Quest" layer to the Maps website, which featured 8-bit tile-based graphics and sprites on landmarks, made both by Google and by Square Enix (using the Dragon Quest game series' graphics). Improved Japanese Input System Google proposed an improved keyboard building on its experience with the Japanese input system from 2010. The YouTube Collection YouTube added a small disc on the right side of the YouTube logo which, when clicked, led to a page about a service called "The YouTube Collection". It claimed to be an at-home experience of YouTube and made everything from videos to comments physical, including a postal-mail commenting service. At the bottom of the website was a fake shipping form which, after being filled in, said "Your order has been placed. Due to heavy demand, your anticipated delivery date is: JUNE 16, 2045", and in small grey text at the bottom, "Also, April Fools'." Google Street Roo Google announced it would deploy a "roo force" of more than 1,000 big red kangaroos to capture up to 98% of the Australian bush within the next three years. Underwater Image Search An underwater image search experience developed by Google China. Google Weather Control Google added weather control to its weather search. Chrome Multitask Mode Chrome Multitask Mode made it possible to browse the web with two or more mice at the same time. Clicking the "Try Multitask Mode" button initially created one fake mouse that moved around the screen, over time added several more, and at one point a giant cursor even appeared. Clicking the "Exit Multitask Mode" button showed an April Fools' message. Elegantizr Google introduced the Elegantizr framework. To use it, one just needed to insert the following line of HTML: <link rel="stylesheet" href="https://www.google.com/landing/elegantizr/elegantizr.css" /> Upon insertion, all text begins with APRIL FOOL and an emoticon before moving on to the regular text. Piano & Guitar Analytics Playback Google Analytics allowed the user to play back their website statistics on piano and guitar. Google Racing Google announced a partnership with NASCAR to help create self-driving vehicles to compete in stock car racing. The "I'm Feeling Lucky" button on Google's site was also changed to "I'm Steering Lucky." Gmail Tap Gmail Tap for Android and iOS claimed to double typing speed with a revolutionary new keyboard.
The system involved a keyboard with three keys: Morse code "dash" and "dot", and a spacebar (along with backspace). Shortly before midnight on March 31, 2012, Google added Gmail Tap, an Android and iOS application utilizing Morse code instead of an onscreen keyboard. Selecting "Download App for Your Phone" produced the message: "Oops! Gmail Tap is a bit too popular right now. We suggest you try downloading it again on April 2nd." Clicking the Retry button produced "It's still April 1st, 2012. You'll have to wait till April 2nd to download Gmail Tap." After clicking the retry button again, the page said "Still trying to download Gmail Tap? Check back next April 1st to see if it is available...you never know." On Gmail's Facebook page, they also posted about a Morse keyboard. Finally, at Google I/O 2018, Google announced that it would be adding Morse code input to its mobile keyboard, unveiling the new feature after showing a video of Tania Finlayson. Really Advanced Search A link on the bottom of search results pages titled "Really Advanced Search" took users to a search page where they could filter their search results by, among other things, subtext or innuendo, page font (Comic Sans or Wingdings), loanword origin, or future modification date. Clicking the "Advanced Search" button to actually run the search query redirected users to search results for "April Fools'". Click-to-Teleport Extensions Click-to-Teleport extensions would allow potential customers to instantly teleport to the business location directly from a search ad in a matter of seconds. This teleportation technology would shorten the "online-to-store" conversion funnel by providing searchers with an easy way to visit any business and convert; on average, advertisers using Click-to-Teleport extensions were said to have seen their offline sales increase by 3600%. GoRo Solving the increasingly frustrating problem of accessing mobile internet on rotary phones across the US, Google announced GoRo, which aimed to fix the problem that 100% of people using rotary phones have trouble accessing a website. Jargon-Bot for Google Apps Jargon-Bot instantly recognizes business terms and provides real-time, in-product jargon translation into plain English. Google TV Click An innovative remote control application for phone and tablet that lets users interact with shows and movies as they are playing. Google Voice for Pets Google introduced special Voice Communication Collars that fit around animal necks and use a series of sensors to record audio directly from animal vocal cords. Using a WiFi network, audio messages are uploaded to Google Voice within seconds. Alternatively, a tiny micro-LED emitter built into the collar can project a keyboard onto the floor, so the pet can tap its front paws to send text messages. To understand animal language, Google took its voicemail transcription engine and combined it with millions of adorable pet videos from the Internet, training it to translate cat meows or dog growls into English. $1 Google Offer for Parking Karma A Google Offer for unlimited good parking karma at $1 takes the stress and guesswork out of finding a good spot by providing the following service: prime spots when you need them, repels parking tickets, includes a one-space buffer on each side, shopping cart protection plan, no parallel parking for the first 6 months.
Canine Staffing Team Google revealed that dogs at Google offices go through the same detailed recruitment and hiring process by the Canine Staffing Team as human Googlers do before being welcomed to the Googleplex. Analytics Interplanetary Reports While currently users can only get a partial picture of website visitor location, Google Analytics announced it was expanding beyond Earth with new Analytics Interplanetary Reports to help users understand visitor activities from neighboring stars and planets. Users would also be able to drill down on each planet to see greater detail, e.g. which colony or outpost visitors came from, similar to the city drill-down available for Earth today. "Did you mean: Beyonce" and Kanye West in the Play Music Store A Kanye West bugdroid appeared in the Play Music Store, and no matter what the user searched for, "Did you mean: Beyonce" came up every time. Google Edible Fiber Google released a video on YouTube claiming it had invented an edible fiber which could "take feedback from the body, determine which nutrients are needed and target delivery to the specific organs that need those nutrients". The video actually linked to Google Fiber, a broadband internet service by Google. 2013 YouTube contest for the best video In YouTube's sixth April Fools' prank, YouTube joined forces with The Onion, a newspaper satire company, by claiming that it would "no longer accept new entries". YouTube would begin the process of selecting a winner on April 1, 2013, delete everything else, and go back online in 2023 to post the winning video and nothing else. On April 1, 2013, YouTube also briefly repeated the "YouTube Collection" joke from April 1, 2012. They also broadcast a live ceremony in which two "submission coordinators" continuously read off the titles and descriptions of random videos (the "nominees") for twelve straight hours, claiming they would hold the same ceremony every day for the next two years. Treasure Hunt on Google Maps Google Maps allowed the user to start a treasure hunt by selecting the "Treasure" view from the top right. Google Maps noted that the "system may not be able to display at higher resolutions than paper print" and that the user should "take care when unfolding the map to avoid ripping it." Also, the user was warned to 'beaware [sic] of pirates'. In reference to the TV show Portlandia, an image of a bird was placed on Portland, Oregon. While in this mode, Pegman was replaced with a telescope, giving the effect of looking through an old telescope when using Street View. Explore Treasure Mode with Google Maps Improved Google Play Developer Console The addition of an "Add new awesome application" button. Google Japanese Input Patapata Version Google introduced a new Japanese input system in which users repeatedly tap a single button to cycle through different letters; a brief pause confirms the current letter and advances the cursor to begin entering the next one. The name "Patapata" likely references the Japanese word for a split-flap display, onomatopoeically dubbed "patapata-shiki" for its distinctive fluttering sound when updating. Another possible explanation is the video game Patapon, in which "Pata" is one of the sounds made with a drum. Gmail Blue Google announced "Gmail Blue", a version of Gmail in which everything is the color blue. Coincidentally, Google would eventually go on to release Inbox by Gmail, which featured a similar interface to Gmail, only blue.
Google SCHMICK (Simple Complete House Makeover Internet Conversion Kit) Google SCHMICK allowed users to redesign their house as seen on Street View, so that they could "fly the Australian flag" outside their house. Google Fiber Poles Google Fiber to the Pole was said to provide ubiquitous gigabit connectivity to fiberhoods across Kansas City. This latest innovation in Google Fiber technology would enable users to access Google Fiber's ultra-fast gigabit speeds even when out and about. Google Wallet Mobile ATM Google announced the release of the Google Wallet Mobile ATM. The mobile ATM device easily attaches to most smartphones and dispenses money instantly and effortlessly, forever ending the user's search for the nearest bank or ATM. The Google Wallet Mobile ATM technology allows the user to enter the amount of money they want to withdraw directly on the phone or to use a voice-activated dispenser. Unlike traditional ATMs, the Google Wallet Mobile ATM even dispenses rare two and fifty dollar bills, as well as more practical one dollar bills. Levity Algorithm in Google Apps Google introduced the Levity Algorithm in Google Apps to help users spice up even the most boring of work days. Updated Export and Send-To features on Google Analytics Google updated the Export and Send-To features for Google Analytics to give users even more options and support "some of our favorite legacy technology": 3.5" floppy, CD-ROM, papyrus, sticky note, carrier pigeon, fax, telegram, telegraph. Self-Writing Code Program Google claimed to have developed a self-writing code program. Now that Google engineers were not spending their time at the desk programming, they would have plenty of time to collaborate with teammates, attend talks and events on campus, go for a workout at the gym or try out a new cafe; Google always encourages employees to have a full life outside of the office, and now Google employees would have tremendous work-life balance. Google Search Cold Trends "Cold searches" surfaced the least-searched topics on Google, billed as the way to discover new, unique things that nobody else is into. Google Nose Google announced a new "Google Nose" feature, which added scents to items in the Google Knowledge Graph. Users could click a "Smell" button on select items to experience scents directly through their existing desktop computer, laptop, or mobile device. 2014 Software Dogengineer Google created an entry on its careers page looking for a dogengineer. Google Maps Pokémon Challenge Google joined forces with The Pokémon Company, Game Freak, and Nintendo to develop a new Google Maps app for iOS and Android, which allowed users to capture Pokémon while exploring the real world using Google Maps. The concept of the app would later be refined and released as Pokémon Go in 2016. Gmail Selfie Based on the popularity of adding pictures of oneself as a Gmail custom theme, Google launched a feature to share that custom theme (of one's self) with friends. Nest + Virgin After acquiring Nest Labs in early 2014, Google teamed up with Virgin Airlines on the latest in-flight feature: passengers on Virgin aircraft would have the ability to change their personal temperature on the plane using the latest Total Temperature Control. Google Japanese Input: Magic Hand Version There are many problems with inputting Japanese on a mobile device using one's finger, so Google introduced the "Magic Hand" to solve them. Emojify the Web Google Translate support for emoji was built directly into Chrome for Android and iOS.
One could now read all their favorite Web content "using efficient and emotive illustrations, instead of cumbersome text." Google's translation algorithm would interpret not just the definition of the words on a webpage, but also their context, tone, and sometimes even facial expression in order to convert them into symbols. "Not only does this pictorial and theatrical language allow us to communicate complex emotions, it's also far more compact. One Emoji symbol can easily replace dozens of characters, improving efficiency and comprehension on the go. It turns out the best way to communicate in the future is to look to the past: the ancient Egyptians were really onto something with their hieroglyphs." Auto-Awesome Photobombs with David Hasselhoff Google announced on the Official Google Blog that it would randomly insert David Hasselhoff into Google+ photos via the Auto-Awesome feature. WazeDates "WazeDates" uses the same crowdsourcing technology designed to help drivers around the world outsmart traffic, while creating a new space for people to meet and fall in love. Upcoming Viral Video Trends YouTube announced that it writes, shoots, and uploads all of the world's most popular viral videos, and that this year it was accepting viral video ideas from YouTube users. AutoAwesome for Resumes Google announced that it was rolling out special effects for resumes on Google Drive. Qwerty Cats Chrome Extension The Chromium team released a QWERTY virtual keyboard for cats on the Chrome Web Store. Coffee to the Home Google Fiber launched a Coffee to the Home (CTTH) program for Kansas City residents, delivering made-to-order coffee drinks straight to users at fiber speeds, through the same fiber jack that delivers 100-times-faster Internet. AdBirds The Google AdWords team released AdBirds, a new way to show ads. The user had six birds (Sparrow, Duck, Owl, Pigeon, Eagle and Penguin) to choose from and could add in a little bit of text before setting the bird free into the world for everyone to see their ad. Google Apps for Business Dogs Google announced that it was launching a suite of features to make Google Apps more useful for dogs in the workplace. Features included Dmail with translation, Hangouts with Bark Enhancement, and paw recognition technology. Google Analytics Academy: Data-less Decision Making Google announced a web course on how to "make uninformed business decisions on a whim by following gut instincts and applying simple guesswork techniques." Helpouts by Google: Helpouts from a Pirate Scowlin' Guideon Scabb the Beardless helps one hone their pirate vocabulary one-on-one over live video. AdSense on planets and the Moon Interplanetary IP addresses were now said to be interpreted: "With our recent discovery of the interplanetary IP address repository, you'll have access to even more reports that can help you improve user engagement on your site. For example, if you notice a lot of traffic coming from Mars, try adding more pages in Martian to engage with those audiences." Google Play Signature Edition Signature Apps would let developers ship their work directly to customers on a thumbdrive inside a special package ready for unboxing, preferably "using natural sources of locomotion such as biking and walking" to reduce the environmental impact. The dev console included settings for shipping apps, an explanation of the value add, and a reminder to sign apps on a piece of paper or electronically to give them more authenticity. Unfortunately, hitting the Save button did not work.
Chromecast for squirrels Google said it was working with "developers of 'paw-friendly' apps to build Chromecast support into more of the apps and websites both humans and squirrels love." 2015 Pac-Maps Google added a "Pac-Man View" to Google Maps, allowing users to play Pac-Man along real-world streets. The bell and key were replaced by the map marker and the Street View "pegman" respectively. Created by John Tantalo, a software engineer at Google, and his wife Mary Radcliffe, an assistant professor of mathematics at the University of Washington, Pac-Maps remained available for about ten days. Ingress Pacman Niantic Labs, a startup internal to Google, added Pacman to the Ingress scanner. #ChromeSelfie Google added a "Share a reaction" button to the Chrome mobile app menu, which let the user take a half-selfie, half-screenshot picture of the currently viewed site and then share it, offering the hashtag #ChromeSelfie. Smartbox by Inbox by Gmail by Google Google announced a Smart Mailbox for a user's physical mail, with auto-sorting folders, push notifications, temperature control, spam protection and more. com.google Google launched com.google, a version of Google Search in which the site is reflected horizontally. This was the company's first usage of the .google top-level domain. The site is no longer active. Google Fiber Dial Up Mode Google Fiber launched a dial-up mode which slows a user's life down, "to pause and take care of the little things". Darude – Sandstorm Many song-related searches on YouTube and Twitter suggested "Sandstorm" by Darude, and a dedicated button was added to videos playing the song. This joke was a reference to the Internet phenomenon associated with the song. Google Panda A product manager for Google Search launched Google Panda, a panda plush toy aimed to "change the face" of Google Search. State-of-the-art emotional and conversational intelligence would allow the panda to respond to its human and answer any question, just as a user would on Google Search or Google Now using the voice search feature. Equator Slipping: Australia to become Northern Hemisphere Google Maps engineers from the Google Sydney office "discovered" that the Earth's equator was slipping south at a rate of 25 km per year. This was backed by evidence from Veritasium's Derek Muller, measuring the movements of the Milankovitch cycles, which predicted that "the northernmost point of Australia, Cape York could enter the Northern Hemisphere as soon as 2055." Google Actual Cloud Platform The Google Actual Cloud Platform was billed as the world's first public cloud running on servers in the troposphere. Google Keyboardless Keyboard Google Japan announced a keyboard shaped like a party horn that a user blows in order to type. Quantum Code Testing The Google testing blog announced that it had radically simplified software testing by modeling every possible state of a software application using quantum superposition techniques. 2016 Gmail Mic Drop A new feature was added to Gmail called "Mic Drop", which archived the email message as soon as it was sent and inserted a GIF of a Minion from the Despicable Me film series. However, the feature immediately caused a backlash: many people complained about accidentally sending the GIF to business contacts, which resulted in some people being dropped from job consideration or even being fired. Google removed the feature not long after, citing those reasons and a bug that caused the GIF to be sent after hitting the regular send button.
Google Cloud Style Detection API Google Cloud announced a new machine learning API called Style Detection, which allowed automatic identification and categorization of the fashion metadata in a given image. The YouTube video featured several members of the Google Cloud team and was shot in the Spear St. San Francisco office; the announcement conceded, "Obviously, there are still details to iron out." Searchable Socks Google Australia announced a new product called Searchable Socks, a pair of socks which, if lost, could be found using the Google app. When the user tapped the beacon on the Google app, the sock would play the Trololo song. Google Maps Disco Google Maps featured a video with the Pegman from Street View disco dancing. Parachutes by Google Express As stated in the description of the YouTube video Google uploaded promoting this service: "Google Express offers fast delivery of things you need from stores you love. With our new delivery technology, packages will arrive even faster and land anywhere you want them – whether at the beach, in the woods, or even on a run." Google Cardboard Plastic Google announced a transparent plastic version of the Google Cardboard viewer without a smartphone slot, making the user see real life through it instead. YouTube SnoopaVision YouTube launched the SnoopaVision feature, which supposedly allowed users to watch videos in 360 degrees. The feature got its name from Snoop Dogg, who was hired by Google to sponsor the project by appearing in announcements, but ended up being presented as a "true leader" of it. Google Self-Driving Bike Google Netherlands announced the Google Self-Driving Bike, inspired by its self-driving cars. Deputy mayor Kajsa Ollongren of Amsterdam also made an appearance in the video. Physical Flick Japanese Input Google Japan announced that it had been working hard to bring the flick actions of its virtual Japanese input to the real world. Inbox by Gmail Emoji Smart Reply The Gmail team announced it had added "sass" to Inbox by Gmail's smart reply feature, now including emoji in its one-click responses. Interplanetary app publishing In the app publishing process, the "Pricing & Distribution" section contained a blue box entitled "DISTRIBUTE TO THESE PLANETS" containing a list of planets from Mercury through to Pluto. Pluto had been crossed out and a note appended which read "No longer supported." A "Learn more" caption was provided which linked to a blog post by Lily Sheringham. Google X New Chief Compression Officer Google X announced that it had hired Richard Hendricks (from HBO's TV show Silicon Valley) as its Chief Compression Officer, in order to solve compression challenges it was facing. Google Play RealBooks Google announced RealBooks, a new form of ebook for those who miss having physical copies of books. These books were essentially a smartphone with every feature removed except the ability to read a single ebook. The video was removed at a later date for unknown reasons. 2017 Ms. Pac-Maps Google partly revived Pac-Maps, this time letting users play the video game Ms. Pac-Man along the streets of the world; instead of turning the player's current location into the game level, the player was taken to a random spot in the world. The mobile app for Maps also displayed a button to play Ms. Pac-Maps. Google Wind Google Netherlands said that "Holland is one of the greatest countries to live in, but the biggest downside is that it rains 145 days a year".
It also stated that "it uses Machine Learning to recognize cloud patterns and orchestrate the network of windmills when rain is approaching. Test results look very promising.", and that on April 1 it would be able to ensure clear skies for everyone in Holland, presenting the project as an attempt to control the weather locally. Google Japanese Input Puchi Puchi Version A version of Google Japanese Input using bubble wrap. The website shown at the end of the video also featured bubble wrap at the bottom of the page. Haptic Helpers Google claimed that "it takes the virtual reality world to the next level" by supplying the three senses missing from older VR technology: taste, touch, and smell. When one tried to sign up, however, the sign-up button became the words "APRIL FOOLS!" Google Cloud Platform Expands to Mars Google announced the creation of a datacenter on Mars, nicknamed "Ziggy Stardust", which would open in 2018 starting with a new Mars location in Google Cloud Storage. Part of Google's announcement included the ability to walk through the new datacenter in Google Street View. Mobile Accessories for Chromebook Google announced a wide range of accessories for the Chromebook that are only available for mobile phones, such as the "Chromebook Groupie Stick," "Chromebook Cardboard," and "Chromebook Workout Armband." Google Translate for Heptapod B Google announced Heptapod B (the fictional language of "Story of Your Life" and the motion picture based on it) as the 32nd language to be supported in Word Lens. Google Gnome Google announced a new Google Assistant product designed for the yard called "Google Gnome". It had some of Google Home's features, except that it was intended to be used outdoors. According to Google, it could report on the environment and the outdoors. It responded only to voice and was hands-free, and it could also mow the lawn, acting as a lawnmower. The announcement video was edited into countless memes, in a similar fashion to the announcement video of Amazon Echo. Google Now for Dogs & Cats Google announced a new force-touch (3D Touch) action on the Google app for iOS that would open a special experience for cats or dogs. Google Play for Pets Google announced a new Google Play category for pets, with games, apps and training tools to keep a pet stimulated. 2018 Google Cloud Hummus API Google Israel launched a "hummus API" to organize information, even hummus, attempting to store one's favorite type of hummus as information. Gboard Physical Handwriting Board Google Japan, from the Google Japanese Input team, proposed a physical handwritten version of Gboard. The device was developed "to realize intuitive character input". The video also suggested extending the feature beyond keyboards, to devices such as an abacus and even corn. Where's Waldo on Google Maps Five classic Where's Waldo scenes were hidden over Google Maps. Finding Waldo in each scene rewarded the player with a hint as to finding the next one. Completing all the levels unlocked a secret sixth scene on the Moon, which could be accessed by zooming out in Satellite view. Bad Joke Detector Google announced that its file management app Files Go would use a "custom-built deep neural network" to free up storage by deleting bad jokes from the user's device. Googz Google Australia made a redesign of Google for Australian citizens called "Googz". Google asked Aussie designer Jazza to make a convincing video about the new adaptation of the word Googz.
They conducted "surveys" which purported to show that "80%" of Australians commonly refer to Google as Googz. Recrawl Now Google Search Console added a site recrawl feature that instead rickrolled the user. 2019 Sssnakes on a map Google Maps had a feature to play Snake in several cities; during the week of April Fools' Day, this was accessible in the app. Many cities were available, such as Cairo, London, San Francisco, São Paulo, Sydney, and Tokyo, as well as the whole world. There was also a standalone site at snake.googlemaps.com. Google Tulip Google Nederland released a video on YouTube about a new app allowing communication with tulips, by translating the root signals of tulips into spoken words. Google Calendar Google Calendar invited users to clear their schedule, one meeting at a time, with laser-sharp precision, via a game reached by clicking the gear icon and selecting "Play a game" (alternatively, via the deep link https://calendar.google.com/?playagame). Gboard Spoon Bending Google Japan followed up 2018's physical handwriting board with 2019's Spoon Bending version, a special smart spoon that allowed users to type Japanese characters in Gboard by bending it. This invention would let the user type almost effortlessly anytime, anywhere, giving them greater flexibility in their writing. Allegedly, other bending technologies were also in development, such as an "outdoor version" consisting of a fishing rod and a "hands off" version where the spoon would be bent telepathically. Google Assistant If the user said "April Fools" to Google Assistant, it would offer a random April Fools' prank from history. Google Colab "Power Mode - rack up combos and see sparks fly". It introduced a new mode that, when activated, causes sparks to fly out from the cursor when typing, and shows an animated "combo counter". Gmail To commemorate the 15th anniversary of the email client's release, the Gmail logo featured balloons and a party hat on April 1. YouTube After a two-year hiatus, YouTube returned to making April Fools' pranks. This year it ran an ad at the top of the home page for an Aquaman 2 movie, but instead of a trailer for that film, the playable video was the trailer for Shazam! Files App: Screen Cleaner Google released a video about Screen Cleaner with an "Activate" button; when pressed, dirt and stains magically poofed away, and the phone then vibrated, creating a non-stick shield "with a fresh pineapple scent." Cancellation Google canceled its 2020 April Fools' jokes for the first time due to the COVID-19 pandemic, urging employees to contribute to relief efforts instead. Since the cancellation in 2020, Google has not participated in April Fools' Day; in 2020, April 1 was instead marked by the anniversary of Jean Macnamara's birthday. Post-Cancellation On October 1, 2021, Google Japan resumed its annual tradition of creating novelty keyboards, and has subsequently released new novelty keyboards annually on October 1. As the releases occur exactly half a year after April 1, they have been described as an April Fools' Day in October. As with all novelty keyboards produced by the team since 2012, the schematics of these devices are available as open source. 2021: Gboard Yunomi, a keyboard with the form factor of a traditional Japanese teacup, with keycaps representing traditional fish used in the preparation of sushi. 2022: Gboard Bar Version, a single-row keyboard spanning 5.25 feet.
2023: Gboard CAPS, a large key-shaped hat with which one can select a letter by rotating the hat and transmit a keypress by pressing down on it. 2024: Gboard Double-Sided Version, a double-sided keyboard shaped like a Möbius strip. https://www.designboom.com/technology/google-double-sided-twisted-gboard-japan-10-05-2024/ Real April Fools' Day product launches Google has chosen April Fools' Day and the day before it to announce some of its actual products, as a form of viral marketing. Shortly before midnight on March 31, 2004, Google announced the launch of Gmail. However, it was widely believed to be a hoax, since free web-based e-mail with one gigabyte of storage was unheard of at the time. In 2005, Google increased Gmail storage to two gigabytes and released Google Ride Finder. On March 31, 2010, YouTube implemented its new video page design, which had been revealed two months earlier. On April 1, 2010, Google Street View received a new feature to toggle anaglyph 3D images, available by clicking on the icon depicting "pegman" wearing a pair of red/cyan glasses; the icon was present until April 8, when it was removed. Google Japan has open-sourced the "firmware, circuit diagrams, and design drawings" for all of its novelty input devices, beginning with Google Japanese Input Morse version on April 1, 2012, to allow anyone to build their own versions of the devices. On April 1, 2013, Google announced Google+ Emotion, which could "plumb the emotional depths of everyone in the photo, then summarize their feelings with a beautifully crafted, emotion icon". On April 1, 2014, Google announced Shelfies (Shareable Selfies), which allowed one to add pictures of oneself as a Gmail custom theme and share that custom theme with friends. The first version of the Brotli compression format specification was also published. On April 1, 2016, Google introduced a new feature for Google Photos, allowing users to search their photos using emojis. See also Netflix April Fools' Day jokes References April Fools' Day jokes
List of Google April Fools' Day jokes
Technology
13,847
14,708,063
https://en.wikipedia.org/wiki/Enthalpy%E2%80%93entropy%20compensation
In thermodynamics, enthalpy–entropy compensation is a specific example of the compensation effect. The compensation effect refers to the behavior of a series of closely related chemical reactions (e.g., reactants in different solvents or reactants differing only in a single substituent), which exhibit a linear relationship between one of the following kinetic or thermodynamic parameters for describing the reactions: Between the logarithm of the pre-exponential factors (or prefactors) and the activation energies, ln A_i = α + E_i/(Rβ), where the series of closely related reactions are indicated by the index i, A_i are the preexponential factors, E_i are the activation energies, R is the gas constant, and α, β are constants. Between enthalpies and entropies of activation (enthalpy–entropy compensation), ΔH_i‡ = α + β ΔS_i‡, where ΔH_i‡ are the enthalpies of activation and ΔS_i‡ are the entropies of activation. Between the enthalpy and entropy changes of a series of similar reactions (enthalpy–entropy compensation), ΔH_i = α + β ΔS_i, where ΔH_i are the enthalpy changes and ΔS_i are the entropy changes. When the activation energy is varied in the first instance, we may observe a related change in pre-exponential factors. An increase in A_i tends to compensate for an increase in E_i, which is why we call this phenomenon a compensation effect. Similarly, for the second and third instances, in accordance with the Gibbs free energy equation, with which we derive the listed equations, ΔH_i scales proportionately with ΔS_i. The enthalpy and entropy compensate for each other because of their opposite algebraic signs in the Gibbs equation. A correlation between enthalpy and entropy has been observed for a wide variety of reactions. The correlation is significant because, for linear free-energy relationships (LFERs) to hold, one of three conditions for the relationship between enthalpy and entropy for a series of reactions must be met, with the most commonly encountered scenario being the one that describes enthalpy–entropy compensation. The empirical relations above were noticed by several investigators beginning in the 1920s, and the compensatory effects they govern have since been identified under different aliases. Related terms Many of the more popular terms used in discussing the compensation effect are specific to their field or phenomena. In these contexts, the unambiguous terms are preferred. The misapplication of and frequent crosstalk between fields on this matter has, however, often led to the use of inappropriate terms and a confusing picture. For the purposes of this entry, different terms may refer to what may seem to be the same effect, but either a term is being used as a shorthand (isokinetic and isoequilibrium relationships are different, yet are often grouped together synecdochically as isokinetic relationships for the sake of brevity) or it is the correct term in context. This section should aid in resolving any uncertainties. (see Criticism section for more on the variety of terms) compensation effect/rule : umbrella term for the observed linear relationship between: (i) the logarithm of the preexponential factors and the activation energies, (ii) enthalpies and entropies of activation, or (iii) the enthalpy and entropy changes of a series of similar reactions. enthalpy-entropy compensation : the linear relationship between either the enthalpies and entropies of activation or the enthalpy and entropy changes of a series of similar reactions.
isoequilibrium relation (IER), isoequilibrium effect : On a Van 't Hoff plot, there exists a common intersection point describing the thermodynamics of the reactions. At the isoequilibrium temperature β, all the reactions in the series should have the same equilibrium constant (K). isokinetic relation (IKR), isokinetic effect : On an Arrhenius plot, there exists a common intersection point describing the kinetics of the reactions. At the isokinetic temperature β, all the reactions in the series should have the same rate constant (k). isoequilibrium temperature : used for thermodynamic LFERs; refers to β in the equations above, where it possesses dimensions of temperature. isokinetic temperature : used for kinetic LFERs; refers to β in the equations above, where it possesses dimensions of temperature. kinetic compensation : an increase in the preexponential factors tends to compensate for the increase in activation energy: ln A_i = α + E_i/(Rβ). Meyer–Neldel rule (MNR) : primarily used in materials science and condensed matter physics; the MNR is often stated as: the plot of the logarithm of the preexponential factor against activation energy is linear. For a conductivity obeying σ = σ_0 exp(−E_a/(k_B T)), the plot of ln σ_0 against E_a is linear, where σ_0 is the preexponential factor, E_a is the activation energy, σ is the conductivity, k_B is the Boltzmann constant, and T is temperature. Mathematics Enthalpy–entropy compensation as a requirement for LFERs Linear free-energy relationships (LFERs) exist when the relative influence of changing substituents on one reactant is similar to the effect on another reactant, and include linear Hammett plots, Swain–Scott plots, and Brønsted plots. LFERs are not always found to hold, and to see when one can expect them to, we examine the relationship between the free-energy differences for the two reactions under comparison. The extent to which the free energy of the new reaction is changed, via a change in substituent, is proportional to the extent to which the reference reaction was changed by the same substitution. The ratio of the free-energy differences is the reaction quotient or constant Q, so the relationship may be written in terms of differences (δ) in free-energy changes (ΔG): δΔG_new = Q δΔG_ref. Substituting the Gibbs free-energy equation (ΔG = ΔH − TΔS) into the equation above yields a form that makes clear the requirements for LFERs to hold: δΔH_new − TδΔS_new = Q(δΔH_ref − TδΔS_ref). One should expect LFERs to hold if one of three conditions is met: 1. The δΔH's are coincidentally the same for both the new reaction under study and the reference reaction, and the δΔS's are linearly proportional for the two reactions being compared. 2. The δΔS's are coincidentally the same for both the new reaction under study and the reference reaction, and the δΔH's are linearly proportional for the two reactions being compared. 3. The δΔH's and δΔS's are linearly related to each other for both the reference reaction and the new reaction. The third condition describes the enthalpy–entropy effect and is the condition most commonly met. Isokinetic and isoequilibrium temperature For most reactions the activation enthalpy and activation entropy are unknown, but, if these parameters have been measured and a linear relationship is found to exist (meaning an LFER was found to hold), the following equation describes the relationship between ΔH‡ and ΔS‡: ΔH‡ = βΔS‡ + α. Inserting the Gibbs free-energy equation (ΔG‡ = ΔH‡ − TΔS‡) and combining like terms produces the following equation: ΔG‡ = α + (β − T)ΔS‡, where α is constant regardless of substituents and ΔS‡ is different for each substituent. In this form, β has the dimension of temperature and is referred to as the isokinetic (or isoequilibrium) temperature.
Alternately, the isokinetic (or isoequilibrium) temperature may be reached by observing that, if a linear relationship is found, then the difference between the ΔH‡'s for any closely related reactants will be related to the difference between the ΔS‡'s for the same reactants: δΔH‡ = βδΔS‡. Using the Gibbs free-energy equation, δΔG‡ = δΔH‡ − TδΔS‡ = (β − T)δΔS‡. In both forms, it is apparent that the difference in Gibbs free-energies of activation (δΔG‡) will be zero when the temperature is at the isokinetic (or isoequilibrium) temperature β, and hence identical for all members of the reaction set at that temperature. Beginning with the Arrhenius equation and assuming kinetic compensation (obeying ln A_i = α + E_i/(Rβ)), the isokinetic temperature may also be given by T = β, since then ln k_i = ln A_i − E_i/(RT) = α for every member of the series. The reactions will have approximately the same value of their rate constant k at an isokinetic temperature. History In a 1925 paper, F.H. Constable described the linear relationship observed for the reaction parameters of the catalytic dehydrogenation of primary alcohols with copper-chromium oxide. Phenomenon explained The foundations of the compensation effect are still not fully understood, though many theories have been brought forward. Compensation of Arrhenius processes in solid-state materials and devices can be explained quite generally from the statistical physics of aggregating fundamental excitations from the thermal bath to surmount a barrier whose activation energy is significantly larger than the characteristic energy of the excitations used (e.g., optical phonons). To rationalize the occurrences of enthalpy-entropy compensation in protein folding and enzymatic reactions, a Carnot-cycle model in which a micro-phase transition plays a crucial role was proposed. In drug receptor binding, it has been suggested that enthalpy-entropy compensation arises due to an intrinsic property of hydrogen bonds. A mechanical basis for solvent-induced enthalpy-entropy compensation has been put forward and tested at the dilute gas limit. There is some evidence of enthalpy-entropy compensation in biochemical or metabolic networks, particularly in the context of intermediate-free coupled reactions or processes. However, a single general statistical mechanical explanation applicable to all compensated processes has not yet been developed. Criticism Kinetic relations have been observed in many systems and, since their conception, have gone by many terms, among which are the Meyer-Neldel effect or rule, the Barclay-Butler rule, the theta rule, and the Smith-Topley effect. Generally, chemists will talk about the isokinetic relation (IKR), from the importance of the isokinetic (or isoequilibrium) temperature, condensed matter physicists and material scientists use the Meyer-Neldel rule, and biologists will use the compensation effect or rule. An interesting homework problem appears following Chapter 7: Structure-Reactivity Relationships in Kenneth Connors's textbook Chemical Kinetics: The Study of Reaction Rates: From the last four digits of the office telephone numbers of the faculty in your department, systematically construct pairs of "rate constants" as two-digit numbers times 10⁻⁵ s⁻¹ at temperatures 300 K and 315 K (obviously the larger rate constant of each pair to be associated with the higher temperature). Make a two-point Arrhenius plot for each faculty member, evaluating ΔH‡ and ΔS‡. Examine the plot of ΔH‡ against ΔS‡ for evidence of an isokinetic relationship. The existence of any real compensation effect has been widely derided in recent years and attributed to the analysis of interdependent factors and chance.
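Connors's exercise quoted above is easy to reproduce numerically. The following Python sketch is mine, not from the textbook: it stands in for the telephone digits with random two-digit numbers, converts each two-point Arrhenius fit into activation parameters using a conventional transition-state-style relation (ΔH‡ = Ea − RT, with ΔS‡ derived from the prefactor), and typically prints a near-perfect ΔH‡–ΔS‡ correlation even though the input is pure noise. All variable names and the choice of 25 samples are illustrative assumptions.

```python
# Sketch (not from the literature) of Connors's "telephone number" exercise:
# random two-point rate constants still produce an apparently convincing
# enthalpy-entropy correlation.
import numpy as np

R = 8.314                      # gas constant, J/(mol K)
T1, T2 = 300.0, 315.0          # the two temperatures from the exercise
rng = np.random.default_rng(0)

# Random two-digit "rate constants" times 1e-5 s^-1; the larger value of
# each pair is assigned to the higher temperature, as the exercise demands.
pairs = np.sort(rng.integers(10, 100, size=(25, 2)), axis=1) * 1e-5
k1, k2 = pairs[:, 0], pairs[:, 1]

# Two-point Arrhenius fit: ln k = ln A - Ea/(R T)
Ea = R * np.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
lnA = np.log(k1) + Ea / (R * T1)

# Convert to activation parameters (transition-state-style relations):
kB_over_h = 2.0836619e10       # Boltzmann constant / Planck constant, 1/(s K)
dH = Ea - R * T1                               # enthalpy of activation
dS = R * (lnA - np.log(kB_over_h * T1) - 1.0)  # entropy of activation

r = np.corrcoef(dH, dS)[0, 1]
print(f"correlation of dH vs dS from pure noise: r = {r:.3f}")  # typically > 0.95
```

The spurious correlation arises because both dH and dS are computed from the same fitted Ea, exactly the mutual dependence of activation parameters that the criticism below describes.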
Because the physical roots remain to be fully understood, it has been called into question whether compensation is a truly physical phenomenon or a coincidence due to trivial mathematical connections between parameters. The compensation effect has been criticized in other respects, namely for being the result of random experimental and systematic errors producing the appearance of compensation. The principal complaint lodged states that compensation is an artifact of data from a limited temperature range or from a limited range for the free energies. In response to the criticisms, investigators have stressed that compensatory phenomena are real, but appropriate and in-depth data analysis is always needed. The F-test has been used to such an aim: the deviations of points constrained to pass through a common isokinetic temperature are compared with the deviations of the points from the unconstrained line, by comparing the mean deviations of the points in each case. Appropriate statistical tests should be performed as well. W. Linert wrote in a 1983 paper: There are few topics in chemistry in which so many misunderstandings and controversies have arisen as in connection with the so-called isokinetic relationship (IKR) or compensation law. Up to date, a great many chemists appear to be inclined to dismiss the IKR as being accidental. The crucial problem is that the activation parameters are mutually dependent because of their determination from the experimental data. Therefore, it has been stressed repeatedly, the isokinetic plot (i.e., ΔH‡ against ΔS‡) is unfit in principle to substantiate a claim of an isokinetic relationship. At the same time, however, it is a fatal error to dismiss the IKR because of that fallacy. Common among all defenders is the agreement that stringent criteria for the assignment of true compensation effects must be adhered to. References Thermodynamics Chemical thermodynamics
Enthalpy–entropy compensation
Physics,Chemistry,Mathematics
2,496
1,223,804
https://en.wikipedia.org/wiki/Philips%2068070
The SCC68070 is a Philips Semiconductors-branded, Motorola 68000-based 16/32-bit processor produced under license. While marketed externally as a high-performance microcontroller, it has been used almost exclusively in combination with the Philips SCC66470 VSC (Video- and Systems Controller) in the Philips CD-i interactive entertainment product line. Additions to the Motorola 68000 core include: Operation from 4 to 17.5 MHz Inclusion of a minimal, segmented MMU supporting up to 16 MB of memory Built-in DMA controller I²C bus controller UART 16-bit counter/timer unit 2 match/count/capture registers allowing the implementation of a pulse generator, event counter or reference timer Clock generator Differences from the Motorola 68000 core include these: Instruction execution timing is completely different Interrupt handling has been simplified The SCC68070 has MC68010-style bus-error recovery; the two schemes are not compatible, so exception error processing is different. The SCC68070 lacks a dedicated address generation unit (AGU), so operations requiring address calculation run slower due to contention with the shared ALU. This means that most instructions take more cycles to execute than on a 68000, for some instructions significantly more. The MMU is not compatible with the Motorola 68451 or any other "standard" Motorola MMU, so operating system code dealing with memory protection and address translation is not generally portable. Enabling the MMU also costs a wait state on each memory access. While the SCC68070 is mostly binary compatible with the Motorola 68000, there is no equivalent chip in the Motorola 680x0 series. In particular, the SCC68070 is not a follow-on to the Motorola 68060. Even though the SCC68070 is a 32-bit processor internally, it has a 24-bit address bus, giving it a theoretical maximum of 16 MB of RAM. In practice the full 16 MB cannot be used, however, as all of the on-board peripherals are mapped into the same address space. External links xs4all.nl/~ganswijk/chipdir/reg/68070.txt SCC68070 datasheet 68k microprocessors Microcontrollers Freescale Semiconductor 68070
Philips 68070
Technology
468
56,515,003
https://en.wikipedia.org/wiki/Red%20fluorescent%20protein
Red fluorescent protein (RFP) is a protein which acts as a fluorophore, fluorescing red-orange when excited. The original variant occurs naturally in the coral genus Discosoma, and is named DsRed. Several new variants have been developed using directed mutagenesis which fluoresce orange, red, and far-red. Characteristics and properties Like GFP and other fluorescent proteins, RFP is a barrel-shaped protein made primarily out of β-sheet motifs; this type of protein fold is commonly known as a β-barrel. The mass of RFP is approximately 25.9 kDa. Its excitation maximum is 558 nm, and its emission maximum is 583 nm. Applications RFP is frequently used in molecular biology research as a fluorescent marker, for a variety of purposes. DsRed has been shown to be more suitable for optical imaging approaches than EGFP. Issues with fluorescent proteins include the length of time between protein synthesis and expression of fluorescence. DsRed has a maturation time of around 24 hours, which renders it unsuited for experiments that take place over a shorter time frame. Additionally, DsRed exists in a tetrameric form, which can affect the function of proteins to which it is attached. Genetic engineering has improved the utility of RFP by increasing the speed of fluorescence development and creating monomeric variants. Improved variants of RFP include the mFruits variants (mCherry, mOrange, mRaspberry), mKO, TagRFP, mKate, mRuby, FusionRed, mScarlet and DsRed-Express. Other fluorescent proteins The first fluorescent protein to be discovered, green fluorescent protein (GFP), has been adapted to identify and develop fluorescent markers in other colors. Variants such as yellow fluorescent protein (YFP) and cyan fluorescent protein (CFP) were discovered in Anthozoa. See also Cyan fluorescent protein (CFP) Green fluorescent protein (GFP) Yellow fluorescent protein (YFP) References External links DsRed on FPBase Fluorescent proteins
Red fluorescent protein
Chemistry,Biology
429
72,500,648
https://en.wikipedia.org/wiki/Operation%20PowerOFF
Operation PowerOFF is an ongoing joint operation by the FBI, EUROPOL, the Dutch National Police Corps, the German Federal Criminal Police Office, the Polish Cybercrime Police and the UK National Crime Agency to close "booter/stresser" services offering DDoS attack services for hire. Beginning in 2018, the operation shut down 48 websites offering DDoS services, and six people were arrested in the United States. Multiple companies, including Cloudflare, PayPal, and DigitalOcean, provided information to the FBI to assist in the seizure. History In 2018, the FBI closed down 15 DDoS websites together with the Dutch National Police Corps. On December 14, 2022, resuming this collaboration, the FBI and Department of Justice announced that they had closed multiple websites offering DDoS-for-hire services. The FBI claimed that these websites offered services designed to slow down websites relating to gaming. The FBI also noted that these services had heavy use, claiming that "Quantum", one of the seized services, was used to launch 50,000 attacks. After the shutdown, multiple law enforcement agencies collaborating with the FBI declared they would place advertisements on search engines, such as Google, that would educate the public on the legality of DDoS services. Aftermath Six US citizens were indicted by FBI offices in California and Alaska. Three of the people arrested were from Florida, one from Texas, one from Hawaii, and one from New York. The FBI asks that users with information related to the attacks contact their offices for tips and information related to the seized sites. Ongoing activity Operation PowerOFF activities have continued, with further websites being seized and prosecutions continuing. References PowerOFF 2022 in computing Denial-of-service attacks Cybercrime
Operation PowerOFF
Technology
351
17,745
https://en.wikipedia.org/wiki/Lutetium
Lutetium is a chemical element; it has symbol Lu and atomic number 71. It is a silvery white metal, which resists corrosion in dry air, but not in moist air. Lutetium is the last element in the lanthanide series, and it is traditionally counted among the rare earth elements; it can also be classified as the first element of the 6th-period transition metals. Lutetium was independently discovered in 1907 by French scientist Georges Urbain, Austrian mineralogist Baron Carl Auer von Welsbach, and American chemist Charles James. All of these researchers found lutetium as an impurity in the mineral ytterbia, which was previously thought to consist entirely of ytterbium and oxygen. The dispute on the priority of the discovery occurred shortly after, with Urbain and Welsbach accusing each other of publishing results influenced by the published research of the other; the naming honor went to Urbain, as he had published his results earlier. He chose the name lutecium for the new element, but in 1949 the spelling was changed to lutetium. In 1909, the priority was finally granted to Urbain and his names were adopted as official ones; however, the name cassiopeium (or later cassiopium) for element 71 proposed by Welsbach was used by many German scientists until the 1950s. Lutetium is not a particularly abundant element, although it is significantly more common than silver in the Earth's crust. It has few specific uses. Lutetium-176 is a relatively abundant (2.5%) radioactive isotope with a half-life of about 38 billion years, used to determine the age of minerals and meteorites. Lutetium usually occurs in association with the element yttrium and is sometimes used in metal alloys and as a catalyst in various chemical reactions. 177Lu-DOTA-TATE is used for radionuclide therapy (see Nuclear medicine) on neuroendocrine tumours. Lutetium has the highest Brinell hardness of any lanthanide, at 890–1300 MPa. Characteristics Physical properties A lutetium atom has 71 electrons, arranged in the configuration [Xe] 4f¹⁴5d¹6s². Lutetium is generally encountered in the 3+ oxidation state, having lost its two outermost 6s electrons and the single 5d electron. The lutetium atom is the smallest among the lanthanide atoms, due to the lanthanide contraction, and as a result lutetium has the highest density, melting point, and hardness of the lanthanides. As lutetium's 4f orbitals are highly stabilized, only the 5d and 6s orbitals are involved in chemical reactions and bonding; thus it is characterized as a d-block rather than an f-block element, and on this basis some consider it not to be a lanthanide at all, but a transition metal like its lighter congeners scandium and yttrium. Chemical properties and compounds Lutetium's compounds almost always contain the element in the 3+ oxidation state. Aqueous solutions of most lutetium salts are colorless and form white crystalline solids upon drying, with the common exception of the iodide, which is brown. The soluble salts, such as the nitrate, sulfate and acetate, form hydrates upon crystallization. The oxide, hydroxide, fluoride, carbonate, phosphate and oxalate are insoluble in water. Lutetium metal is slightly unstable in air at standard conditions, but it burns readily at 150 °C to form lutetium oxide. The resulting compound is known to absorb water and carbon dioxide, and it may be used to remove vapors of these compounds from closed atmospheres.
Similar observations are made during the reaction between lutetium and water (slow when cold and fast when hot); lutetium hydroxide is formed in the reaction. Lutetium metal is known to react with the four lightest halogens to form trihalides; except for the fluoride, they are soluble in water. Lutetium dissolves readily in weak acids and dilute sulfuric acid to form solutions containing the colorless lutetium ions, which are coordinated by between seven and nine water molecules, the average being 8.2. Oxidation states Lutetium is usually found in the +3 oxidation state, like most other lanthanides. However, it can also be in the 0, +1 and +2 states as well. Isotopes Lutetium occurs on the Earth in the form of two isotopes: lutetium-175 and lutetium-176. Out of these two, only the former is stable, making the element monoisotopic. The latter one, lutetium-176, decays via beta decay with a half-life of about 3.8 × 10¹⁰ years; it makes up about 2.5% of natural lutetium. To date, 40 synthetic radioisotopes of the element have been characterized, ranging in mass number from 149 to 190; the most stable such isotopes are lutetium-174 with a half-life of 3.31 years, and lutetium-173 with a half-life of 1.37 years. All of the remaining radioactive isotopes have half-lives that are less than 9 days, and the majority of these have half-lives that are less than half an hour. Isotopes lighter than the stable lutetium-175 decay via electron capture (to produce isotopes of ytterbium), with some alpha and positron emission; the heavier isotopes decay primarily via beta decay, producing hafnium isotopes. The element also has 43 known nuclear isomers, with masses of 150, 151, 153–162, and 166–180 (not every mass number corresponds to only one isomer). The most stable of them are lutetium-177m, with a half-life of 160.4 days, and lutetium-174m, with a half-life of 142 days; these are longer than the half-lives of the ground states of all radioactive lutetium isotopes except lutetium-173, 174, and 176. History Lutetium, derived from the Latin Lutetia (Paris), was independently discovered in 1907 by French scientist Georges Urbain, Austrian mineralogist Baron Carl Auer von Welsbach, and American chemist Charles James. They found it as an impurity in ytterbia, which was thought by Swiss chemist Jean Charles Galissard de Marignac to consist entirely of ytterbium. The scientists proposed different names for the elements: Urbain chose neoytterbium and lutecium, whereas Welsbach chose aldebaranium and cassiopeium (after Aldebaran and Cassiopeia). Each man's articles accused the other of publishing results based on those of the author. The International Commission on Atomic Weights, which was then responsible for the attribution of new element names, settled the dispute in 1909 by granting priority to Urbain and adopting his names as official ones, based on the fact that the separation of lutetium from Marignac's ytterbium was first described by Urbain; after Urbain's names were recognized, neoytterbium was reverted to ytterbium. An obvious issue with this decision is that Urbain was on the International Commission of Atomic Weights. Until the 1950s, some German-speaking chemists called lutetium by Welsbach's name, cassiopeium; in 1949, the spelling of element 71 was changed to lutetium. The reason for this was that Welsbach's 1907 samples of lutetium had been pure, while Urbain's 1907 samples only contained traces of lutetium.
This later misled Urbain into thinking that he had discovered element 72, which he named celtium, but which was actually very pure lutetium. The later discrediting of Urbain's work on element 72 led to a reappraisal of Welsbach's work on element 71, so that the element was renamed to cassiopeium in German-speaking countries for some time. Charles James, who stayed out of the priority argument, worked on a much larger scale and possessed the largest supply of lutetium at the time. Pure lutetium metal was first produced in 1953. Occurrence and production Found with almost all other rare-earth metals but never by itself, lutetium is very difficult to separate from other elements. Its principal commercial source is as a by-product from the processing of the rare earth phosphate mineral monazite ((Ce,La,etc.)PO4), which has concentrations of only 0.0001% of the element, not much higher than the abundance of lutetium in the Earth's crust of about 0.5 mg/kg. No lutetium-dominant minerals are currently known. The main mining areas are China, the United States, Brazil, India, Sri Lanka and Australia. The world production of lutetium (in the form of oxide) is about 10 tonnes per year. Pure lutetium metal is very difficult to prepare. It is one of the rarest and most expensive of the rare earth metals, with a price of about US$10,000 per kilogram, or about one-fourth that of gold. Crushed minerals are treated with hot concentrated sulfuric acid to produce water-soluble sulfates of rare earths. Thorium precipitates out of solution as hydroxide and is removed. After that the solution is treated with ammonium oxalate to convert the rare earths into their insoluble oxalates. The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid, which excludes one of the main components, cerium, whose oxide is insoluble in HNO3. Several rare earth metals, including lutetium, are separated as a double salt with ammonium nitrate by crystallization. Lutetium is separated by ion exchange. In this process, rare-earth ions are adsorbed onto a suitable ion-exchange resin by exchange with hydrogen, ammonium or cupric ions present in the resin. Lutetium salts are then selectively washed out by a suitable complexing agent. Lutetium metal is then obtained by reduction of anhydrous LuCl3 or LuF3 by either an alkali metal or alkaline earth metal. 177Lu is produced by neutron activation of 176Lu or indirectly by neutron activation of 176Yb followed by beta decay. Its 6.693-day half-life allows transport from the production reactor to the point of use without significant loss in activity. Applications Small quantities of lutetium have many speciality uses. Stable isotopes Stable lutetium can be used as a catalyst in petroleum cracking in refineries and can also be used in alkylation, hydrogenation, and polymerization applications. Lutetium aluminium garnet (Lu3Al5O12) has been proposed for use as a lens material in high refractive index immersion lithography. Additionally, a tiny amount of lutetium is added as a dopant to gadolinium gallium garnet, which was used in magnetic bubble memory devices. Cerium-doped lutetium oxyorthosilicate is currently the preferred compound for detectors in positron emission tomography (PET). Lutetium aluminium garnet (LuAG) is used as a phosphor in light-emitting diode light bulbs. Lutetium tantalate (LuTaO4) is the densest known stable white material (density 9.81 g/cm3) and therefore is an ideal host for X-ray phosphors.
The only denser white material is thorium dioxide, with a density of 10 g/cm3, but the thorium it contains is radioactive. Lutetium is also a component of several scintillating materials, which convert X-rays to visible light. It is part of LYSO, LuAG and lutetium iodide scintillators. Research indicates that lutetium-ion atomic clocks could provide greater accuracy than any existing atomic clock. Unstable isotopes Its suitable half-life and decay mode have made lutetium-176 useful as a pure beta emitter, using lutetium which has been exposed to neutron activation, and in lutetium–hafnium dating to date meteorites. The isotope 177Lu emits low-energy beta particles and gamma rays and has a half-life of around 7 days, positive characteristics for commercial applications, especially in therapeutic nuclear medicine. The synthetic isotope lutetium-177 bound to octreotate (a somatostatin analogue) is used experimentally in targeted radionuclide therapy for neuroendocrine tumors. Lutetium-177 is used as a radionuclide in neuroendocrine tumor therapy and bone pain palliation. Lutetium (177Lu) vipivotide tetraxetan is a therapy for prostate cancer, FDA approved in 2022. Precautions Like other rare-earth metals, lutetium is regarded as having a low degree of toxicity, but its compounds should be handled with care nonetheless: for example, lutetium fluoride inhalation is dangerous and the compound irritates skin. Lutetium nitrate may be dangerous as it may explode and burn once heated. Lutetium oxide powder is toxic as well if inhaled or ingested. Similarly to the other rare-earth metals, lutetium has no known biological role, but it is found even in humans, concentrating in bones, and to a lesser extent in the liver and kidneys. Lutetium salts are known to occur together with other lanthanide salts in nature; the element is the least abundant in the human body of all lanthanides. Human diets have not been monitored for lutetium content, so it is not known how much the average human takes in, but estimates show the amount is only about several micrograms per year, all coming from tiny amounts absorbed by plants. Soluble lutetium salts are mildly toxic, but insoluble ones are not. See also References Chemical elements Lanthanides Transition metals Chemical elements with hexagonal close-packed structure
Lutetium
Physics
2,942
22,590,756
https://en.wikipedia.org/wiki/Thermal%20profiling
A thermal profile is a complex set of time-temperature data typically associated with the measurement of temperatures in an oven (e.g., a reflow oven). The thermal profile is often measured along a variety of dimensions such as slope, soak, time above liquidus (TAL), and peak. A thermal profile can be ranked on how it fits in a process window (the specification or tolerance limit). Raw temperature values are normalized in terms of a percentage relative to both the process mean and the window limits. The center of the process window is defined as zero, and the extreme edges of the process window are ±99%. A Process Window Index (PWI) greater than or equal to 100% indicates the profile is outside of the process limitations. A PWI of 99% indicates that the profile is within process limitations, but runs at the edge of the process window. For example, if the process mean is set at 200 °C with the process window calibrated at 180 °C and 220 °C respectively, then a measured value of 188 °C translates to a process window index of −60% (a short code sketch of this calculation appears below). The method is used in a variety of industrial and laboratory processes, including electronic component assembly, optoelectronics, optics, biochemical engineering, food science, decontamination of hazardous wastes, and geochemical analysis. Soldering of electronic products One of the major uses of this method is the soldering of electronic assemblies. There are two main types of profiles used today: the Ramp-Soak-Spike (RSS) and the Ramp to Spike (RTS). In modern systems, quality management practices in manufacturing industries have produced automatic process algorithms such as the PWI, where soldering ovens come preloaded with extensive electronics and programmable inputs to define and refine process specifications. By using algorithms such as the PWI, engineers can calibrate and customize parameters to achieve minimum process variance and a near zero defect rate. Reflow process In soldering, a thermal profile is a complex set of time-temperature values for a variety of process dimensions such as slope, soak, TAL, and peak. Solder paste contains a mix of metal, flux, and solvents that aid in the phase change of the paste from semi-solid, to liquid, to vapor, and of the metal from solid to liquid. For an effective soldering process, soldering must be carried out under carefully calibrated conditions in a reflow oven. Detailed description There are two main profile types used today in soldering: The Ramp-Soak-Spike (RSS) Ramp to Spike (RTS) Ramp-Soak-Spike Ramp is defined as the rate of change in temperature over time, expressed in degrees per second. The most commonly used process limit is 4 °C/s, though many component and solder paste manufacturers specify the value as 2 °C/s. Many components have a specification where the rise in temperature should not exceed a specified temperature per second, such as 2 °C/s. Rapid evaporation of the flux contained in the solder paste can lead to defects, such as lead lift, tombstoning, and solder balls. Additionally, rapid heat can lead to steam generation within the component if the moisture content is high, resulting in the formation of microcracks. In the soak segment of the profile, the solder paste approaches a phase change. The amount of energy introduced to both the component and the PCB approaches equilibrium. In this stage, most of the flux evaporates out of the solder paste. The duration of the soak varies for different pastes.
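To make the PWI arithmetic referenced above concrete, here is a minimal Python sketch; the function name and the sample statistic values are illustrative assumptions, not part of any profiling vendor's API:

```python
# Hypothetical helper illustrating the PWI normalization described above:
# each measured statistic is expressed as a percentage of the distance from
# the center of its process window to the window edge.
def pwi_percent(measured: float, low_limit: float, high_limit: float) -> float:
    """Position of `measured` inside the window: 0 is the center, +/-100 the edges."""
    center = (low_limit + high_limit) / 2.0
    half_width = (high_limit - low_limit) / 2.0
    return 100.0 * (measured - center) / half_width

# Worked example from the text: window 180-220 degrees C, measured 188 -> -60%.
print(pwi_percent(188.0, 180.0, 220.0))  # -60.0

# A profile's overall PWI is the worst-case magnitude across its statistics
# (slope, soak, TAL, peak); a value >= 100% means the profile is out of spec.
stats = {"slope": 55.0, "soak": -30.0, "tal": 72.0, "peak": -12.0}  # assumed values
print(max(abs(v) for v in stats.values()))  # 72.0 -> inside the process window
```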
The mass of the PCB is another factor that must be considered for the soak duration. An over-rapid heat transfer can cause solder splattering and the production of solder balls, bridging and other defects. If the heat transfer is too slow, the flux concentration may remain high and result in cold solder joints, voids and incomplete reflow. After the soak segment, the profile enters the ramp-to-peak segment of the profile, which is a given temperature range and time exceeding the melting temperature of the alloy. Successful profiles range in temperature up to 30 °C higher than liquidus, which is approximately 183 °C for eutectic solder and approximately 217 °C for lead-free solder. The final area of this profile is the cooling section. A typical specification for the cool-down is usually less than −6 °C/s (falling slope). Ramp-to-Spike The Ramp to Spike (RTS) profile is almost a linear graph, starting at the entrance of the process and ending at the peak segment, with a greater Δt (change in temperature) in the cooling segment. While the Ramp-Soak-Spike (RSS) allows for about 4 °C/s, the requirement of the RTS is about 1–2 °C/s. These values depend on the solder paste specifications. The RTS soak period is part of the ramp and is not as easily distinguishable as in RSS. The soak is controlled primarily by the conveyor speed. The peak of the RTS profile is the endpoint of the linear ramp to the peak segment of the profile. The same considerations about defects in an RSS profile also apply to an RTS profile. When the PCB enters the cooling segment, the negative slope generally is steeper than the rising slope. Thermocouple attachments Thermocouples (TCs) consist of two dissimilar metals joined by a welded bead. For a thermocouple to read the temperature at any given point, the welded bead must come in direct contact with the object whose temperature needs to be measured. The two dissimilar wires must remain separate, joined only at the bead; otherwise, the reading is no longer at the welded bead but at the position where the metals first make contact, rendering the reading invalid. A zigzagging thermocouple reading on a profile graph indicates loosely attached thermocouples. For accurate readings, thermocouples are attached to areas that are dissimilar in terms of mass, location and known trouble spots. Additionally, they should be isolated from air currents. Finally, the placement of several thermocouples should range from populated to less populated areas of the PCB for the best sampling conditions. Several methods of attachment are used, including epoxy, high-temperature solder, Kapton and aluminum tape, with various levels of success for each method. Epoxies are good at securing TC conductors to the profile board to keep them from becoming entangled in the oven during profiling. Epoxies come in both insulator and conductor formulations. The specs need to be checked; otherwise an insulator can play a negative role in the collection of profile data. The ability to apply this adhesive in similar quantities and thicknesses is difficult to measure in quantitative terms. This decreases reproducibility. If epoxy is used, the properties and specifications of that epoxy must be checked. Epoxy functions within a wide range of temperature tolerances. The properties of solder used for TC attachment differ from those of electrically connective solder. High-temperature solder is not the best choice to use for TC attachment for several reasons.
First, it has the same drawbacks as epoxy – the quantity of solder needed to adhere the TC to a substrate varies from location to location. Second, solder is conductive and may short-circuit TCs. Generally, there is a short length of conductor that is exposed to the temperature gradient. Together, this exposed area and the physical weld produce an electromotive force (EMF). Conductors and the weld are placed in a homogeneous environment within the temperature gradient to minimize the effects of EMF. Kapton tape is one of the most widely used tapes for TC and TC conductor attachment. When several layers are applied, each layer has an additive effect on the insulation and may negatively impact a profile. A disadvantage of this tape is that the PCB has to be very clean and smooth to achieve an airtight cover over the thermocouple weld and conductors. Another disadvantage of Kapton tape is that at temperatures above 200 °C the tape becomes elastic and, hence, the TCs have a tendency to lift off the substrate surface. The result is erroneous readings characterized by jagged lines in the profile. Aluminum tape comes in various thicknesses and densities. Heavier aluminum tape can diffuse the heat transfer through the tape and act as an insulator. Low-density aluminum tape allows for heat transfer to the EMF-producing area of the TC. The thermal conductivity of the aluminum tape allows for even conduction when the thickness of the tape is fairly consistent in the EMF-producing area of the thermocouple. Virtual profiling Virtual profiling is a method of creating profiles without attaching the thermocouples (TCs) or having to physically instrument a PCB each and every time a profile is run for the same production board. All the typical profile data such as slope, soak, TAL, etc., that are measured by instrumented profiles are gathered by using virtual profiles. The benefits of not having attached TCs surpass the convenience of not having to instrument a PCB every time a new profile is needed. Virtual profiles are created automatically, for both reflow and wave solder machines. An initial recipe setup is required for modeling purposes, but once completed, profiling can be made virtual. As the system is automatic, profiles can be generated periodically or continuously for each and every assembly. SPC charts along with Cpk can be used as an aid when collecting large amounts of process-related data. Automated profiling systems continuously monitor the process and create profiles for each assembly. As barcoding becomes more common with both reflow and wave processes, the two technologies can be combined for profiling traceability, allowing each generated profile to be searchable by barcode. This is useful when an assembly is questioned at some time in the future. As a profile is created for each assembly, a quick search using the PCB's barcode can pull up the profile in question and provide evidence that the component was processed in spec. Additionally, tighter process control can be achieved when combining automated profiling with barcoding, such as confirming that the correct process has been input by the operator before launching a production run. References External links Automatic Profiling video Different levels of reflow profile control Automatic Profiling Walkthrough Profile Simulation Software Brazing and soldering Electronics manufacturing
Thermal profiling
Engineering
2,187
45,076,094
https://en.wikipedia.org/wiki/BOSH%20%28software%29
BOSH is an open-source software project that offers a toolchain for release engineering, software deployment and application lifecycle management of large-scale distributed services. The toolchain is made up of a server (the BOSH Director) and a command line tool. BOSH is typically used to package, deploy and manage cloud software. While BOSH was initially developed by VMware in 2010 to deploy Cloud Foundry PaaS, it can be used to deploy other software (such as Hadoop, RabbitMQ, or MySQL). BOSH is designed to manage the whole lifecycle of large distributed systems. Since March 2016, BOSH can manage deployments on both Microsoft Windows and Linux servers. A BOSH Director communicates with a single Infrastructure as a service (IaaS) provider to manage the underlying networking and virtual machines (VMs) (or containers). Several IaaS providers are supported: Amazon Web Services EC2, Apache CloudStack, Google Compute Engine, Microsoft Azure, OpenStack, and VMware vSphere. To help support more underlying IaaS providers, BOSH uses the concept of a Cloud Provider Interface (CPI). There is an implementation of the CPI for each of the IaaS providers listed above. Typically the CPI is used to deploy VMs, but it can be used to deploy containers as well. Few CPIs exist for deploying containers with BOSH, and only one is actively supported. For this one, BOSH uses a CPI that deploys Pivotal Software's Garden containers (Garden is very similar to Docker) on a single virtual machine, run by VirtualBox or VMware Workstation. In theory, any other container engine could be supported, if the necessary CPIs were developed. Because BOSH supports deployments on VMs and containers alike, it uses the generic term "instances" to designate them. It is up to the CPI to choose whether a BOSH "instance" is actually a VM or a container. Workflow Once installed, a BOSH server accepts uploads of root filesystems (called "stemcells") and packages (called "releases"). When a BOSH server has the necessary bits for deploying a given software system, it can be told to proceed, as described by a YAML deployment manifest. BOSH then progressively deploys "instances" (VMs or containers), using canaries to avoid deploying failing configurations. Once a software system is deployed, BOSH monitors its instances continuously, which allows it to detect failing instances and resurrect any missing ones. When a BOSH deployment manifest is changed, BOSH rolls out the implied modifications progressively, instance by instance. This means that BOSH can upgrade live clusters with possibly no downtime. Concepts Release A BOSH release can either be an archive file or a git repository. In both cases, it describes a software system that can be deployed with BOSH. For this purpose, it packages up all related binary assets, source code, compilation scripts, configurable properties, startup scripts and templates for configuration files. BOSH releases are made of "packages" and "jobs". Roughly, BOSH packages provide something that can be run, and BOSH jobs describe how these things are configured and run. A BOSH package details the necessary source code, binary assets (called "blobs"), and compilation scripts for building a given software component. There are two ways to provide binary "blobs". In a BOSH release that is provided as an archive file, blobs are directly included. But with BOSH releases that are provided as git repositories, doing the same tends to be problematic when blobs get big.
That's why a BOSH release provides the concept of a "blobstore", from which referenced blobs can be fetched. Most BOSH releases use blobstores that are backed by public Amazon S3 buckets, but there are other ways to refer to a private or a local "blobstore" in a BOSH release. BOSH packages are always subject to a compilation phase, even if this just extracts files from an archive and copies them to the proper target directory. To compile a given package, BOSH spawns an ephemeral compilation instance (VM or container) that only includes the required packages and blobs, as declared by the package specification. In this dedicated instance, BOSH runs the compilation script, and seals the compilation result in its database, so that it can be safely used for reproducible deployments. BOSH jobs, on the other hand, provide configuration properties (that can possibly be documented), templates for configuration files, and startup scripts. BOSH jobs refer to one or many packages as dependencies. Jobs are also sealed into the BOSH database, but the templates for configuration files are rendered at deploy time, when all configuration properties are resolved. These configuration properties are usually IP addresses, port numbers, user names, passwords, domain names, etc. Stemcell A BOSH stemcell packages the basics for creating a new instance (VM or container). Namely, a BOSH stemcell ships an Operating System image along with a BOSH agent and a copy of monit, which is used to manage the services (called "jobs") that will be hosted by the instance. The BOSH agent helps BOSH communicate with the instance during its whole life cycle. The stemcell concept in BOSH is similar to Virtual Machine Images like Amazon's AMIs, but BOSH stemcells are not meant to be specialized for any particular usage. Instead, BOSH only provides different stemcells for supporting different Operating Systems (CentOS, Ubuntu or Windows), or different underlying IaaS providers (AWS or OpenStack). The name "stemcell" originated from the biological term "stem cells", which refers to the undifferentiated cells that are able to grow into diverse cell types later. Similarly, instances created by a BOSH stemcell are identical at the beginning. After inception, instances are configured with different CPU/memory/storage/network, and installed with different software packages. Hence, instances built from the same BOSH stemcell can behave differently. BOSH Agent The BOSH agent is a service that runs on every BOSH-deployed VM. It does the following: sets up the VM, e.g., configures local disks, configures and formats attached (secondary) disks, configures networks accepts requests from the director, e.g., pings, job management requests manages jobs: starting, stopping, and monitoring health Deployment A BOSH deployment is basically a YAML deployment manifest, where the user describes the BOSH releases and BOSH stemcells to use, and how to set up and compose jobs into groups of identical instances (historically misnamed "jobs" and later renamed "instance groups"). Within these "instance groups", BOSH can span identical instances (VMs or containers) across different availability zones, in order to minimise the risk of all instances going down at the same time. This is particularly useful when deploying highly available databases or applications. In most cases, users don't work with the deployment manifest as one big YAML file. Instead, deployment manifests are split into smaller files that are easier to maintain.
These separate files are merged by tools like spiff or spruce, right before they get uploaded to the BOSH server and deployed. In a deployment manifest, all configuration properties, as declared by the jobs from all referenced releases, can be customized. Different jobs can refer to configuration properties with the same name, in order to share common settings. Key principles BOSH was purposefully constructed to address the four principles of modern release engineering in the following ways: Identifiability Being able to identify all of the source, tools, environment, and other components that make up a particular release. In its concept of a "release", BOSH packages up all related source code, binary assets, configurable properties, compilation scripts, and startup scripts. This allows users to easily track what is actually deployed, and how it is run. Additionally, BOSH provides a way to capture the root filesystems that will be the basis of deployed instances (VMs or containers), as single images called "stemcells". BOSH releases and BOSH stemcells are identified by UUIDs and sealed by SHA-1 checksums. Reproducibility The ability to integrate source, third party components, data, and deployment externals of a software system in order to guarantee operational stability. The BOSH toolchain provides a centralized server for operating the deployed systems. This server holds software "releases", Operating System images (called "stemcells"), persistent data, and system configuration. Therefore, a given deployment is guaranteed to reproduce an identical result. Consistency The mission to provide a stable framework for development, deployment, audit, and accountability for software components. BOSH achieves such consistency with its software "releases", which bring a consistent framework for developing and deploying software systems. Moreover, audit and accountability are provided by the BOSH server, which allows users to see and track changes made to the deployed systems. Agility The ongoing research into the repercussions of modern software engineering practices on productivity in the software cycle, i.e. Continuous Integration. The BOSH toolchain integrates well with current best practices of software engineering (including Continuous Delivery) by providing ways to easily create software releases in an automated way and to update complex deployed systems with simple commands. History BOSH was designed to address shortcomings found in the tools available to manage Cloud Foundry. Chef was used originally, but it was limited in its ability to package software and spin servers up and down, and limited in its monitoring and self-management capabilities. BOSH was originally developed for Cloud Foundry's own needs, but the project has now grown to be completely generic, and can be used for orchestration of other software such as Hadoop, RabbitMQ, MySQL and similar platform or application software.
Architecture A BOSH installation is made of several separate components that can possibly be split across different VMs or containers: A Director that is the "brain" of the server The director database, made of a PostgreSQL instance, a Redis instance and a Blobstore for storing compiled packages and jobs A Health Monitor that keeps track of the status of instances (VMs or containers) Many BOSH agents, one on each deployed instance A NATS message bus for connecting the Director, the Health Monitor, and all the deployed BOSH agents A CPI (Cloud Provider Interface), which is just an executable binary conforming to a specific API A BOSH-managed environment usually centers around the Director deployed on a VM. Cloud / Platform / OS compatibility BOSH connects to the underlying IaaS layer through an abstraction called the CPI (Cloud Provider Interface). There are CPIs available for Amazon Web Services, certain OpenStack versions, vSphere, and vCloud. Some community-maintained CPIs exist for Google Compute Engine, Microsoft Azure and CloudStack. Deployment BOSH can be deployed as a BOSH release, which may create a "chicken or egg" surprise for newcomers. A BOSH server is not the only software that can deploy BOSH releases. There is a BOSH provisioner project that can deploy BOSH in a VM, a Docker container, or a bare metal server. This component is used by the BOSH packer provisioner, which creates a Vagrant box running BOSH-lite, which is what most users rely on when learning BOSH. Governance Once a sub-component of Cloud Foundry, BOSH is now a separate open source project that aims at deploying any distributed software. BOSH is managed by the Cloud Foundry Foundation. Nearly all contributions to BOSH are made by Pivotal. Users Pivotal uses BOSH to orchestrate Cloud Foundry within Pivotal Cloud Foundry (PCF), as well as all of the Pivotal Data Services for Cloud Foundry. Announced public users of BOSH and PCF include Axel Springer, Corelogic, IBM, Monsanto, Philips, SAP, and Swisscom. Distributions BOSH is not commercially distributed as a standalone product. It is included as part of Pivotal Cloud Foundry, IBM Bluemix, and HP Helion Developer Platform, and is also used and supported commercially by Cloud Credo, Stark & Wayne, Gstack, and others. References External links Web services Web hosting File hosting Network file systems Cloud storage Cloud platforms Open-source cloud hosting services Free software for cloud computing Free software programmed in Ruby VMware
BOSH (software)
Technology
2,639
38,916,131
https://en.wikipedia.org/wiki/Snow%20socks
Snow socks (also known as auto socks) are textile alternatives to snow chains. Snow sock devices wrap around the tires of a vehicle to increase traction on snow and ice. Snow socks are normally composed of a woven fabric with an elastomer attached to the inner and/or outer edge. The woven fabric covers the tire tread and is the contact point between the vehicle and the road. The elastomer keeps the snow sock in place and facilitates installation. Some snow sock models have an additional component that covers the rim of the tire, which prevents snow or debris from gathering between the tread of the tire and the inner side of the woven fabric. Sizing and deployment Snow socks are sold in pairs and come in sizes that are specific to a range of tire codes. Snow sock sizes are available for different motor vehicle classes, but most snow sock brands focus on cars and pickup trucks rather than semi-trailer trucks or larger vehicles. Since the largest variation of snow socks is in the market for cars and pickup trucks, wide or low-profile tires are often covered. Buses, semi-trailer trucks or larger vehicles require snow sock models that can support the heavier axle load and larger wheel dimensions. Some brands also offer snow socks for specialized vehicles such as forklifts or airport ground support equipment (e.g. pushback tugs or loaders). Driving with snow socks usually reduces the maximum allowable speed, with the exact limit depending on the snow sock brand, snow sock size and vehicle class. These restrictions are normally stated in the product's or vehicle's owner's manual. Neither snow socks nor snow chains are considered direct substitutes for snow tires. Legality of use During inclement weather, regions may invoke snow chain laws as a precautionary measure. Snow socks are generally not considered a legal equivalent to snow chains, though some brands are individually approved. If snow chains are prohibited, snow socks without metal components may be permitted. This depends on local legislation. Textile snow socks This is a textile cover that wraps the tire and insulates it from snow. Textile snow socks do not disturb safety systems (ABS, ESP) or damage aluminum rims. One important component is polyester: the fibres absorb water and improve grip. They have been used safely at the speeds recommended by their manufacturers. Snow socks made of composite materials Michelin manufactures a composite snow sock called Easy Grip. It is coupled with an inner elastic band to facilitate installation. To strengthen the tread and maximize road holding, there are 150 galvanised steel rings. Benefits of snow socks Snow socks offer the added benefit that they do not disturb safety systems on a vehicle such as ESP or ABS. Snow socks with a high polyester count absorb more surface water from the road, improving grip. See also Snow chains Snow tire References Inclement weather management Vehicle safety technologies
Snow socks
Physics
580
9,773,583
https://en.wikipedia.org/wiki/Hormone%20antagonist
For the use of hormone antagonists in cancer, see hormonal therapy (oncology). A hormone antagonist is a specific type of receptor antagonist which acts upon hormone receptors. Such pharmaceutical drugs are used in antihormone therapy. External links Hormonal agents Receptor antagonists
Hormone antagonist
Chemistry,Biology
58
2,710,792
https://en.wikipedia.org/wiki/Lambda%20Ursae%20Majoris
Lambda Ursae Majoris (λ Ursae Majoris, abbreviated Lambda UMa, λ UMa), formally named Tania Borealis, is a star in the northern circumpolar constellation of Ursa Major. Properties This star has an apparent visual magnitude of +3.45, making it one of the brighter members of the constellation. The distance to this star has been measured directly using the parallax technique, yielding a value with a 4% margin of error. The stellar classification of Lambda Ursae Majoris is A2 IV, with the luminosity class of 'IV' indicating that, after 410 million years on the main sequence, this star is in the process of evolving into a giant star as the supply of hydrogen at its core becomes exhausted. Compared to the Sun it has 240% of the mass and 230% of the radius, and radiates 37 times the Sun's luminosity. This energy is being emitted from the star's outer atmosphere at an effective temperature of 9,280 K, giving it the characteristic white-hot glow of an A-type star. Nomenclature λ Ursae Majoris (Latinised to Lambda Ursae Majoris) is the star's Bayer designation. It bore the traditional names Tania (shared with Mu Ursae Majoris) and Tania Borealis. Tania comes from the Arabic phrase for 'the Second Spring (of the Gazelle)', and Borealis (originally borealis) is Latin for 'the north side'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Tania Borealis for this star. In Chinese, (), meaning Three Steps, refers to an asterism consisting of Lambda Ursae Majoris, Iota Ursae Majoris, Kappa Ursae Majoris, Mu Ursae Majoris, Nu Ursae Majoris and Xi Ursae Majoris. Consequently, the Chinese name for Lambda Ursae Majoris itself is (, ). References Ursae Majoris, Lambda Ursa Major A-type subgiants Tania Borealis Ursae Majoris, 33 050372 4033 089021 Durchmusterung objects
Lambda Ursae Majoris
Astronomy
506
4,959,031
https://en.wikipedia.org/wiki/Demecarium%20bromide
Demecarium bromide, trade name Humorsol, is a carbamate parasympathomimetic drug that acts as an acetylcholinesterase inhibitor, and is used as a glaucoma medication. It is applied directly to the eye in order to reduce elevated intraocular pressure associated with glaucoma. Demecarium causes constriction of the pupil (miosis), which improves the drainage of the fluid in the eye (aqueous humour). As demecarium reversibly inhibits cholinesterase, it can be administered less frequently than other parasympathomimetic drugs, such as carbachol. Commercially produced demecarium bromide solution, previously sold under the trade name Humorsol, is no longer available, although solutions of demecarium can be compounded. Use in dogs When administered with a topical corticosteroid, demecarium can delay the onset of primary glaucoma in dogs. High doses of demecarium may cause organophosphate toxicity, particularly if flea treatments containing organophosphates are administered at the same time. See also Diisopropyl fluorophosphate References Acetylcholinesterase inhibitors Aromatic carbamates Biscarbamates Bisquaternary anticholinesterases Bromides Ophthalmology drugs Phenol esters Quaternary ammonium compounds Veterinary drugs
Demecarium bromide
Chemistry
295
22,056,752
https://en.wikipedia.org/wiki/Quantum%20phases
Quantum phases are quantum states of matter at zero temperature. Even at zero temperature a quantum-mechanical system has quantum fluctuations and therefore can still support phase transitions. As a physical parameter is varied, quantum fluctuations can drive a phase transition into a different phase of matter. A canonical example of a quantum phase transition (QPT) is the well-studied superconductor–insulator transition in disordered thin films, which separates two quantum phases having different symmetries. Quantum magnets provide another example of a QPT. The discovery of new quantum phases is a pursuit of many scientists, since these phases of matter exhibit properties and symmetries that can potentially be exploited for technological purposes. The difference from classical states of matter is that, classically, materials exhibit different phases in response to changes in temperature, density or some other macroscopic property of the material, whereas quantum phases change at zero temperature in response to tuning a parameter in the Hamiltonian of the system – temperature does not have to change. The order parameter plays a role in quantum phases analogous to its role in classical phases. Some quantum phases are the result of a superposition of many other quantum phases. See also Quantum phase transition Classical phase transitions Quantum critical point References Condensed matter physics
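A standard textbook illustration of such a Hamiltonian tuning parameter (a generic example, not tied to the specific systems mentioned above) is the one-dimensional transverse-field Ising model,

H = -J \sum_{i} \sigma^{z}_{i}\,\sigma^{z}_{i+1} - h \sum_{i} \sigma^{x}_{i},

whose ground state is ferromagnetically ordered for h/J < 1 and quantum-paramagnetic for h/J > 1, with a quantum critical point at h/J = 1. The transverse field h (a parameter in the Hamiltonian, not the temperature) drives the transition.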
Quantum phases
Physics,Chemistry,Materials_science,Engineering
275
21,600,868
https://en.wikipedia.org/wiki/Radiator%20%28engine%20cooling%29
Radiators are heat exchangers used for cooling internal combustion engines, mainly in automobiles but also in piston-engined aircraft, railway locomotives, motorcycles, stationary generating plants or any similar use of such an engine. Internal combustion engines are often cooled by circulating a liquid called engine coolant through the engine block and cylinder head where it is heated, then through a radiator where it loses heat to the atmosphere, and then returned to the engine. Engine coolant is usually water-based, but may also be oil. It is common to employ a water pump to force the engine coolant to circulate, and also for an axial fan to force air through the radiator. Automobiles and motorcycles In automobiles and motorcycles with a liquid-cooled internal combustion engine, a radiator is connected to channels running through the engine and cylinder head, through which a liquid (coolant) is pumped by a coolant pump. This liquid may be water (in climates where water is unlikely to freeze), but is more commonly a mixture of water and antifreeze in proportions appropriate to the climate. Antifreeze itself is usually ethylene glycol or propylene glycol (with a small amount of corrosion inhibitor). A typical automotive cooling system comprises: a series of galleries cast into the engine block and cylinder head, surrounding the combustion chambers with circulating liquid to carry away heat; a radiator, consisting of many small tubes equipped with a honeycomb of fins to dissipate heat rapidly, that receives and cools hot liquid from the engine; a water pump, usually of the centrifugal type, to circulate the coolant through the system; a thermostat to control temperature by varying the amount of coolant going to the radiator; a fan to draw cool air through the radiator. The combustion process produces a large amount of heat. If heat were allowed to increase unchecked, detonation would occur, and components outside the engine would fail due to excessive temperature. To combat this effect, coolant is circulated through the engine where it absorbs heat. Once the coolant absorbs the heat from the engine it continues its flow to the radiator. The radiator transfers heat from the coolant to the passing air. Radiators are also used to cool automatic transmission fluids, air conditioner refrigerant, intake air, and sometimes to cool motor oil or power steering fluid. A radiator is typically mounted in a position where it receives airflow from the forward movement of the vehicle, such as behind a front grill. Where engines are mid- or rear-mounted, it is common to mount the radiator behind a front grill to achieve sufficient airflow, even though this requires long coolant pipes. Alternatively, the radiator may draw air from the flow over the top of the vehicle or from a side-mounted grill. For long vehicles, such as buses, side airflow is most common for engine and transmission cooling and top airflow most common for air conditioner cooling. Radiator construction Automobile radiators are constructed of a pair of metal or plastic header tanks, linked by a core with many narrow passageways, giving a high surface area relative to volume. This core is usually made of stacked layers of metal sheet, pressed to form channels and soldered or brazed together. For many years radiators were made from brass or copper cores soldered to brass headers. Modern radiators have aluminum cores, and often save money and weight by using plastic headers with gaskets. 
This construction is more prone to failure and less easily repaired than traditional materials. An earlier construction method was the honeycomb radiator. Round tubes were swaged into hexagons at their ends, then stacked together and soldered. As they only touched at their ends, this formed what became in effect a solid water tank with many air tubes through it. Some vintage cars use radiator cores made from coiled tube, a less efficient but simpler construction. Coolant pump Radiators first used downward vertical flow, driven solely by a thermosyphon effect. Coolant is heated in the engine, becomes less dense, and so rises. As the radiator cools the fluid, the coolant becomes denser and falls. This effect is sufficient for low-power stationary engines, but inadequate for all but the earliest automobiles. All automobiles for many years have used centrifugal pumps to circulate the engine coolant because natural circulation has very low flow rates. Heater A system of valves or baffles, or both, is usually incorporated to simultaneously operate a small radiator inside the vehicle. This small radiator, and the associated blower fan, is called the heater core, and serves to warm the cabin interior. Like the radiator, the heater core acts by removing heat from the engine. For this reason, automotive technicians often advise operators to turn on the heater and set it to high if the engine is overheating, to assist the main radiator. Temperature control Waterflow control The engine temperature on modern cars is primarily controlled by a wax-pellet type of thermostat, a valve that opens once the engine has reached its optimum operating temperature. When the engine is cold, the thermostat is closed except for a small bypass flow so that the thermostat experiences changes to the coolant temperature as the engine warms up. Engine coolant is directed by the thermostat to the inlet of the circulating pump and is returned directly to the engine, bypassing the radiator. Directing water to circulate only through the engine allows the engine to reach optimum operating temperature as quickly as possible whilst avoiding localized "hot spots." Once the coolant reaches the thermostat's activation temperature, it opens, allowing water to flow through the radiator to prevent the temperature from rising higher. Once at optimum temperature, the thermostat controls the flow of engine coolant to the radiator so that the engine continues to operate at optimum temperature. Under peak load conditions, such as driving slowly up a steep hill whilst heavily laden on a hot day, the thermostat will be approaching fully open because the engine will be producing near maximum power while the velocity of airflow across the radiator is low. (Being a heat exchanger, the velocity of air flow across the radiator has a major effect on its ability to dissipate heat.) Conversely, when cruising fast downhill on a motorway on a cold night on a light throttle, the thermostat will be nearly closed because the engine is producing little power, and the radiator is able to dissipate much more heat than the engine is producing. Allowing too much flow of coolant to the radiator would result in the engine being over-cooled and operating at lower than optimum temperature, resulting in decreased fuel efficiency and increased exhaust emissions. 
Furthermore, engine durability, reliability, and longevity can be compromised by over-cooling, because components such as the crankshaft bearings are engineered with thermal expansion in mind and only fit together with the correct clearances at normal operating temperature. Another side effect of over-cooling is reduced performance of the cabin heater, though in typical cases it still blows air at a considerably higher temperature than ambient. The thermostat is therefore constantly moving throughout its range, responding to changes in vehicle operating load, speed, and external temperature, to keep the engine at its optimum operating temperature. Vintage cars may use a bellows-type thermostat, which has corrugated bellows containing a volatile liquid such as alcohol or acetone. These types of thermostats do not work well at cooling system pressures above about 7 psi. Modern motor vehicles typically run at around 15 psi, which precludes the use of the bellows-type thermostat. On direct air-cooled engines, this is not a concern for the bellows thermostat that controls a flap valve in the air passages. Airflow control Other factors influence the temperature of the engine, including radiator size and the type of radiator fan. The size of the radiator (and thus its cooling capacity) is chosen such that it can keep the engine at the design temperature under the most extreme conditions a vehicle is likely to encounter (such as climbing a mountain whilst fully loaded on a hot day). Airflow speed through a radiator is a major influence on the heat it dissipates. Vehicle speed affects this, in rough proportion to the engine effort, thus giving crude self-regulatory feedback. Where an additional cooling fan is driven by the engine, this also tracks engine speed similarly. Engine-driven fans are often regulated by a fan clutch from the drivebelt, which slips and reduces the fan speed at low temperatures. This improves fuel efficiency by not wasting power on driving the fan unnecessarily. On modern vehicles, further regulation of cooling rate is provided by either variable-speed or cycling radiator fans. Electric fans are controlled by a thermostatic switch or the engine control unit. Electric fans also have the advantage of giving good airflow and cooling at low engine revs or when stationary, such as in slow-moving traffic. Before the development of viscous-drive and electric fans, engines were fitted with simple fixed fans that drew air through the radiator at all times. Vehicles whose design required the installation of a large radiator to cope with heavy work at high temperatures, such as commercial vehicles and tractors, would often run cool in cold weather under light loads, even with the presence of a thermostat, as the large radiator and fixed fan caused a rapid and significant drop in coolant temperature as soon as the thermostat opened. This problem can be solved by fitting a radiator blind (or radiator shroud) to the radiator that can be adjusted to partially or fully block the airflow through the radiator. At its simplest the blind is a roll of material such as canvas or rubber that is unfurled along the length of the radiator to cover the desired portion. Some older vehicles, like the World War I-era Royal Aircraft Factory S.E.5 and SPAD S.XIII single-engined fighters, have a series of shutters that can be adjusted from the driver's or pilot's seat to provide a degree of control.
Some modern cars have a series of shutters that are automatically opened and closed by the engine control unit to provide a balance of cooling and aerodynamics as needed. Coolant pressure Because the thermal efficiency of internal combustion engines increases with internal temperature, the coolant is kept at higher-than-atmospheric pressure to increase its boiling point. A calibrated pressure-relief valve is usually incorporated in the radiator's fill cap. The relief pressure varies between models. As the coolant system pressure increases with a rise in temperature, it will reach the point where the pressure relief valve allows excess pressure to escape. This will stop when the system temperature stops rising. In the case of an over-filled radiator (or header tank), pressure is vented by allowing a little liquid to escape. This may simply drain onto the ground or be collected in a vented container which remains at atmospheric pressure. When the engine is switched off, the cooling system cools and the liquid level drops. In some cases where excess liquid has been collected in a bottle, this may be 'sucked' back into the main coolant circuit. In other cases, it is not. Engine coolant Before World War II, engine coolant was usually plain water. Antifreeze was used solely to control freezing, and this was often only done in cold weather. If plain water is left to freeze in the block of an engine, it expands as it freezes, and the expanding ice can cause severe internal engine damage. Development in high-performance aircraft engines required improved coolants with higher boiling points, leading to the adoption of glycol or water-glycol mixtures; these in turn brought glycols into wider use for their antifreeze properties. Since the development of aluminium alloy or mixed-metal engines, corrosion inhibition has become even more important than antifreeze, and in all regions and seasons. Boiling or overheating An overflow tank that runs dry may result in the coolant vaporizing, which can cause localized or general overheating of the engine. Severe damage may result if the vehicle is allowed to run over temperature. Failures such as blown head gaskets and warped or cracked cylinder heads or cylinder blocks may be the result. Sometimes there will be no warning, because the temperature sensor that provides data for the temperature gauge (either mechanical or electrical) is exposed to water vapor, not the liquid coolant, providing a harmfully false reading. Opening a hot radiator drops the system pressure, which may cause it to boil and eject dangerously hot liquid and steam. Therefore, radiator caps often contain a mechanism that attempts to relieve the internal pressure before the cap can be fully opened. History The invention of the automobile water radiator is attributed to Karl Benz. Wilhelm Maybach designed the first honeycomb radiator for the Mercedes 35hp. Supplementary radiators It is sometimes necessary for a car to be equipped with a second, or auxiliary, radiator to increase the cooling capacity, when the size of the original radiator cannot be increased. The second radiator is plumbed in series with the main radiator in the circuit. This was the case when the Audi 100 was first turbocharged, creating the 200. These are not to be confused with intercoolers. Some engines have an oil cooler, a separate small radiator to cool the engine oil.
Cars with an automatic transmission often have extra connections to the radiator, allowing the transmission fluid to transfer its heat to the coolant in the radiator. These may be either oil-air radiators, in effect a smaller version of the main radiator, or, more simply, oil-water coolers, where an oil pipe is inserted inside the water radiator. Though the water is hotter than the ambient air, its higher thermal conductivity offers comparable cooling (within limits) from a less complex and thus cheaper and more reliable oil cooler. Less commonly, power steering fluid, brake fluid, and other hydraulic fluids may be cooled by an auxiliary radiator on a vehicle. Turbocharged or supercharged engines may have an intercooler, which is an air-to-air or air-to-water radiator used to cool the incoming air charge—not to cool the engine. Aircraft Aircraft with liquid-cooled piston engines (usually inline engines rather than radial) also require radiators. As airspeed is higher than for cars, these are efficiently cooled in flight, and so do not require large areas or cooling fans. Many high-performance aircraft, however, suffer extreme overheating problems when idling on the ground - a Spitfire could idle for a mere seven minutes before overheating. This is similar to Formula 1 cars of today: when stopped on the grid with engines running, they require ducted air forced into their radiator pods to prevent overheating. Surface radiators Reducing drag is a major goal in aircraft design, including the design of cooling systems. An early technique was to take advantage of an aircraft's abundant airflow to replace the honeycomb core (many surfaces, with a high ratio of surface to volume) by a surface-mounted radiator. This uses a single surface blended into the fuselage or wing skin, with the coolant flowing through pipes at the back of this surface. Such designs were seen mostly on World War I aircraft. As they are so dependent on airspeed, surface radiators are even more prone to overheating when ground-running. Racing aircraft such as the Supermarine S.6B, a racing seaplane with radiators built into the upper surfaces of its floats, have been described as "being flown on the temperature gauge", cooling being the main limit on their performance. Surface radiators have also been used by a few high-speed racing cars, such as Malcolm Campbell's Blue Bird of 1928. Pressurized cooling systems It is generally a limitation of most cooling systems that the cooling fluid not be allowed to boil, as the need to handle gas in the flow greatly complicates design. For a water-cooled system, this means that the maximum amount of heat transfer is limited by the specific heat capacity of water and the difference in temperature between ambient and 100 °C. This provides more effective cooling in the winter, or at higher altitudes where the temperatures are low. Another effect that is especially important in aircraft cooling is that the specific heat capacity changes and the boiling point falls with decreasing pressure, and atmospheric pressure drops more rapidly with altitude than the air temperature does. Thus, generally, liquid cooling systems lose capacity as the aircraft climbs. This was a major limit on performance during the 1930s when the introduction of turbosuperchargers first allowed convenient travel at altitudes above 15,000 ft, and cooling design became a major area of research. The most obvious, and common, solution to this problem was to run the entire cooling system under pressure.
This maintained the specific heat capacity at a constant value, while the outside air temperature continued to drop. Such systems thus improved cooling capability as they climbed. For most uses, this solved the problem of cooling high-performance piston engines, and almost all liquid-cooled aircraft engines of the World War II period used this solution. However, pressurized systems were also more complex, and far more susceptible to damage - as the cooling fluid was under pressure, even minor damage to the cooling system, such as a single rifle-calibre bullet hole, would cause the liquid to spray rapidly out of the hole. Failures of the cooling systems were, by far, the leading cause of engine failures. Evaporative cooling Although it is more difficult to build an aircraft radiator that is able to handle steam, it is by no means impossible. The key requirement is to provide a system that condenses the steam back into liquid before passing it back into the pumps and completing the cooling loop. Such a system can take advantage of the latent heat of vaporization, which in the case of water is about five times the sensible heat needed to raise the liquid from freezing to boiling point. Additional gains may be had by allowing the steam to become superheated. Such systems, known as evaporative coolers, were the topic of considerable research in the 1930s. Consider two cooling systems that are otherwise similar, operating at an ambient air temperature of 20 °C. An all-liquid design might operate between 30 °C and 90 °C, offering 60 °C of temperature difference to carry away heat. An evaporative cooling system might operate between 80 °C and 110 °C. At first glance this appears to be much less temperature difference, but this analysis overlooks the enormous amount of heat energy soaked up during the generation of steam, equivalent to 500 °C. In effect, the evaporative version is operating between 80 °C and 560 °C, a 480 °C effective temperature difference. Such a system can be effective even with much smaller amounts of water. The downside to the evaporative cooling system is the area of the condensers required to cool the steam back below the boiling point. As steam is much less dense than water, a correspondingly larger surface area is needed to provide enough airflow to cool the steam back down. The Rolls-Royce Goshawk design of 1933 used conventional radiator-like condensers, and their drag proved to be a serious problem. In Germany, the Günter brothers developed an alternative design combining evaporative cooling and surface radiators spread all over the aircraft wings, fuselage and even the rudder. Several aircraft were built using their design and set numerous performance records, notably the Heinkel He 119 and Heinkel He 100. However, these systems required numerous pumps to return the liquid from the spread-out radiators, proved to be extremely difficult to keep running properly, and were much more susceptible to battle damage. Efforts to develop this system had generally been abandoned by 1940. The need for evaporative cooling was soon to be negated by the widespread availability of ethylene glycol based coolants, which had a lower specific heat, but a much higher boiling point than water. Radiator thrust An aircraft radiator contained in a duct heats the air passing through, causing the air to expand and gain velocity. This is called the Meredith effect, and high-performance piston aircraft with well-designed low-drag radiators (notably the P-51 Mustang) derive thrust from it.
The thrust was significant enough to offset the drag of the duct the radiator was enclosed in, and allowed the aircraft to achieve zero cooling drag. At one point, there were even plans to equip the Supermarine Spitfire with a form of afterburning, by injecting additional fuel into the exhaust duct after the radiator, downstream of the main combustion cycle, and igniting it. Stationary plant Engines for stationary plant are normally cooled by radiators in the same way as automobile engines. There are some differences depending on the installation - in particular, care must be taken in planning to ensure proper air flow across the radiator for adequate cooling. In some cases, evaporative cooling is used via a cooling tower. See also Coolant Heater core Intercooler Internal combustion engine (ICE) List of auto parts Waste heat References Sources External links Radiator Replacement and Troubleshooting Guides How Car Cooling Systems Work Powertrain Cooling Community Site Engine cooling systems Engine components Heat exchangers
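As a rough check on the evaporative-cooling arithmetic above (a sketch using round-number water properties: a latent heat of vaporization of about 2,257 kJ/kg and a liquid specific heat of about 4.18 kJ/(kg·K)), boiling a kilogram of coolant absorbs heat equivalent to several hundred degrees of sensible liquid heating:

\Delta T_{\mathrm{equiv}} = \frac{h_{fg}}{c_p} \approx \frac{2257\ \mathrm{kJ/kg}}{4.18\ \mathrm{kJ/(kg\,K)}} \approx 540\ \mathrm{K},

which is the order of the 500 °C equivalence quoted in the evaporative cooling section; the exact figure depends on the system's operating pressure.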
Radiator (engine cooling)
Chemistry,Technology,Engineering
4,489
530,470
https://en.wikipedia.org/wiki/Mycophenolic%20acid
Mycophenolic acid is an immunosuppressant medication used to prevent rejection following organ transplantation and to treat autoimmune conditions such as Crohn's disease and lupus. Specifically, it is used following kidney, heart, and liver transplantation. It can be given by mouth or by injection into a vein. It comes as mycophenolate sodium and mycophenolate mofetil. Common side effects include nausea, infections, and diarrhea. Other serious side effects include an increased risk of cancer, progressive multifocal leukoencephalopathy, anemia, and gastrointestinal bleeding. Use during pregnancy may harm the baby. It works by blocking inosine monophosphate dehydrogenase (IMPDH), which is needed by lymphocytes to make guanosine. Mycophenolic acid was initially discovered by the Italian scientist Bartolomeo Gosio in 1893. It was rediscovered in 1945 and 1968. It was approved for medical use in the United States in 1995 following the discovery of its immunosuppressive properties in the 1990s. It is available as a generic medication. In 2022, it was the 227th most commonly prescribed medication in the United States, with more than 1 million prescriptions. Medical uses Organ transplant Mycophenolate is used for the prevention of organ transplant rejection. Mycophenolate mofetil is indicated for the prevention of organ transplant rejection in adults and kidney transplantation rejection in children over 2 years, whereas mycophenolate sodium is indicated for the prevention of kidney transplant rejection in adults. Mycophenolate sodium has also been used for the prevention of rejection in liver, heart, or lung transplants in children older than two years. Autoimmune disease Mycophenolate is increasingly utilized as a steroid-sparing treatment in autoimmune diseases and similar immune-mediated disorders including Behçet's disease, pemphigus vulgaris, immunoglobulin A nephropathy, small vessel vasculitides, and psoriasis. It is also used for retroperitoneal fibrosis along with a number of other medications. Specifically, it has also been used for psoriasis not treatable by other methods. Its increasing application in treating lupus nephritis has demonstrated more frequent complete response and less frequent complications compared to cyclophosphamide bolus therapy, a regimen with risk of bone marrow suppression, infertility, and malignancy. Further work addressing maintenance therapy demonstrated mycophenolate superior to cyclophosphamide, again in terms of response and side-effects. Walsh proposed that mycophenolate should be considered as a first-line induction therapy for treatment of lupus nephritis in people without kidney dysfunction. Comparison to other agents Compared with azathioprine it has a higher incidence of diarrhea, and no difference in risk of any of the other side effects in transplant patients. Mycophenolic acid is 15 times more expensive than azathioprine. Adverse effects Common adverse drug reactions (≥ 1% of people) include diarrhea, nausea, vomiting, and joint pain; infections, leukopenia, and anemia reflect the immunosuppressive and myelosuppressive nature of the drug. Mycophenolate sodium is also commonly associated with fatigue, headache, cough and/or breathing issues. Intravenous (IV) administration of mycophenolate mofetil is also commonly associated with thrombophlebitis and thrombosis. Infrequent adverse effects (0.1–1% of people) include esophagitis, gastritis, gastrointestinal tract hemorrhage, and/or invasive cytomegalovirus (CMV) infection.
More rarely, pulmonary fibrosis or various neoplasia occur: melanoma, lymphoma and other malignancies, occurring in 1 in 20 to 1 in 200 people depending on the type, with the skin being the most common site of neoplasia. Several cases of pure red cell aplasia (PRCA) have also been reported. The U.S. Food and Drug Administration (FDA) issued an alert that people are at increased risk of opportunistic infections, such as activation of latent viral infections, including shingles, other herpes infections, cytomegalovirus, and BK virus-associated nephropathy. In addition, the FDA is investigating 16 people who developed a rare neurological disease while taking the drug. This is a viral infection known as progressive multifocal leukoencephalopathy; it attacks the brain and is usually fatal. Pregnancy Mycophenolic acid is associated with miscarriage and congenital malformations when used during pregnancy, and should be avoided whenever possible by women trying to get pregnant. Blood tests Among the most common effects of this drug are increased blood cholesterol levels. Other changes in blood chemistry, such as hypomagnesemia, hypocalcemia, hyperkalemia, and an increase in blood urea nitrogen (BUN), can occur. Mechanism of action Purines (including the nucleosides guanosine and adenosine) can either be synthesized de novo using ribose 5-phosphate or they can be salvaged from free nucleotides. Mycophenolic acid is a potent, reversible, non-competitive inhibitor of inosine-5′-monophosphate dehydrogenase (IMPDH), an enzyme essential to the de novo synthesis of guanosine-5'-monophosphate (GMP) from inosine-5'-monophosphate (IMP). IMPDH inhibition particularly affects lymphocytes since they rely almost exclusively on de novo purine synthesis. In contrast, many other cell types use both pathways, and some cells, such as terminally differentiated neurons, depend completely on purine nucleotide salvage. Thus, use of mycophenolic acid leads to a relatively selective inhibition of DNA replication in T cells and B cells. Pharmacology Mycophenolate can be derived from the fungi Penicillium stoloniferum, P. brevicompactum and P. echinulatum. Mycophenolate mofetil is metabolised in the liver to the active moiety mycophenolic acid. It reversibly inhibits inosine monophosphate dehydrogenase, the enzyme that controls the rate of synthesis of guanine monophosphate in the de novo pathway of purine synthesis used in the proliferation of B and T lymphocytes. Other cells recover purines via a separate salvage pathway and are thus able to escape the effect. Mycophenolate is potent and can, in many contexts, be used in place of the older anti-proliferative azathioprine. It is usually used as part of a three-compound regimen of immunosuppressants, also including a calcineurin inhibitor (ciclosporin or tacrolimus) and a glucocorticoid (e.g. dexamethasone or prednisone). Chemistry Mycophenolate mofetil is the morpholino ethyl ester of mycophenolic acid; the ester masks the carboxyl group. Mycophenolate mofetil is reported to have pKa values of 5.6 for the morpholino moiety and 8.5 for the phenolic group. History Mycophenolic acid was discovered by the Italian medical scientist Bartolomeo Gosio. Gosio collected a fungus from spoiled corn and named it Penicillium glaucum. (The species is now called P. brevicompactum.) In 1893 he found that the fungus had antibacterial activity. In 1896 he isolated crystals of the compound, which he successfully demonstrated as the active antibacterial compound against the anthrax bacterium.
This was the first antibiotic that was isolated in pure and crystalline form. But the discovery was forgotten. It was rediscovered by two American scientists, C.L. Alsberg and O.M. Black, in 1912, and given the name mycophenolic acid. The compound was eventually demonstrated to have antiviral, antifungal, antibacterial, anticancer, and antipsoriasis activities. Although it is not commercialised as an antibiotic due to its adverse effects, its modified compound (an ester derivative) is an approved immunosuppressant drug in kidney, heart, and liver transplantations, and is marketed under the brands CellCept (mycophenolate mofetil by Roche) and Myfortic (mycophenolate sodium by Novartis). CellCept was developed by the South African geneticist Anthony Allison and his wife Elsie M. Eugui. In the 1970s, while working at the Medical Research Council, Allison investigated the biochemical causes of immune deficiency in children. He discovered the metabolic pathway involving an enzyme, inosine monophosphate dehydrogenase, which is responsible for undesirable immune response in autoimmune diseases, as well as for immune rejection in organ transplantation. He conceived the idea that a molecule blocking the enzyme would act as an immunosuppressive drug usable for autoimmune diseases and in organ transplantation. In 1981 he decided to pursue drug discovery and approached several pharmaceutical companies, which turned him down one by one because he had no firsthand experience of drug research. However, Syntex liked his plans and asked him to join the company with his wife. He became vice president for research. In one of their experiments the Allisons used the antibacterial compound mycophenolic acid, whose clinical use had been abandoned due to its adverse effects. They discovered that the compound had immunosuppressive activity, and they synthesised a chemical variant for increased activity and reduced adverse effects. They subsequently demonstrated that it was useful in organ transplantation in experimental rats. After successful clinical trials, the compound was approved for use in kidney transplant by the U.S. Food and Drug Administration on 3 May 1995, and was sold under the brand name CellCept. It was approved for use in the European Union in February 1996. Names It was initially introduced as the prodrug mycophenolate mofetil (MMF, trade name CellCept) to improve oral bioavailability. The salt mycophenolate sodium has also been introduced. Enteric-coated mycophenolate sodium (EC-MPS) is an alternative MPA formulation. MMF and EC-MPS appear to be equal in benefits and safety. Research Mycophenolate mofetil is beginning to be used in the management of auto-immune disorders such as idiopathic thrombocytopenic purpura (ITP), systemic lupus erythematosus (SLE), scleroderma (systemic sclerosis or SSc), and pemphigus vulgaris (PV), with success for some patients. It is also being used as a long-term therapy for maintaining remission of granulomatosis with polyangiitis, though thus far, studies have found it inferior to azathioprine. A combination of mycophenolate and ribavirin has been found to stop infection by and replication of dengue virus in vitro. It has also shown promising antiviral activity against MERS, especially in combination with interferon. Preliminary data suggest that mycophenolate mofetil might have benefits in people with multiple sclerosis.
However, the evidence is insufficient to determine its effects as an add-on therapy to interferon beta-1a in people with RRMS. References Carboxylic acids Drugs developed by Genentech Drugs developed by Hoffmann-La Roche Immunosuppressants Italian inventions Drugs developed by Novartis Isobenzofurans Phenol ethers Hydroxyarenes Phthalides South African inventions
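As a small worked illustration of the pKa values quoted in the Chemistry section (a sketch assuming physiological pH 7.4; the figures follow from the Henderson–Hasselbalch equation, not from any pharmacokinetic source), the fraction of each group in its charged form is

\frac{[\mathrm{BH}^+]}{[\mathrm{B}]+[\mathrm{BH}^+]} = \frac{1}{1+10^{\,\mathrm{pH}-\mathrm{p}K_a}} = \frac{1}{1+10^{\,7.4-5.6}} \approx 1.6\% \quad (\text{morpholino group}),

\frac{[\mathrm{A}^-]}{[\mathrm{HA}]+[\mathrm{A}^-]} = \frac{1}{1+10^{\,\mathrm{p}K_a-\mathrm{pH}}} = \frac{1}{1+10^{\,8.5-7.4}} \approx 7\% \quad (\text{phenolic group}),

so at physiological pH both groups are predominantly in their neutral forms.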
Mycophenolic acid
Chemistry
2,587
23,435,056
https://en.wikipedia.org/wiki/C7H9N
The molecular formula C7H9N may refer to: Benzylamine Lutidines (dimethylpyridines) 2,4-Lutidine 2,6-Lutidine 3,5-Lutidine N-Methylaniline Toluidines o-Toluidine m-Toluidine p-Toluidine
C7H9N
Chemistry
90
1,248,062
https://en.wikipedia.org/wiki/Electronic%20news%20gathering
Electronic news gathering (ENG) or electronic journalism (EJ) is the use of electronic video and audio technologies by reporters to gather and present news instead of using film cameras. The term was coined during the rise of videotape technology in the 1970s. ENG can involve anything from a single reporter with a single professional video camera, to an entire television crew taking a truck on location. Beginnings Shortcomings of film The term ENG was created as television news departments moved from film-based news gathering to electronic field production technology in the 1970s. Since film requires chemical processing before it can be viewed and edited, it generally took at least an hour from the time the film arrived back at the television station or network news department until it was ready to be broadcast. Film editing was done by hand on what was known as "color reversal" film, usually Kodak Ektachrome, meaning there were no negatives. Color reversal film had replaced black-and-white film as television itself evolved from black-and-white to color broadcasting. Filmo cameras were most commonly used for silent filming, while Auricon cameras were used for filming with synchronized sound. Since editing required cutting the film into segments and then splicing them together, a common problem was film breaking during the newscast. News stories were often transferred to bulky 2-inch videotape for distribution and playback, which made the content cumbersome to access. Film remained important in daily news operations until the late 1960s, when news outlets adopted portable professional video cameras, portable recorders and wireless microphones, and joined these with various microwave- and satellite-truck-linked delivery systems. By the mid-1980s, film had all but disappeared from use in television journalism. Transition to ENG The portability of the new electronic equipment greatly contributed to the rise of electronic news gathering, as it made news gathering more easily accessible than ever before. Early portable video systems recorded at a lower quality than broadcast studio cameras, which made them less desirable than non-portable video systems. When the Portapak video camera was introduced in 1967, it offered a new method of video recording, permanently changing ENG. By the time videotape technology advanced, the capability for microwave transmission was well established (and used in the 1960s by the BBC's ill-fated Mobile Film Processing Unit). But the convenience of videotape finally allowed crews to more easily use microwave links to quickly send their footage back to the studio. It even made live feeds more practical, as in the coverage of the police shootout with the Symbionese Liberation Army in 1974. Also in 1974, KMOX, a station in St. Louis, Mo., was the first to abandon film and switch entirely to ENG. Stations all over the country made the switch over the next decade. During the mid-to-late 1970s, several companies, such as RCA, Sony, and Ikegami, released portable one-piece color television cameras designed for ENG use. These included the RCA TK-76, Sony BVP-300, and the Ikegami HL-79. These cameras became popular with television stations. These cameras utilized vidicon tubes to capture video, as solid-state imagers (such as CCDs) would not be practical in broadcasting until the late 1980s. In 1974, Sony introduced the VO-3800 portable 3/4" U-matic videocassette recorder, which was followed up by the BVU-100, BVU-110, and BVU-50 U-matic recorders, which were also popular with broadcasters for ENG use.
ENG greatly reduces the delay between when the footage is captured and when it can be broadcast, thus enabling news gathering and reporting to become a steady cycle, with little time between when a story breaks and when it can air. Coupled with live microwave and/or satellite trucks, reporters were able to show live what was happening, bringing the audience into news events as they happened. CNN launched in June 1980, as ENG technologies were emerging. The technology was still in its developmental stages, and had yet to be integrated with satellites and microwave relays, which caused some problems with the network's early transmissions. However, ENG proved to be a crucial development for all television news, as news content recorded using videocassette recorders was easier to edit, duplicate and distribute. Over time, as editing technology has become simpler and more accessible, video production processes have largely passed from broadcast engineers to producers and writers, making the process quicker. However, initially the ENG cameras and recorders were heavier and bulkier than their film equivalents. This restricted the ability of camera operators to escape danger or hurry toward a news event. Editing equipment was expensive and each scene had to be searched out on the master recording. Technology developments Newer IP-based transmission systems, using technologies such as multicast or RTP over UDP, achieve performance similar to high-end microwave links (a minimal sketch of such delivery appears at the end of this article). Since the video stream is already encoded for IP, the video can be used for traditional television broadcast or Internet distribution without modification (live to air). As mobile broadband has developed, broadcast devices using this technology have appeared. These devices are often more compact than previous technology and can aggregate multiple mobile data lines to deliver high-definition content live. These devices are known as Digital Mobile News Gathering (DMNG) devices. Broadcast video equipment The ongoing technological evolution of broadcast video production equipment can be observed annually at the NAB Show in Las Vegas, where equipment manufacturers gather to display their wares to people within the video production industry. The trend is toward lighter-weight equipment that can deliver more resolution at higher speeds. There has been an evolution from film to standard-definition television, high-definition television and now 4K. As of 2016, highlights included unmanned aerial vehicles, aka drones, for the delivery of aerial footage, various lines of cameras that can deliver 4K resolution, graphics packages for news stations, which can be utilized inside their microwave and/or satellite vehicles, wireless technology, POV cameras and peripherals from GoPro and other action camera manufacturers, and the Odyssey 7Q+ monitor with Apollo multi-camera switcher/recorder from Convergent Design. This monitor fits on the back of a broadcast video camera and allows photojournalists to live-switch a multi-camera production in studio or on location. Outside broadcasts Outside broadcasts (also known as "remote broadcasts" and "field operations") are when the editing and transmission of the news story are done outside the station's headquarters. Use of ENG has made possible the greater use of outside broadcasts. "Some stations have always required reporters to shoot their own stories, interviews and even standup reports and then bring that material back to the station where the video is edited for that evening's newscast.
At some of these stations, the reporters sometimes even anchor the news and introduce the packages they have shot and edited." Short-form news stories are what local news reporters deliver to their stations. Longer-form stories about the same topics are covered by national or international broadcast news magazines such as Dateline NBC, 20/20, Nightline, 48 Hours, 60 Minutes and Inside Edition. Depending upon the scope of the story, the number of crews vying for position at the story venue (press conference, courthouse, crime location, etc.) can potentially run into the dozens. Natural disasters, terrorism, death and murder are topics that reside at the top of the news-gathering hierarchy. For instance, in the U.S., the series of events culminating in what would thereafter be known as 9/11 galvanized every news division of every network. "Especially on that first day, you were really just going to whomever had a piece of information," said 48 Hours executive producer Susan Zirinsky, whose team produced the primetime coverage that first night. "You were getting cameras up, you were putting people in place, you were trying to wrap your brain around it. You wanted to step back and synthesize some of the information, which is what we were trying to do ... At that point, we thought there were many more dead, and it was still a search-and-rescue mission. It was a very, very complicated day to try to give context to." "There is a hierarchy of news. It's a hierarchy of judgment, I guess. All deaths are equal to the victims and their families. But all deaths are not equal in the calculation of news value." "Feel-good stories" such as the saving of a life, or a selfless act of kindness, sometimes make their way into the news stream. Microwave spectrum channels In the United States, there are ten ENG video channels set aside in each area for terrestrial microwave communications. TV Broadcast Auxiliary Services (BAS) Channel A10 (2483.5-2500 MHz) is only available on a grandfathered basis to TV BAS licensees holding authority to use that channel as of July 10, 1985. However, there is no sunset date to the grandfather rights, and continued use of A10 remains on a protected, co-primary basis with Mobile Satellite Service (MSS) Ancillary Terrestrial Component (ATC) use of 2483.5-2495 MHz. Use of these channels is restricted by federal regulations to those holding broadcast licenses in the given market, to Broadcast Network-Entities, and to Cable Network-Entities. Channels 1 through 7 are in the 2 GHz band and channels 8, 9 and 10 are in the 2.5 GHz band. In Atlanta, for example, there are two channels each for the four news-producing television stations (WSB-TV, WAGA-TV, WXIA-TV, WANF), one for CNN, and another open for other users on request, such as Georgia Public Broadcasting. Traditionally, the Federal Communications Commission has assigned microwave spectrum based on historic patterns of need and through the application/request process. With the other uses of radio spectrum growing in the 1990s, the FCC made available some bands of spectrum as unlicensed channels. This included spectrum for cordless phones and Wi-Fi. As a result, some of these channels have been used for news gathering by websites and more informal news outlets. One major disadvantage of unlicensed use is that there is no frequency coordination, which can result in interference or blocking of signals.
Audio journalism A common set-up for journalists is a battery-operated cassette recorder with a dynamic microphone and an optional telephone interface. With this set-up, the reporter can record interviews and natural sound and then transmit these over the phone line to the studio or for live broadcast. Electronic formats used by journalists have included DAT, MiniDisc, CD and DVD. MiniDisc has digital indexing and is a re-recordable, reusable medium, while DAT has SMPTE timecode and other synchronization features. In recent years, more and more journalists have used smartphones or iPod-like devices for recording short interviews. Another alternative is a small field recorder with two condenser microphones. See also Electronic field production Satellite truck Outside broadcasting Production truck References Further reading Broadcast engineering Television news Television terminology Journalism terminology
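To make the IP-delivery note under Technology developments concrete, here is a minimal sketch of moving media data over multicast UDP in Python. The group address, port, and payload are illustrative placeholders (239.0.0.1 is simply an administratively scoped multicast address, and 5004 is the port conventionally associated with RTP); this is not a description of any particular broadcast product.

import socket
import struct

MCAST_GRP = "239.0.0.1"  # illustrative multicast group address
MCAST_PORT = 5004        # port conventionally used for RTP media

# Sender half (run sender and receiver in separate processes; shown together
# for brevity). Any number of receivers on the network can pick the datagrams
# up, which is what makes multicast attractive for one-to-many contribution.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
send_sock.sendto(b"encoded-video-chunk", (MCAST_GRP, MCAST_PORT))

# Receiver half: join the multicast group, then read datagrams as they arrive.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv_sock.bind(("", MCAST_PORT))
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
data, addr = recv_sock.recvfrom(2048)
print(len(data), "bytes from", addr)

A real contribution chain would wrap each chunk in an RTP header carrying a sequence number and timestamp so receivers can reorder and time the stream; the sketch shows only the transport step.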
Electronic news gathering
Engineering
2,256
18,749,313
https://en.wikipedia.org/wiki/Video%20sculpture
A video sculpture is a type of video installation that integrates video into an object, environment, site or performance. The nature of video sculpture is that it utilizes the material of video in an innovative way in space and time, different from the standard traditional narrative screening where the video has a beginning and end. In one definition, video sculpture involves one or more monitors or projections that spectators move among or stand in front of. Video sculptures formed of more than one screen or projection may broadcast a single program or may simultaneously broadcast different interconnected sequences on several channels. The screens used in the sculpture can be arranged in many different ways. For example, they can be suspended from a ceiling, aligned and stacked to make a video wall or even randomly stacked on top of each other. Video sculpture is a medium that offers performing artists a chance to have a more permanent artistic forum. Video sculpture includes projection mapping on objects and environments. This has become more accessible and popular due to software advancements in the last five years. History In the late 1950s and early 1960s, artists Wolf Vostell and Edward Kienholz began experimenting with televisions by using them in their happenings and assemblages respectively. In March 1963, Nam June Paik debuted his video sculpture entitled Music/Electronic Television at the Parnass Gallery in Wuppertal, which used 13 altered televisions. In May 1963 Wolf Vostell showed his installation 6 TV-Dé-coll/age at the Smolin Gallery in New York, which utilized six televisions, each with an anomaly. Shigeko Kubota was also an innovator in the use of video in sculptural form. Her Duchampiana: Nude Descending a Staircase was the first video sculpture acquired by the Museum of Modern Art. This work is a reference to Marcel Duchamp's Nude Descending a Staircase, No. 2 (1912). Video sculptors are becoming influential among early 21st-century artists. One of Paik's video sculptures, in which the six windows of a 1936 Chrysler Airstream were replaced with video monitors, sold for $75,000 in 2002. A renowned topless cellist, Charlotte Moorman was a notable subject of video sculptures. Current developments There are several developments in current video sculptures. The proliferation of powerful projectors and pixel-bending technology has enabled large-scale works, often created for specific events and locations. Other artists make use of multiple LCD screens or video walls and incorporate computer-generated images. A different approach is used by artists like Madeleine Altmann, who creates sculptures with recycled cathode ray tube monitors. Notable video sculptors Michael Bielický Katja Loher Dennis Oppenheim Nam June Paik Pipilotti Rist Sonny Sanjay Vadgama See also Video painting Video wall Video installation References Visual arts media Electronic display devices Video hardware Sculptures by medium
Video sculpture
Engineering
578
29,157,715
https://en.wikipedia.org/wiki/List%20of%20ray%20tracing%20software
Ray tracing is a technique that can generate near photo-realistic computer images. A wide range of free software and commercial software is available for producing these images. This article lists notable ray-tracing software. References 3D graphics software Ray tracing Ray tracing (graphics)
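Since the line above offers only a one-sentence definition, here is a minimal sketch of the core geometric operation that ray-tracing renderers evaluate millions of times per image: the ray-sphere intersection test. This is illustrative Python; the function name and the toy scene are invented for this example and do not come from any listed package.

import math

def ray_sphere_intersect(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    # direction is assumed to be a unit vector, so the quadratic's leading
    # coefficient is 1.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                    # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0   # nearer of the two roots
    return t if t > 0 else None        # ignore hits behind the ray origin

# A ray cast from the origin along +z hits a unit sphere centered at z=5:
print(ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # prints 4.0

A full renderer repeats this test (or an equivalent one for other primitives) for every pixel's ray, then recurses on reflected and refracted rays, which is how the near photo-realistic images mentioned above are built up.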
List of ray tracing software
Technology
53
51,476,095
https://en.wikipedia.org/wiki/Green%20earth
Green earth, also known as terre verte and Verona green, is an inorganic pigment derived from the minerals celadonite and glauconite; chemically it is a complex hydrous silicate of potassium, iron, magnesium and aluminium, reflecting the composition of those parent micas. First used by the ancient Romans, green earth has been identified on wall paintings at Pompeii and Dura-Europos. The Renaissance painter and writer Cennino Cennini claimed that “the ancients never gilded except with this green”, referring to its use as a bole, or undercoating, for gilding. In the Middle Ages one of its best-known uses was in the underpainting of flesh tones. Green earths have been rather confusingly referred to as "verda terra" or "terra verde di Verona", which scholars have incorrectly assumed to refer to Veronese green, actually an emerald-green pigment much used in the 18th century. SEM/EDX data have demonstrated that it is possible to discriminate between different sources of celadonite in Roman wall paintings through the presence of trace elements. Spectroscopically, therefore, the analytical challenge is to differentiate between the green earths celadonite and glauconite, and perhaps chlorite, and the copper-containing malachite and verdigris, with the added ability to recognize the presence of haematite, Egyptian blue, calcite, dolomite, and carbon, which may have been added to change the colour tones. High quality deposits can be found in England, France, Cyprus, Germany and at Monte Baldo near Verona in Italy. The color ranges from neutral yellow green to pale greenish gray to dark matte olive green. See also List of inorganic pigments References Inorganic pigments
Green earth
Chemistry
346
21,056,112
https://en.wikipedia.org/wiki/Powder%20mixture
A powder is an assembly of dry particles dispersed in air. If two different powders are mixed perfectly, theoretically, three types of powder mixtures can be obtained: the random mixture, the ordered mixture or the interactive mixture. Different powder types A powder is called free-flowing if the particles do not stick together. If particles are cohesive, they cling to one another to form aggregates. The significance of cohesion increases with decreasing size of the powder particles; particles smaller than 100 μm are generally cohesive. Random mixture A random mixture can be obtained if two different free-flowing powders of approximately the same particle size, density and shape are mixed (see figure A). Only primary particles are present in this type of mixture, i.e., the particles are not cohesive and do not cling to one another. The mixing time will determine the quality of the random mixture. However, if powders with particles of different size, density or shape are mixed, segregation can occur. Segregation causes separation of the powders: for example, lighter particles tend to travel to the top of the mixture whereas heavier particles settle at the bottom. Ordered mixture The term ordered mixture was first introduced to describe a completely homogeneous mixture where the two components adhere to each other to form ordered units. However, a completely homogeneous mixture is only achievable in theory, and other terms, such as adhesive mixture or interactive mixture, were introduced later. Interactive mixture If a free-flowing powder is mixed with a cohesive powder, an interactive mixture can be obtained. The cohesive particles adhere to the free-flowing particles (now called carrier particles) to form interactive units, as shown in figure B. An interactive mixture should not contain free aggregates of the cohesive powder, which means that all small particles must be adhered to the larger ones. The difference from an ordered mixture is that the carrier particles do not all need to be the same size, and a different number of small particles may be attached to each one. A narrow size range of the carrier particles is preferred to avoid segregation of the interactive units. In practice a combination of a random mixture and an interactive mixture may be obtained, consisting of carrier particles, aggregates of the small particles and interactive units. Formation The formation of interactive mixtures cannot automatically be assumed, especially if smaller carrier particles or a greater proportion of fine particles are used. If an interactive mixture is to be formed, it is necessary that enough force is exerted by the carrier particles during dry mixing to break up the aggregates formed by the fine particles. Adhesion can then be achieved if the adhesive forces exceed the gravitational forces that otherwise lead to separation of the constituents. Applications Interactive mixtures can, for example, be used in the manufacturing of tablets, enhancing the dissolution of poorly soluble drugs, or for nasal administration. One common application is for inhalation therapy, where the concept has been used in the development of alternatives to pressurised metered dose inhalers. The quality by design (QbD) initiative of the U.S. Food and Drug Administration requires a process to be controllable and predictable. Theories and methods to characterize powder mixtures have facilitated the implementation of QbD approaches to predict the flow properties of powder mixtures.
For example, the QbD approach has been shown to be useful for predicting flow performance and finding the design space during formulation development. References External links Improving Powder Flow During Pharmaceutical Operations, an Rx Times article Granularity of materials Food technology Materials science Routes of administration Mixture
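One quantitative benchmark worth noting here (a standard result from powder-mixing theory, not stated in the text above) is the variance of a fully random binary mixture: for samples containing n particles of a component present at overall proportion p, the best attainable sample-to-sample variance is

\sigma_{r}^{2} = \frac{p\,(1-p)}{n},

so homogeneity improves with smaller particles simply because more of them fit in a sample of a given size. Mixing quality in practice is often judged by how closely a measured variance approaches this random limit.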
Powder mixture
Physics,Chemistry,Materials_science,Engineering
711
21,293,847
https://en.wikipedia.org/wiki/Nitrogen%20generator
Nitrogen generators and stations are stationary or mobile air-to-nitrogen production complexes. Adsorption technology Adsorption concept The adsorption gas separation process in nitrogen generators is based on the phenomenon whereby various gas mixture components are fixed by a solid substance called an adsorbent. This phenomenon is brought about by the interaction between gas and adsorbent molecules. Pressure swing adsorption technology The technology of air-to-nitrogen production with the use of adsorption processes in nitrogen generators is well studied and widely applied at industrial facilities for the recovery of high-purity nitrogen. The operating principle of a nitrogen generator utilizing the adsorption technology is based upon the dependence of the adsorption rates of the various gas mixture components on pressure and temperature. Among nitrogen adsorption plants of various types, pressure swing adsorption (PSA) plants have found the broadest application world-wide. The system's design is based on the regulation of gas adsorption and adsorbent regeneration by means of changing pressures in two adsorbent-containing vessels (adsorbers). This process requires constant temperature, close to ambient. With this process, nitrogen is produced by the plant at above-atmospheric pressure, while the adsorbent regeneration is accomplished at below-atmospheric pressure. The swing adsorption process in each of the two adsorbers consists of two stages running for a few minutes. At the adsorption stage oxygen, H2O and CO2 molecules diffuse into the pore structure of the adsorbent whilst the nitrogen molecules are allowed to travel through the vessel. At the regeneration stage the adsorbed components are released from the adsorbent and vented into the atmosphere. The process is then repeated continuously. Advantages High nitrogen purity: PSA nitrogen generator plants allow production of high-purity nitrogen from air, which membrane systems are unable to provide – up to 99.9995% nitrogen. But in most cases they do not produce more than 98.8% nitrogen, with the remainder being argon that is not separated from the nitrogen by the usual PSA process. The argon is not normally a problem, as argon is more inert than nitrogen. This nitrogen purity may also be ensured by cryogenic systems, but they are considerably more complex and justified only by large consumption volumes. The nitrogen generators use CMS (carbon molecular sieve) technology to produce a continuous supply of ultra-high-purity nitrogen and are available with or without internal compressors. Low operating costs: by replacing out-of-date air separation plants, nitrogen production savings can largely exceed 50%. The net cost of nitrogen produced by nitrogen generators is significantly less than the cost of bottled or liquefied nitrogen. Environmental impact: generating nitrogen gas by PSA is a sustainable, environmentally friendly and energy-efficient approach to providing pure, clean, dry nitrogen gas. Compared to the energy needed for a cryogenic air separation plant and the energy needed to transport the liquid nitrogen from the plant to the facility, generated nitrogen consumes less energy and creates far fewer greenhouse gases. Membrane technology Gas separation concept The operation of membrane systems is based on the principle of the differing velocities with which various gas mixture components permeate the membrane substance.
The driving force in the gas separation process is the difference in partial pressures on the two sides of the membrane (a simple flux sketch appears after the applications list below). Membrane cartridge Structurally, a hollow-fiber membrane is a cylindrical cartridge functioning as a spool with specifically reeled polymer fibers. Gas flow is supplied under pressure into a bundle of membrane fibers. Due to the difference in partial pressures on the external and internal membrane surfaces, gas flow separation is accomplished. Advantages Economic benefits: When cryogenic or adsorption systems are replaced, nitrogen production savings generally exceed 50%. The net cost of nitrogen produced by nitrogen complexes is significantly less than the cost of cylinder or liquefied nitrogen. Module design: Owing to the simplicity of the system, a nitrogen generator can be split into modules. This is in direct contrast to classical systems where the equipment is designed for a certain stage of the separation process. Using a modular system, the generation facility may be built from a selection of preexisting equipment and, where necessary, the output capacity of a plant may be increased at minimum cost. This option is all the more useful where a project envisages a subsequent increase in enterprise capacity, or where demand may simply require on-site production of nitrogen using equipment that is already present. Dependability: Gas separation units have no moving parts, thus ensuring exceptional reliability. Membranes are highly resistant to vibration and shocks, chemically inert to greases, moisture-insensitive, and capable of operating over a wide temperature range of −40 °C to +60 °C. With appropriate maintenance, membrane unit useful life ranges between 130,000 and 180,000 hours (15 to 20 years of continuous operation). Disadvantages Limited capacity Relatively low purity compared to PSA units (95% to 99% purity as compared to 99.9995%; higher-purity applications are available at lower flow rates of ≤ 10 L/min) Applications of nitrogen generators Food and beverage industries: The moment food or beverages are produced, or fruits and vegetables harvested, an aging process sets in that continues until the products decay completely. This is caused by chemical reactions with oxygen, bacteria and other organisms. Generators are used to flood the products with N2, which displaces the oxygen and prolongs product lifetime significantly because these organisms cannot develop. Furthermore, chemical degradation of food caused by oxidation can be eliminated or stopped. Analytical chemistry: Nitrogen generators are required for various forms of analytical chemistry, such as liquid chromatography–mass spectrometry and gas chromatography, where a stable and continuous supply of nitrogen is necessary. Aircraft & motor vehicle tires: Although air is 78% nitrogen, most aircraft tires are filled with pure nitrogen. There are many tire and automotive shops with nitrogen generators to fill tires. The advantage of using nitrogen is that the tank is dry. Often a compressed air tank will have water in it that comes from atmospheric water vapor condensing in the tank after leaving the air compressor. Nitrogen maintains a more stable pressure when heated and cooled as a result of being dry, and doesn't permeate the tire as easily due to being a slightly larger molecule (155 pm) than O2 (152 pm).
Chemical and petrochemical industries: The primary and very important application of nitrogen in the chemical and petrochemical industries is the provision of an inert environment aimed at ensuring general industrial safety during the cleaning and protection of process vessels. In addition, nitrogen is used for pipeline pressure testing, transportation of chemical agents, and regeneration of used catalysts in technological processes. Aircraft tires use nitrogen fill to delay tire rupture in rejected take-off events, allowing evacuation time before brake-system heat causes an internal tire fire. Fusible plugs in the tire are the primary protection against heat-induced pressure excursion. Internal tire fires can kindle at the initial stop due to locally hot sections of the wheels. Electronics: In electronics, nitrogen serves to displace oxygen in the manufacture of semiconductors and electric circuits, in the heat treatment of finished products, and in blowing and cleaning. The most common uses in electronics are in the soldering process, specifically in selective, reflow, and wave soldering equipment. Fire protection: The fire protection industry uses nitrogen gas for two different applications: fire suppression and corrosion prevention. Nitrogen generators are used in hypoxic air fire prevention systems to produce air with a low oxygen content which will suppress a fire. To prevent corrosion, nitrogen generators are used in place of, or in conjunction with, a compressed air system to provide supervisory nitrogen gas in place of air for dry pipe and pre-action fire sprinkler systems. Glass industry: In glass production, nitrogen proves efficient as a cooling agent for electric arc furnace electrodes, as well as for displacing oxygen during process procedures. Metallurgy: The metal industry generally utilizes nitrogen as a means of protecting ferrous and non-ferrous metals during annealing. Nitrogen is also helpful in such standard industry processes as neutral tempering, cementing, hard brazing, stress relieving, cyanide hardening, metal-powder sintering and extrusion die cooling. Paint-and-varnish industry: Paint and varnish production uses nitrogen for the creation of an inert environment in process vessels to ensure safety, as well as for oxygen displacement during packing in order to prevent polymerization of drying oils. Petroleum industry: In the petroleum industry, nitrogen is an indispensable component in a number of processes. Most commonly, nitrogen is used to create an inert environment for preventing explosions, for fire safety, and to support the transportation and transfer of hydrocarbons. Additionally, nitrogen is used for pipeline testing and purging, cleaning technological vessels, and cleaning liquefied-gas carriers and hydrocarbon storage facilities. Pharmaceutical industry: In the pharmaceutical industry, nitrogen finds application in pharmaceuticals packaging and in protecting against explosion and fire in activities where finely dispersed substances are used. See also Chemical oxygen generator Hydrogen production Industrial gas References Gas technologies Nitrogen Industrial gases
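To make the partial-pressure driving force described in the membrane section concrete, here is a minimal Python sketch of a solution-diffusion flux estimate for a single fiber. The permeance values, pressures, and the simplification of neglecting the permeate-side partial pressure are illustrative assumptions, not data for any particular commercial membrane.

```python
# Sketch: partial-pressure driving force across a gas-separation membrane.
# Solution-diffusion flux of component i: J_i = Q_i * (p_feed_i - p_perm_i),
# where Q_i is the permeance. All numbers below are illustrative assumptions.

Q = {"O2": 300e-10, "N2": 50e-10}   # assumed permeances, mol/(m^2*s*Pa)
p_feed = 8e5                        # assumed feed pressure, Pa
x = {"O2": 0.21, "N2": 0.78}        # approximate mole fractions in air

for gas in ("O2", "N2"):
    driving_force = x[gas] * p_feed  # permeate partial pressure neglected for simplicity
    flux = Q[gas] * driving_force
    print(f"{gas}: driving force {driving_force:.3g} Pa, flux {flux:.3g} mol/(m^2*s)")

# Because O2 is assumed to permeate ~6x faster than N2, oxygen preferentially
# leaves through the fiber wall and the retentate stream is enriched in nitrogen.
```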
Nitrogen generator
Chemistry
1,868
30,616,049
https://en.wikipedia.org/wiki/LG%20Optimus%202X
The LG Optimus 2X is a smartphone designed and manufactured by LG Electronics. The Optimus 2X is the world's first smartphone with a dual-core processor and the third phone in the LG Optimus-Android series. LG introduced the Optimus 2X on December 16, 2010, and the device first became available to consumers in South Korea in January 2011. It was also launched in Singapore on March 3, 2011. The Optimus 2X has run the Android 2.3 software version since the upgrade in November 2011, but the latest offering is Android 4.0. The phone holds the record for the longest update holdout, taking 16 months to receive a firmware update from Android 2.2 to 2.3. Hardware The LG Optimus 2X holds the Guinness World Record as the first mobile phone to use a dual-core processor. It is the first smartphone to feature the Nvidia Tegra 2, a dual-core processor clocked at 1 GHz or 1.2 GHz. It also has a micro HDMI port and an 8-megapixel camera. The Optimus 2X is capable of HD playback through the micro HDMI port when connected to an HD device, such as an HDTV. Display The LG Optimus 2X has a 4-inch LCD IPS capacitive touch-screen, displaying 16.7 million colours at 480×800 pixels. Storage & Memory The LG Optimus 2X has a card slot for additional memory. It supports a microSD card of up to 32 GB capacity. It has up to 8 GB of internal memory storage and 512 MB RAM (384 MB available) or 1 GB RAM (496 MB available). Camera An 8-megapixel camera is included on the Optimus 2X and is capable of capturing images at 3264×2448 pixels. The camera includes auto focus and an LED flash. A secondary 1.3-megapixel camera is located on the front of the device, but it cannot make use of the LED flash on the back of the phone. The primary camera is capable of video recording at 1080p and 24 fps, or 720p and 30 fps. Modifications The aftermarket Android firmware CyanogenMod has been developed for the Optimus 2X. Official CyanogenMod support was added in CyanogenMod 7.1. History LG previewed the Optimus 2X under the device's internal development name, "LG Star", at CES 2011. Naming variations LG Optimus Speed LG Optimus 2X / LG Optimus Dual / LG P990 / LGP990h / LG P990hn (h and hn models supported different frequencies) T-Mobile G2X / LG P999 LG Star (platform name) LG Star Dop (platform name) Optimus 2X SU660 - different ROM (kernel patched) and software revision than the P990. The SU660 has 3 buttons on the bottom (the P990 has 4 buttons) See also LG Optimus List of LG mobile phones Comparison of smartphones Galaxy Nexus Other phones with Tegra 2 SoC: Samsung Galaxy R Motorola Atrix Motorola Photon Droid X2 Notes References External links LG Optimus 2X official website Reviews Phonearena.com review by PhoneArena Team, 01 Feb 2011 GSMArena.com review by GSMArena team, 7 February 2011. engadget.com review by engadget team, 7 February 2011 Test of the dual-core smartphone LG Optimus 2X Android (operating system) devices LG Electronics mobile phones Mobile phones introduced in 2011 Mobile phones with user-replaceable battery Discontinued flagship smartphones
LG Optimus 2X
Technology
783
889,961
https://en.wikipedia.org/wiki/Contour%20crafting
Contour crafting is a building printing technology being researched by Behrokh Khoshnevis of the University of Southern California's Information Sciences Institute (in the Viterbi School of Engineering) that uses a computer-controlled crane or gantry to build edifices rapidly and efficiently with substantially less manual labor. It was originally conceived as a method to construct molds for industrial parts. Khoshnevis decided to adapt the technology for rapid home construction as a way to rebuild after natural disasters, like the devastating earthquakes that have plagued his native Iran. Using a quick-setting, concrete-like material, contour crafting forms the house's walls layer by layer until topped off by floors and ceilings set in place by the crane. The notional concept calls for the insertion of structural components, plumbing, wiring, utilities, and even consumer devices like audiovisual systems as the layers are built. History Caterpillar Inc. provided funding to help support Viterbi project research in the summer of 2008. In 2009, Singularity University graduate students established a project with Khoshnevis as the CTO to commercialize Contour Crafting. The project was named "ACASA". In 2010, Khoshnevis claimed that his system could build a complete home in a single day, and that its electrically powered crane would produce very little construction material waste. Construction of a standard wood-framed house is estimated to create an average of 3–7 tons of waste. Use of contour crafting could significantly reduce environmental impact, as it is claimed to waste no material at all. In addition, contour crafting generates no noise, dust, or harmful emissions from the equipment that is used. Khoshnevis stated in 2010 that NASA was evaluating Contour Crafting for its application in the construction of bases on Mars and the Moon. Three years later, in 2013, NASA funded a small study at the University of Southern California to further develop the Contour Crafting 3D printing technique. Potential applications of this technology include constructing lunar structures that could be built of 90-percent lunar material, with only ten percent of the material transported from Earth. In 2017 the Contour Crafting Corporation (of which Khoshnevis is the CEO) announced a partnership with, and investment from, Doka Ventures. In the press release, they claim that they "will start delivery of the first printers early next year". References External links Contour Crafting website Building engineering American inventions 3D printing processes 2008 introductions
Contour crafting
Engineering
511
9,566
https://en.wikipedia.org/wiki/Empty%20set
In mathematics, the empty set or void set is the unique set having no elements; its size or cardinality (count of elements in a set) is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set, while in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set. Any set other than the empty set is called non-empty. In some textbooks and popularizations, the empty set is referred to as the "null set". However, null set is a distinct notion within the context of measure theory, in which it describes a set of measure zero (which is not necessarily empty). Notation Common notations for the empty set include "{ }", "∅", and "Ø". The latter two symbols were introduced by the Bourbaki group (specifically André Weil) in 1939, inspired by the letter Ø () in the Danish and Norwegian alphabets. In the past, "0" (the numeral zero) was occasionally used as a symbol for the empty set, but this is now considered to be an improper use of notation. The symbol ∅ is available at Unicode point U+2205. It can be coded in HTML as &empty; and as &#8709; or as &#x2205;. It can be coded in LaTeX as \varnothing. The symbol Ø is coded in LaTeX as \emptyset. When writing in languages such as Danish and Norwegian, where the empty set character may be confused with the alphabetic letter Ø (as when using the symbol in linguistics), the Unicode character U+29B0 REVERSED EMPTY SET ⦰ may be used instead. Properties In standard axiomatic set theory, by the principle of extensionality, two sets are equal if they have the same elements (that is, neither of them has an element not in the other). As a result, there can be only one set with no elements, hence the usage of "the empty set" rather than "an empty set". The only subset of the empty set is the empty set itself; equivalently, the power set of the empty set is the set containing only the empty set. The number of elements of the empty set (i.e., its cardinality) is zero. The empty set is the only set with either of these properties. For any set A: The empty set is a subset of A The union of A with the empty set is A The intersection of A with the empty set is the empty set The Cartesian product of A and the empty set is the empty set For any property P: For every element of ∅, the property P holds (vacuous truth). There is no element of ∅ for which the property P holds. Conversely, if for some property P and some set V, the following two statements hold: For every element of V the property P holds There is no element of V for which the property P holds then V = ∅. By the definition of subset, the empty set is a subset of any set A. That is, every element x of ∅ belongs to A. Indeed, if it were not true that every element of ∅ is in A, then there would be at least one element of ∅ that is not present in A. Since there are no elements of ∅ at all, there is no element of ∅ that is not in A. Any statement that begins "for every element of ∅" is not making any substantive claim; it is a vacuous truth. This is often paraphrased as "everything is true of the elements of the empty set." In the usual set-theoretic definition of natural numbers, zero is modelled by the empty set. Operations on the empty set When speaking of the sum of the elements of a finite set, one is inevitably led to the convention that the sum of the elements of the empty set (the empty sum) is zero. The reason for this is that zero is the identity element for addition.
Similarly, the product of the elements of the empty set (the empty product) should be considered to be one, since one is the identity element for multiplication. A derangement is a permutation of a set without fixed points. The empty set can be considered a derangement of itself, because it has only one permutation (the empty permutation), and it is vacuously true that no element (of the empty set) can be found that retains its original position. In other areas of mathematics Extended real numbers Since the empty set has no member when it is considered as a subset of any ordered set, every member of that set will be an upper bound and lower bound for the empty set. For example, when considered as a subset of the real numbers, with its usual ordering, represented by the real number line, every real number is both an upper and lower bound for the empty set. When considered as a subset of the extended reals formed by adding two "numbers" or "points" to the real numbers (namely negative infinity, denoted −∞, which is defined to be less than every other extended real number, and positive infinity, denoted +∞, which is defined to be greater than every other extended real number), we have that: sup ∅ = −∞ and inf ∅ = +∞. That is, the least upper bound (sup or supremum) of the empty set is negative infinity, while the greatest lower bound (inf or infimum) is positive infinity. By analogy with the above, in the domain of the extended reals, negative infinity is the identity element for the maximum and supremum operators, while positive infinity is the identity element for the minimum and infimum operators. Topology In any topological space X, the empty set is open by definition, as is X. Since the complement of an open set is closed and the empty set and X are complements of each other, the empty set is also closed, making it a clopen set. Moreover, the empty set is compact by the fact that every finite set is compact. The closure of the empty set is empty. This is known as "preservation of nullary unions." Category theory If A is a set, then there exists precisely one function from the empty set to A, the empty function. As a result, the empty set is the unique initial object of the category of sets and functions. The empty set can be turned into a topological space, called the empty space, in just one way: by defining the empty set to be open. This empty topological space is the unique initial object in the category of topological spaces with continuous maps. In fact, it is a strict initial object: only the empty set has a function to the empty set. Set theory In the von Neumann construction of the ordinals, 0 is defined as the empty set, and the successor of an ordinal α is defined as S(α) = α ∪ {α}. Thus, we have 1 = {∅}, 2 = {∅, {∅}}, 3 = {∅, {∅}, {∅, {∅}}}, and so on. The von Neumann construction, along with the axiom of infinity, which guarantees the existence of at least one infinite set, can be used to construct the set of natural numbers, ℕ, such that the Peano axioms of arithmetic are satisfied. Questioned existence Historical issues In the context of sets of real numbers, Cantor used P ≡ O to denote "P contains no single point". This notation was utilized in definitions; for example, Cantor defined two sets as being disjoint if their intersection has an absence of points; however, it is debatable whether Cantor viewed O as an existent set on its own, or if Cantor merely used ≡ O as an emptiness predicate. Zermelo accepted the empty set itself as a set, but considered it an "improper set".
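The identities listed under Properties and Operations above can be checked directly in any language with a set type; here is a quick Python sketch (the choice of Python and of the sample set A are illustrative only):

```python
import math
from itertools import product

empty = frozenset()
A = frozenset({1, 2, 3})

assert A | empty == A                    # union of A with the empty set is A
assert A & empty == empty                # intersection with the empty set is empty
assert set(product(A, empty)) == set()   # Cartesian product with the empty set is empty
assert empty <= A                        # the empty set is a subset of any set
assert len(empty) == 0                   # cardinality zero
assert sum(empty) == 0                   # empty sum: identity element of addition
assert math.prod(empty) == 1             # empty product: identity element of multiplication
assert all(x > 9 and x < 8 for x in empty)  # any property holds vacuously
```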
Axiomatic set theory In Zermelo set theory, the existence of the empty set is assured by the axiom of empty set, and its uniqueness follows from the axiom of extensionality. However, the axiom of empty set can be shown redundant in at least two ways: Standard first-order logic implies, merely from the logical axioms, that something exists, and in the language of set theory, that thing must be a set. Now the existence of the empty set follows easily from the axiom of separation. Even using free logic (which does not logically imply that something exists), there is already an axiom implying the existence of at least one set, namely the axiom of infinity. Philosophical issues While the empty set is a standard and widely accepted mathematical concept, it remains an ontological curiosity, whose meaning and usefulness are debated by philosophers and logicians. The empty set is not the same thing as nothing; rather, it is a set with nothing inside it, and a set is always something. This issue can be overcome by viewing a set as a bag—an empty bag undoubtedly still exists. Darling (2004) explains that the empty set is not nothing, but rather "the set of all triangles with four sides, the set of all numbers that are bigger than nine but smaller than eight, and the set of all opening moves in chess that involve a king." The popular syllogism Nothing is better than eternal happiness; a ham sandwich is better than nothing; therefore, a ham sandwich is better than eternal happiness is often used to demonstrate the philosophical relation between the concept of nothing and the empty set. Darling writes that the contrast can be seen by rewriting the statements "Nothing is better than eternal happiness" and "[A] ham sandwich is better than nothing" in a mathematical tone. According to Darling, the former is equivalent to "The set of all things that are better than eternal happiness is ∅" and the latter to "The set {ham sandwich} is better than the set ∅". The first compares elements of sets, while the second compares the sets themselves. Jonathan Lowe argues that while the empty set "was undoubtedly an important landmark in the history of mathematics, … we should not assume that its utility in calculation is dependent upon its actually denoting some object", it is also the case that: "All that we are ever informed about the empty set is that it (1) is a set, (2) has no members, and (3) is unique amongst sets in having no members. However, there are very many things that 'have no members', in the set-theoretical sense—namely, all non-sets. It is perfectly clear why these things have no members, for they are not sets. What is unclear is how there can be, uniquely amongst sets, a set which has no members. We cannot conjure such an entity into existence by mere stipulation." George Boolos argued that much of what has been heretofore obtained by set theory can just as easily be obtained by plural quantification over individuals, without reifying sets as singular entities having other entities as members. See also References Further reading Halmos, Paul, Naive Set Theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974. (Springer-Verlag edition). Reprinted by Martino Fine Books, 2011. (paperback edition). External links Basic concepts in set theory 0 (number)
Empty set
Mathematics
2,233
2,984,167
https://en.wikipedia.org/wiki/Fixed-satellite%20service
Fixed-satellite service (FSS, or fixed-satellite radiocommunication service) is – according to article 1.21 of the International Telecommunication Union's (ITU) Radio Regulations (RR) – defined as A radiocommunication service between earth stations at given positions, when one or more satellites are used; the given position may be a specified fixed point or any fixed point within specified areas; in some cases this service includes satellite-to-satellite links, which may also be operated in the inter-satellite service; the fixed-satellite service may also include feeder links for other space radiocommunication services. Classification This radiocommunication service is classified in accordance with the ITU Radio Regulations (article 1) as follows: Fixed service (article 1.20) Fixed-satellite service (article 1.21) Inter-satellite service (article 1.22) Earth exploration-satellite service (article 1.51) Meteorological-satellite service (article 1.52) Frequency allocation The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (most recent version, Edition of 2020). In order to improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which are within the responsibility of the appropriate national administration. An allocation might be primary, secondary, exclusive, or shared. primary allocation: is indicated by writing in capital letters (see example below) secondary allocation: is indicated by small letters exclusive or shared utilization: is within the responsibility of administrations Example of frequency allocation Use in North America FSS is also the official classification (used chiefly in North America) for geostationary communications satellites that provide broadcast feeds to television stations, radio stations and broadcast networks. FSSs also transmit information for telephony, telecommunications, and data communications. References Radiocommunication services ITU Satellite broadcasting
Fixed-satellite service
Engineering
395
42,593,487
https://en.wikipedia.org/wiki/HWB%20color%20model
HWB (Hue, Whiteness, Blackness) is a cylindrical-coordinate representation of points in an RGB color model, similar to HSL and HSV. It was developed by HSV’s creator Alvy Ray Smith in 1996 to address some of the issues with HSV. HWB was designed to be more intuitive for humans to use and slightly faster to compute. The first coordinate, H (Hue), is the same as the Hue coordinate in HSL and HSV. W and B stand for Whiteness and Blackness respectively and range from 0–100% (or 0–1). The mental model is that the user can pick a main hue and then “mix” it with white and/or black to produce the desired color. HWB was included in the CSS Level 4 Color Module in 2014. Conversion HWB is very closely related to HSV, and therefore the conversion formulas are fairly simple. Before conversion from HWB, if the sum of whiteness and blackness exceeds 100%, both components must be scaled back proportionally to make the sum 100%. Swatches The CSS Color Level 4 draft specification includes a number of HWB example color swatches. References Color space
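Since the article notes that the conversions are simple but does not spell them out, here is a minimal Python sketch of the HWB↔RGB round trip, built on the standard HWB/HSV relationships (V = 1 − B, S = 1 − W/V) and the stdlib colorsys module; this is an illustrative implementation, not the CSS reference algorithm.

```python
import colorsys

def hwb_to_rgb(h, w, b):
    """h in degrees [0, 360); w, b in [0, 1]. Returns (r, g, b) in [0, 1]."""
    if w + b > 1:                        # scale back proportionally, per the spec
        w, b = w / (w + b), b / (w + b)
    v = 1.0 - b                          # value: distance from black
    s = 0.0 if v == 0 else 1.0 - w / v   # saturation: how little white is mixed in
    return colorsys.hsv_to_rgb(h / 360.0, s, v)

def rgb_to_hwb(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0, (1.0 - s) * v, 1.0 - v

print(hwb_to_rgb(0, 0, 0))        # red hue, no white/black mixed in -> (1.0, 0.0, 0.0)
print(rgb_to_hwb(0.5, 0.5, 0.5))  # mid gray -> hue irrelevant, W = 0.5, B = 0.5
```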
HWB color model
Mathematics
253
30,535,968
https://en.wikipedia.org/wiki/OMEGA%20process
The OMEGA process ("Only MEG Advantage") is a chemical process discovered by the Shell Global Solutions company that is used to produce ethylene glycol from ethylene. This process comprises two steps, the controlled oxidation of ethylene to ethylene oxide, and the net hydrolysis of ethylene oxide to monoethylene glycol (MEG). The first chemical plant using the OMEGA process was started in South Korea. Subsequent OMEGA plants have been started in Saudi Arabia and Singapore. Shell claims that this process, compared to conventional ones, does not produce higher glycols, uses less steam and water, and generates less waste. Ethylene oxidation To produce ethylene oxide, ethylene is oxidized with dioxygen in the presence of a silver catalyst. Some ethylene is over-oxidized to carbon dioxide and water, which is wasteful; early processes only gave ~ 65% selectivity for ethylene oxide as a result. In the OMEGA process, over-oxidation is reduced by including ethyl chloride as a moderator. Ethylene oxide hydrolysis Conventionally, monoethylene glycol (HOC2H4OH) is produced by the controlled hydrolysis of ethylene oxide (C2H4O). The monoethylene glycol product is also able to react with ethylene oxide to give diethylene glycol, and so on; sequential reaction with ethylene oxide is how poly(ethylene glycol) is produced. Due to monoethylene glycol's high boiling point, purification by distillation is energy intensive. C2H4O + H2O → HOC2H4OH In the OMEGA process, the ethylene oxide reacts with carbon dioxide (CO2) to yield ethylene carbonate (C3H4O3). Ethylene carbonate is subsequently hydrolyzed to monoethylene glycol and carbon dioxide. The carbon dioxide is released in this step again and can be fed back into the process circuit. This process is 99.5% selective for monoethylene glycol. C2H4O + CO2 → C3H4O3 C3H4O3 + H2O → HOC2H4OH + CO2 This part of the OMEGA process was originally developed by Mitsubishi Chemicals, and it has been exclusively licensed to Shell. See also Carbonate ester References Chemical processes Carbonate esters Diols
OMEGA process
Chemistry
517
396,482
https://en.wikipedia.org/wiki/Victor%20Francis%20Hess
Victor Franz Hess (; 24 June 1883 – 17 December 1964) was an Austrian-American physicist who shared the 1936 Nobel Prize in Physics with Carl David Anderson "for his discovery of cosmic radiation". Biography He was born to Vinzenz Hess and Serafine Edle von Grossbauer-Waldstätt, in Waldstein Castle, near Peggau in Styria, Austria, on 24 June 1883. His father was a royal forester in Prince Louis of Oettingen-Wallerstein's service. He attended secondary school at Graz Gymnasium from 1893 to 1901. From 1901 to 1905, Hess was an undergraduate student at the University of Graz. In 1910, Hess received his PhD from the University of Vienna. He worked as an assistant under Stefan Meyer at the Institute for Radium Research, Austrian Academy of Sciences, from 1910 to 1920. In 1920, he married Marie Bertha Warner Breisky. Hess took a leave of absence in 1921 and traveled to the United States, working at the United States Radium Corporation, in New Jersey, and as a consulting physicist for the US Bureau of Mines, in Washington, D.C. In 1923, he returned to the University of Graz, and was appointed ordinary professor of experimental physics in 1925. The University of Innsbruck appointed him professor, and director of the Institute of Radiology, in 1931. Hess relocated to the United States with his Jewish wife in 1938, in order to escape Nazi persecution. The same year Fordham University appointed him professor of physics, and he later became a naturalized United States citizen in 1944. His wife died of cancer in 1955. The same year he married Elizabeth M. Hoenke, the woman who had nursed Bertha at the end of her life. He was a practicing Roman Catholic, and in 1946, he wrote on the topic of the relationship between science and religion in his article "My Faith", in which he explained why he believed in God. He retired from Fordham University in 1958 and died on 17 December 1964, in Mount Vernon, New York, from Parkinson's disease. Cosmic rays Between 1911 and 1913, Hess undertook the work that won him the Nobel Prize in Physics in 1936. For many years, scientists had been puzzled by the levels of ionizing radiation measured in the atmosphere. The assumption at the time was that the radiation would decrease as the distance from the earth, the then assumed source of the radiation, increased. The electroscopes previously used gave an approximate measurement of the radiation but indicated that at greater altitude in the atmosphere the level of radiation might actually be higher than that on the ground. Hess approached this mystery first by greatly increasing the precision of the measuring equipment, and then by personally taking the equipment aloft in a balloon. He systematically measured the radiation at altitudes up to 5.3 km during 1911–1912. The daring flights were made both by day and during the night, at significant risk to himself. The result of Hess's meticulous work was published in the Proceedings of the Austrian Academy of Sciences, and showed that the level of radiation decreased up to an altitude of about 1 km, but above that the level increased considerably, with the radiation detected at 5 km being about twice that at sea level. His conclusion was that there was radiation penetrating the atmosphere from outer space, and his discovery was confirmed by Robert Andrews Millikan in 1925, who gave the radiation the name "cosmic rays". Hess's discovery opened the door to many new discoveries in particle and nuclear physics.
In particular, both the positron and the muon were first discovered in cosmic rays by Carl David Anderson. Hess and Anderson shared the 1936 Nobel Prize in Physics. Honours and awards Lieben Prize (1919) Abbe Memorial Prize Abbe Medal of the Carl Zeiss Institute, Jena (1932) Nobel Prize in Physics (1936) Austrian Decoration for Science and Art (1959) A lunar crater is named after Hess Publications References External links Biography and Nobel lecture Victor Francis Hess – Not only the Discoverer of the Cosmic Radiation Diploma thesis (in German) 1883 births 1964 deaths 20th-century American physicists People from Graz-Umgebung District American Nobel laureates Austrian emigrants to the United States Austrian Nobel laureates Nobel laureates from Austria-Hungary Austrian Roman Catholics Experimental physicists Fordham University faculty Nobel laureates in Physics Recipients of the Austrian Decoration for Science and Art Emigrants from Austria after the Anschluss University of Graz alumni Academic staff of the University of Graz Academic staff of the University of Innsbruck Cosmic ray physicists Spectroscopists 20th-century Austrian physicists
Victor Francis Hess
Physics,Chemistry
918
12,197,537
https://en.wikipedia.org/wiki/Mafosfamide
Mafosfamide (INN) is an oxazaphosphorine (cyclophosphamide-like) alkylating agent under investigation as a chemotherapeutic. It is metabolized by cytochrome P450 into 4-hydroxycyclophosphamide, which is then converted into aldophosphamide, which, in turn, yields the cytotoxic metabolites phosphoramide mustard and acrolein. Several Phase I trials have been completed. References Experimental cancer drugs Oxazaphosphinans Phosphorodiamidates Nitrogen mustards Organochlorides Thioethers Sulfonic acids Chloroethyl compounds
Mafosfamide
Chemistry
151
59,603,661
https://en.wikipedia.org/wiki/Interconnect%20%28integrated%20circuits%29
In integrated circuits (ICs), interconnects are structures that connect two or more circuit elements (such as transistors) together electrically. The design and layout of interconnects on an IC is vital to its proper function, performance, power efficiency, reliability, and fabrication yield. The material interconnects are made from depends on many factors. Chemical and mechanical compatibility with the semiconductor substrate and the dielectric between the levels of interconnect is necessary, otherwise barrier layers are needed. Suitability for fabrication is also required; some chemistries and processes prevent the integration of materials and unit processes into a larger technology (recipe) for IC fabrication. In fabrication, interconnects are formed during the back-end-of-line after the fabrication of the transistors on the substrate. Interconnects are classified as local or global interconnects depending on the signal propagation distance they are able to support. The width and thickness of the interconnect, as well as the material from which it is made, are some of the significant factors that determine the distance a signal may propagate. Local interconnects connect circuit elements that are very close together, such as transistors separated by ten or so other contiguously laid out transistors. Global interconnects can transmit further, such as over large-area sub-circuits. Consequently, local interconnects may be formed from materials with relatively high electrical resistivity such as polycrystalline silicon (sometimes silicided to extend its range) or tungsten. To extend the distance an interconnect may reach, various circuits such as buffers or restorers may be inserted at various points along a long interconnect. Interconnect properties The geometric properties of an interconnect are width, thickness, spacing (the distance between an interconnect and another on the same level), pitch (the sum of the width and spacing), and aspect ratio, or AR (the thickness divided by the width); a numerical sketch of these quantities follows below. The width, spacing, AR, and ultimately, pitch, are constrained in their minimum and maximum values by design rules that ensure the interconnect (and thus the IC) can be fabricated by the selected technology with a reasonable yield. Width is constrained to ensure minimum-width interconnects do not suffer breaks, and maximum-width interconnects can be planarized by chemical mechanical polishing (CMP). Spacing is constrained to ensure adjacent interconnects can be fabricated without any conductive material bridging. Thickness is determined solely by the technology, and the aspect ratio, by the chosen width and set thickness. In technologies that support multiple levels of interconnects, each group of contiguous levels, or each level, has its own set of design rules. Before the introduction of CMP for planarizing IC layers, interconnects had design rules that specified larger minimum widths and spaces than the lower level to ensure that the underlying layer's rough topology did not cause breaks in the interconnect formed on top. The introduction of CMP has made finer geometries possible. The AR is an important factor. In technologies that form interconnect structures with conventional processes, the AR is limited to ensure that the etch creating the interconnect, and the dielectric deposition that fills the voids in between interconnects with dielectric, can be done successfully.
In those that form interconnect structures with damascene processes, the AR must permit successful etch of the trenches, deposition of the barrier metal (if needed) and interconnect material. Interconnect layouts are further constrained by design rules that apply to collections of interconnects. For a given area, technologies that rely on CMP have density rules to ensure the whole IC has an acceptable variation in interconnect density. This is because the rate at which CMP removes material depends on the material's properties, and great variations in interconnect density can result in large areas of dielectric, which can dish, resulting in poor planarity. To maintain acceptable density, dummy interconnects (or dummy wires) are inserted into regions with spare interconnect density. Historically, interconnects were routed in straight lines, and could change direction by using sections aligned 45° away from the direction of travel. As IC structure geometries became smaller, to obtain acceptable yields, restrictions were imposed on interconnect direction. Initially, only global interconnects were subject to restrictions; they were made to run in straight lines aligned east–west or north–south. To allow easy routing, alternate levels of interconnect ran in the same alignment, so that changes in direction were achieved by connecting to a lower or upper level of interconnect through a via. Local interconnects, especially the lowest level (usually polysilicon), could assume a more arbitrary combination of routing options to attain a higher packing density. Materials In silicon ICs, the most commonly used semiconductor in ICs, the first interconnects were made of aluminum. Aluminum was an ideal material for interconnects due to its ease of deposition and good adherence to silicon and silicon dioxide. Al interconnects are deposited by physical vapor deposition or chemical vapor deposition methods. They were originally patterned by wet etching, and later by various dry etching techniques. Initially, pure aluminum was used, but by the 1970s, substrate compatibility, junction spiking and reliability concerns (mostly concerning electromigration) forced the use of aluminum-based alloys containing silicon, copper, or both. By the late 1990s, the high resistivity of aluminum, coupled with the narrow widths of the interconnect structures forced by continuous feature-size downscaling, resulted in prohibitively high resistance in interconnect structures. This forced aluminum's replacement by copper interconnects. In gallium arsenide (GaAs) ICs, which have mainly been used in application domains (e.g. monolithic microwave ICs) different from those of silicon, the predominant material used for interconnects is gold. Performance enhancements To reduce the delay penalty caused by parasitic capacitance, the dielectric material used to insulate adjacent interconnects, and interconnects on different levels (the inter-level dielectric [ILD]), should have a dielectric constant that is as close to 1 as possible. A class of such materials, low-κ dielectrics, was introduced during the late 1990s and early 2000s for this purpose. As of January 2019, the most advanced materials reduce the dielectric constant to very low levels through highly porous structures, or through the creation of substantial air or vacuum pockets (air-gap dielectric). These materials often have low mechanical strength and are restricted to the lowest level or levels of interconnect as a result.
The high density of interconnects at the lower levels, along with the minimal spacing, helps support the upper layers. Intel introduced air-gap dielectric in its 14 nm technology in 2014. Multi-level interconnects ICs with complex circuits require multiple levels of interconnect to form circuits that have minimal area. As of 2018, the most complex ICs may have over 15 layers of interconnect. Each level of interconnect is separated from the others by a layer of dielectric. To make vertical connections between interconnects on different levels, vias are used. The top-most layers of a chip have the thickest, widest and most widely separated metal layers, which give the wires on those layers the least resistance and smallest RC time constant, so they are used for power and clock distribution networks. The bottom-most metal layers of the chip, closest to the transistors, have thin, narrow, tightly packed wires, used only for local interconnect. Adding layers can potentially improve performance, but adding layers also reduces yield and increases manufacturing costs. ICs with a single metal layer typically use the polysilicon layer to "jump across" when one signal needs to cross another signal. The process used to form DRAM capacitors creates a rough and hilly surface, which makes it difficult to add metal interconnect layers and still maintain good yield. In 1998, state-of-the-art DRAM processes had four metal layers, while state-of-the-art logic processes had seven metal layers. In 2002, five or six layers of metal interconnect were common. In 2009, 1 Gbit DRAM typically had three layers of metal interconnect; tungsten for the first layer and aluminum for the upper layers. See also Antenna effect Bonding pad Carbon nanotubes in interconnects Interconnect bottleneck Optical interconnect Parasitic extraction References Integrated circuits
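To tie the geometric quantities and the material/dielectric discussion together, here is a minimal Python sketch of wire pitch, aspect ratio, resistance, and the RC-delay benefit of a low-κ ILD. The wire dimensions are illustrative assumptions; the resistivities (Al ≈ 2.7×10⁻⁸ Ω·m, Cu ≈ 1.7×10⁻⁸ Ω·m) and dielectric constants (SiO2 ≈ 3.9) are standard textbook values, and the capacitance model is a crude parallel-plate estimate, not an extracted value.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pitch(width, spacing):            # pitch = width + spacing
    return width + spacing

def aspect_ratio(thickness, width):   # AR = thickness / width
    return thickness / width

def resistance(rho, length, width, thickness):
    return rho * length / (width * thickness)   # R = rho * L / A

def coupling_cap(kappa, length, thickness, spacing):
    # Parallel-plate estimate of sidewall coupling capacitance to one neighbor.
    return kappa * EPS0 * length * thickness / spacing

# Illustrative geometry: a 1 mm wire, 100 nm wide and spaced, 200 nm thick (AR = 2).
W = S = 100e-9; T = 200e-9; L = 1e-3
print("pitch:", pitch(W, S), "m, AR:", aspect_ratio(T, W))

for name, rho in (("Al", 2.7e-8), ("Cu", 1.7e-8)):
    print(name, "R =", round(resistance(rho, L, W, T)), "ohm")

for name, kappa in (("SiO2", 3.9), ("low-k", 2.7)):
    rc = resistance(1.7e-8, L, W, T) * coupling_cap(kappa, L, T, S)
    print(name, f"RC ~ {rc:.2e} s")   # delay scales linearly with kappa
```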
Interconnect (integrated circuits)
Technology,Engineering
1,855
52,288,281
https://en.wikipedia.org/wiki/Cephalodiplosporium%20elegans
Cephalodiplosporium elegans is a species of fungus in the Nectriaceae. References Nectriaceae Fungi described in 1961 Fungus species
Cephalodiplosporium elegans
Biology
34
36,983,565
https://en.wikipedia.org/wiki/Nigel%20Boston
Nigel Boston (July 20, 1961 – March 31, 2024) was a British-American mathematician who made notable contributions to algebraic number theory, group theory, and arithmetic geometry. Biography Boston attended Harvard University, earning his doctorate in 1987 under the supervision of Barry Mazur. He was a professor emeritus at the University of Wisconsin–Madison. In 2012, he became a fellow of the American Mathematical Society. Boston died on March 31, 2024, at the age of 62. References External links 1961 births 2024 deaths 20th-century American mathematicians 21st-century American mathematicians Harvard University alumni University of Wisconsin–Madison faculty Fellows of the American Mathematical Society Algebraists Group theorists
Nigel Boston
Mathematics
137
21,489,968
https://en.wikipedia.org/wiki/Table%20data%20gateway
Table Data Gateway is a design pattern in which an object acts as a gateway to a database table. The idea is to separate the responsibility of fetching items from a database from the actual usages of those objects. Users of the gateway are then insulated from changes to the way objects are stored in the database. References Software design patterns
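As a concrete illustration of the pattern, here is a minimal Python sketch using the stdlib sqlite3 module; the person table and its columns are hypothetical examples chosen for the sketch, not taken from the source.

```python
import sqlite3

class PersonGateway:
    """Table Data Gateway: a single object through which all SQL for the
    'person' table flows; callers never touch the table directly."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn

    def find(self, person_id: int):
        return self.conn.execute(
            "SELECT id, name FROM person WHERE id = ?", (person_id,)).fetchone()

    def find_all(self):
        return self.conn.execute("SELECT id, name FROM person").fetchall()

    def insert(self, name: str) -> int:
        cur = self.conn.execute("INSERT INTO person (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def update(self, person_id: int, name: str) -> None:
        self.conn.execute("UPDATE person SET name = ? WHERE id = ?", (name, person_id))
        self.conn.commit()

    def delete(self, person_id: int) -> None:
        self.conn.execute("DELETE FROM person WHERE id = ?", (person_id,))
        self.conn.commit()

# Usage: application code talks to the gateway, never to SQL directly, so a
# change to how people are stored only requires editing the gateway class.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
people = PersonGateway(conn)
pid = people.insert("Ada")
print(people.find(pid))   # (1, 'Ada')
```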
Table data gateway
Technology
69
577,446
https://en.wikipedia.org/wiki/Satellite%20DNA
Satellite DNA consists of very large arrays of tandemly repeating, non-coding DNA. Satellite DNA is the main component of functional centromeres, and forms the main structural constituent of heterochromatin. The name "satellite DNA" refers to the phenomenon that repetitions of a short DNA sequence tend to produce a different frequency of the bases adenine, cytosine, guanine, and thymine, and thus have a different density from bulk DNA, such that they form a second or "satellite" band(s) when genomic DNA is separated along a cesium chloride density gradient using buoyant density centrifugation. Sequences with a greater ratio of A+T display a lower density while those with a greater ratio of G+C display a higher density than the bulk of genomic DNA. Some repetitive sequences are ~50% G+C/A+T and thus have buoyant densities the same as bulk genomic DNA. These satellites are called "cryptic" satellites because they form a band hidden within the main band of genomic DNA. "Isopycnic" is another term used for cryptic satellites. Satellite DNA families in humans Satellite DNA, together with minisatellite and microsatellite DNA, constitutes the tandem repeats. The size of satellite DNA arrays varies greatly between individuals. The major satellite DNA families in humans include the α (alphoid) satellite, the β satellite, and satellites 1–3. Length A repeated pattern can be between 1 base pair (bp) long (a mononucleotide repeat) and several thousand base pairs long, and the total size of a satellite DNA block can be several megabases without interruption. Long repeat units have been described containing domains of shorter repeated segments and mononucleotides (1-5 bp), arranged in clusters of microsatellites, wherein differences among individual copies of the longer repeat units were clustered. Most satellite DNA is localized to the telomeric or the centromeric region of the chromosome. The nucleotide sequence of the repeats is fairly well conserved across species. However, variation in the length of the repeat is common. Low-resolution sequencing-based studies have demonstrated variation in satellite array lengths across human populations, as well as in the frequency of certain sequence and structural variations. However, due to a lack of full centromere assemblies, base-level understanding of satellite array variation and evolution has remained weak. For example, minisatellite DNA is a short region (1-5 kb) of repeating elements with length >9 nucleotides, whereas microsatellites in DNA sequences are considered to have a length of 1-8 nucleotides. The difference in how many of the repeats is present in the region (the length of the region) is the basis for DNA profiling. Origin Microsatellites are thought to have originated by polymerase slippage during DNA replication. This comes from the observation that microsatellite alleles usually are length-polymorphic; specifically, the length differences observed between microsatellite alleles are generally multiples of the repeat unit length. Structure Satellite DNA can adopt higher-order three-dimensional structures, as shown for a naturally occurring complex satellite DNA from the land crab Gecarcinus lateralis, whose genome contains a GC-rich satellite band, comprising 3% of the genome, that consists of a ~2100 bp "repeat unit" sequence motif called RU. The RU was arranged in long tandem arrays with approximately 16,000 copies per genome. Several RU sequences were cloned and sequenced to reveal conserved regions of conventional DNA sequences over stretches greater than 550 bp, interspersed with five "divergent domains" within each copy of RU.
Four divergent domains consisted of microsatellite repeats, biased in base composition, with purines on one strand and pyrimidines on the other. Some contained mononucleotide repeats of C:G base pairs approximately 20 bp in length. These strand-biased microsatellite domains ranged in length from approximately 20 bp to greater than 250 bp. The most prevalent repeated sequences in the embedded microsatellite regions were CT:AG, CCT:AGG, CCCT:AGGG, and CGCAC:GTGCG. These repeating sequences were shown to adopt altered structures including triple-stranded DNA, Z-DNA, stem-loop, and other conformations under superhelical stress. Between the strand-biased microsatellite repeats and C:G mononucleotide repeats, all sequence variations retained one or two base pairs with A (purine) interrupting the pyrimidine-rich strand and T (pyrimidine) interrupting the purine-rich strand. These interruptions in compositional bias adopted highly distorted conformations, as shown by their response to structural nuclease enzymes including S1, P1, and mung bean nucleases. The most complex compositionally biased microsatellite domain of RU included the sequence TTAA:TTAA as well as a mirror repeat. It produced the strongest signal in response to nucleases compared to all other altered structures in experimental observations. That particular strand-biased divergent domain was subcloned and its altered helical structure was studied in greater detail. A fifth divergent domain in the RU sequence was characterized by variations of a symmetrical DNA sequence motif of alternating purines and pyrimidines shown to adopt a left-handed Z-DNA or stem-loop structure under superhelical stress. The conserved symmetrical Z-DNA was abbreviated Z4Z5NZ15NZ5Z4, where Z represents alternating purine/pyrimidine sequences. A stem-loop structure was centered in the Z15 element at the highly conserved palindromic sequence CGCACGTGCG:CGCACGTGCG and was flanked by extended palindromic Z-DNA sequences over a 35 bp region. Many RU variants showed deletions of at least 10 bp outside the Z4Z5NZ15NZ5Z4 structural element, while others had additional Z-DNA sequences lengthening the alternating purine and pyrimidine domain to over 50 bp. One extended RU sequence (EXT) was shown to have six tandem copies of a 142 bp amplified (AMPL) sequence motif inserted into a region bordered by inverted repeats, where most copies contained just one AMPL sequence element. There were no nuclease-sensitive altered structures or significant sequence divergence in the relatively conventional AMPL sequence. A truncated RU sequence (TRU), 327 bp shorter than most clones, arose from a single base change leading to a second EcoRI restriction site in TRU. Another crab, the hermit crab Pagurus pollicaris, was shown to have a family of AT-rich satellites with inverted repeat structures that comprised 30% of the entire genome. Another cryptic satellite from the same crab, with the sequence CCTA:TAGG, was found inserted into some of the palindromes. See also Buoyant density centrifugation DNA profiling DNA supercoil Eukaryotic chromosome fine structure Gene expression Polymerase chain reaction Tengiz Beridze, scientist who discovered satellite DNA in plants References Further reading External links Search tools: SERF De Novo Genome Analysis and Tandem Repeats Finder TRF Tandem Repeats Finder DNA Repetitive DNA sequences
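The buoyant-density behavior described at the top of this article can be made quantitative with the classic empirical relation of Schildkraut, Marmur and Doty (1962) for native DNA in CsCl, ρ ≈ 1.660 + 0.098·(G+C fraction) g/cm³. The Python sketch below applies it to repeat motifs taken from the article; the array lengths and the bulk GC figure are illustrative assumptions.

```python
def gc_fraction(seq: str) -> float:
    """Fraction of G+C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def buoyant_density(gc: float) -> float:
    """Empirical CsCl buoyant density (g/cm^3), Schildkraut et al. (1962)."""
    return 1.660 + 0.098 * gc

bulk_gc = 0.42  # rough figure for bulk genomic DNA, assumed for illustration
for name, repeat in [("AT-rich satellite", "TTAA"),
                     ("cryptic satellite", "CCTA"),   # 50% G+C, from the article
                     ("GC-rich satellite", "CGCAC")]:
    arr = repeat * 1000  # a long tandem array of the repeat
    gc = gc_fraction(arr)
    print(f"{name}: GC = {gc:.2f}, density = {buoyant_density(gc):.4f} g/cm^3")

print(f"bulk DNA:  GC = {bulk_gc:.2f}, density = {buoyant_density(bulk_gc):.4f} g/cm^3")
# The AT-rich array bands above (lighter than) bulk DNA, the GC-rich array below
# it, and the ~50% G+C "cryptic" satellite lands close to the main band, as the
# article notes.
```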
Satellite DNA
Biology
1,490
10,914,584
https://en.wikipedia.org/wiki/Luciano%20Maiani
Luciano Maiani (born 16 July 1941) is a Sammarinese physicist. He is best known for his prediction of the charm quark with Sheldon Glashow and John Iliopoulos (the "GIM mechanism"). Academic history In 1964 Luciano Maiani received his degree in physics and became a research associate at the Istituto Superiore di Sanità in Italy. During that same year he collaborated with Raoul Gatto's theoretical physics group at the University of Florence. He crossed the Atlantic in 1969 to do a post-doctoral fellowship at Harvard University's Lyman Laboratory of Physics. In 1976 Maiani became a professor of theoretical physics at the University of Rome; however, he traveled widely during this period, holding visiting professorships at the École normale supérieure in Paris (1977) and CERN (1979–1980 and 1985–1986). Maiani also took an interest in the direction of particle physics research, serving on CERN's Scientific Policy Committee from 1984 to 1991. Then, in 1993, he became president of Italy's Istituto Nazionale di Fisica Nucleare (INFN). From 1993 to 1996 Maiani served as a scientific delegate to the CERN Council and then as that council's president in 1997. Thereafter he became director general of CERN, serving from 1 January 1999 through the end of 2003. From 1995 to 1997 Maiani chaired the Italian Comitato Tecnico Scientifico, Fondo Ricerca Applicata. At the end of 2007 he was proposed as president of the Consiglio Nazionale delle Ricerche (CNR), but his nomination was temporarily suspended after he signed a letter criticizing the rector of 'La Sapienza' University in Rome, who had invited Pope Benedict XVI to give a lectio magistralis in 2008. He nevertheless became president of the CNR in 2008. As of September 2020, he is a member of the Italian Aspen Institute. Research Luciano Maiani has authored over 100 scientific publications on the theory of elementary particles, often with several co-authors. In 1970 he predicted the charm quark in a paper with Glashow and Iliopoulos; the quark was discovered at SLAC and Brookhaven in 1974, a discovery that led to a Nobel Prize in Physics for the discoverers. Working with Guido Altarelli in 1974, they explained that the observed octet enhancement in weak non-leptonic decays was due to a leading gluon-exchange effect in quantum chromodynamics. They later extended this effect to describe the weak non-leptonic decays of charm and bottom quarks as well, and also produced a parton-model description of heavy-flavor weak decays. In 1976 Maiani analyzed CP violation in the six-quark theory and predicted the very small electric dipole moment of the neutron. In the 1980s he started using numerical simulation of lattice QCD, and this led to the first prediction of the decay constant of pseudoscalar charmed mesons and of B mesons. A proponent of supersymmetry, Maiani once said that the search for it was the "primary goal of modern particle physics". He has not confined his interest to the theoretical side of physics either, with involvement in ALPI, EUROBALL, DAFNE, VIRGO and the LHC.
Honors and awards 1979 Matteucci Medal, Accademia Nazionale dei XL 1987 Sakurai Prize of the American Physical Society 1996 Doctor honoris causa, Université de la Méditerranée, Aix-Marseille 2007 Dirac Medal, Abdus Salam International Centre for Theoretical Physics, Trieste, Italy 2010 Doctor honoris causa, Benemérita Universidad Autónoma de Puebla, Puebla, México 2013 Bruno Pontecorvo Prize by the Joint Institute for Nuclear Research, Dubna, Russia See also GIM mechanism External links Scientific publications of Luciano Maiani on INSPIRE-HEP References 1941 births People associated with CERN Living people Sammarinese physicists Particle physicists Experimental physicists Experimental particle physics Academic staff of the Sapienza University of Rome Foreign fellows of Pakistan Academy of Sciences Theoretical physicists Members of the European Academy of Sciences and Arts Foreign members of the Russian Academy of Sciences J. J. Sakurai Prize for Theoretical Particle Physics recipients Recipients of the Matteucci Medal National Research Council (Italy) people Fellows of the American Physical Society
Luciano Maiani
Physics
884
62,296,569
https://en.wikipedia.org/wiki/Transient%20Array%20Radio%20Telescope
The Transient Array Radio Telescope (TART) is a low-cost open-source array radio telescope consisting of 24 all-sky GNSS receivers operating in the L1 band (1.575 GHz). TART was designed as an all-sky survey instrument for detecting radio bursts, as well as providing a test-bed for the development of new synthesis imaging and calibration algorithms. All of the telescope hardware, including radio receivers, correlators and operating software, is open source. A TART-2 radio telescope can be built for approximately 1000 Euros, and the telescope antenna array requires a 4 m × 4 m area for deployment. The TART project is managed by the Electronics Research Foundation, a non-profit based in New Zealand. Design All of the components of TART, from the hardware and FPGA firmware to the operation and imaging software, are open source, released under the GPLv3 license. A TART radio telescope consists of four main sub-assemblies: the antenna array, the RF front end, the radio hub and the basestation. Antenna array The antenna array consists of 24 antennas arranged on four identical 'tiles' with 6 antennas each. Each tile is a 1 m × 1 m square. The antennas used are low-cost, widely available commercial GPS active antennas. More recent installations use multi-arm antenna arrays in either a three-arm 'Y' configuration or a five-arm star configuration. RF front end The radio frequency (RF) front ends receive the radio signals from each antenna. The RF front ends take advantage of low-cost, widely available, and very sensitive integrated circuits developed for global positioning satellite receivers. The TART uses the MAX2769C Universal GNSS Receiver made by Maxim Integrated. This single integrated circuit includes all the elements required of a radio-telescope receiver: low-noise amplifier, local oscillator, mixer, filters and an ADC. Each RF front end generates a data-stream of digitized radio signals with 2.5 MHz bandwidth from the GPS L1 band (1.57542 GHz). Radio hub The TART contains four radio hubs. Each has six RF front end receivers and clock distribution circuitry. Each radio hub sends data to the basestation, and receives the master clock signal from the basestation, over two standard Cat 6 twisted-pair Ethernet cables. Basestation The basestation is a single PCB with an attached Raspberry Pi computer and a Papilio Pro FPGA daughter board. The basestation provides the 16.3767 MHz crystal oscillator which is distributed to the four radio hubs to provide synchronous clocking to the RF front ends. The data is returned from the radios via each radio hub to the basestation, consisting of 24 parallel streams of 1-bit samples. An FPGA processes these samples, acting as a radio correlator. The 276 correlations are sent to the Raspberry Pi host via SPI, and made available over a RESTful API. Indicative Budget The component cost of a TART-2 telescope is approximately 1000 Euros. In addition, a mounting for the antenna array is required. These can take several forms, with recent TART-2 telescopes using a multi-arm layout which allows for easy adjustment of antenna positions; these mountings can cost approximately 1000 Euros depending on where parts are sourced. Software The TART telescope operating software is open-source and written in Python. It consists of several modules: A hardware driver that reads data from the telescope, via an SPI bus from the FPGA on the basestation. A RESTful API server that makes this data available via HTTP. This runs on the Raspberry Pi computer attached to the basestation.
Software that performs aperture synthesis imaging based on the measurements. Aperture synthesis imaging The TART telescope can perform aperture-synthesis imaging of the whole sky in real-time. To do this, the data from each of the 24 antennas is correlated with the data from every other antenna, forming a complex interferometric visibility. There are 276 unique pairs of antennas, and therefore 276 unique complex visibility measurements. From these measurements, an image of the radio emission from the sky can be formed. This process is called aperture synthesis imaging. In the TART, the imaging is normally done using a browser-based imaging pipeline. Three different pipelines have been written to date: The browser-based control panel for the telescope, distributed as part of the TART archive, can perform basic imaging. A lightweight imaging-only pipeline written by Max Scheel. A research project from Stellenbosch University written by Jason Jackson. Development TART was developed by a team from the Department of Physics at the University of Otago, starting in 2013 with TART-1; as of July 2019, TART-3 was under development. TART-1 Development started in 2013 with TART-1, an M.Sc. project developing a 6-element proof-of-concept radio interferometer. TART-2/2.1 TART-1 was followed by TART-2, which was the focus of a Ph.D. research project. TART-2 consists of 24 elements and is capable of continuous all-sky imaging, with the 'first light' image being taken in August 2015. TART-2 was upgraded into TART-2.1 with reduced costs and improved clock stability. TART-2.1 started operation in 2018. TART-2 includes real-time correlation of the radio data from every pair of antennas. This correlation is carried out in the FPGA. There are 276 pairs of antennas, leading to 276 complex visibilities being calculated, which are used as inputs to the synthesis imaging process. These visibilities are made available via the RESTful API for live imaging or downloading for further analysis. TART-3 TART-3 started development in 2019. A TART-3 telescope will consist of 1-4 radio hubs, each with 24 receivers. The maximum number of receivers in a single telescope increases to 96. TART-3 is designed to reduce construction costs and simplify installation. TART Installations There are currently four operational TART telescopes, with seven more planned during 2025 as part of an initiative sponsored by the South African Radio Astronomy Observatory. The operational telescopes are shown in the table below: References External links TART project website TART Project Github Organization TART-2 Github Repository TART VUER Source code Gitlab Repository for the improved telescope web interface Live images Mirror using the TART VUER interface. Telescopes
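The pair-counting and correlation step lends itself to a short illustration. The following is a minimal sketch, not the actual TART pipeline: the antenna streams are random placeholder data, and forming the quadrature component with a Hilbert transform is an assumption standing in for what the FPGA correlator does in hardware.

```python
import numpy as np
from itertools import combinations
from scipy.signal import hilbert

N_ANT = 24  # TART-2 antenna count

# Placeholder 1-bit sample streams, one row per antenna; in the real
# telescope these arrive from the RF front ends via the radio hubs.
rng = np.random.default_rng(0)
samples = rng.choice([-1.0, 1.0], size=(N_ANT, 4096))

# One visibility per unique antenna pair: C(24, 2) = 24*23/2 = 276.
pairs = list(combinations(range(N_ANT), 2))
assert len(pairs) == 276

# Approximate the complex correlation using the analytic signal.
analytic = hilbert(samples, axis=1)
visibilities = {(i, j): np.mean(analytic[i] * np.conj(analytic[j]))
                for i, j in pairs}
print(len(visibilities))  # 276 complex visibilities
```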
Transient Array Radio Telescope
Astronomy
1,364
49,214,673
https://en.wikipedia.org/wiki/Agrocybe%20procera
Agrocybe procera is a species of agaric fungus in the family Strophariaceae. Found in Chile, it was described as new to science by mycologist Rolf Singer in 1969. References Fungi described in 1969 Fungi of Chile Strophariaceae Taxa named by Rolf Singer Fungus species
Agrocybe procera
Biology
63
35,044,818
https://en.wikipedia.org/wiki/C36H53N7O6
The molecular formula C36H53N7O6 (molar mass: 679.85 g/mol, exact mass: 679.4057 u) may refer to: Difelikefalin Telaprevir Molecular formulas
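The two quoted masses can be reproduced from per-element mass tables. A minimal sketch follows; the atomic weights used are abridged standard atomic weights and the exact masses are those of each element's most abundant isotope, so the last digits depend on which tabulation is assumed.

```python
# Molar mass and monoisotopic (exact) mass of C36H53N7O6.
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}
ISOTOPIC_MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

formula = {"C": 36, "H": 53, "N": 7, "O": 6}

molar_mass = sum(n * ATOMIC_WEIGHT[el] for el, n in formula.items())
exact_mass = sum(n * ISOTOPIC_MASS[el] for el, n in formula.items())

print(f"molar mass: {molar_mass:.2f} g/mol")  # ~679.85 g/mol
print(f"exact mass: {exact_mass:.4f} u")      # ~679.4057 u
```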
C36H53N7O6
Physics,Chemistry
68
38,399,162
https://en.wikipedia.org/wiki/Duplex%20strainers
A duplex strainer, or twin basket strainer, is a type of filter built into a fuel, oil or water piping system, used to remove large particles of dirt and debris. The duplex strainer system usually consists of two separate strainer basket housings. The system also contains a valve handle placed between the two baskets to divert the flow of liquid to one strainer while the other is being cleaned. On some strainers, the valve will work automatically and the strainer will perform a self-cleaning operation. These types of strainers are installed in pipeline systems where flow cannot be stopped. Depending upon their nominal bore (NB) size, they are capable of filtration down to 40 μm. Basket strainers find use in industries where impurities are mostly solids. Unlike other types of strainers, it is easy to conduct maintenance on these strainers. Duplex strainers are mainly used in various industries such as the process industry, power industry, chemical industry, oil and gas industry, pulp and paper industry, pharmaceutical industry, metals and mining industry, water and waste management, fire fighting industry, refineries and petrochemical plants. Strainers are used to remove hazardous elements that might cause partial or complete breakdown of operations if they get into the system. References Water filters Water treatment Water industry Petroleum industry
Duplex strainers
Chemistry,Engineering,Environmental_science
265
1,516,049
https://en.wikipedia.org/wiki/Pylon%20%28architecture%29
A pylon is a monumental gate of an Egyptian temple (Egyptian: bxn.t in the Manuel de Codage transliteration). The word comes from the Greek term for 'gate'. It consists of two pyramidal towers, each tapered and surmounted by a cornice, joined by a less elevated section enclosing the entrance between them. The gate was generally about half the height of the towers. Contemporary paintings of pylons show them with long poles flying banners. Egyptian architecture In ancient Egyptian religion, the pylon mirrored the hieroglyph akhet 'horizon', which was a depiction of two hills "between which the sun rose and set". Consequently, it played a critical role in the symbolic architecture of a building associated with the place of re-creation and rebirth. Pylons were often decorated with scenes emphasizing a king's authority, since the pylon was the public face of a building. On the first pylon of the temple of Isis at Philae, the pharaoh is shown slaying his enemies while Isis, Horus and Hathor look on. Other examples of pylons can be seen in Karnak, Luxor Temple and Edfu. Rituals to the god Amun were often carried out on the top of temple pylons. A pair of obelisks usually stood in front of a pylon. In addition to standard vertical grooves on the exterior face of a pylon wall, which were designed to hold flag poles, some pylons also contained internal stairways and rooms. The oldest intact pylons belong to mortuary temples from the Ramesside period in the 13th and 12th centuries BCE. Revival architecture Both Neoclassical and Egyptian Revival architecture employ the pylon form, with Boodle's gentlemen's club in London being an example of the Neoclassical style. The 19th and 20th centuries saw pylon architecture employed for bridges such as the Sydney Harbour Bridge and as stand-alone monuments such as the Patcham Pylon in Brighton and Hove, England. Gallery See also Ancient Egyptian architecture Column References External links Architectural elements Gates
Pylon (architecture)
Technology,Engineering
429
923,556
https://en.wikipedia.org/wiki/Classifying%20space
In mathematics, specifically in homotopy theory, a classifying space BG of a topological group G is the quotient of a weakly contractible space EG (i.e., a topological space all of whose homotopy groups are trivial) by a proper free action of G. It has the property that any principal G-bundle over a paracompact manifold is isomorphic to a pullback of the principal bundle EG → BG. As explained later, this means that classifying spaces represent a set-valued functor on the homotopy category of topological spaces. The term classifying space can also be used for spaces that represent a set-valued functor on the category of topological spaces, such as Sierpiński space. This notion is generalized by the notion of classifying topos. However, the rest of this article discusses the more commonly used notion of classifying space up to homotopy. For a discrete group G, BG is a path-connected topological space X such that the fundamental group of X is isomorphic to G and the higher homotopy groups of X are trivial; that is, BG is an Eilenberg–MacLane space, specifically a K(G, 1). Motivation An example of a classifying space for the infinite cyclic group G is the circle as X. When G is a discrete group, another way to specify the condition on X is that the universal cover Y of X is contractible. In that case the projection map π : Y → X becomes a fiber bundle with structure group G, in fact a principal bundle for G. The interest in the classifying space concept really arises from the fact that in this case Y has a universal property with respect to principal G-bundles, in the homotopy category. This is actually more basic than the condition that the higher homotopy groups vanish: the fundamental idea is, given G, to find such a contractible space Y on which G acts freely. (The weak equivalence idea of homotopy theory relates the two versions.) In the case of the circle example, what is being said is that we remark that an infinite cyclic group C acts freely on the real line R, which is contractible. Taking X as the quotient space circle, we can regard the projection π from R = Y to X as a helix in geometrical terms, undergoing projection from three dimensions to the plane. What is being claimed is that π has a universal property amongst principal C-bundles; that any principal C-bundle in a definite way 'comes from' π. Formalism A more formal statement takes into account that G may be a topological group (not simply a discrete group), and that group actions of G are taken to be continuous; in the absence of continuous actions the classifying space concept can be dealt with, in homotopy terms, via the Eilenberg–MacLane space construction. In homotopy theory the definition of a topological space BG, the classifying space for principal G-bundles, is given, together with the space EG which is the total space of the universal bundle over BG. That is, what is provided is in fact a continuous mapping π : EG → BG. Assume that the homotopy category of CW complexes is the underlying category, from now on. The classifying property required of BG in fact relates to π. We must be able to say that given any principal G-bundle over a space Z, there is a classifying map φ from Z to BG, such that the given bundle is the pullback of π along φ. In less abstract terms, the construction of the given bundle by 'twisting' should be reducible via φ to the twisting already expressed by the construction of π. For this to be a useful concept, there evidently must be some reason to believe such spaces BG exist.
The early work on classifying spaces introduced constructions (for example, the bar construction) that gave concrete descriptions of BG as a simplicial complex for an arbitrary discrete group. Such constructions make evident the connection with group cohomology. Specifically, let EG be the weak simplicial complex whose n-simplices are the ordered (n+1)-tuples (g_0, ..., g_n) of elements of G. Such an n-simplex attaches to the (n−1)-simplices (g_0, ..., ĝ_i, ..., g_n) in the same way a standard simplex attaches to its faces, where ĝ_i means this vertex is deleted. The complex EG is contractible. The group G acts on EG by left multiplication, and only the identity e takes any simplex to itself. Thus the action of G on EG is a covering space action, the quotient map EG → EG/G is the universal cover of the orbit space BG = EG/G, and BG is a K(G, 1). In abstract terms (which are not those originally used around 1950 when the idea was first introduced) this is a question of whether a certain functor is representable: the contravariant functor from the homotopy category to the category of sets, defined by h(Z) = set of isomorphism classes of principal G-bundles on Z. The abstract conditions being known for this (Brown's representability theorem) ensure that the result, as an existence theorem, is affirmative and not too difficult. Examples The circle S^1 is a classifying space for the infinite cyclic group Z. The total space is EZ = R. The n-torus T^n is a classifying space for Z^n, the free abelian group of rank n. The total space is EZ^n = R^n. The wedge of n circles is a classifying space for the free group of rank n. A closed (that is, compact and without boundary) connected surface S of genus at least 1 is a classifying space for its fundamental group π_1(S). A closed (that is, compact and without boundary) connected hyperbolic manifold M is a classifying space for its fundamental group π_1(M). A finite locally connected CAT(0) cubical complex is a classifying space of its fundamental group. The infinite-dimensional projective space RP^∞ (the direct limit of finite-dimensional projective spaces) is a classifying space for the cyclic group Z/2Z. The total space is S^∞ (the direct limit of spheres S^n; alternatively, one may use Hilbert space with the origin removed; it is contractible). The space S^∞/(Z/nZ) is the classifying space for the cyclic group Z/nZ. Here, S^∞ is understood to be a certain subset of the infinite dimensional Hilbert space with the origin removed; the cyclic group is considered to act on it by multiplication with roots of unity. The unordered configuration space UConf_n(R^2) is the classifying space of the Artin braid group B_n, and the ordered configuration space Conf_n(R^2) is the classifying space for the pure Artin braid group P_n. The (unordered) configuration space UConf_n(R^∞) is a classifying space for the symmetric group S_n. The infinite dimensional complex projective space CP^∞ is the classifying space for the circle S^1 thought of as a compact topological group. The Grassmannian of n-planes in R^∞ is the classifying space of the orthogonal group O(n). The total space is EO(n) = V_n(R^∞), the Stiefel manifold of n-dimensional orthonormal frames in R^∞. Applications This still leaves the question of doing effective calculations with BG; for example, the theory of characteristic classes is essentially the same as computing the cohomology groups of BG, at least within the restrictive terms of homotopy theory, for interesting groups G such as Lie groups (H. Cartan's theorem). As was shown by the Bott periodicity theorem, the homotopy groups of BG are also of fundamental interest.
An example of a classifying space is that when G is cyclic of order two; then BG is real projective space of infinite dimension, corresponding to the observation that EG can be taken as the contractible space resulting from removing the origin in an infinite-dimensional Hilbert space, with G acting via v going to −v, and allowing for homotopy equivalence in choosing BG. This example shows that classifying spaces may be complicated. In relation with differential geometry (Chern–Weil theory) and the theory of Grassmannians, a much more hands-on approach to the theory is possible for cases such as the unitary groups that are of greatest interest. The construction of the Thom complex MG showed that the spaces BG were also implicated in cobordism theory, so that they assumed a central place in geometric considerations coming out of algebraic topology. Since group cohomology can (in many cases) be defined by the use of classifying spaces, they can also be seen as foundational in much homological algebra. Generalizations include those for classifying foliations, and the classifying toposes for logical theories of the predicate calculus in intuitionistic logic that take the place of a 'space of models'. See also Classifying space for O(n), BO(n) Classifying space for U(n), BU(n) Classifying space for SO(n) Classifying space for SU(n) Classifying stack Borel's theorem Equivariant cohomology Notes References Algebraic topology Homotopy theory Fiber bundles Representable functors
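The bar construction and the classifying property described above can be restated compactly. This is a standard-notation summary rather than a quotation from any source, with the hat marking a deleted vertex:

```latex
\[
  (EG)_n = \{(g_0, g_1, \dots, g_n) : g_i \in G\}, \qquad
  d_i(g_0, \dots, g_n) = (g_0, \dots, \widehat{g_i}, \dots, g_n),
\]
\[
  BG = EG/G, \qquad g \cdot (g_0, \dots, g_n) = (g g_0, \dots, g g_n),
\]
\[
  [Z, BG] \;\cong\; \{\text{isomorphism classes of principal } G\text{-bundles over } Z\}.
\]
```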
Classifying space
Mathematics
1,833
6,662,091
https://en.wikipedia.org/wiki/Fluorescence%20anisotropy
Fluorescence anisotropy or fluorescence polarization is the phenomenon where the light emitted by a fluorophore has unequal intensities along different axes of polarization. Early pioneers in the field include Aleksander Jablonski, Gregorio Weber, and Andreas Albrecht. The principles of fluorescence polarization and some applications of the method are presented in Lakowicz's book. Definition of fluorescence anisotropy The anisotropy (r) of a light source is defined as the ratio of the polarized component to the total intensity: r = (I_z − I_y) / (I_x + I_y + I_z). When the excitation is polarized along the z-axis, emission from the fluorophore is symmetric around the z-axis. Hence statistically we have I_x = I_y. As I_z = I_∥ and I_x = I_y = I_⊥, we have r = (I_∥ − I_⊥) / (I_∥ + 2I_⊥). Principle – Brownian motion and photoselection In fluorescence, a molecule absorbs a photon and gets excited to a higher energy state. After a short delay (the average represented as the fluorescence lifetime τ), it comes down to a lower state by losing some of the energy as heat and emitting the rest of the energy as another photon. The excitation and de-excitation involve the redistribution of electrons about the molecule. Hence, excitation by a photon can occur only if the electric field of the light is oriented in a particular axis about the molecule. Also, the emitted photon will have a specific polarization with respect to the molecule. The first concept to understand for anisotropy measurements is the concept of Brownian motion. Although water at room temperature contained in a glass to the eye may look very still, on the molecular level each water molecule has kinetic energy and thus there are many collisions between water molecules in any amount of time. A nanoparticle suspended in solution will undergo a random walk due to the summation of these underlying collisions. The rotational correlation time (Φ_r), the time it takes for the molecule to rotate 1 radian, depends on the viscosity (η), temperature (T), Boltzmann constant (k_B) and volume (V) of the nanoparticle: Φ_r = ηV / (k_B T). The second concept is photoselection by use of a polarized light. When polarized light is applied to a group of randomly oriented fluorophores, most of the excited molecules will be those oriented within a particular range of angles to the applied polarization. If they do not move, the emitted light will also be polarized within a particular range of angles to the applied light. For single-photon excitation the intrinsic anisotropy r_0 has a maximum theoretical value of 0.4 when the excitation and emission dipoles are parallel and a minimum value of −0.2 when the excitation and emission dipoles are perpendicular: r_0 = (2/5) × (3cos²β − 1) / 2, where β is the angle between the excitation and emission dipoles. For steady-state fluorescence measurements it is usually measured by embedding the fluorophore in a frozen polyol. Taking the idealistic simplest case, consider a subset of dye molecules suspended in solution that have a mono-exponential fluorescence lifetime τ and r_0 = 0.4 (rhodamine 6G in ethylene glycol made to have an absorbance of ~0.05 is a good test sample). If the excitation is unpolarized then the measured fluorescence emission should likewise be unpolarized. If however the excitation source is vertically polarized using an excitation polarizer then polarization effects will be picked up in the measured fluorescence. These polarization artifacts can be combated by placing an emission polarizer at the magic angle of 54.7º.
If the emission polarizer is vertically polarized there will be an additional loss of fluorescence as Brownian motion results in dye molecules moving from an initial vertical polarized configuration to an unpolarized configuration. On the other hand, if the emission polarizer is horizontally polarized there will be an additional introduction of excited molecules that were initially vertically polarized and became depolarized via Brownian motion. The fluorescence sum and difference can be constructed by addition of the intensities and subtraction of the fluorescence intensities respectively: S = I_VV + 2G·I_VH and D = I_VV − G·I_VH. Dividing the difference by the sum gives the anisotropy: r = D / S = (I_VV − G·I_VH) / (I_VV + 2G·I_VH). The grating factor G is an instrumental preference of the emission optics for the horizontal orientation over the vertical orientation. It can be measured by moving the excitation polarizer to the horizontal orientation and comparing the intensities when the emission polarizer is vertically and horizontally polarized respectively: G = I_HV / I_HH. G is emission-wavelength dependent. Note that in some literature G is defined as the inverse of that shown here. The degree of decorrelation in the polarization of the incident and emitted light depends on how quickly the fluorophore orientation gets scrambled (the rotational correlation time Φ_r) compared to the fluorescence lifetime (τ). The scrambling of orientations can occur by the whole molecule tumbling or by the rotation of only the fluorescent part. The rate of tumbling is related to the measured anisotropy by the Perrin equation: r = r_0 / (1 + τ/Φ_r), where r is the observed anisotropy, r_0 is the intrinsic anisotropy of the molecule, τ is the fluorescence lifetime and Φ_r is the rotational correlation time. This analysis is valid only if the fluorophores are relatively far apart. If they are very close to one another, they can exchange energy by FRET and, because the emission can occur from one of many independently moving (or oriented) molecules, this results in a lower than expected anisotropy or a greater decorrelation. This type of homotransfer Förster resonance energy transfer is called energy migration FRET or emFRET. Steady-state fluorescence anisotropy only gives an "average" anisotropy. Much more information can be obtained with time-resolved fluorescence anisotropy, where the decay time, residual anisotropy and rotational correlation time can all be determined from fitting the anisotropy decay. Typically a vertically polarized pulsed laser source is used for excitation, and timing electronics are added between the start pulses of the laser (start) and the measurement of the fluorescence photons (stop). The technique Time-Correlated Single Photon Counting (TCSPC) is typically employed. Again using the idealistic simplest case, a subset of dye molecules suspended in solution that have a mono-exponential fluorescence lifetime τ and an initial anisotropy r_0 = 0.4: if the sample is excited with a pulsed vertically orientated excitation source then a single decay time should be measured when the emission polarizer is at the magic angle. If the emission polarizer is vertically polarized instead, two decay times will be measured, both with positive pre-exponential factors; the first decay time should be equivalent to τ measured with the unpolarized emission set-up, and the second decay time will be due to the loss of fluorescence as Brownian motion results in dye molecules moving from an initial vertical polarized configuration to an unpolarized configuration.
On the other hand, if the emission polarizer is horizontally polarized, two decay times will again be recovered, the first with a positive pre-exponential factor equivalent to τ, but the second will have a negative pre-exponential factor resulting from the introduction of excited molecules that were initially vertically polarized and became depolarized via Brownian motion. The fluorescence sum and difference can be constructed by addition and subtraction of the fluorescence decays respectively: S(t) = I_VV(t) + 2G·I_VH(t) and D(t) = I_VV(t) − G·I_VH(t). Dividing the difference by the sum gives the anisotropy decay: r(t) = D(t) / S(t). In the simplest case, for only one species of spherical dye: r(t) = r_0 · exp(−t/Φ_r). Applications Fluorescence anisotropy can be used to measure the binding constants and kinetics of reactions that cause a change in the rotational time of the molecules. If the fluorophore is a small molecule, the rate at which it tumbles can decrease significantly when it is bound to a large protein. If the fluorophore is attached to the larger protein in a binding pair, the difference in polarization between bound and unbound states will be smaller (because the unbound protein will already be fairly stable and tumble slowly to begin with) and the measurement will be less accurate. The degree of binding is calculated by using the difference in anisotropy of the partially bound, free and fully bound (large excess of protein) states measured by titrating the two binding partners. If the fluorophore is bound to a relatively large molecule like a protein or an RNA, the change in the mobility accompanying folding can be used to study the dynamics of folding. This provides a measure of the dynamics of how the protein achieves its final, stable 3D shape. In combination with fluorophores which interact via Förster resonance energy transfer (FRET), fluorescence anisotropy can be used to detect the oligomeric state of complex-forming molecules ("How many of the molecules are interacting?"). Fluorescence anisotropy is also applied to microscopy, with use of polarizers in the path of the illuminating light and also before the camera. This can be used to study the local viscosity of the cytosol or membranes, with the latter giving information about the membrane microstructure and the relative concentrations of various lipids. This technique has also been used to detect the binding of molecules to their partners in signaling cascades in response to certain cues. The phenomenon of emFRET and the associated decrease in anisotropy when close interactions occur between fluorophores has been used to study the aggregation of proteins in response to signaling. See also Förster resonance energy transfer (FRET) Bioluminescence resonance energy transfer (BRET) Magnetic anisotropy Perrin friction factors References Fluorescence techniques Protein structure
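A minimal numerical sketch of the steady-state relations above, with illustrative values rather than measured data; the function and variable names are invented for the example:

```python
# G-factor-corrected steady-state anisotropy and the Perrin equation.
def anisotropy(i_vv: float, i_vh: float, g: float) -> float:
    """r = (I_VV - G*I_VH) / (I_VV + 2*G*I_VH)."""
    return (i_vv - g * i_vh) / (i_vv + 2.0 * g * i_vh)

def perrin(r0: float, tau: float, phi: float) -> float:
    """r = r0 / (1 + tau/phi) for lifetime tau and rotational correlation time phi."""
    return r0 / (1.0 + tau / phi)

g = 1.1  # hypothetical grating factor from the horizontal-excitation check
print(anisotropy(i_vv=1000.0, i_vh=450.0, g=g))  # ~0.254

# A slowly tumbling fluorophore (phi = 10 ns, tau = 4 ns) retains most of
# its intrinsic anisotropy r0 = 0.4.
print(perrin(r0=0.4, tau=4.0, phi=10.0))         # ~0.286
```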
Fluorescence anisotropy
Chemistry,Biology
2,007
42,570,879
https://en.wikipedia.org/wiki/Energy-Safety%20and%20Energy-Economy
Energy-Safety and Energy-Economy () is a peer-reviewed scientific and technical journal covering energy safety and economy, safety regulations, personnel training, innovation, and recent trends in alternative power sources research. The editor-in-chief is Svetlana Zernes (). It was established in 2005 as Energy Safety in Documents and Facts Journal, obtaining its current title in 2008. The journal is included in AGRIS, Ulrich's Periodicals Directory, the Higher Attestation Commission's official list, EBSCO, Russian Science Citation Index, Global Impact Factor, Research Bible, SHERPA/RoMEO, WorldCat, Open Academic Journals Index (OAJI) and VINITI Database RAS. In addition to bimonthly issues, Energy-Safety and Energy-Economy publishes a quarterly appendix. Awards Energy-Safety and Energy-Economy is a winner of the National Ecological Prize, Russian Energy Olympus contest, Social and Economic Significant Projects in Education, Culture and Ecology contest, Save Energy contest, and others. References External links Bimonthly journals Energy and fuel journals Academic journals established in 2005 Multilingual journals
Energy-Safety and Energy-Economy
Environmental_science
227
17,193,739
https://en.wikipedia.org/wiki/Noisy%20text
Noisy text is text with differences between the surface form of a coded representation of the text and the intended, correct, or original text. The noise may be due to typographic errors or colloquialisms that are always present in natural language, and it usually lowers the data quality in a way that makes the text less accessible to automated processing by computers, including natural language processing. The noise may also have been introduced through an extraction process (e.g., transcription or OCR) from media other than original electronic texts. Language usage in computer-mediated discourses, like chats, emails and SMS texts, significantly differs from the standard form of the language. An urge towards shorter message length, facilitating faster typing, and the need for semantic clarity shape the structure of the text used in such discourses. Various business analysts estimate that unstructured data constitutes around 80% of the whole enterprise data. A great proportion of this data comprises chat transcripts, emails and other informal and semi-formal internal and external communications. Usually such text is meant for human consumption, but—given the amount of data—manual processing and evaluation of those resources is not practically feasible anymore. This raises the need for robust text mining methods. Techniques for noise reduction The use of spell checkers and grammar checkers can reduce the amount of noise in typed text. Many word processors include this in the editing tool. Online, Google Search includes a search term suggestion engine to guide users when they make mistakes with their queries. See also Data corruption Jargon Leet speak Natural language understanding Noisy channel References Coding theory
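One building block behind the spell-checking approach mentioned above is edit distance. The following is a minimal sketch, not any particular spell checker's implementation; the vocabulary and the noisy SMS-style input are invented placeholders.

```python
# Spell correction by nearest-neighbour edit distance against a tiny vocabulary.
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

VOCAB = ["tomorrow", "meeting", "please", "confirm"]

def correct(token: str) -> str:
    """Map a noisy token to its nearest vocabulary word."""
    return min(VOCAB, key=lambda w: edit_distance(token, w))

print([correct(t) for t in "pls confrm meetng tmrw".split()])
# ['please', 'confirm', 'meeting', 'tomorrow']
```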
Noisy text
Mathematics
317
48,363,641
https://en.wikipedia.org/wiki/Artificial%20turf%E2%80%93cancer%20hypothesis
Artificial turf is a surface of synthetic fibers resembling natural grass. It is widely used for sports fields because it is more hard-wearing and resistant than natural surfaces. Most installations use infills of crumb rubber from recycled tires; this use is controversial because of concerns that the tires contain carcinogens, though research into the issue is ongoing. Studies An unpublished study by Rutgers University examined crumb rubber from synthetic fields in New York City. It found six possibly carcinogenic polycyclic aromatic hydrocarbons at levels exceeding state regulations. The researchers warned that the findings could have been made inaccurate by the solvent extraction used to release the chemicals from the rubber. In a statistical study of the list of soccer players with cancer provided by UW coach Amy Griffin, public health researchers for the State of Washington found that the rates of cancer were actually lower than was estimated for the general population. While they did not state any conclusions on the safety of this form of artificial turf, they did recommend that players not restrict their play, due to the presumed health benefits of being active. In 2007, the California Office of Environmental Health Hazard Assessment (OEHHA) simulated the exposures children can receive after coming into direct contact with artificial turf. Results showed that five chemicals, including four polycyclic aromatic hydrocarbons (PAHs), were found in samples. One of these compounds, chrysene, was present at levels higher than the standard established by OEHHA. Chrysene is a known carcinogen, meaning it can increase the risk of a child developing cancer. In late 2015, the United States Congress' House Energy and Commerce Committee directed the Environmental Protection Agency (EPA) to investigate a possible link. As of 2016, the EPA, the Consumer Product Safety Commission and the Centers for Disease Control and Prevention were investigating. In 2018, a study commissioned by the Dutch Minister of Health, Welfare and Sport from the Dutch National Institute for Public Health and the Environment found that "our findings for a representative number of Dutch pitches are consistent with those of prior and contemporary studies observing no elevated health risk from playing sports on synthetic turf pitches with recycled rubber granulate". A 2019 Yale study showed that there were 306 chemicals in crumb rubber and that 52 of these chemicals were classified as carcinogens by the Environmental Protection Agency (EPA). The authors noted "a vacuum in our knowledge about the carcinogenic properties" of many crumb rubber infill chemicals, stating that "The crumb rubber infill of artificial turf fields contains or emits chemicals that can affect human physiology." In 2020, the European Risk Assessment Study on Synthetic Turf Rubber Infill was completed; published in Science of the Total Environment, this was a scientific study funded by companies and industry associations from the tyre granulate supply chain, drawing on data from diverse parts of Europe. The researchers concluded that "there are no relevant health risks associated with the use of synthetic turfs with ELT-derived infill material". A 2022 study published in the same journal analyzed the composition of synthetic turf football pitches from 17 countries. It confirmed the presence of "hazardous substances in the recycled crumb rubber samples collected all around the world", including PAHs of high and very high concern.
The study concluded that different stakeholders "must work on a consensus to protect not only human health but also the environment, since there is evidence that crumb rubber hazardous chemicals can reach the environment and affect wildlife." The paper did not, however, discuss cancer risk in any detail. In March 2023, investigative reporters from the Philadelphia Inquirer bought souvenir samples of the old Veterans Stadium AstroTurf used from 1977–81 and commissioned diagnostics through the Eurofins Environmental Testing laboratory. The resulting lab report linked per- and polyfluoroalkyl substances (PFAS) to the turf. Six former Philadelphia Phillies who played at Veterans Stadium, home to the team from 1971 to 2003, died from glioblastoma, an aggressive brain cancer: Tug McGraw, Darren Daulton, John Vukovich, Johnny Oates, Ken Brett, and David West. Testimonies Nigel Maguire, formerly a chief executive for the National Health Service in Cumbria, claims that his son, a goalkeeper, could have developed Hodgkin's lymphoma by playing on an artificial surface. He has called for a ban on the surfaces, saying "It is obscene so little research has been done." In 2014, Amy Griffin, soccer coach at the University of Washington, surveyed American players of the sport who had developed cancer. Of 38 players, 34 were goalkeepers, a position in which diving to the surface makes accidental ingestion or blood contact with crumb rubber more likely, Griffin has asserted. Lymphoma and leukemia, cancers of the blood, predominated. Sports organizations FIFA, the world governing body of association football (soccer), has stated that the evidence weighs in favour of artificial pitches being safe. The Football Association of England stated in February 2016 that they were observing reports and conducting their own research on the issue. References Cancer
Artificial turf–cancer hypothesis
Chemistry
1,030
39,800,770
https://en.wikipedia.org/wiki/Integrated%20modification%20methodology
Integrated modification methodology (IMM) is a procedure encompassing an open set of scientific techniques for morphologically analyzing the built environment in a multiscale manner and evaluating its performance in actual states or under specific design scenarios. The methodology is structured around a nonlinear phasing process aiming to deliver a systemic understanding of any given urban settlement, formulate the modification set-ups for improving its performance, and examine the modification strategies to transform that system. The basic assumption in IMM is the recognition of the built environment as a Complex Adaptive System. IMM has been developed by IMMdesignlab, a research lab based at Politecnico di Milano at the Department of Architecture, Built Environment and Construction Engineering (DABC). History IMM began in 2010 as an academic research project at Politecnico di Milano. That research criticized the analytical approach frequently used by most sustainable development methods to study and evaluate the built environment. By recognizing the built environment as a Complex Adaptive System (CAS), IMM favours holistic simulation over reductionist simplification of the complex mechanisms within cities. In 2013, Massimo Tadi established the IMMdesignlab at the Department of Architecture, Built Environment and Construction Engineering (DABC) of the Politecnico di Milano. The purpose of the laboratory is to develop IMM through research and education. In 2015, Integrated Modification Methodology for the Sustainable Built Environment was approved as an academic course in the curriculum of Architectural Engineering, an international master's program at Politecnico di Milano. Background In its theoretical background, Integrated Modification Methodology refers to contemporary urban development as a highly paradoxical context arising from the social and economic significance of cities on the one hand and their arguably negative environmental impacts on the other. Asserting the inevitability of urbanization, IMM declares that the only way for cities to overcome that paradox is to develop in profound integration with the ecology. According to IMM, the fundamental prerequisite of ecologically sustainable development is to have a comprehensive systemic understanding of the built environment. IMM suggests that advancements in construction techniques, building material quality and transportation technologies alone have not solved the complex problems of urban life, simply because such improvements do not necessarily deal with systemic integration. The core argument of IMM is that the performance of the city is chiefly driven by the complex relationships between subsystems rather than by the independent qualities of the urban elements. Thus, it aims at portraying the systemic structure of the built environment by introducing a logical framework for modeling the linkage between the city's static and dynamic elements. Methodology Integrated Modification Methodology is based on an iterative process involving the following four major phases: Investigation Formulation Modification Retrofitting and Optimization The first phase, Investigation, is a synthesis-based inquiry into the systemic structure of the urban form. It begins with Horizontal Investigation, in which the area under study is dismantled into its morphology-generator elements, namely Urban Built-ups, Urban Voids, Types of Uses, and Links.
It follows with Vertical Investigation, a study of the integral relationships between the mentioned elements. The output of Vertical Investigation is a set of quantitative descriptions and qualitative illustrations of certain attributes named Key Categories. In a nutshell, they are types of emergence that show how elements come to self-organize or to synchronize their states into forming a new level of organization. Hence in IMM, Key Categories are the result of an emergence process of interaction between elementary parts (Urban Built-ups, Urban Voids, Types of Uses, and Links) to form a synergy able to add value to the combined organization. Key Categories are the products of the synergy between elementary parts: a new organization that emerges not (simply) as an additive result of the properties of the elementary parts. IMM declares that the city's functioning manner is chiefly driven by the Key Categories; hence, they have the most fundamental role in understanding the architecture of the city as a Complex Adaptive System. The Investigation phase concludes with the Evaluation step, which is basically an examination of the system's performance by referring to a list of verified indicators associated with ecological sustainability. The same indicators are later used in the CAS retrofitting process necessary for the final evaluation of the system performance after the transformation design process has occurred. The Formulation phase is a technical deduction, from the Investigation phase, of the most critical Key Category and urban element within the area. These critical attributes are interpreted as the Catalysts of transformation and serve the designer in setting a contextual priority list of Design Ordering Principles. The third phase introduces the modification/design scenarios to the project and advances by examining them with the exact procedure of the Investigation phase, in a repetitive manner, until the transformed context is predicted to be acceptable in arrangement and evaluation. The fourth phase, Retrofitting and Optimization, is a testing process of the outcomes of the modification phase; then a local optimization by technical strategies (e.g. installing photovoltaic panels, designing green roofs, studying building orientations etc.) is initiated. See also Analysis Center for the Built Environment Chaos theory Circles of Sustainability Cognitive science Collaboration Complex system Design Design education Design Impact Measures Design research Design strategy Design thinking Ecology Ecological footprint Energy conservation Conceptual framework Heuristic Holistic Innovation Interaction design Intuition (knowledge) Method Observation Participatory design Principles of intelligent urbanism Programming paradigm Renewable energy Simulation Sustainable architecture Sustainable design Sustainable development Sustainable landscape architecture Sustainable preservation Sustainable refurbishment Wicked problem World Green Building Council References Further reading Ahern, J. (2006). "Green Infrastructure for Cities: The spatial Dimension". In Cities of the Future Towards Integrated Sustainable Water and Landscape Management, edited by Vladimir Novotny and Paul Brown, 267–283. London: IWA Publishing. Anderson, P. (1999). Complexity Theory and Organization Science. Organization Science 10(3): 216–232. Batty, M. (2009). Cities as Complex Systems: Scaling, Interaction, Networks, Dynamics and Urban Morphologies.
In Encyclopedia of Complexity and Systems Science. Springer. Bennett, S., (2009), A Case of Complex Adaptive Systems Theory- Sustainable Global Governance: The Singular Challenge of the Twenty-first Century. RISC-Research Paper No.5: p. 38 Brownlee, J., (2007), Complex Adaptive Systems. CIS Technical Report: p. 1–6. Backlund, A. (2000), "The definition of system". In: Kybernetes Vol. 29 nr. 4, pp. 444–451. Clarke, C. and P. Anzalone, Architectural Applications of Complex Adaptive Systems, XO (eXtended Office). p. 19. Crotti, S., (1991), Metafora, Morfogenesi e Progetto, E. D'alfonso and E. Franzini, Editors. 1991: Milano. Hildebrand, F. (1999), Designing the city towards a more sustainable urban form. Routledge. Hough, Michael. (2004). Cities and Natural Processes: a Basis for Sustainability. London: Routledge. Jenks, M., E. Burton, and K. Williams, (1996), The compact city, a sustainable form?: E & FN Spon, an imprint of Chapman & Hall. 288 Ratti C., Baker N., (2005) Steemers K., Energy consumption and urban texture, Energy and buildings, Elsevier. Salat, S. and L. Bourdic, Urban complexity, scale hierarchy, energy efficiency and economic value creation. WIT Transactions on Ecology and The Environment, 2012. Vol 155: p. 11. Steel, C. (2009), Hungry City: How Food Shapes Our Lives, Random House UK. Tadi, M., Vahabzadeh Manesh, S., A. Daysh, G. Kahraman, I. Ursu (2013) The case study of Timișoara (Romania). IMM design for a more sustainable, livable and responsible city. AST Management Pty Ltd, Nerang, QLD, Australia. Tadi, M. & Bogunovich, D. (2017). New Lynn - Auckland IMM Case Study: Low-density urban morphology and energy performance optimisation. Auckland, New Zealand. Retrieved from http://unitec.ac.nz/epress/ Thom, R., (1975), Stabilite Structurelle et Morphogenese. Massachusetts: W.A.Benjamin, Inc. 348. Vahabzadeh Manesh, S., M. Tadi, (2013) Neighborhood Design and Urban Morphological Transformation through Integrated Modification Methodology (IMM) part 1. The Designer Architectural Magazine Vol.8. IRAN. External links European Environment Agency – Air Pollution European Environment Agency – Sustainability Transition Energy Recovery Council Transit Oriented Development Institute UNHabitat for a better Urban Future World Green Building Council Urban population (% of total) – World Bank website based on UN data. Degree of urbanization (percentage of urban population in total population) by continent in 2016 – Statista, based on Population Reference Bureau data. Sustainable architecture Sustainable building Sustainable design Sustainable development Environmental social science Sustainable urban planning Academic disciplines
Integrated modification methodology
Engineering,Environmental_science
1,896
11,884,960
https://en.wikipedia.org/wiki/Extensin
Extensins are a family of flexuous, rodlike, hydroxyproline-rich glycoproteins (HRGPs) of the plant cell wall. They are highly abundant proteins. There are around 20 extensins in Arabidopsis thaliana. They form crosslinked networks in the young cell wall. Typically they have two major diagnostic repetitive peptide motifs, one hydrophilic and the other hydrophobic, with potential for crosslinking. Extensins are thought to act as self-assembling amphiphiles essential for cell-wall assembly and growth by cell extension and expansion. The name "extensin" encapsulates the hypothesis that they are involved in cell extension. Hydrophilic motif This pentapeptide consists of serine (Ser) and four hydroxyprolines (Hyp): Ser-Hyp-Hyp-Hyp-Hyp. Hydroxyproline is unusual not only as a cyclic amino acid that restricts peptide flexibility but as an amino acid with no codon, being encoded as proline. Polypeptides targeted for secretion are subsequently hydroxylated by direct addition of molecular oxygen to proline at C-4. Extensin hydroxyproline is uniquely glycosylated with short chains of L-arabinose that further rigidify and increase hydrophilicity. Generally the serine has a single galactose attached. Hydrophobic tyrosine crosslinking motif Two tyrosines separated by a single amino acid, typically valine or another tyrosine, form a short intra-molecular diphenylether crosslink. This can be crosslinked further by the enzyme extensin peroxidase to form an inter-molecular bridge between extensin molecules and thus form networks and sheets. References Further reading Kieliszewski M, Lamport DTA (1994) Extensin: repetitive motifs, functional sites, post-translational codes, and phylogeny Plant Journal 5: 157–172 Plant proteins Structural proteins Glycoproteins
Extensin
Chemistry
431
16,341,729
https://en.wikipedia.org/wiki/Tape%20transport
A tape transport is the collection of parts of a magnetic tape player or recorder that move the tape and play or record it. Transport parts include the head, capstan, pinch roller, tape pins, and tape guide. The tape transport as a whole is called the transport mechanism. Tape head The tape head is the part of a tape recording or playback device which converts the magnetic fluctuations present in the tape into an electrical signal, which is then amplified and sent to speakers or headphones. The tape head is set off-center in a multitrack device in order to record or play one or more tracks running in each direction of the tape (e.g. the two different tracks present on most, if not all, compact cassettes). Capstan The capstan is a rotating spindle used to move recording tape through the mechanism of a tape recorder. The tape is threaded between the capstan and one or more rubber-covered wheels, called pinch rollers, which press against the capstan, thus providing friction necessary for the capstan to pull the tape. The capstan is always placed downstream (in the direction of tape motion) from the tape heads. To maintain the required tension against the tape heads and other parts of the tape transport, a small amount of drag is placed on the supply reel. Tape recorder capstans have a function similar to nautical capstans, which however have no pinch rollers, the line simply being wound around them. The use of a capstan allows the tape to run at a precise and constant speed. Capstans are precision-machined spindles, and polished very smooth: any out-of-roundness or imperfections can cause uneven motion and an audible effect called flutter. The alternative to capstan drive, simply driving the tape takeup reel (which was used on some cheap tape recorders), causes problems both with the speed difference between a full and empty reel and with speed variations as described. Dual capstans, where one is on each side of the heads, are claimed to provide even smoother tape travel across the heads and result in less variance in the recorded/playback signal. Pinch roller The pinch roller is a rubberized, free-spinning wheel typically used to press magnetic tape against a capstan shaft in order to create friction necessary to drive the tape along the magnetic heads (erase, write, read). Most magnetic tape recorders use one capstan motor and one pinch roller located after the magnetic heads in the direction of the moving tape. However multiple pinch rollers may also be employed in association with one or more capstans. An example of the application of multiple pinch rollers is the Technics RS-1520 tape recorder, which utilizes two pinch rollers located on opposite sides of a single capstan shaft, providing a more stable transport across two sets of magnetic heads. Dual pinch rollers are also used (along with dual capstans) in auto-reverse cassette decks to drive the tape in both directions as needed. In this case, only one pinch roller is pressed against its corresponding capstan at a time. Tension arm A tension arm is a device used in magnetic tape recorders/reproducers to control the tension of the magnetic tape during machine operation. Recorders equipped with tension arms can utilize more than one of them to control tape tension in different directions of winding or during different modes of tape operation.
Tension arms can also be found on digital data recorders and other types of recorders/reproducers using continuous tape media such as magnetic digital tape, perforated paper tape, and analog magnetic tape. References One of many US Patents pertaining to Tension arm Workbench Guide to Tape Recorder Servicing. G. Howard Poteet, 1977 Wheels Sound recording technology Sound production technology Tape recording
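The reason reel drive alone cannot hold a constant tape speed follows directly from v = omega * r: at a fixed reel rotation rate, the linear speed grows with the radius of the wound tape. A minimal sketch, with illustrative numbers rather than any particular machine's specifications:

```python
# Linear tape speed for a reel-driven transport at constant angular velocity.
import math

OMEGA = 2.0 * math.pi * 1.0  # takeup reel spinning at 1 rev/s, in rad/s

def tape_speed_cm_s(wound_radius_cm: float) -> float:
    """v = omega * r: tape speed at the outer layer of the wound pack."""
    return OMEGA * wound_radius_cm

empty_hub_cm, full_reel_cm = 2.5, 13.0
print(tape_speed_cm_s(empty_hub_cm))  # ~15.7 cm/s near an empty reel
print(tape_speed_cm_s(full_reel_cm))  # ~81.7 cm/s near a full reel, over 5x faster
```

A capstan sidesteps this by metering the tape at a fixed surface speed regardless of how much tape is on either reel.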
Tape transport
Technology
768
2,218,814
https://en.wikipedia.org/wiki/Northern%20hairy-nosed%20wombat
The northern hairy-nosed wombat (Lasiorhinus krefftii) or yaminon is one of three extant species of Australian marsupials known as wombats. It is one of the rarest land mammals in the world and is critically endangered. Its historical range extended across New South Wales, Victoria, and Queensland, and as recently as 100 years ago it was considered to have become extinct, but in the 1930s a population of about 30 individuals was discovered in one place, a range within the Epping Forest National Park in Queensland. With the species threatened by wild dogs, the Queensland Government built a predator-proof fence around all wombat habitat at Epping Forest National Park in 2002. Insurance populations have since been translocated to two other locations to ensure the species survives threats such as fire, flood, or disease. In 2003, the total population consisted of 113 individuals, including only around 30 breeding females. After recording an estimated 230 individuals in 2015, the number was up to over 300 by 2021, and over 400 by 2024. Taxonomy English naturalist Richard Owen described the species in 1873. The genus name Lasiorhinus comes from the Latin words lasios, meaning hairy or shaggy, and rhinus, meaning nose. The widely accepted common name is northern hairy-nosed wombat, based on the historical range of the species, as well as the fur, or "whiskers", on its nose. In some older literature, it is referred to as the Queensland hairy-nosed wombat. The northern hairy-nosed wombat shares its genus with one other extant species, the southern hairy-nosed wombat, while the common wombat is in the genus Vombatus. Both Lasiorhinus species differ morphologically from the common wombat by their silkier fur, broader hairy noses, and longer ears. Description In general, all species of wombat are heavily built, with large heads and short, powerful legs. They have strong claws to dig their burrows, where they live much of the time. It usually takes about a day for an individual to dig a burrow. Northern hairy-nosed wombats have bodies covered in soft, grey fur; the fur on their noses sets them apart from the common wombat. They have longer, more pointed ears and a much broader muzzle than the other two species. Individuals can be 35 cm high, up to 1 m long and weigh up to 40 kg. The species exhibits sexual dimorphism, with females being somewhat larger than males due to the presence of an extra layer of fat. They are slightly larger than the common wombat and able to breed somewhat faster (giving birth to two young every three years on average). The northern hairy-nosed wombat's nose is very important to its survival because it has very poor eyesight, so it must detect its food in the dark through smell. Examination of the wombat's digestive tract shows that the elastic properties of the ends of their large intestines are capable of turning liquid excrement into cubical scat. Distribution and habitat Northern hairy-nosed wombats require deep sandy soils in which to dig their burrows, and a year-round supply of grass, which is their primary food. These areas usually occur in open eucalypt woodlands. At Epping Forest National Park, northern hairy-nosed wombats construct their burrows in deep, sandy soils on levee banks which were deposited by a creek that no longer flows through the area. They forage in areas of heavy clay soils adjacent to the sandy soils, but do not dig burrows in these areas, which become waterlogged in the wet seasons.
In the park, burrows are often associated with native bauhinia trees (Lysiphyllum hookeri). This tree has a spreading growth form, and its roots probably provide stability for the extensive burrows dug by the wombats. By the 1980s the range of the northern hairy-nosed wombat had become restricted to a single site in Epping Forest National Park in east-central Queensland, north-west of Clermont. Insurance populations have since been established at two locations near St George, at the Richard Underwood Nature Refuge in 2009, and in the Powrunna State Forest in 2024, with plans for a fourth site by 2041. Behaviour The northern hairy-nosed wombat is nocturnal, living underground in networks of burrows. They avoid coming above ground during harsh weather, as their burrows maintain a constant humidity and temperature. They have been known to share burrows with up to 10 individuals, equally divided by sex. Young are usually born during the wet season, between November and April. When rain is abundant, 50–80% of the females in the population will breed, giving birth to one offspring at a time. Juveniles stay in their mothers' pouches for 8 to 9 months, and are weaned at 12 months of age. The fat reserves and low metabolic rate of this species permit northern hairy-nosed wombats to go without food for several days when food is scarce. Even when they do feed every day, it is only for 6 hours a day in the winter and 2 hours in the summer, significantly less than a similar-sized kangaroo, which feeds for at least 18 hours a day. Their diet consists of native grasses: black speargrass (Heteropogon contortus), bottle washer grasses (Enneapogon spp.), golden beard grass (Chrysopogon fallax), and three-awned grass (Aristida spp.), as well as various types of roots. The teeth continue to grow beyond the juvenile period, and are worn down by the abrasive grasses they eat. Its habitat has become infested with African buffel grass, a grass species introduced for cattle grazing. The buffel grass outcompetes the more nutritious native grasses on which the wombat prefers to feed by limiting their quantity, forcing the wombat to travel further to find the native grasses it prefers, and leading to a reduction in biomass. Conservation Status The conservation status of the northern hairy-nosed wombat is as follows: Critically Endangered, per IUCN (last assessed 15 June 2015); Critically Endangered, under the Australian Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act); and Critically Endangered, under the Nature Conservation Act 1992 (Qld). On 15 February 2018, the federal Department of the Environment and Energy (DoEE) upgraded the conservation status from Endangered to Critically Endangered under the EPBC Act to better align with the International Union for Conservation of Nature's (IUCN) Red List of Threatened Species. Due to its status under the EPBC Act, it is listed on the Species Profile and Threats Database (SPRAT). Threats Originally there were two main populations of hairy-nosed wombats (the other species being the southern hairy-nosed wombat, Lasiorhinus latifrons) that were separated by Spencer Gulf in South Australia. Both species experienced a population decline between 1870 and 1920, with the main influences being culling by agriculturalists, competition for food with introduced and feral species, and predation. Threats to the northern hairy-nosed wombat include small population size, predation, competition for food, disease, floods, droughts, wildfires, and habitat loss.
Its small, highly localised population makes the species especially vulnerable to natural disasters. Wild dogs are the wombat's primary predator, but the spread of invasive herbivores such as the European rabbit and the actions of landowners have also contributed to their decline. There have been two reports of male northern hairy-nosed wombats contracting a fungal infection caused by Emmonsia parva, a soil saprophytic fungus. It is likely that the northern hairy-nosed wombats are inhaling the fungus from the soil. Counter-measures Since around 1993, the Queensland Government's Department of Environment and Science (DES) and predecessors have led a recovery program, supported by Glencore mining company and The Wombat Foundation, for the species. To combat the vulnerability of this species, a number of conservation projects have been put into action in the 21st century. One example was the construction of a two-metre-high, predator-proof fence around an area of the park in 2000. A second, insurance colony of this species of wombat was established at Richard Underwood Nature Refuge (RUNR) at Yarran Downs, near St George in southern Queensland in 2008. The reserve is surrounded by a predator-proof fence. In 2021 the Australian Wildlife Conservancy (AWC), a private conservation organisation, formed a partnership with DES to collaborate on research and management of the animals in the sanctuary. In October 2023 AWC signed an agreement with DES to care for the wombats in the Richard Underwood Nature Reserve. DES would focus on the Epping Forest population. In 2006, researchers performed a study to analyse the demography of the northern hairy-nosed wombat, by using double-sided tape in the burrows to collect hair of the wombats. Through DNA analysis, they found that the ratio of female to male wombats was 1:2.25 in the population of approximately 113 wombats. These findings allowed researchers to understand the demographics of this species, and opened up further research to better understand why there is a significant difference in males and females in the wild. Within Epping Forest National Park, increased attention and funds have been given for wombat research and population monitoring, fire management, maintenance of the predator-proof fence, general management, and control of predators and competitors, and elimination of invasive plant species. In addition, the species recovery plan of 2004 to 2008 included communication and community involvement in saving the species, and worked to increase the current population in the wild and establish other populations within the wombat's historical range. There is also a volunteer caretaker program that allows volunteers to contribute to monitoring the population and keeping the predator fence in good repair. In addition, DNA fingerprint identification of wombat hairs allows research to be conducted without an invasive trapping or radio-tracking program. Studies have also been conducted to assess diet and nutrition. Population increases Due to the combined efforts of these forces, the northern hairy-nosed wombat population has been slowly making a comeback. After having been considered extinct, a population of about 30 was discovered in the Epping Forest in the 1930s, and only 35 individuals were counted in the early 1980s. In 2003, the total population consisted of 113 individuals, including only around 30 breeding females.
In the last census, taken in 2013, the estimated population was 196 individuals, with an additional 9 individuals at RUNR at Yarran Downs. In 2016 the population was estimated at 250 individuals. In May 2021, researchers found that the population had increased to over 300 individuals. In June 2024, the total population was reported as being over 400 individuals, including 18 at the RUNR and 15 newly translocated to the Powrunna State Forest. References Critically endangered fauna of Australia Vombatiforms Mammals of Queensland Mammals of New South Wales Marsupials of Australia EDGE species Nature Conservation Act endangered biota Taxa named by Richard Owen Mammals described in 1873
Northern hairy-nosed wombat
Biology
2,271
35,819,355
https://en.wikipedia.org/wiki/Conference%20on%20Computer%20Communications
The IEEE Conference on Computer Communications (INFOCOM) addresses key topics and issues related to computer communications, with emphasis on traffic management and protocols for both wired and wireless networks. The first INFOCOM conference took place in the United States, in Las Vegas, Nevada, in 1982. Since then it has been held in many locations around the world, including China, Japan, Israel, Italy, Spain, and Brazil, as well as in many other parts of the United States. References External links IEEE INFOCOM Computer conferences
Conference on Computer Communications
Technology
102
784,781
https://en.wikipedia.org/wiki/Global%20city
A global city is a city that serves as a primary node in the global economic network. The concept originates from geography and urban studies, based on the thesis that globalization has created a hierarchy of strategic geographic locations with varying degrees of influence over finance, trade, and culture worldwide. The global city represents the most complex and significant hub within the international system, characterized by links binding it to other cities that have direct, tangible effects on global socioeconomic affairs. The criteria for a global city vary depending on the source. Common features include a high degree of urban development, a large population, the presence of major multinational companies, a significant and globalized financial sector, a well-developed and internationally linked transportation infrastructure, local or national economic dominance, high-quality educational and research institutions, and a globally influential output of ideas, innovations, or cultural products. Quintessential examples, based on most indices and research, include New York City, London, Paris, and Tokyo. Origin and terminology The term 'global city' was popularized by sociologist Saskia Sassen in her 1991 book, The Global City: New York, London, Tokyo. Before then, other terms were used for urban centers with roughly the same features. The term 'world city', meaning a city heavily involved in global trade, appeared in a May 1886 description of Liverpool by The Illustrated London News; British sociologist and geographer Patrick Geddes used the term in 1915. The term 'megacity' entered common use in the late 19th or early 20th century, the earliest known example being a publication by the University of Texas in 1904. In the 21st century, the terms are usually focused on a city's financial power and high-technology infrastructure. Criteria Competing groups have devised competing means to classify and rank world cities and to distinguish them from other cities. Although there is a consensus on the leading world cities, the chosen criteria affect which other cities are included. Selection criteria may be based on a yardstick value (e.g., if the producer-service sector is a city's largest sector, then the city is a world city) or on an immanent determination (if the producer-service sector of one city is greater than the combined producer-service sectors of several other cities, then that city is a world city). 
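To make the contrast between these two selection rules concrete, here is a minimal sketch in Python. All of the figures, the comparison set, and the function names are invented for illustration; this does not reproduce any published world-city methodology.

```python
# Sketch of the two world-city selection rules described above.
# The sector outputs and comparison values are hypothetical numbers
# chosen only to illustrate the logic, not data from any real index.

# Hypothetical sectoral output for one city (arbitrary units).
city_sectors = {"producer_services": 320, "manufacturing": 210, "retail": 150}

# Hypothetical producer-service outputs of the comparison cities.
other_cities_producer_services = [90, 75, 60, 45]

def is_world_city_yardstick(sectors: dict[str, float]) -> bool:
    """Yardstick rule: the producer-service sector is the city's largest sector."""
    return max(sectors, key=sectors.get) == "producer_services"

def is_world_city_relative(sectors: dict[str, float], others: list[float]) -> bool:
    """Relative rule: the city's producer-service output exceeds the
    combined producer-service output of the comparison cities."""
    return sectors["producer_services"] > sum(others)

print(is_world_city_yardstick(city_sectors))                                # True
print(is_world_city_relative(city_sectors, other_cities_producer_services))  # True (320 > 270)
```

The two tests can disagree: a city whose producer services dominate its own economy passes the yardstick test yet may still fail the relative test against larger peers, which is one reason the rankings below diverge on cities outside the consensus leaders.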
Although criteria are variable and fluid, typical characteristics of world cities include: The most prominent criterion has been the provision of a variety of international financial services, notably in finance, insurance, real estate, banking, accountancy, and marketing, together with an amalgamation of financial headquarters, a stock exchange, and other major financial institutions, Headquarters of numerous multinational corporations, Domination of the trade and economy of a large surrounding area, Major manufacturing centers with port and container facilities, Considerable decision-making power daily and at a global level, Centers of new ideas and innovation in business, economics, and culture, Centers of digital and other media and communications for global networks, Dominance of the national region with great international significance, A high percentage of residents employed in the services and information sectors, High-quality educational institutions, including renowned universities and research facilities, attracting international students, Multi-functional infrastructure offering some of the best legal, medical, and entertainment facilities in the country, High diversity in language, culture, religion, and ideologies. General rankings Global city rankings are numerous; New York City, London, Tokyo, and Paris are the most commonly mentioned. GaWC World Cities The GaWC ranking is primarily concerned with what it calls the "advanced producer services" of accountancy, advertising, banking/finance, and law. The cities in the top two classifications of the 2024 edition are: Alpha ++ London New York City Alpha + Beijing Dubai Hong Kong Paris Shanghai Singapore Sydney Tokyo Global Cities Index (Kearney) In 2008, the American journal Foreign Policy, working with the consulting firm A.T. Kearney and the Chicago Council on Global Affairs, published a ranking of global cities based on consultation with Saskia Sassen, Witold Rybczynski, and others. The ranking is based on 27 metrics across five dimensions: business activity, human capital, information exchange, cultural experience, and political engagement. The top-ranked cities in 2024 are: New York City London Paris Tokyo Singapore Beijing Los Angeles Shanghai Hong Kong Chicago Global Cities Index (Oxford Economics) Advisory firm Oxford Economics released its Global Cities Index in 2024, ranking the world's 1,000 largest cities on 27 indicators across five categories (economics, human capital, quality of life, environment, and governance), with more weight given to economic factors. The top-ranked cities in 2024 are: New York City London San Jose Tokyo Paris Seattle Los Angeles San Francisco Melbourne Zurich Global Power City Index The Tokyo-based Institute for Urban Strategies at The Mori Memorial Foundation issued a study of global cities in 2008. Cities are ranked in six categories: economy, research and development, cultural interaction, livability, environment, and accessibility, with 70 individual indicators among them. The top ten world cities are also ranked by subjective categories, including manager, researcher, artist, visitor and resident. The top 10 cities in 2023 are: London New York City Tokyo Paris Singapore Amsterdam Seoul Dubai Melbourne Berlin World's Best Cities ranking Consultancy firm Resonance publishes the World’s Best Cities ranking. Cities are ranked in three categories (livability, lovability, and prosperity), each using different factors. 
The top 10 cities in 2024 are: London New York City Paris Tokyo Singapore Rome Madrid Barcelona Berlin Sydney Financial rankings Global Financial Centres Index Strength as a financial center has become one of the pre-eminent indicators of a global city's ranking. As of 2024, the cities representing the top ten financial centers according to the Global Financial Centres Index, compiled by the think tank China Development Institute and the analytics firm Z/Yen, are: New York City London Singapore Hong Kong San Francisco Shanghai Geneva Los Angeles Chicago Seoul The Wealth Report Estate agent Knight Frank LLP and the Citi Private Bank publish The Wealth Report, which includes a "Global Cities Survey" evaluating the most important cities to ultra-high-net-worth individuals (UHNWIs, each having over $25 million of investable assets). Criteria are economic activity, political power, knowledge and influence, and quality of life. The most important cities to UHNWIs in 2022 are: London Paris & New York City Los Angeles Tokyo Chicago Singapore Hong Kong Toronto Beijing See also Caput Mundi City quality of life indices Ecumenopolis Financial centre Metropolitan and urban regions with the largest foreign-born populations Globalization List of cities by GDP Megalopolis Metropolis Primate city Ranally city rating system Notes References External links Repository of Links Relating to Urban Places The World-System's City System: A Research Agenda by Jeffrey Kentor and Michael Timberlake of the University of Utah and David Smith of University of California, Irvine UN-HABITAT: The State of the World's Cities Cultural geography Economic geography Economic globalization Index numbers Lists of cities Loughborough University Metropolitan areas Types of cities Urban areas
Global city
Mathematics
1,410
21,806,224
https://en.wikipedia.org/wiki/Shipbuilding%20contract
A shipbuilding contract, the contract for the complete construction of a ship, concerns the sale of future goods, so title to the property cannot pass at the time the contract is concluded. The aim of a shipbuilding contract is to regulate a substantial and complex project in which the builder and buyer assume long-term obligations to each other and bear significant commercial risks. A shipbuilding contract is a non-maritime contract and falls outside Admiralty jurisdiction because it is insufficiently related to any rights and duties pertaining to sea commerce and/or navigation. The property passes to the buyer when the ship has been completed. To avoid difficulties, provision can be made for the property to pass in stages during the process of development and construction. This differs from most hire-purchase agreements, in which the seller retains ownership of the property until payment of the final installment. Under the Sale of Goods Act 1979, this kind of agreement to sell ‘future’ goods may be a sale either by description or by sample. The sale of a newbuilding ship, which is a large manufacturing project, is necessarily a sale by description, and compliance with the agreed description when performing the contract is a condition of the contract. Standard forms of contract Shipbuilding contracts are constructed within the framework of standard contract forms, amended by the contractual parties to meet their particular requirements. The choice of form is usually based on the trade association to which the builder belongs. Principal Form SAJ Form Published by the Shipbuilders’ Association of Japan in January 1974; the framework of this form is commonly used in South Korea, China, Singapore and Taiwan. AWES Form The standard shipbuilding contract of the Association of European Shipbuilders and Shiprepairers, revised and reissued in May 1999. National Form The form of the Norwegian Shipowners’ Association and the Norwegian Shipbuilders’ Association. MARAD Form (The Maritime Administration of the United States Department of Commerce) Used in relation to American newbuildings financed under the Federal Ship Financing Program authorized by Title XI of the Merchant Marine Act 1936. Formation of contract There is no requirement that a shipbuilding contract be concluded in writing; an oral contract is also legally enforceable provided that the necessary formal elements are present. The main terms of an agreement, such as the expenditure, timescale and risks involved in shipbuilding, are nonetheless better recorded in written form. In order to create an enforceable agreement, the essential elements of a legally binding contract must be present: offer; acceptance; consideration; privity of contract; intention to form a contract; and capacity. Where all these elements are present, a legally binding contract comes into effect; if any of them is missing, there is no legally binding contract. Duties of a builder The duty of a builder is to complete the newbuilding ship in accordance with the design and specification given by the buyer. He must ensure the materials he uses are fit for the purpose required, and must carry out the building works with the general standard of skill expected of a shipbuilder, since the buyer relies on the builder’s skill and judgment while the contract is being performed. He should also comply with the safety requirements laid down in the Merchant Shipping Act. 
Passing of risk Under a shipbuilding contract, the risk does not pass from the builder to the buyer until delivery of the completed ship. It is therefore advisable for the builder to take out insurance cover for the period before delivery of the ship. What are the builder’s remedies? If the buyer fails to pay, the builder may: a) exercise his possessory lien; b) resell the ship in exercise of that lien; c) exercise a common law right of stoppage in transit; and d) sue for the price. The buyer may want to exit the contract due to a change in the market or in his financial situation. Where the builder makes use of his contractual remedy to cancel the contract, the buyer’s default will trigger the guarantor’s liability and make the letter of guarantee operative. Moreover, if the buyer fails to take delivery, the builder may sue him for failure to accept. What are the buyer’s remedies? If the builder fails to deliver the ship, the buyer may: a) seek specific performance; or b) sue for non-delivery. There may be an express term in the contract that the property is to pass to the buyer, in whole or in stages, before delivery; this does not mean that the buyer has the right to reject the ship if it fails to meet the required standard. The buyer has the right to examine the completed property before he is obliged to signify acceptance. He has no right to reject after accepting delivery; if he then discovers a fault, his only redress is by way of damages. The builder must notify the buyer of the ship’s readiness for trials, which will take place at the agreed place of delivery. The buyer may choose any place to take delivery, in which case the costs are for his account. The time of delivery is normally stated and treated as an essential term of the contract. If it is not mentioned, or is not an essential term, the builder should deliver the completed ship within a reasonable time; what is “reasonable” will be determined case by case. Summary A shipbuilding contract differs from a general sales contract in the nature of the contract, its time frame and the passing of risk. Each shipbuilding contract is tailor-made, as each buyer has different requirements, and it needs very careful drafting of provisions in contemplation of the possibility of damage before completion. See also Seaworthiness (law) References Hill, C. (1998), Maritime Law, 5th ed., LLP Reference Publishing, London. Simon, C. (2002), The Law of Shipbuilding Contracts, 3rd ed., Informa Professional UK, London. External links A Shipbuilding Contract Sample. Shipbuilding
Shipbuilding contract
Engineering
1,191