Dataset columns: id (int64, 39 to 79M), url (stringlengths 31 to 227), text (stringlengths 6 to 334k), source (stringlengths 1 to 150), categories (listlengths 1 to 6), token_count (int64, 3 to 71.8k), subcategories (listlengths 0 to 30)
42,868,360
https://en.wikipedia.org/wiki/Heliothrix%20oregonensis
Heliothrix oregonensis is a phototrophic filamentous, gliding bacterium containing bacteriochlorophyll a that is aerotolerant and photoheterotrophic. References Further reading Boomer, Sarah M.  Characterization of a hot-spring bacterium resembling Heliothrix oregonensis. Diss. University of Puget Sound, 1989. Blankenship, Robert E., Michael T. Madigan, and Carl E. Bauer, eds. Anoxygenic photosynthetic bacteria. Vol. 2. Springer, 1995. Renger, Gernot, ed.  Primary processes of photosynthesis: principles and apparatus. Royal Society of Chemistry, 2008. External links LPSN Phototrophic bacteria Chloroflexota Bacteria described in 1986
Heliothrix oregonensis
[ "Chemistry", "Biology" ]
163
[ "Bacteria stubs", "Bacteria", "Photosynthesis", "Phototrophic bacteria" ]
42,868,443
https://en.wikipedia.org/wiki/Desulfurobacterium%20thermolithotrophum
Desulfurobacterium thermolithotrophum is a species of autotrophic, sulphur-reducing bacterium isolated from a deep-sea hydrothermal vent. It is the type species of its genus, being thermophilic, anaerobic, Gram-negative, motile and rod-shaped, with type strain BSAT (= DSM 11699T). References Further reading Kadish, Karl M., Kevin M. Smith, and Roger Guilard. "Handbook of porphyrin science." World Scientific: Singapore 2012 (2010): 1-25. External links LPSN Type strain of Desulfurobacterium thermolithotrophum at BacDive - the Bacterial Diversity Metadatabase Aquificota Bacteria described in 1998 Thermophiles
Desulfurobacterium thermolithotrophum
[ "Biology" ]
173
[ "Bacteria stubs", "Bacteria" ]
42,868,455
https://en.wikipedia.org/wiki/Symbiobacterium%20thermophilum
Symbiobacterium thermophilum is a symbiotic thermophile that depends on co-culture with a Bacillus strain for growth. It is Gram-negative and tryptophanase-positive, with type strain T(T) (= IAM 14863T). It is the type species of its genus. Symbiobacterium is related to the Gram-positive Bacillota and Actinomycetota, but belongs to a lineage that is distinct from both. S. thermophilum has a bacillus-shaped cell with no flagella. This bacterium is found throughout the environment in soils and fertilizers. Cell Structure Although S. thermophilum gives a negative result on Gram staining, it lacks key Gram-negative membrane biosynthesis proteins, such as LPS glycosyltransferases and polysaccharide transporters. Instead, the cell structure of S. thermophilum includes the proteins STH61, 969, 1321, 2197, 2492, and 3168, which are associated with S-layer-enveloped bacteria. The bacillus shape of S. thermophilum cells may be caused by the mreBCD genes (STH372-4), located adjacent to the min locus. Although it has no flagella, the genome of S. thermophilum does include a flagellar biosynthesis gene cluster. S. thermophilum produces endospores under specific conditions. There is less research on the spore-like structure of S. thermophilum, as it is the rarer form. Genome Structure Its genome has been sequenced; it has a size of 3.57 Mbp, with 3,338 protein-coding genes. Characteristics of S. thermophilum such as the production of tryptophanase and β-tyrosinase, the cell surface structure, and the negative Gram stain result suggest that the bacterium is Gram-negative. However, phylogenetic analysis based on the 16S rRNA gene sequence concluded that it is in fact Gram-positive. The high G+C content (68.7%), together with its Gram stain results, would point to the Actinomycetota, but the genome and proteins are more closely related to the Firmicutes (Bacillota), a Gram-positive phylum with low G+C content. S. thermophilum further challenges the view that endospore-forming genes are unique to the Bacillus-Clostridium group, as it carries genes involved in the formation of endospores. Biological roles could be assigned to 2,082 of the 3,338 CDSs. A CDS similarity matrix search indicated that the genome of S. thermophilum does not closely resemble any other prokaryotic genome sequenced at that time. Growth S. thermophilum depends on co-culture with Bacillus strains to grow. This is known as microbial commensalism and often occurs in composts; S. thermophilum is one of many cultures that arise from compost derivatives. Under optimal conditions, growth peaks at a density of 5x10^8 cells/mL. Metabolism S. thermophilum uses the non-oxidative branch of the pentose phosphate pathway for metabolism. Despite not using the Entner-Doudoroff pathway and lacking both cellulose-degrading and amylose-degrading enzymes, it has the genes and ability to metabolize glycerol, gluconate, cellobiose, N-acetylgalactosamine, tyrosine, and tryptophan. S. thermophilum contains genes for pyruvate:ferredoxin and 2-oxoacid:ferredoxin oxidoreductases. It lacks the genes for methionine and lysine biosynthesis but has enzymes used to biosynthesize other amino acids. Respiration The variety of respiratory enzymes possessed by S. thermophilum enables the bacterium to grow in both aerobic and anaerobic conditions, as indicated by the presence of both aerobic and anaerobic glycerol-3-phosphate dehydrogenases. The presence of the Nap nitrate reductase gene cluster and the Nar nitrate reductase suggests that S. thermophilum utilizes nitrate respiration. Habitat Due to the thermophilic nature of S. thermophilum, ideal areas for its survival are those with elevated temperatures that are nutrient-dense.
The habitats most suited to S. thermophilum are the intestinal tracts of animals and composts, since both contain the essentials for the bacterium to survive. Distribution and Diversity S. thermophilum is widely distributed throughout the environment. It can be found in many types of soil and in fertilizers that contain animal feces, as well as inside animal intestines and in animal feed. To determine the distribution of S. thermophilum, samples were tested for growth of the bacterium and for the presence of tryptophanase. In a study at the Department of Applied Biological Sciences, Nihon University, Fujisawa, Japan, a random sample of Symbiobacterium clones was analysed: of the 31 samples taken, 16 showed a more diverse genetic structure, whereas the other 15 were genetically less diverse, their genetics being almost identical to S. thermophilum. References Further reading External links LPSN WORMS Thermophiles Symbiosis Bacteria described in 2000
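The G+C content cited above (68.7%) is a basic sequence statistic; a minimal sketch of how such a value is computed from a raw DNA string (the fragment below is an arbitrary illustration, not actual S. thermophilum sequence):

```python
def gc_content(seq: str) -> float:
    """Return the G+C content of a DNA sequence as a percentage."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return 100.0 * gc / len(seq)

# Arbitrary illustrative fragment (not real S. thermophilum data)
fragment = "GCGCGGATCCGGCGCAGGCT"
print(round(gc_content(fragment), 1))  # 80.0 for this fragment
```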
Symbiobacterium thermophilum
[ "Biology" ]
1,328
[ "Biological interactions", "Behavior", "Symbiosis" ]
42,868,663
https://en.wikipedia.org/wiki/Piston-cylinder%20apparatus
The piston-cylinder apparatus is a solid-media device, used in the geosciences and materials sciences, for simultaneously generating high pressure (up to 6 GPa) and high temperature (up to 1700 °C). Modifications of the normal set-up can push these limits to even higher pressures and temperatures. A particular type of piston-cylinder, called the Griggs apparatus, is also able to apply a deviatoric stress to the sample. The principle of the instrument is to generate pressure by compressing a sample assembly, which includes a resistance furnace, inside a pressure vessel. Controlled high temperature is generated by applying a regulated voltage to the furnace and monitoring the temperature with a thermocouple. The pressure vessel is a cylinder that is closed at one end by a rigid plate with a small hole for the thermocouple to pass through. A piston is advanced into the cylinder at the other end. History Sir Charles Parsons was the first to attack the problem of generating high pressure simultaneously with high temperature. His pressure apparatus consisted of piston-cylinder devices that used internal electrical resistance heating. He used a solid pressure-transmitting material, which also served as thermal and electrical insulation. His cylindrical chambers ranged in diameter from 1 to 15 cm. The maximum pressure and temperature he reported were of the order of 15,000 atm (corresponding to ~1.5 GPa) at 3000 °C. Loring L. Coes, Jr., of the Norton Co., was the first person to develop a piston-cylinder device with capabilities substantially beyond those of the Parsons device. He did not personally publish a description of this equipment until 1962. The key feature of this device is the use of a hot, molded alumina liner or cylinder. The apparatus is double-ended, pressure being generated by pushing a tungsten carbide piston into each end of the alumina cylinder.
Because the alumina cylinder is electrically insulating, heating is accomplished, very simply, by passing an electric current from one piston through a sample heating tube and out through the opposite piston. The apparatus was used at pressures as high as 45,000 atm (corresponding to ~4.5 GPa) simultaneously with a temperature of 800 °C. Temperature was measured by means of a thermocouple located in a well. At these temperature and pressure conditions, only one run could be obtained in this device, the pistons and the alumina cylinder both being expendable. Even at 30,000 atm (corresponding to ~3.0 GPa) the alumina cylinder is only useful for a few runs, as is also the case for the tungsten carbide pistons, so the expense of using such a device is great. Nowadays both the piston and the cylinder are constructed of cemented tungsten carbide, and electrical insulation is provided in a different manner than in the device of Coes. In particular, the basis for the modern piston-cylinder apparatus is the design described by Boyd and England in 1960, which was the first machine that allowed experiments under upper-mantle conditions to be routinely carried out in a laboratory. Geologist Bernard Wood has made multiple important contributions to science using piston-cylinder experiments and has consequently become a prominent figure in experimental petrology. Along with Fred Wheeler, a workshop worker at the University of Bristol, he designed a model of piston-cylinder that is known for its simplicity and blue features. Several units of this model have been made at the University of Oxford. Theory The piston-cylinder apparatus is based on the same simple relationship as other high-pressure devices (e.g. the multi-anvil press and the diamond anvil cell): P = F/A, where P is the pressure, F the applied force and A the area. It achieves high pressures using the principle of pressure amplification: converting a small load on a large piston to a relatively large load on a small piston.
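The P = F/A relationship and the amplification principle can be illustrated numerically; the sketch below uses invented ram and piston dimensions, not values from any particular instrument:

```python
import math

def circle_area(diameter_m: float) -> float:
    """Area of a circular piston face."""
    return math.pi * (diameter_m / 2) ** 2

# Invented example: oil at 10 MPa acts on a 250 mm hydraulic ram...
oil_pressure = 1.0e7                        # Pa
ram_force = oil_pressure * circle_area(0.250)
# ...and the same force is carried by a 12.7 mm (1/2 inch) piston
sample_pressure = ram_force / circle_area(0.0127)
# amplification factor is (250 / 12.7)**2, i.e. roughly 388x
print(f"{sample_pressure / 1e9:.2f} GPa")
```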
The uniaxial pressure is then distributed (quasi-hydrostatically) over the sample through deformation of the assembly materials. Components The main components of the piston-cylinder apparatus are the pressure-generating system, the pressure vessel, and the assembly parts within the vessel. There are two types of piston-cylinder apparatus: non-end-loaded and end-loaded, which involve, respectively, one or two hydraulic rams. In the end-loaded type the second hydraulic ram is used to vertically load and strengthen the pressure vessel. The non-end-loaded type is smaller, more compact and cheaper, but is operable only to approximately 4 GPa. Pressure is applied to the sample by pressing a piston into the sample volume of the pressure vessel. The sample assembly consists of a solid pressure medium, a resistance heater and a small central volume for the sample. Three common configurations are used: ½”, ¾” and 1”, which refer to the diameter of the piston and thus of the sample assembly. According to the pressure-amplification concept, the choice of piston depends on the pressure to be achieved. During the experiment, water circulates around the pressure vessel, the bridge and the upper plates to cool the system. Sample assemblies The purposes of the sample assembly are to transmit hydrostatic pressure to the sample from the compressing piston, to provide controlled heating of the sample and to provide, via the capsule, a suitable volatile and oxygen fugacity environment for the experiment. Therefore, it includes a component for each of these purposes. The outer cylinder is a pressure-transmitting, electrically insulating cylinder made from NaCl, talc, BaCO3, KBr, CaF2, or even borosilicate glass. The next components are, in order, an electrically insulating borosilicate glass cylinder and a graphite cylinder, which acts as the “furnace”.
To locate the sample exactly in the centre of the furnace and to grip the thermocouple, a support rod usually made of crushable ceramic is used. The final component is a conductive steel base plug, located at the top of the sample assembly. The final part of the assembly is the thermocouple itself, whose wires are insulated from one another and from the material of the assembly by a tube made of mullite. Capsules The sample capsule must contain the sample, prevent reaction between the sample and the other materials of the sample assembly, and not itself react with the sample. It must also be weak, so as not to interfere with pressure transmission during the run. For this purpose, the materials most used are Au, Pt, AgPd alloys, Ni and graphite. Sample volumes are typically 200 mm3, which translates to ~500 mg of starting material, but with larger assemblies the volume can be up to 750 mm3. Pressure control The nominal pressure in an experiment can be calculated from the amplification of the oil pressure through the reduction in area over which it is applied, but every component has a characteristic yield stress, so the nominal pressure differs from the effective one. Thus, it must be adjusted to take friction into account: P_effective = P_nominal + P_correction. In order to determine the effective pressure, calibration experiments can be done using either static or dynamic methods, and usually make use of known phase transitions or reactions, melting curves or measured water solubility in melts. Since frictional effects also depend on whether the press is in compression or in decompression, it is good practice to perform the experiments in the same way as the calibration runs. Temperature control Temperature can be measured using a thermocouple with an accuracy of ± 1 °C. The accuracy of the temperature is influenced by both random and systematic errors, and worsens at higher temperature and pressure conditions.
Such errors can arise from temperature gradients, differential pressures in the assembly, contamination during the experiment and the effect of pressure on the thermocouple electromotive force. These errors can be reduced by choosing the appropriate thermocouple type for the experimental conditions. Temperature gradients, on the other hand, can be minimised using a tapered furnace. Applications The main advantages of the piston-cylinder press are the relatively large volume of the assembly, fast heating and quenching rates, and the stability of the equipment over long run durations. These aspects, together with the ease and safety of the procedure, make this device suitable for geochemical studies and in-situ measurements of the physical properties of materials. Some applications, especially in the geosciences, are: synthesis of high-pressure and high-temperature materials, hot pressing, and investigation of partial melting of rocks. References Scientific equipment Engineering thermodynamics
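The friction correction described under "Pressure control" amounts to an additive offset on the nominal pressure; a minimal sketch, with the correction value invented for illustration (real corrections come from calibration runs against known phase transitions or melting curves):

```python
def nominal_pressure(oil_pressure_pa: float, ram_area_m2: float,
                     piston_area_m2: float) -> float:
    """Nominal sample pressure from oil-pressure amplification."""
    return oil_pressure_pa * ram_area_m2 / piston_area_m2

def effective_pressure(p_nominal_pa: float, p_correction_pa: float) -> float:
    """P_effective = P_nominal + P_correction (friction correction)."""
    return p_nominal_pa + p_correction_pa

# Invented numbers: 3.0 GPa nominal, calibration suggests 0.15 GPa friction loss
p_nom = 3.0e9
p_eff = effective_pressure(p_nom, -0.15e9)
print(p_eff / 1e9)  # 2.85
```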
Piston-cylinder apparatus
[ "Physics", "Chemistry", "Engineering" ]
1,750
[ "Engineering thermodynamics", "Thermodynamics", "Mechanical engineering" ]
42,869,086
https://en.wikipedia.org/wiki/Macy%20catheter
The Macy Catheter is a specialized catheter designed to provide comfortable and discreet administration of ongoing medications via the rectal route. The catheter was developed to make rectal access more practical and to provide a way to deliver and retain liquid formulations in the distal rectum, so that health practitioners can leverage the established benefits of rectal administration. Patients often need medication when the oral route is compromised, and the Macy Catheter provides an alternative for those medications that can be prescribed per rectum. The Macy Catheter is of particular relevance at the end of life, when it can help patients to remain comfortable in their home. Key features and functions The Macy Catheter is a disposable device approved by the U.S. Food and Drug Administration (FDA), consisting of a dual-lumen ballooned tube that is inserted by a clinician into the rectum just past the rectal sphincter. Once inserted into the rectum, a soft balloon is inflated with water via a balloon inflation valve to hold the device in place. This small, flexible "semi-retention" balloon exerts very little pressure on the rectal wall, and is designed for safety and comfort, while also allowing the catheter to be easily expelled when the patient needs to defecate. The catheter utilizes a small flexible silicone shaft, allowing the device to be placed safely and remain comfortably in the rectum for repeated administration of medications or liquids. Once in place, the medication delivery port of the Macy Catheter rests on the patient's leg or abdomen, where it is easily accessible for repeated administration of liquid medications in solution or suspension form. The device stays in place until the patient has a bowel movement and expels the retention balloon, or until it is manually removed after first deflating the balloon.
The Macy Catheter medication port has a specialized valve to prevent leakage and is designed to be non-clogging and compatible only with the connectors on oral/enteral syringes (not intravenous syringes) for safety. The device is FDA-approved to remain in the rectum for up to 28 days. The catheter has a small lumen, allowing small flush volumes to get medication to the rectum. Small volumes of medication (under 15 mL) improve comfort by not stimulating the defecation response of the rectum, and can increase the overall absorption of a given dose by decreasing pooling of medication and migration of medication into more proximal areas of the rectum, where absorption can be less effective. Indications for use The Macy Catheter is intended to provide rectal access to administer liquids and medications. The Macy Catheter can be used in the following clinical situations: Medication administration when the oral route fails Administration of fluids and electrolytes Administration of retention enemas Common clinical use scenarios The Macy Catheter provides an immediate way to administer medication or liquids for patients in the home setting when the oral route of medication administration is compromised. Unlike intravenous lines, which usually need to be placed in an inpatient environment and require special formulation of sterile medications, the Macy Catheter can be placed in the home by a clinician, such as a hospice nurse or home health nurse. Many oral forms of medications can be crushed and suspended in water to be given via the Macy Catheter. The Macy Catheter is useful for patients who cannot swallow, including those near the end of life (an estimated 1.65 million people are in hospice care in the US each year).
Because the Macy Catheter enables rapid, safe, and lower-cost administration of medications, it may also be applicable to the care of patients in long-term care or palliative care, or as an alternative to intravenous or subcutaneous medication delivery in some instances. The Macy Catheter is clinically indicated for the following scenarios: 1. Symptom management at the end of life, including but not limited to: Pain Agitation Dyspnea, or shortness of breath Nausea and vomiting Seizures Fever 2. Bowel obstruction For medication management, hydration, and symptom control when the oral route is not viable due to total obstruction 3. Discharge from acute care to the home setting Allows for easy, discreet, and safe medication administration and short-term hydration in the home setting For transitioning from the intravenous or subcutaneous route to the rectal route when discharged from acute settings to the home setting History The Macy Catheter was invented by Brad Macy, RN, BSN, a 22-year veteran hospice nurse. Inspired by a patient who was terminally agitated and not responding to a solid form of a rectally delivered medication, Macy administered the same medication in a liquid suspension through a small tube inserted into the patient's rectum. The patient's agitation rapidly diminished, and the patient was sleeping within 30 minutes. After repeated successful interventions involving the application of medication in highly concentrated form to the distal one-third of the rectum, Macy realized the potential implications for hospice and palliative patients worldwide. With this motivation, he proceeded to develop the Macy Catheter, a device designed and developed for commercial use. The commercial product is protected by two issued U.S. patents and received 510(k) clearance from the Food and Drug Administration in early 2014.
Rectal drug delivery Rectal drug delivery is an effective route of medication delivery for many medications used at the end of life. The walls of the rectum absorb many medications quickly and effectively. Medications delivered to the distal one-third of the rectum at least partially avoid the "first pass effect" through the liver, allowing greater bioavailability of many medications than via the oral route. The rectal route of administration is highly effective because the rectal mucosa is a highly vascularized tissue that allows rapid and effective absorption of medications. Although intravenous administration is the most commonly used alternate route in acute care settings, it is rarely used in hospice care, given the associated cost and the need for a high level of care and training for providers. It can also lead to complications such as infection and pain. Although subcutaneous medication delivery is more common in hospice, it is also expensive and can cause infection, pain and swelling. The Macy Catheter provides a solution to overcome the challenges and leverage the benefits of rectal administration. References Routes of administration
Macy catheter
[ "Chemistry" ]
1,348
[ "Pharmacology", "Routes of administration" ]
42,869,195
https://en.wikipedia.org/wiki/Rotinoff%20Super%20Atlantic
The Rotinoff Super Atlantic is a 6×4 ballast tractor made by the British company Rotinoff Motors Ltd. The tractor was designed for a gross combination weight (GCW) of 200 tons, powered by a Rolls-Royce six-cylinder supercharged engine producing almost 335 bhp, with a six-speed main gearbox and a three-speed auxiliary box for critical conditions. Kirkstall axles were used because of the heavy operations of the tractor, which had a tyre size of 18.00-25, with 14.00-24 as an option. To bring the vehicle to a standstill it was equipped with compressed-air brakes, the drums measuring 19 by 4 at the front and 19 by 7 at the rear. The Swiss Army bought ten of them (in Atlantic and Super Atlantic configuration) in 1958, which they used to pull trailers built by Scheuerle with a payload capacity of 50 tons. These were used for transporting the Pz 55/57 Centurion tank, reaching a total weight of 104 tons. The Rotinoff Super Atlantic has a fuel capacity of 580 litres in two tanks. In addition to the three seats in the cabin, there are also two standing places outside for traffic-control personnel. One of these tractors is on display at the Schweizerisches Militärmuseum Full, another in the Swiss Army Historic Foundation in Burgdorf, Switzerland, and one is preserved in Oxfordshire, England. References External links Data on the Rotinoff Super Atlantic by militärfahrzeuge.ch Schweizerisches Militärmuseum Full: Werksammlung Mowag GmbH Kreuzlingen Stiftung HAM Swiss Army Historic Foundation Tractors Military vehicles of Switzerland
Rotinoff Super Atlantic
[ "Engineering" ]
335
[ "Engineering vehicles", "Tractors" ]
42,870,108
https://en.wikipedia.org/wiki/Amanita%20roseotincta
Amanita roseotincta is a species of agaric fungus in the family Amanitaceae found in North America. It was first described by American mycologist William Alphonso Murrill in 1914 as a species of Venenarius before being transferred to Amanita the same year. See also List of Amanita species References roseotincta Fungi of the United States Fungi described in 1914 Taxa named by William Alphonso Murrill Fungi without expected TNC conservation status Fungus species
Amanita roseotincta
[ "Biology" ]
107
[ "Fungi", "Fungus species" ]
42,870,384
https://en.wikipedia.org/wiki/CGView
CGView (Circular Genome Viewer) is a freely available, downloadable Java software program, applet and API (application programming interface) for generating colorful, zoomable, hyperlinked, richly annotated images of circular genomes such as bacterial chromosomes, mitochondrial DNA and plasmids. It is commonly used in bacterial sequence annotation pipelines to generate visual output suitable for the web. It has also been used in a variety of popular web servers (the CGView web server, PlasMapper, BASys) and databases (BacMap). Overview More than 4000 bacterial genomes and thousands of plasmid genomes have been sequenced thanks to advances in DNA sequencing technology. CGView was developed to address the specialized needs of visualizing and annotating circular genomes, such as bacterial, plasmid, chloroplast and mitochondrial DNA sequences. Once installed, the CGView program accepts a number of different file formats: feature data and rendering information can be supplied as an XML file, a tab-delimited file, or an NCBI ptt file. CGView then converts the input into a graphical map in various image formats (PNG, JPG, or SVG) that can include labels, titles, legends and footnotes. The images can be static, interactive, or poster-sized for printing or embedding into web pages. Technology and Accessibility CGView is written in the Java programming language. It is available as a downloadable Java application package as well as an applet and an API. The applet package can be used to embed interactive maps into web pages. The API can be used to incorporate CGView into other Java applications. A CGView server has also been developed. See also Genomics Genome Browser BASys PlasMapper References External links CGView web server Biological databases
CGView
[ "Biology" ]
383
[ "Bioinformatics", "Biological databases" ]
42,870,676
https://en.wikipedia.org/wiki/Stencil%20printing
Stencil printing is the process of depositing solder paste on printed wiring boards (PWBs) to establish electrical connections. It is immediately followed by the component placement stage. The equipment and materials used in this stage are a stencil, solder paste, and a printer. The stencil printing function is achieved through a single material, namely solder paste, which consists of solder metal and flux. The paste also acts as an adhesive during component placement and solder reflow; its tackiness enables the components to stay in place. A good solder joint is one where the solder paste has melted well, flowed, and wetted the lead or termination on the component and the pad on the board. In order to achieve this kind of solder joint, the component needs to be in the right place, the right volume of solder paste needs to be applied, the paste needs to wet well on the board and component, and there needs to be a residue that is either safe to leave on the board or one that can easily be cleaned. The solder volume is a function of the stencil, the printing process and equipment, the solder powder, and the rheology (physical properties) of the paste. Good solder wetting is a function of the flux. Inputs Inputs to the process can be classified as design input, material input and process parameter input. The output of the process is a printed wiring board that meets the process specification limits. These specifications usually are consistent solder paste volume and height, and printed solder paste aligned on the PWB pads. This determines the process yield. In electronic design automation, the solder paste mask and thus the stencil is typically defined in a layer named tCream/bCream aka CRC/CRS, PMC/PMS, TPS/BPS, or TSP/BSP (EAGLE), F.Paste/B.Paste (KiCad), PasteTop/PasteBot (TARGET), SPT/SPB (OrCAD), PT.PHO/PB.PHO (PADS), PASTE-VS/PASTE-RS (WEdirekt), GTP/GBP (Gerber and many others).
Some (less common) EDA software does not treat the solder paste mask as a regular part of a PCB's layer stack, in which case the paste mask must be derived from the solder stop mask. For improved accuracy, stencils traditionally were often mounted in proprietary aluminum frames of various kinds. Today, the usage of quick mount systems is more common at least for low volume batches, mounting the stencil pneumatically or mechanically. For this the stencil needs additional perforations for alignment following one of several mount system standards including QuattroFlex, ZelFlex, ESSEMTEC, PAGGEN, Metz, DEK VectorGuard, Mechatronic Systems and others. Printing process The process begins with loading the board into the printer. The internal vision system aligns the stencil to the board, after which the squeegee prints the solder paste. The stencil and board are then separated and unloaded. The bottom of the stencil is wiped about every ten prints to remove excess solder paste remaining on the stencil. A typical printing operation has a speed of around 15 to 45 seconds per board. Print head speed is typically 1 to 8 inches per second. The printing process must be carefully controlled. Misalignment of motion from the reference results in several defects, hence the board must be secured correctly before the process begins. A snugger and vacuum holders are used to secure the X and Y axes of the board. Vacuum holders must be carefully used, as they may affect the pin-in-paste printing process if not secured properly. The longest process is the printing operation, followed by the separation process. Post print inspection is crucial and is usually performed with special 2D vision systems on the printer or separate 3D systems. Printed wiring boards Design Vision systems in the stencil printing machines use global fiducial marks for aligning the PWB. Without these fiducials the printer would not print the solder paste in exact alignment with the pads. 
The PWB should have close dimensional tolerances so that it mates to the stencil. This is necessary to achieve the required alignment of the solder deposits on the pads. Masking The required accuracy in alignment can also be achieved by controlling the flow of solder on the PWB during reflow soldering. For this purpose, the space between the pads is often coated with a solder mask. Solder mask materials have no affinity for molten solder, so no positive bond forms between them as the solder solidifies. This process is often referred to as solder masking. The mask must be centered correctly. The mask protects the PWB against oxidation and prevents unintended solder bridges from forming between closely spaced solder pads. The height of the solder mask should also be lower than the pad height to avoid gasketing problems. If the height of the solder mask is greater than that of the pad, some of the solder paste settles in the empty space between the mask and the pad, effectively forming a gasket (a seal that fills the space between two surfaces to prevent leakage). This is referred to as gasketing, and it is a problem because the excess solder paste around the pad can be more than a nuisance for circuits with very small line spacing. Finishing The pads on the PWB are made of copper and are susceptible to oxidation. Surface oxidation of the copper inhibits the ability of the solder to form a reliable joint. To avoid this unwanted effect, all exposed copper is protected with a surface finish. Aperture fill and release The key to a well-printed PWB lies in the fill and release of solder paste into and from the aperture. When the stencil is in contact with the PWB, solder paste is applied over the top surface of the stencil using a squeegee. This causes the aperture to fill with solder paste. The PWB is then lowered from the stencil.
The amount of solder paste released from the stencil apertures and transferred to the PWB pads determines whether or not the print is good. Ideally, each deposited volume of solder paste would equal the volume of the corresponding stencil aperture. In reality, this is never the case; hence, a print is considered good if a certain fraction of the paste is released. One way of quantifying print performance is to calculate the transfer efficiency. This is mathematically stated as: Transfer efficiency = (Volume of printed deposit) / (Theoretical maximum volume) In the above expression, the theoretical maximum volume is simply the open volume of the stencil aperture. Ideally, a transfer efficiency of 1 is desired; in practice, the greater the transfer efficiency, the better the print. Filling the aperture with paste requires sufficient flow rate and sufficient fill time. Apertures which are not completely filled will not release paste onto the board, which results in clogged stencils and defective solder joints. Solder paste release is determined by the separation speed of the board from the stencil. The adhesion of the paste to the board has to provide the shearing force to overcome the adhesion of the paste to the stencil walls. This hydrodynamic shearing force depends on the separation speed. Stencils Stencils are used to print solder paste on the PCB. They are often made of stainless steel or nickel and are manufactured by different processes described below. Manufacturing processes Laser cutting The use of laser technology allows tighter tolerances and greater accuracy. The aperture walls can be smoothed through electro-polishing and/or nickel plating. The laser cutting process results in trapezoidal apertures that can create better solder paste release characteristics. The repeatability of dimensions in laser-cut stencils is generally better than that of chemical etching.
With laser cutting, there are no photo films requiring precise alignment or protection from moisture. E-FAB stencil This stencil is formed by the process of electroforming nickel, hence the name E-FAB. The nickel has better wear characteristics than steel and electroforming creates smooth tapered aperture walls. The process also creates a ridge along the bottom of the stencil that can improve stencil-to-board gasketing and result in more consistent solder paste release. Stencil design Due to the need for fine-pitch components, apertures become smaller and smaller until they become “tall-narrow” apertures. In such cases, the apertures may be filled with solder paste but not completely released, or sometimes not even completely filled and hence get no deposits. To counter this problem, aperture walls are made as smooth as possible. Also, molecular-layer nano-coatings are applied to the stencil walls so that the solder paste does not stick. Consistent fill and release is the most important output of stencil printing. When the stencil is down on the board, paste fills the aperture while in contact with the pad and the walls of the stencil. The contact is judged by taking the ratio of these areas, i.e. the ratio of the area of the pad to the area of the walls. This is called the area ratio. Information about the standards for stencil design is available in IPC-7525 and other standards. In general, including for stencils with tall and narrow apertures, an area ratio greater than 0.66 is recommended. For fine-pitch stencils (smaller than 20 mil pitch, 10 mil apertures), even with a 5 mil stencil, which is the most commonly used stencil thickness, the area ratio is below 0.66. This necessitates the use of a thinner stencil. For BGA/CSP and other very small apertures, the area ratio is used. It should be greater than 0.66, as this ensures a high probability of good fill and release.
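As a rough illustration of the transfer-efficiency and area-ratio figures discussed above, the following sketch computes both metrics for a rectangular aperture. The function names and dimensions are invented for this example; they are not taken from IPC-7525.

```python
# Illustrative sketch only: the two print-quality metrics described above,
# for a rectangular stencil aperture. Dimensions are in mils and invented.

def transfer_efficiency(printed_volume, aperture_volume):
    """Fraction of the aperture's open volume actually deposited on the pad."""
    return printed_volume / aperture_volume

def area_ratio(length, width, thickness):
    """Aperture opening area divided by aperture wall area."""
    opening_area = length * width
    wall_area = 2 * (length + width) * thickness
    return opening_area / wall_area

# A 10 x 10 mil fine-pitch aperture in a 5 mil thick stencil:
ratio = area_ratio(10, 10, 5)
print(f"area ratio = {ratio:.2f}")           # 0.50, below the 0.66 guideline
print("good release likely:", ratio > 0.66)  # False: a thinner stencil helps
```

With a 4 mil foil the same aperture gives a ratio of about 0.63, still marginal, which is why fine-pitch designs push toward thinner stencils or larger apertures.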
An area ratio below 0.66 would mean a much less reliable process. Aperture size should be smaller than the pad size to avoid excess solder paste and the production of solder balls. A 10 to 20% reduction in aperture size as compared to the pad size is typical to minimize solder balls. Solder balls can cause the electric circuit to malfunction. Other considerations Step-down stencils A PCB may need varying amounts of solder paste to be applied, depending upon the design and size of components, so applying a uniform maximum amount of solder may not be a good solution. Step-down stencils are used to achieve a varying solder amount; they often find use when "pin and paste" technology (i.e., printing solder paste into through-holes to avoid wave soldering) and components of significantly different pitch are used in the same PWB. Solder paste stencil life Ideally, a solder paste should have, at minimum, a 4-hour stencil life. The stencil life is defined as the time period in which there is no significant change in the solder paste material characteristics. A solder paste with a longer stencil life will be more robust in the printing process. Actual stencil life for a paste should be determined from the manufacturer's specifications and on-site verification. Handling and storage of stencils To improve the life and performance of stencils, they must be cleaned after use by removing any solder paste on them or within the apertures. The cleaned stencils are stored in a protective area. Before usage, stencils are inspected for wear or damage. Stencils are typically identified by job numbers to reduce the risk of mishandling or misplacement. Squeegee Squeegees are used to spread solder over the stencil and to fill all apertures consistently. Squeegees come in two types: metal and polyurethane. Metal squeegees are preferred over polyurethane.
They produce very consistent solder volumes and are resistant to scooping the solder paste out of the apertures when printing. In addition, they have better wear characteristics, leading to longer life. Common difficulties Insufficient solder paste Insufficient solder paste may cause poor bonds and contact between components and the board. The common causes of insufficient solder paste are poor gasketing, clogged stencil apertures, insufficient solder paste bead size, paste/stencil being used beyond recommended life span, stencil not wiped clean, or low squeegee pressure. Smudging/bridging The main causes of smudging/bridging are excessive squeegee pressure, inadequate stencil wiping, poor contact between the board and stencil, high temperature or humidity, or low solder paste viscosity. Misalignment print A typical misalignment print is usually caused by the vision system not spotting fiducials, PWB or stencil stretch, poor contact between the board and the stencil, or weak board support. Bow and twist A PCB not fixed properly during solder paste printing gives poor results and increases soldering-related issues. Normally, solder paste printing equipment can handle warpage of 1.0 to 3.0 mm, but beyond this limit special jigs or fixtures are needed to hold the PCB. Small, thick boards may be more difficult to handle than larger, thinner ones. Statistical process control More than 50% of defects in electronics assembly are due to solder paste printing problems. There are many parameters involved in this process, making it difficult to find the specific problem and to optimize the process. A careful statistical study of the process may be used to improve output significantly. Defects are characterized by the number of opportunities for a defect, not by the actual number of defective parts.
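The opportunity-based defect accounting just described can be sketched as follows; the helper names are invented for illustration, and the figures mirror the QFP and DPM examples used in this section.

```python
# Illustrative sketch: processes are rated per million defect *opportunities*
# (DPM), not per defective board. Helper names are invented for this example.

def defect_opportunities(pins, components=1):
    """Each pin is one opportunity, plus one opportunity per component placed."""
    return components * (pins + 1)

def dpm(defects, opportunities):
    """Defects per million opportunities."""
    return defects / opportunities * 1_000_000

# A 68-pin QFP: 68 pin opportunities + 1 for the component itself = 69.
print(defect_opportunities(68))   # 69
# 100 defects over 1 million opportunities rates the process at 100 DPM;
# world-class printing sits around 20 DPM.
print(dpm(100, 1_000_000))        # 100.0
```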
Example: if solder paste is printed on the pads for a 68-pin QFP, then total number of opportunities for defects = 68 pins + 1 for the component = 69 possible defects for printing alone. Hence, printing this one component presents 69 opportunities for defects. Counting defect opportunities is the most valid process monitor. Processes are typically rated in terms of the number of defects per million opportunities (DPM). As an example, a process resulting in 100 defects when given 1 million defect opportunities would have a rating of 100 DPM. World-class printing processes have defect levels around 20 DPM. A low-DPM printing process may be achieved by employing statistical techniques to determine the effects of individual parameters or interactions between different parameters. Important process parameters can then be optimized using design of experiments (DOE) techniques. These optimized parameters can then be implemented and process benchmarking can begin. Statistical process control can then be used to continuously monitor and improve printing DPM levels. See also Stencil Pochoir Notes References Further reading Printing processes Printed circuit board manufacturing
Stencil printing
[ "Engineering" ]
3,198
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
42,871,299
https://en.wikipedia.org/wiki/Brachoria
Brachoria is a genus of polydesmidan millipedes in the family Xystodesmidae inhabiting the Eastern United States. Also known as the Appalachian mimic millipedes, at least 30 species are known, with the highest diversity in the Appalachian Mountains, especially the Cumberland Plateau and Ridge and Valley Province. Species of Brachoria are boldly patterned with yellow, orange, red, or violet markings that contrast with a black background, and in the Appalachians some species mimic species of Apheloria where they co-occur, a phenomenon known as Müllerian mimicry. Species There are over 30 species of Brachoria, which differ mainly in characteristics of the male gonopods (reproductive appendages), but since many species have very small known ranges, geographic location can aid in identification as well. These species belong to the genus Brachoria: Brachoria abbreviata (Shelley, 1986) Brachoria badbranchensis Marek, 2010 Brachoria blackmountainensis Marek, 2010 Brachoria calceata (Causey, 1955) Brachoria campcreekensis Marek, 2010 Brachoria camptera Means, Hennen & Marek, 2021 Brachoria cedra Keeton, 1959 Brachoria conta Keeton, 1965 Brachoria cryocybe Hennen, Means & Marek, 2021 Brachoria cumberlandmountainensis Marek, 2010 Brachoria dentata Keeton, 1959 Brachoria divicuma Keeton, 1965 Brachoria electa Causey, 1955 Brachoria enodicuma Keeton, 1965 Brachoria evides (Bollman, 1887) Brachoria flammipes Marek, 2010 Brachoria forficata (Shelley, 1986) Brachoria glendalea (Chamberlin, 1918) Brachoria gracilipes (Chamberlin, 1947) Brachoria grapevinensis Marek, 2010 Brachoria guntermountainensis Marek, 2010 Brachoria hansonia Causey, 1950 Brachoria hendrixsoni Marek, 2010 Brachoria hoffmani Keeton, 1959 Brachoria hubrichti Keeton, 1959 Brachoria indianae (Bollman, 1888) Brachoria initialis Chamberlin, 1939 Brachoria insolita Keeton, 1959 Brachoria kentuckiana (Causey, 1942) Brachoria laminata Keeton, 1959 Brachoria ligula Keeton, 1959 Brachoria mendota Keeton, 1959 Brachoria ochra (Chamberlin, 1918) Brachoria platana Means, Hennen & Marek, 2021 Brachoria plecta Keeton, 1959 Brachoria sheari Marek, 2010 Brachoria splendida (Causey, 1942) Brachoria virginia Marek, 2010 Brachoria viridicolens (Hoffman, 1948) Gallery References External links The Appalachian Mimic Millipedes: Tree of Life Polydesmida Millipedes of North America Mimicry
Brachoria
[ "Biology" ]
621
[ "Mimicry", "Biological defense mechanisms" ]
42,872,715
https://en.wikipedia.org/wiki/Tambjamine
Tambjamines are a group of natural products that are structurally related to the prodiginines. They are enamine derivatives of 4-methoxy-2,2'-bipyrrole-5-carboxaldehyde (MBC). Chemical structure Tambjamines are composed of two pyrrole rings with an enamine moiety at C-5 and a methoxy group at C-4; the majority have short alkyl chains connected to the enamine nitrogen. This group of alkaloids has been isolated from marine invertebrates and bacteria (both marine and terrestrial). Marine sources and ecological roles The large nudibranch Roboastra tigris is a known predator of Tambja eliora and Tambja abdere, two species of smaller nudibranchs. The chemical extracts of all three nudibranch species contain tambjamines, which were traced to Sessibugula translucens, a bryozoan that serves as a food source for the two prey species. It is hypothesized that tambjamines are a chemical defence mechanism of the bryozoan against feeding by the spotted kelpfish Gibbonsia elegans. Production Biosynthesis The biosynthetic gene cluster responsible for tambjamine production was identified in 2007 using functional genomic analysis of a Pseudoalteromonas tunicata strain. The Tam cluster encodes 19 proteins, 12 of which were found to be highly similar to proteins in the Red and Pig pathways of prodigiosin biosynthesis, based on sequence data. The biosynthesis of tambjamine YP1 first involves the incorporation of proline, malonyl-CoA, and serine to form 4-methoxy-2,2'-bipyrrole-5-carboxaldehyde (MBC). AfaA is hypothesized to activate long-chain fatty acids, while the predicted dehydrogenase TamT introduces a double bond into a fatty acyl side chain. TamH then carries out the reduction of the CoA-ester to form an aldehyde intermediate, followed by transamination. Condensation of the dodec-3-en-1-amine product of this reaction with MBC by TamQ results in tambjamine YP1 (compound 21 in Figure 1).
Laboratory The aldehyde MBC was first prepared by total synthesis when the structure of prodigiosin was being investigated. It has subsequently been synthesised by other methods and used to make tambjamines and related natural products. See also Prodiginines References Alkaloids Enamines Ethers Pyrroles
Tambjamine
[ "Chemistry" ]
566
[ "Biomolecules by chemical classification", "Natural products", "Functional groups", "Organic compounds", "Ethers", "Alkaloids" ]
42,873,698
https://en.wikipedia.org/wiki/Groundhog%20Technologies
Groundhog Technologies is a privately held company founded in 2001 and headquartered in Cambridge, Massachusetts, USA. As a spin-off of the MIT Media Lab, it was a semi-finalist in MIT's $50,000 Entrepreneurship Competition in 2000 and was incorporated the following year. The company received its first round of financing from major Japanese corporations and their venture capital arms in November 2002: Marubeni, Yasuda Enterprise Development and Japan Asia Investment Co. It received a second round of financing in 2004 and has since become self-sustaining. Groundhog Inc., Groundhog Technologies Inc.'s operation center in Taiwan, went public in 2022. The company's products are built on top of its Mobility Intelligence Platform, which analyzes the locations, Quality of Experience, context, and lifestyles of subscribers in a mobile operator's network. The intelligence about geolocation is then applied to improve subscribers' experience and enable applications such as geomarketing and geotargeting. The company has leveraged its platform to enable operators to address the advertising and data monetization opportunity both internally and in partnership with third-party retailers, advertisers, and ad networks. Core Technologies Groundhog Technologies launched its Mobility Intelligence platform based on Chaos Theory and multi-dimensional modeling. The application of Chaos Theory gave rise to the company's mathematical models of subscribers' mobility and usage behavior, which can be used for different applications, such as by mobile operators to optimize networks according to user demands. According to Chaos Theory, some seemingly random or chaotic signals can be transformed into phase space for analysis, which can reveal the patterns behind them. The cases of most interest arise when the chaotic behavior shows patterns around an attractor in the phase space.
Based on the attractor in the phase space, data from different locations, times, and individuals can be utilized for modeling and indoor geolocation. It is also found that the dimensional structure and characteristics of phase space can naturally neutralize the bias of positioning (based on techniques such as triangulation or trilateration) caused by effects such as multipath. That is, although each input is biased in some way, observations from different dimensions and angles are biased in different ways. By combining multi-dimensional input in the phase space, the bias can, per the Law of Large Numbers, be averaged out across samples from different dimensions, times, and individuals. See also Location-Based Services Geolocation software References Big data companies Telecommunications companies of the United States Wireless locating Marubeni Geomarketing
Groundhog Technologies
[ "Technology" ]
512
[ "Mobile telecommunications", "Wireless locating" ]
41,441,485
https://en.wikipedia.org/wiki/Operational%20modal%20analysis
Ambient modal identification, also known as operational modal analysis (OMA), aims at identifying the modal properties of a structure based on vibration data collected when the structure is under its operating conditions, i.e., no initial excitation or known artificial excitation. The modal properties of a structure include primarily the natural frequencies, damping ratios and mode shapes. In an ambient vibration test the subject structure can be under a variety of excitation sources which are not measured but are assumed to be 'broadband random'. The latter is a notion that one needs to apply when developing an ambient identification method. The specific assumptions vary from one method to another. Regardless of the method used, however, proper modal identification requires that the spectral characteristics of the measured response reflect the properties of the modes rather than those of the excitation. Pros and cons Implementation economy is one primary advantage of ambient vibration tests as only the (output) vibration of the structure needs to be measured. This is particularly attractive for civil engineering structures (e.g., buildings, bridges) where it can be expensive or disruptive to carry out free vibration or forced vibration tests (with known input). Identifying modal properties using ambient data does have disadvantages: The identification methods are more sophisticated. As the loading is not measured, in the development of the identification method, it needs to be modeled (by some stochastic process), or its dynamic effects on the measured response have to be removed. Otherwise, it is not possible to explain the characteristics in the data based solely on the modal properties. Without loading information, the identified modal properties can have significant identification uncertainties. In particular, the results are as good as the broadband assumption applied. 
The identified modal properties only reflect the properties at the ambient vibration level, which is usually lower than the serviceability level or other design cases of interest. This is especially relevant for the damping ratio, which is commonly perceived to be amplitude-dependent. The measurement system needs to be low-noise and sensitive, since structures mainly vibrate at low levels in their operational conditions. Methods Methods of OMA can be broadly classified by two aspects, 1) frequency domain or time domain, and 2) Bayesian or non-Bayesian. Non-Bayesian methods were developed earlier than Bayesian ones. They make use of some statistical estimators with known theoretical properties for identification, e.g., the correlation function or spectral density of measured vibrations. Common non-Bayesian methods include stochastic subspace identification (time domain) and frequency domain decomposition (frequency domain). Bayesian methods have been developed in the time-domain and frequency-domain. Frequency domain and time domain operational modal analysis of structures The objective of operational modal analysis is to extract resonant frequencies, damping, and/or operating shapes (unscaled mode shapes) of a structure. This method is sometimes called output-only modal analysis because only the response of the structure is measured. The structure might be excited under natural operating conditions, or other excitations might be applied to the structure; however, as long as the operating shapes are not scaled based on the applied force, the method is called operational modal analysis (e.g. operating shapes of a wind turbine blade excited by a shaker are measured using operational modal analysis). This method has been used to extract operating modes of a hovering helicopter. Operational modal analysis versus operational deflection shape The two terms, Operational Modal Analysis and Operational Deflection Shape, are very similar, but refer to two different analysis approaches.
Both use ambient vibration data as inputs, but in the case of Operational Deflection Shapes, a shape that corresponds to the overall vibration response is created. It is based on the vibration amplitude only, there is no attempt to extract a mode shape and no quantification of the modal damping can be obtained. While Operational Modal Analysis, when the main assumptions are met, yields a representation of a system characteristic in its operating environment, an Operational Deflection Shape will simply extract the system response under the currently applied loads. Notes See monographs on non-Bayesian OMA and Bayesian OMA. See OMA datasets. See also Frequency domain decomposition Bayesian operational modal analysis Ambient vibrations Microtremor Modal analysis Modal testing References Wave mechanics
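As a concrete sketch of the frequency domain decomposition (FDD) method mentioned above — a singular value decomposition of the output cross-spectral density matrix at each frequency line — the following toy example recovers a single synthetic 20 Hz mode from two noisy sensor channels. The signal model and every parameter value are invented for illustration; real OMA data would come from sensors on a structure under ambient excitation.

```python
# Toy FDD sketch (illustrative only): the peak of the first singular value
# of the cross-spectral density (CSD) matrix marks a structural mode.
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(0)
fs, n = 256.0, 2**14
t = np.arange(n) / fs
f_mode = 20.0                                    # synthetic "natural frequency"
mode = np.sin(2 * np.pi * f_mode * t + rng.uniform(0, 2 * np.pi))
# Two sensors see the same mode with different mode-shape amplitudes, plus
# independent noise standing in for the unmeasured broadband ambient input.
y = np.vstack([1.0 * mode, 0.6 * mode]) + 0.3 * rng.standard_normal((2, n))

nper = 1024
freqs, _ = csd(y[0], y[0], fs=fs, nperseg=nper)  # frequency grid
G = np.empty((len(freqs), 2, 2), dtype=complex)  # CSD matrix per frequency
for i in range(2):
    for j in range(2):
        _, G[:, i, j] = csd(y[i], y[j], fs=fs, nperseg=nper)

s = np.linalg.svd(G, compute_uv=False)           # singular values per frequency
f_peak = freqs[np.argmax(s[:, 0])]               # 1st singular value peaks at the mode
print(f"identified frequency: {f_peak:.2f} Hz")  # close to 20 Hz
```

In a genuine FDD implementation the singular vectors at the peak give the (unscaled) mode shape, and closely spaced modes appear as separate peaks in the higher singular values.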
Operational modal analysis
[ "Physics" ]
897
[ "Wave mechanics", "Waves", "Physical phenomena", "Classical mechanics" ]
41,441,632
https://en.wikipedia.org/wiki/Kent%20Design%20Awards
These awards were created to celebrate design excellence in Kent. They were first staged in 2003 and are usually held every two years. They were renamed the 'Kent Design and Development Awards' in 2012 and retained that name in 2014. 2003 Commercial and Industrial Building winner - Holiday Extras HQ Building, Newingreen, Hythe Public Building winner - Riverhead Infant School, Sevenoaks Urban Design and Town Centre Renewal winner - St. Mildreds Lavender Mews, Canterbury Best Individual House - Lynwood, Tunbridge Wells (private residence) Housebuilding for Quality winner - Ingress Park, Greenhithe Overall winner - Lynwood, Tunbridge Wells (private residence) Highly Commended was Romney Warren Visitor Centre 2004 Housebuilding for Quality winner - Vista (private residence), Dungeness Public Building/Education winner - St Augustine's RC School, Hythe Town and Village Renaissance - Horsebridge and Brownings Yard, Whitstable Overall Winner - St Augustine's RC School, Hythe 2005/2006 Public Building winner - Trosley Country Park amenity block Commercial, Industrial and retail winner - Kings Hill Village Centre Housebuilding winner - Iden Farm Cottage, Boughton Monchelsea, near Maidstone Building Renovation winner - The Old Gymnasium, Deal Cavalry Barracks, Deal Best New Neighbourhood winner - Affordable village housing in Ash Grove, St Margaret's at Cliffe, near Dover. Overall Winner - The Goods Shed, Canterbury 2007/2008 Also nominated was Sevenoaks Kaleidoscope museum, library and gallery, although misleadingly named as a winner on an architect's brochure.
Commercial, Industrial and retail winner - Broadside (HQ of MHS Homes), Chatham Housebuilding winner - Sandling Park (a residential scheme), Maidstone Building Renovation (joint winners) - Pilkington Building and Drill Hall Library, within the Universities at Medway Public Building winner - Parrock Street public toilets, Gravesend (Gravesham Community Project PFI) Landscape category winner - Lower Leas Coastal Park, Folkestone Overall Winner - The Pines Calyx, St Margaret's at Cliffe 2010 30 projects were shortlisted in seven categories from more than 60 entries. The Medway Building at the University of Kent, part of the Universities at Medway, was nominated for Best Public Building. Also nominated was Crossway Low Energy House, near Maidstone. Conservation & Craftsmanship Category winner - The Darnley Mausoleum, Cobham Town & Village Renaissance winner - Ashford Shared Space Residential overall winner - The Quays (towers within the former Chatham Dockyard), Chatham Maritime Residential (major development) winner - The Quays, Chatham Maritime Residential (minor development) winner - El Ray, Dungeness Commercial, Industrial & Retail winner - Deal Pier Public Buildings (general) winner - Quarterhouse, Folkestone Public Buildings (schools) winner - St. James the Great Primary & Nursery School, East Malling Project of the Year - the Lord Sandy Bruce-Lockhart Award - The Darnley Mausoleum, Cobham 2012 (Renamed as 'Kent Design and Development Awards') Jointly organised and sponsored by 'DHA Planning' (town planning and transport consultancy), Kent County Council and Ward Homes (public housing management). There were 94 nominees, including Sevenoaks School Performing Arts Centre and Cornwallis Academy.
Commercial, Industrial and Retail winner - Rocksalt Restaurant, Folkestone Public Buildings Education winner - Marlowe Theatre, Canterbury Civils and Infrastructure winner - Dover Esplanade, sea frontage Environmental Performance winner - Hadlow College Minor Residential winner - Hill House, Ulcombe Major Residential winner - Rosemary Gardens, Park Wood Public Buildings, Community winner - Turner Contemporary Public Buildings, Education winner - Walderslade Primary School Project of the Year (Sponsored by DHA Planning) - Rocksalt Restaurant, Folkestone 2014 Kent Design and Development Awards The shortlist was announced in September 2014. Categories include: Major Residential category - Horsted Park, Chatham Minor Residential category - Pobble House, Romney Marsh Commercial, Industrial and Retail category - Medway Crematorium Civils and Infrastructure category - Sandwich Town Tidal Defences Education Public Buildings category - Goat Lees Primary School, Ashford Community Public Buildings category - Cyclopark, Gravesend Environmental Performance category - Goat Lees Primary School, Ashford Overall winner ‘Project of the year’ - Goat Lees Primary School, Ashford. 2016 Awards Twenty-three developments were shortlisted for the eight categories; Winners: Commercial, Industrial and Retail category - The Wing, Capel-le-Ferne Conservation category - Command of the Oceans at Chatham Historic Dockyard, Environmental Performance category - North Vat, a house near Dungeness, Infrastructure and Renewables category - the cut and cover tunnel at Hermitage Quarry, Barming, by Gallagher Ltd, Education Public Buildings category - The Yarrow in Broadstairs, Community Public Buildings category - Fairfield (part of East Kent College) in Dartford Minor Residential category - Nautical Mews in Margate, Major Residential category - Farrow Court in Ashford and Wallis Fields in Maidstone, The Wing for the Battle of Britain Memorial Trust at Capel-le-Ferne was named Project of the Year.
References External links Kent Design and Development Awards 2012 Design awards Architecture awards Architecture in the United Kingdom British awards Awards established in 2000 Kent
Kent Design Awards
[ "Engineering" ]
1,077
[ "Design", "Design awards" ]
41,442,019
https://en.wikipedia.org/wiki/Bayesian%20operational%20modal%20analysis
Bayesian operational modal analysis (BAYOMA) adopts a Bayesian system identification approach for operational modal analysis (OMA). Operational modal analysis aims at identifying the modal properties (natural frequencies, damping ratios, mode shapes, etc.) of a constructed structure using only its (output) vibration response (e.g., velocity, acceleration) measured under operating conditions. The (input) excitations to the structure are not measured but are assumed to be 'ambient' ('broadband random'). In a Bayesian context, the set of modal parameters are viewed as uncertain parameters or random variables whose probability distribution is updated from the prior distribution (before data) to the posterior distribution (after data). The peak(s) of the posterior distribution represents the most probable value(s) (MPV) suggested by the data, while the spread of the distribution around the MPV reflects the remaining uncertainty of the parameters. Pros and cons In the absence of (input) loading information, the identified modal properties from OMA often have significantly larger uncertainty (or variability) than their counterparts identified using free vibration or forced vibration (known input) tests. Quantifying and calculating the identification uncertainty of the modal parameters become relevant. The advantage of a Bayesian approach for OMA is that it provides a fundamental means via the Bayes' Theorem to process the information in the data for making statistical inference on the modal properties in a manner consistent with modeling assumptions and probability logic. The potential disadvantage of Bayesian approach is that the theoretical formulation can be more involved and less intuitive than their non-Bayesian counterparts. Algorithms are needed for efficient computation of the statistics (e.g., mean and variance) of the modal parameters from the posterior distribution. Unlike non-Bayesian methods, the algorithms are often implicit and iterative. 
E.g., optimization algorithms may be involved in determining the most probable value, and these may not converge for poor-quality data. Methods Bayesian formulations have been developed for OMA in the time domain and in the frequency domain using the spectral density matrix and fast Fourier transform (FFT) of ambient vibration data. Based on the formulation for FFT data, fast algorithms have been developed for computing the posterior statistics of modal parameters. Recent developments based on the EM algorithm show promise for simpler algorithms and reduced coding effort. The fundamental precision limit of OMA has been investigated and presented as a set of uncertainty laws which can be used for planning ambient vibration tests. Connection with maximum likelihood method The Bayesian method and the maximum likelihood method (non-Bayesian) are based on different philosophical perspectives, but they are mathematically connected; see, e.g., the monographs listed in the Notes. For example: assuming a uniform prior, the most probable value (MPV) of the parameters in a Bayesian method is equal to the location where the likelihood function is maximized, which is the estimate in the maximum likelihood method. Under a Gaussian approximation of the posterior distribution of the parameters, their covariance matrix is equal to the inverse of the Hessian of the negative log-likelihood function at the MPV. Generally, this covariance depends on the data. However, if one assumes (hypothetically; non-Bayesian) that the data is indeed distributed as the likelihood function, then for large data size it can be shown that the covariance matrix is asymptotically equal to the inverse of the Fisher information matrix (FIM) of the parameters (which has a non-Bayesian origin). This coincides with the Cramer–Rao bound in classical statistics, which gives the lower bound (in the sense of matrix inequality) of the ensemble variance of any unbiased estimator. Such a lower bound can be reached by the maximum-likelihood estimator for large data size.
In the above context, for large data size the asymptotic covariance matrix of the modal parameters depends on the 'true' parameter values (a non-Bayesian concept), often in an implicit manner. It turns out that by applying further assumptions such as small damping and high signal-to-noise ratio, the covariance matrix has a mathematically manageable asymptotic form, which provides insight into the achievable precision limit of OMA and can be used to guide ambient vibration test planning. This is collectively referred to as the 'uncertainty law'. See also Operational modal analysis Bayesian inference Ambient vibrations Microtremor Modal analysis Modal testing Notes See monographs on non-Bayesian OMA and Bayesian OMA See OMA datasets See Jaynes and Cox for Bayesian inference in general. See Beck for Bayesian inference in structural dynamics (relevant for OMA) The uncertainty of the modal parameters in OMA can also be quantified and calculated in a non-Bayesian manner. See Pintelon et al. References Wave mechanics
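To make the Gaussian-approximation idea above concrete, the following toy sketch finds the MPV by minimising a negative log-likelihood and approximates the posterior spread by the inverse Hessian at the MPV. The one-parameter model (unknown mean of Gaussian data, known sigma, flat prior) is an invented stand-in — an actual BAYOMA likelihood is built from FFT data of ambient vibration — but the mechanics are the same.

```python
# Toy illustration (invented model): MPV = minimiser of the negative
# log-likelihood; posterior covariance ~ inverse Hessian at the MPV.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
sigma, n = 2.0, 400
data = rng.normal(5.0, sigma, size=n)

def nll(mu):
    """Negative log-likelihood of the Gaussian mean (flat prior), up to a constant."""
    return np.sum((data - mu) ** 2) / (2 * sigma ** 2)

mpv = minimize_scalar(nll).x            # coincides with the MLE under a flat prior

h = 1e-4                                # numerical second derivative at the MPV
hess = (nll(mpv + h) - 2 * nll(mpv) + nll(mpv - h)) / h ** 2
post_std = 1.0 / np.sqrt(hess)          # Gaussian-approximation posterior std

print(f"MPV = {mpv:.3f}")
print(f"posterior std = {post_std:.4f}")                    # matches the analytic value
print(f"analytic sigma/sqrt(n) = {sigma / np.sqrt(n):.4f}") # 0.1000
```

In multi-parameter BAYOMA the same pattern applies with a vector MPV and a full Hessian matrix, whose inverse gives the posterior covariance of the natural frequencies, damping ratios, and mode shapes.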
Bayesian operational modal analysis
[ "Physics" ]
1,005
[ "Waves", "Wave mechanics", "Physical phenomena", "Classical mechanics" ]
41,442,396
https://en.wikipedia.org/wiki/InRule%20Technology
InRule Technology is a software company that offers Business Rule Management System (BRMS) enterprise software products. History InRule Technology's Chief Executive Officer Rik Chomko and Chief Technology Officer Loren Goodman founded InRule Technology in Chicago in 2002. Paul Hessinger joined InRule Technology in 2004 as chief executive officer and chairman of the board and served until his retirement in 2015. The company works with customers in several markets, including financial services, the public sector, healthcare, and insurance. In 2007, InRule Technology became a charter member of the Microsoft Business Process Alliance. In August 2019, InRule was acquired by OpenGate Capital. Products On October 29, 2012, InRule Technology launched InRule for Microsoft Dynamics CRM. The program provides components that enable the creation and updating of rules within Microsoft Dynamics CRM, giving shops that prefer Microsoft's platforms a familiar environment for rule management. With the availability of InRule 4.6 in 2014, the company introduced deployment of InRule through REST services and allowed REST services to be called from InRule. This enables access to data exposed as a REST service and allows a rule service to be packaged for RESTful access. The product launch reflected the move of the company's core audience toward a broader array of technologies despite an earlier focus on .NET. In 2017, InRule introduced InRule for the Salesforce Platform, as well as a technology partnership with Work-Relay, a Business Process Management (BPM) application built on the Salesforce Platform. One year earlier the company had introduced InRule for JavaScript, allowing enterprises to run rules on the client side, the server side, or both.
The software architecture includes multiple components, including irAuthor, the primary authoring tool for creating and maintaining rules; irVerify, a real-time test environment to run and debug rule applications; and irSDK, a set of APIs that allows developers to integrate InRule into their applications. Additionally, irSOA allows users to access the InRule rule engine as a service; irSOA is now called the irServer Execution Service. See also FICO IBM/ILOG Drools Pegasystems Oracle Corporation References External links Official Website Data modeling Decision-making Decision support systems Rule engines
InRule Technology
[ "Technology", "Engineering" ]
481
[ "Data modeling", "Data engineering", "Information systems", "Decision support systems" ]
41,442,761
https://en.wikipedia.org/wiki/Experimental%20biology
Experimental biology is the set of approaches in the field of biology concerned with conducting experiments to investigate and understand biological phenomena. The term is opposed to theoretical biology, which is concerned with mathematical modelling and abstraction of biological systems. Due to the complexity of the investigated systems, biology is primarily an experimental science. However, as a consequence of the modern increase in computational power, it is now becoming more feasible to find approximate solutions and validate mathematical models of complex living organisms. The methods employed in experimental biology are numerous and of different natures, including molecular, biochemical, biophysical, microscopic and microbiological techniques. See :Category:Laboratory techniques for a list of biological experimental techniques. Gallery References Biological techniques and tools Branches of biology
Experimental biology
[ "Biology" ]
150
[ "nan" ]
41,442,860
https://en.wikipedia.org/wiki/Benzoxazinone%20biosynthesis
The biosynthesis of benzoxazinone, a cyclic hydroxamate and a natural insecticide, has been well characterized in maize and related grass species. In maize, genes in the pathway are named using the symbol bx. Maize Bx genes are tightly linked, a feature that has been considered uncommon for plant genes of a biosynthetic pathway. Especially notable are the genes encoding the enzymatic functions BX1, BX2 and BX8, which are found within about 50 kilobases. Results from wheat and rye indicate that the cluster is an ancient feature. In wheat the cluster is split into two parts: the wheat genes Bx1 and Bx2 are located in close proximity on chromosome 4, while wheat Bx3, Bx4 and Bx5 map to the short arm of chromosome 5; an additional Bx3 copy was detected on the long arm of chromosome 5B. Recently, additional biosynthetic clusters have been detected in other plants for other biosynthetic pathways, so this organization might be common in plants. Maize genes The bx1 gene encodes a protein, BX1, that forms indole from indole-3-glycerol phosphate in the plastid. It is the first step in the pathway and determines much of the natural variation in levels of DIMBOA in maize. The next steps in the pathway occur in the endoplasmic reticulum, also referred to as the microsomes in cell fractionation experiments, and are carried out by proteins encoded by genes bx2, bx3, bx4, and bx5. References Biochemistry Genetics Biosynthesis
Benzoxazinone biosynthesis
[ "Chemistry", "Biology" ]
348
[ "Genetics", "Biosynthesis", "nan", "Chemical synthesis", "Biochemistry", "Metabolism" ]
41,443,085
https://en.wikipedia.org/wiki/External%20image
In psychology, the external image (also alien image, foreign image, public image, or third-party image) is the image other people have of a person, i.e., a person's external image is the way they are viewed by other people. It contrasts with a person's self-image; how the external image is communicated to a person may affect their self-esteem positively or negatively. Definition An external image is the totality of all perceptions, feelings, and judgments that third parties make about an individual. These interpersonal perceptions are automatically linked to earlier experiences with the person being observed and with the feelings arising from these interactions and evaluations. The image that others have of a person shapes their expectations of this person, and significantly affects their mutual social interaction. External image and self-image A person's external image, or more precisely, how this image is communicated to the individual, and how others react to the individual as a result of his or her external image, significantly affects the person's self-image. Positive, appreciative external images strengthen an individual's self-confidence and self-esteem. In extreme cases, negative or conflicting external images can cause mental illness. The external image is always different from an individual's self-image. From the two perspectives and the differences between them, or more accurately, the inferences that the two parties draw for themselves, social interactions evolve, influenced by the parties' own selves. In group dynamics Conscious handling of images of each other plays an important part in group dynamics. In feedback exercises, subjects are trained in giving and receiving external images. The Johari window describes the relationship between external and self-images, and that between conscious and unconscious parts of these images.
With mindful "awareness exercises", a person is trained to detect previously unconscious expectations of third parties, and with communication exercises, they are trained to reconcile their own and others' images and expectations of each other. In psychotherapy Psychotherapy also deals with external images when treating depression or in dealing with the effects of trauma or bullying, or more generally in counseling members of marginalized groups. References See also Constructivism Othering Conceptions of self Perception Cognitive psychology Interpersonal relationships Interpersonal communication
External image
[ "Biology" ]
451
[ "Behavior", "Behavioural sciences", "Cognitive psychology", "Interpersonal relationships", "Human behavior" ]
41,443,123
https://en.wikipedia.org/wiki/Bx1%20benzoxazin1
Function Maize gene for the first step in the biosynthesis of benzoxazinoids, which aid in resistance to insect pests, pathogenic fungi and bacteria. First report Hamilton 1964, as a mutant sensitive to the herbicide atrazine and lacking benzoxazinoids (less than 1% of non-mutant plants). Molecular characterization reveals that the BX1 protein is a homologue of the alpha-subunit of tryptophan synthase. The reference mutant allele has a deletion of about 900 bp, located at the 5'-terminus and comprising sequence upstream of the transcription start site and the first exon. Additional alleles are given by a Mu transposon insertion in the fourth exon (Frey et al. 1997) and a Ds transposon insertion in the maize inbred line W22 genetic background (Betsiashvili et al. 2014). Gene sequence diversity analysis has been performed for 281 inbred lines of maize, and the results suggest that bx1 is responsible for much of the natural variation in DIMBOA (a benzoxazinoid compound) synthesis (Butron et al. 2010). Genetic variation in benzoxazinoid content influences maize resistance to several insect pests (Meihls et al. 2013; McMullen et al. 2009). Map location AB chromosome translocation analyses place bx1 on the short arm of chromosome 4 (4S; Simcox and Weber 1985). There is close linkage to other genes in the benzoxazinoid synthesis pathway (bx2, bx3, bx4, bx5; Frey et al. 1995, 1997). Gene bx1 is 2490 bp from bx2 (Frey et al. 1997); between umc123 and agrc94 on 4S (Melanson et al. 1997). Mapping probes: SSR p-umc1022 (Sharopova et al. 2002); Overgo (physical map probe) PCO06449 (Gardiner et al. 2004). Phenotypes Mutants are viable, but may be distinguished from normal plants by FeCl3 staining: plants able to synthesize benzoxazinoids turn pale blue when crushed and treated with FeCl3 solutions (Hamilton 1964, Simcox 1993). Mutations in the bx1 gene reduce the resistance to first-generation European corn borer (Ostrinia nubilalis) that is conferred by benzoxazinoids (Klun et al.
1970). Bx1 mutant maize deposited less callose in response to chitosan elicitation than isogenic wild-type plants (Ahmad et al. 2011). Genetic mapping using recombinant inbred lines derived from maize inbred lines B73 and Mo17 showed that a 3.9 kb cis-regulatory element located approximately 140 kb upstream of Bx1 causes higher 2,4-dihydroxy-7-methoxy-1,4-benzoxazin-3-one (DIMBOA) accumulation in Mo17 than in B73 seedlings (Zheng et al. 2015). This genetic variation is also associated with higher corn leaf aphid (Rhopalosiphum maidis) reproduction on B73 compared to Mo17 maize seedlings (Betsiashvili et al. 2014). Relative to maize inbred line W22, Bx1::Ds mutant maize plants are more sensitive to corn leaf aphids (Rhopalosiphum maidis) (Betsiashvili et al. 2014) and beet armyworms (Spodoptera exigua) (Tzin et al. 2017). Highly localized induction of benzoxazinoid accumulation in response to Egyptian cotton leafworm (Spodoptera littoralis) feeding is abolished in a maize bx1 mutant (Maag et al. 2016). Gene Product Catalyzes the first step in the synthesis of DIMBOA, forming indole from indole-3-glycerol phosphate. The enzyme is called indole-3-glycerol phosphate lyase, chloroplast (EC 4.1.2.8), and is located in the chloroplast. The X-ray structure of the BX1 protein has been resolved and compared with bacterial TSA (tryptophan synthase alpha subunit; Kulik et al. 2005). Three homologs of the BX1 protein occur in maize. One is encoded by the gene tsa1, tryptophan synthase alpha1 (Frey et al. 1997, Melanson et al. 1997), on chromosome 7; another by igl1, indole-3-glycerol phosphate lyase1 (Frey et al. 1997), on chromosome 1; and another by tsah1, "TSA-like", located near the bx1 gene (Frey et al. 1997). Links MaizeGDB NCBI Uniprot References Biochemistry Genetics
Bx1 benzoxazin1
[ "Chemistry", "Biology" ]
1,052
[ "Biochemistry", "Genetics", "nan" ]
41,443,291
https://en.wikipedia.org/wiki/Swill%20milk%20scandal
The swill milk scandal was a major adulterated food scandal in the state of New York in the 1850s. The New York Times reported an estimate that in one year, 8,000 infants died from swill milk. Name Swill milk referred to milk from cows fed swill, which was the residual mash from nearby distilleries. The milk was whitened with plaster of Paris, thickened with starch and eggs, and tinted with molasses. After the extraction of alcohol from the macerated grain, the residual mash still contains nutrients. Therefore, keeping cows stabled near distilleries and feeding them with swill was economically advantageous. History As the population of New York City exploded in the antebellum period, a time when safe drinking water was scarce, the demand for milk soared. But as the city expanded and real estate prices climbed, the meadows necessary to raise hay-fed cattle moved farther from its markets. The cost of bringing fresh milk to customers in the city became prohibitive and threatened to restrict its supply to relatively wealthy inhabitants. For the same sanitary reasons that made milk popular, Americans consumed alcohol at the highest per capita rates in US history, and New York City was home to a large number of distilleries. Distilleries in London had experimented with feeding the waste product of their industry—the fermented mash of rye, barley, and wheat commonly referred to as "swill"—to cattle with some success, and New York City distillers soon followed suit. The milk from swill-fed cows, produced in dense urban areas and often priced as low as 6 cents per quart, was affordable to most of New York City's poorest residents. The New York Academy of Medicine carried out an examination and established the connection of swill milk with the increased infant mortality in the city. The topic of swill milk was also widely covered in pamphlets and caricatures of the time.
In May 1858, Frank Leslie's Illustrated Newspaper published a landmark exposé of the distillery-dairies of Manhattan and Brooklyn that marketed so-called swill milk, which came from cows fed on distillery waste and was then adulterated with water, eggs, flour, and other ingredients that increased the volume and masked the adulteration. Swill milk dairies were noted for their filthy conditions and overpowering stench, both caused by the close confinement of hundreds (sometimes thousands) of cows in narrow stalls where, once farmers tied them, they would stay for the rest of their lives, often standing in their own manure, covered with flies and sores, and suffering from a range of virulent diseases. These cows were fed boiling distillery waste, often leaving the cows with rotting teeth and other maladies. The milk drawn from the cows was routinely adulterated with water, rotten eggs, flour, burnt sugar, and other adulterants, with the finished product then marketed falsely as "pure country milk" or "Orange County Milk". In an editorial published at the height of the scandal, the New York Times described swill milk as a "bluish, white compound of true milk, pus and dirty water, which, on standing, deposits a yellowish, brown sediment that is manufactured in the stables attached to large distilleries by running the refuse distillery slops through the udders of dying cows and over the unwashed hands of milkers..." Frank Leslie's exposé caused widespread public outrage that strongly pressured local politicians to punish and regulate the distillery-dairies, which were the subject of formal complaints as a "swill milk nuisance". The Tammany Hall politician Alderman Michael Tuomey, known as "Butcher Mike", defended the distillers vigorously throughout the scandal—in fact, he was put in charge of the Board of Health investigation.
Frank Leslie's Illustrated Newspaper staked out distillery owner Bradish Johnson's mansion at 21st and Broadway and reported that, amid the investigation, Tuomey was observed making late-night visits. Tuomey assumed a central role in the ensuing investigations and, with fellow Aldermen E. Harrison Reed and William Tucker, shielded the dairies and turned the hearings into one-sided exercises designed to make dairy critics and established health authorities look ridiculous, even going to the extent of arguing that swill milk was as good as or better for children than regular milk. With Reed and others, Tuomey successfully blocked any serious inquiry into the dairies and stymied calls for reform. The Board of Health exonerated the distillers, but public outcry led to the passage of the first food safety laws in the form of milk regulations in 1862. Tuomey became known for his attempts to block the new regulations, and earned the new moniker "Swill Milk" Tuomey. Beyond the regulatory battles, the social reformer Robert Milham Hartley also helped restore milk's reputation as a nutritious and safe-to-drink beverage. During the mid-to-late nineteenth century, Hartley utilized Biblical references in his essays to appeal to the urban community. He asserted that universal milk consumption could help alleviate society's "sins", poverty, and alcohol consumption. See also 2008 Chinese milk scandal References 1850s in New York (state) Food safety in the United States Adulteration Dairy industry History of New York City Infant mortality
Swill milk scandal
[ "Chemistry" ]
1,102
[ "Adulteration", "Drug safety" ]
41,443,440
https://en.wikipedia.org/wiki/Automation%20Master
Automation Master is an open-source, community-maintained project. Automation Master was created to assist in the design, implementation and operation of an automated system. The installation and startup of any automated system is very time-consuming and costly. Much of the time spent starting up an automated system can be traced to the difficulties in providing an effective test of the computer-based system in the integrator's laboratory. Traditional testing techniques required staging as much of the equipment as practical in the laboratory, and wiring up a simulator panel containing switches and indicator lights to all of the I/O modules on the PLC. The operator stations would be connected to this "rat's nest" of wires, switches, indicator lights, and equipment for the test. PLC software would be tested by sequencing the toggle switches to feed electrical signals to the input cards on the PLC, and then observing the software's response on the indicator lights and operator consoles. For small, simple systems, this type of testing was manageable, and resulted in some degree of confidence that the control software would work once it was installed. However, the amount of time spent performing the test was relatively high, and a real-time test could not be achieved. As systems become larger and more complex, this method of testing achieves only a basic hardware and configuration check, at significant cost. Testing complex logic sequences is an act of futility without the ability to accurately reproduce the timing relationships between signals. What was needed was the ability to exercise the control system's software in a real-time environment. Real-time simulation fills this void. Real-time simulators such as Automation Master are PC-based software packages which utilize a model to mimic the automated system's reaction to the control software. History Max Hitchens and George Rote began working on industrial automation projects in the late 1970s.
One of their first projects was an automatic guided vehicle system for Goodyear Tire and Rubber Company in Lawton, Oklahoma. This system was to automatically transport material and finished goods around a massive tire factory. Mr. Hitchens' and Mr. Rote's previous experience in software development was mainly in office environments where logic could be debugged based upon simple CRT or printed output. So, after four months of writing software for the automated system, they took the software to the field and thus got their "baptism" into the real-world debugging of large automated systems. An automatic vehicle would be dispatched to do a task and it would not show up at its destination. First, they had to find the vehicle, which could be anywhere in the massive facility, then try to figure out what went wrong. After 6 months of 16-hour days, 7 days a week, they finally got the system running. Mr. Hitchens and Mr. Rote had other automatic guided vehicle projects and resolved not to repeat the Goodyear debugging experience. So, they built a custom simulator which attached to the guided vehicle system controller and pretended to be the factory floor. The activity of the guided vehicles was displayed on a color graphic display. The software could be debugged at their desks and, once finished and debugged, taken to the field and installed with minimum effort. Sometime later, Mr. Hitchens and Mr. Rote were demonstrating their AGV simulator to Conco-Tellus, a conveyor system manufacturer, when they were asked if they could build a simulator for conveyor systems. Of course, the answer was yes, and the Real Time Conveyor Simulator (RTCS) was born. The RTCS was a custom system with 3 single-board computers. They were awarded a patent for it in 1985. The RTCS was a specialty product which did not have a large market, but Mr. Hitchens and Mr. Rote continued refinement and development.
Around this time the IBM PC was introduced, and it was used to build the database necessary for the simulator. In the mid-1980s, a director at Bell Labs saw the simulator and wanted to try it out for modeling software development projects. That was impractical on a custom hardware box, but since the code was written for Intel processors, it could possibly be converted to run on a PC. In exchange for free use of the software, Bell Labs contributed a development system and two software engineers to help with the conversion. It turned out to be not very difficult, and within a few weeks the RTCS was running on a PC. Well, almost: the PC did not have enough power to meet the real-time computing which the RTCS required. It did, however, make a great demonstration system. Now all that was required was a disk, not 100 lbs of computer gear. As the 8088 PC metamorphosed into the 80286, customers were increasingly reluctant to spend thousands of dollars on a custom piece of computer gear. By the time the 80386 personal computers came out, the RTCS had ceased to have a market. Fortunately, the 80386 and subsequently the 80486 had enough power to run the simulation in real time, and Automation Master was born. Development continued until the mid-1990s when, for a myriad of reasons, mainly the death of George Rote, it ceased. By this time, Automation Master embodied many thousands of hours of development and use. Automation Master languished until 2013, when Max Hitchens decided to create an open-source project and release it into the public domain. Description Automation Master is a comprehensive modeling and simulation software package designed specifically for the design, implementation and operation of factory/warehouse automation. After the testing is complete, the system will ship with confidence that a real-time test has been performed and the system will work when it is installed.
The installation will be faster and less costly and the system provided to the customer will be of higher quality and can be quickly placed into production. Project Life Cycle Automation Master can also be used throughout the life cycle of an automated factory, from the design phase, through the implementation phase, into actual production. An automation project is a cycle of activities. The project starts as a concept, the system concept is used to develop a design, the system design is used to fabricate the system components, the fabricated components are installed, and the installed system will be operated. The installed system generates concepts for improvements or new systems and the cycle repeats. A real time simulator can assist in the entire life cycle of a project. Design Animating the System Concept A concept is usually just an idea, which needs to be funded to make it a reality. Automated systems are dynamic. A static picture or description of an automated system does not demonstrate the interaction of the components or show how the system functions as a whole. It has been said that a picture is worth a thousand words; a corollary is that a moving picture is worth ten thousand words. An animated picture, as generated by a real time simulator, can communicate the concept and assist in selling the project to management. Simulating the System Design Designing an automated system is a balancing act. You want the best possible results for the least cost. The system design is selected from several alternatives. Choosing the best alternative requires evaluation of the alternatives and how they interact with each other. A real time simulator allows the system designer to evaluate potential designs, by using a model, to select the best approach for the automated system. An important element of automated system design is developing the overall strategy to be used in operating the facility. 
A simulation model allows the operating strategy to be developed interactively. A strategy is implemented in the model, the results viewed, and the strategy refined to improve performance. The operating strategy becomes increasingly important as the cost of system components escalates. The system efficiency can be improved by changing the operating strategy using the model without increasing the cost of the system. Scenario testing or test cases may be set up to test and confirm proper system operation under varying conditions and collect statistical data on its operation. Implementation Automation Master is used for software quality control during the implementation phase. Testing the Control Logic The real time simulator may be connected directly to the automated system's programmable controllers and computers. The model is used as a replacement for the physical equipment. Thus, the control logic and system software can be exhaustively tested in a laboratory environment instead of on the plant floor. The control logic can be stress tested under full operational loading to verify that the system will meet production requirements. System emulation reduces safety hazards and equipment damage during installation. Mistakes in the control logic and testing blunders are discovered using a model, not the live system. An emulation model contains more detail than the design phase simulation model. The simulation scenarios which exercised the system design may be rerun in emulation mode to verify that the detailed design and control logic implementation meet system production requirements. If it does not, it is far easier and less costly to modify the design or the control logic before the system is installed. Creating an "As Built" Model A real time simulator may be used during installation to determine the variance between the system design and the actual installation. Field verification logs the differences between the "as built" system and the model. 
If a major mistake has been made in translating the system design into the installed system, it can be corrected prior to system start up. The differences reported in the verification log are used to change the model to reflect the "as built" system. The control logic can then be retested to verify that the software will still meet the production requirements with the "as built" system. The simulation throughput scenarios can also be rerun to verify that the "as built" system meets all of the system design criteria. Operation Maintaining the Automated System The model may act as a diagnostic monitor. In this mode, the model is run in parallel with the operation of the installed system. The real time simulator displays the dynamic activity in the system and continuously compares the model with the actual operation. When a discrepancy, outside of specified tolerances, occurs between the operation of the system and the model, an error is reported, assisting maintenance personnel in diagnosing and repairing the system. Closing the Loop Automated systems are never static. Changes are inevitable. Ideas for new systems are generated. Because an exact real time simulation model exists, proposed changes can be completely tested before they are implemented. The changes required in the control software can be tested under emulation. The physical equipment modifications can be verified. The result of changes to the automated system may be tested before the changes are made in the production system, so that the changes can be made without halting production. Automation Master Operating Modes Simulation Emulation Automation Master connects to the Control System/PLC and emulates the real world I/O by reading and writing the PLC's internal I/O images. The simulator can receive the Control System/PLC's outputs, and respond with the inputs in real time without the need for any hard-wired physical I/O. 
A simulator emulates the real-time response to the Control System/PLC actions based upon a model which duplicates the operation of the automated system. For example, if the Control System/PLC sets a digital output to start a motor to raise a door, the model, within milliseconds, provides the Control System/PLC with an auxiliary contact closure to indicate that the motor has been started. Shortly afterwards, the door-closed limit switch is turned off as the door begins to rise. As long as the Control System/PLC keeps on the output signal which raises the door, the door in the model continues to rise. When the door is fully open, the model turns on the door-open limit switch, and the PLC responds by turning off the motor which raised the door. The model sees the Control System/PLC turn off the motor and drops out the motor's auxiliary contact. Once a model of a component has been built, it can be executed over and over again, under varying conditions, to quickly and thoroughly exercise the control software. For instance, what happens if the Control System/PLC loses the motor's auxiliary contact as the door is rising? Does the Control System/PLC turn off the output which raises the door? Is an alarm sent to the Level II system? How does the Level II system respond? When an error is detected, the programmer can easily alter the software and retest it using the model. The automated system is debugged in real time without any wiring, switches, bells, whistles, or hassles. Monitor Multimode Models Real-time simulation allows multiple-mode models to be built. A multiple-mode model can be operated in simulation, emulation, or monitor mode by simply invoking the simulator with a different configuration file. Multiple-mode models are created by separating the model of the system control strategy from the model of the physical components. There are two distinct elements in a simulation model of an automated system.
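The door example can be sketched as a toy scan loop in which a "PLC" program and a passive plant model exchange I/O images each cycle, with no physical wiring. The class, signal names, and timing below are hypothetical illustrations, not Automation Master's actual interface:

```python
class DoorModel:
    """Passive plant model: reacts to PLC outputs like the real door would."""
    def __init__(self):
        self.position = 0.0          # 0 = fully closed, 100 = fully open
        self.inputs = {"aux_contact": False,
                       "closed_limit": True,
                       "open_limit": False}

    def scan(self, outputs):
        # React to the PLC's output image and produce the next input image.
        if outputs["raise_motor"]:
            self.inputs["aux_contact"] = True             # motor has started
            self.position = min(self.position + 10.0, 100.0)
        else:
            self.inputs["aux_contact"] = False            # motor dropped out
        self.inputs["closed_limit"] = self.position == 0.0
        self.inputs["open_limit"] = self.position >= 100.0
        return self.inputs

def plc_logic(inputs):
    """Toy control logic: keep raising until the door-open limit switch trips."""
    return {"raise_motor": not inputs["open_limit"]}

model = DoorModel()
inputs = model.inputs
for _ in range(20):                  # scan cycles
    outputs = plc_logic(inputs)      # PLC reads inputs, writes outputs
    inputs = model.scan(outputs)     # model replaces the physical equipment

print(inputs["open_limit"], outputs["raise_motor"])
```

After enough scans the door reaches the open limit, the control logic turns the motor off, and the model drops the auxiliary contact, mirroring the handshake described in the text; fault cases (e.g., forcing `aux_contact` low mid-rise) can then be injected into the model to exercise the PLC's error handling.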
One element is the physical components of the system being modeled. The second element is a control strategy used to make decisions, to manage the system resources, and to route product using the system components. In simulation mode, the interaction between the control strategy and the model of the physical components takes place internally within the real-time simulation model. An emulation model requires only the first element: the control strategy is incorporated into the PLC logic instead of being contained within the model. The control strategy is provided by a separate processor in emulation mode. The control software written to implement the control strategy will be the same software which will control the physical system components when the system is installed. A model of the physical system components is created which reacts identically to the physical components in the real system. The model of the physical system is constructed separately from the control logic being tested. The model of the physical system is passive and makes no decisions. The physical model reacts to the decisions made by the control logic in the same manner as the real system would. An emulation model will operate in both emulation and simulation modes with the addition of the control strategy to the model. The system control strategy now exists in two places: in the model and in the PLC. The source of the system control strategy can be selected using the OPERATING_MODE variable in the configuration file. The control strategy in the model is implemented using an asynchronous activity. A conditional is used as part of the activation conditions in all asynchronous activity entries used strictly for simulation mode. This enables the execution of the system control strategy in simulation mode and disables it in emulation mode.
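The mode gating driven by the OPERATING_MODE variable can be sketched as follows. The configuration file format, key names, and model fields here are hypothetical, invented for illustration, and are not Automation Master's actual syntax:

```python
def load_config(text):
    """Parse simple KEY=VALUE configuration lines into a dict."""
    return dict(line.split("=", 1) for line in text.splitlines() if line)

# Two hypothetical configuration files, one per operating mode.
SIM_CONFIG = "OPERATING_MODE=SIMULATION\nINIT_FILE=plant.sim"
EMU_CONFIG = "OPERATING_MODE=EMULATION\nINIT_FILE=plant.emu"

def build_model(config):
    mode = config["OPERATING_MODE"]
    return {
        # internal control strategy runs only in simulation mode
        "internal_strategy_enabled": mode == "SIMULATION",
        # external PLC connection is active only in emulation mode
        "plc_interface_enabled": mode == "EMULATION",
        "init_file": config["INIT_FILE"],
    }

sim = build_model(load_config(SIM_CONFIG))
emu = build_model(load_config(EMU_CONFIG))
print(sim["internal_strategy_enabled"], emu["plc_interface_enabled"])
```

The same model body serves both modes; only the configuration file passed at startup decides whether decisions come from the internal strategy or from the external PLC.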
Two different configuration files are set up, one for each mode, to set the initialization file, operating mode, and other configuration differences between the modes. Running the real time simulator with the simulation mode configuration file causes the model to operate as a simulation. Running the real time simulator with the emulation mode configuration file runs the model as an emulation. The simulation runs with the internal system control strategy and disables the external connection to the PLC. Running in emulation mode disables the internal control strategy and enables the interface to the PLC which supplies the external control strategy. Monitor mode requires the physical components of the real system. As in emulation mode, only the model of the physical system components is required in the simulator. The control strategy is executed in the PLC and simultaneously controls the real system and the model. The real time simulator receives the signals which are sent to, and received from, the real system. The physical system model is run in parallel with the real system, so that the differences between the activity in the model and the real system can be used to diagnose component failures. A single model may be run in all three modes by including the system control strategy (enabled only in simulation mode) in the model. A separate configuration file, containing the initialization file for the monitor, is created for operation in monitor mode. Changing the operating mode from monitor to emulation or simulation mode will require that the real system be disconnected. Once the real system is disconnected, the model may be switched between simulation and emulation modes by enabling or disabling the internal control strategy. Applications R.R. 
Donnelley - Diskette Collating Machine See also Programmable logic controller Industrial control systems Automation Lights out (manufacturing) Verification and Validation of Computer Simulation Models References External links Direct Connect Emulation and the Project Life Cycle Fast Track Project, White Paper Open Source Project U.S. Patent 4,512,747 Trademark (abandoned) Automation Master Community Simulation software Industrial computing
Automation Master
[ "Technology", "Engineering" ]
3,338
[ "Industrial computing", "Industrial engineering", "Automation" ]
41,443,677
https://en.wikipedia.org/wiki/Poly%28ethylene%20adipate%29
Poly(ethylene adipate) or PEA is an aliphatic polyester. It is most commonly synthesized from a polycondensation reaction between ethylene glycol and adipic acid. PEA has been studied as it is biodegradable through a variety of mechanisms and also fairly inexpensive compared to other polymers. Its lower molecular weight compared to many polymers aids in its biodegradability. Synthesis Polycondensation Poly(ethylene adipate) can be synthesized through a variety of methods. First, it can be formed from the polycondensation of dimethyl adipate and ethylene glycol mixed in equal amounts and subjected to increasing temperatures (100 °C, then 150 °C, and finally 180 °C) under a nitrogen atmosphere. Methanol is released as a byproduct of this polycondensation reaction and must be distilled off. Second, a melt condensation of ethylene glycol and adipic acid can be carried out at 190–200 °C under a nitrogen atmosphere. Lastly, a two-step reaction between adipic acid and ethylene glycol can be carried out: a polyesterification reaction is carried out first, followed by polycondensation in the presence of a catalyst. Both of these steps are carried out at 190 °C or above. Many different catalysts can be used, such as stannous chloride and tetraisopropyl orthotitanate. Generally, the PEA is then dissolved in a small amount of chloroform followed by precipitation in methanol. Ring-opening polymerization An alternate and less frequently used method of synthesizing PEA is ring-opening polymerization. Cyclic can be mixed with di-n-butyltin in chloroform. This requires temperatures similar to melt condensation. Properties PEA has a density of 1.183 g/mL at 25 °C and it is soluble in benzene and tetrahydrofuran. PEA has a glass transition temperature of -50 °C. PEA can come in a high molecular weight or low molecular weight variety, i.e., 10,000 or 1,000 Da. Further properties can be broken down into the following categories. 
Mechanical properties In general, most aliphatic polyesters have poor mechanical properties and PEA is no exception. Little research has been done on the mechanical properties of pure PEA but one study found PEA to have a tensile modulus of 312.8 MPa, a tensile strength of 13.2 MPa, and an elongation at break of 362.1%. Alternate values that have been found are a tensile strength of ~10 MPa and a tensile modulus of ~240 MPa. Chemical properties IR spectra for PEA show two peaks at 1715–1750 cm−1, another at 1175–1250 cm−1, and a last notable peak at 2950 cm−1. These peaks can be easily determined to be from ester groups, COOC bonds, and CH bonds respectively. Crystallization properties PEA has been shown to be able to form both ring-banded and Maltese-cross (or ring-less) type spherulites. Ring-banded spherulites most notably form when crystallization is carried out between 27 °C and 34 °C whereas Maltese-cross spherulites form outside of those temperatures. Regardless of the manner of banding, PEA polymer chains pack into a monoclinic crystal structure (some polymers may pack into multiple crystal structures but PEA does not). The length of the crystal edges are given as follows: a = 0.547 nm, b = 0.724 nm, and c = 1.55 nm. The monoclinic angle, α, is equal to 113.5°. The bands formed by PEA have been said to resemble corrugation, much like a butterfly wing or Pollia fruit skin. Electrical properties Conductivity of films made of PEA mixed with salts was found to exceed that of PEO4.5LiCF3SO3 and of poly(ethylene succinate)/LiBF4 suggesting it could be a practical candidate for use in lithium-ion batteries. Notably, PEA is used as a plasticizer and therefore amorphous flows occur at fairly low temperatures rendering it less plausible for use in electrical applications. Blends of PEA with polymers such as poly(vinyl acetate) showed improved mechanical properties at elevated temperatures. 
Miscibility PEA is miscible with a number of polymers including: poly(L-lactic acid) (PLLA), poly(butylene adipate) (PBA), poly(ethylene oxide), tannic acid (TA), and poly(butylene succinate) (PBS). PEA is not miscible with low density polyethylene (LDPE). Miscibility is determined by the presence of only a single glass transition temperature in a polymer mixture. Degradability Biodegradability Aliphatic copolyesters are well known for their biodegradability by lipases and esterases as well as some strains of bacteria. PEA in particular is well degraded by hog liver esterase, Rh. delemar, Rh. arrhizus, P. cepacia, R. oryzae, and Aspergillus sp. An important property in the speed of degradation is the crystallinity of the polymer. Neat PEA has been shown to have a slightly lower degradation rate than its copolymers, whose loss in crystallinity speeds degradation. PEA/poly(ethylene furanoate) (PEF) copolymers at high PEA concentrations were shown to degrade within 30 days while neat PEA had not fully degraded; however, mixtures approaching 50/50 mol% hardly degrade at all in the presence of lipases. Copolymerizing styrene glycol with adipic acid and ethylene glycol can result in phenyl side chains being added to PEA. Adding phenyl side chains increases steric hindrance, causing a decrease in the crystallinity of the PEA and resulting in an increase in biodegradability but also a notable loss in mechanical properties. Further work has shown that decreasing crystallinity is more important to degradation carried out in water than whether a polymer is hydrophobic or hydrophilic. PEA polymerized with 1,2-butanediol or 1,2-decanediol had an increased biodegradation rate over PBS copolymerized with the same side branches. Again, this was attributed to a greater loss in crystallinity, as PEA was more affected by steric hindrance, even though it is more hydrophobic than PBS. Poly(ethylene adipate) urethane combined with small amounts of lignin can aid in preventing degradation by acting as an antioxidant. 
Additionally, the mechanical properties of the PEA urethane increased with lignin addition. This is thought to be due to the rigid nature of lignin, which aids in reinforcing soft polymers such as PEA urethane. When PEA degrades, it has been shown that cyclic oligomers are the highest fraction of formed byproducts. Ultrasonic degradation Using toluene as a solvent, the efficacy of degrading PEA through ultrasonic sound waves was examined. Degradation of a polymer chain occurs due to cavitation of the liquid leading to scission of chemical chains. In the case of PEA, degradation due to ultrasonic sound waves was not observed. This was determined to be likely due to PEA not having a high enough molar mass to warrant degradation via these means. A low molecular weight has been indicated as being necessary for the biodegradation of polymers. Applications Plasticizer Poly(ethylene adipate) can effectively be used as a plasticizer, reducing the brittleness of other polymers. Adding PEA to PLLA was shown to reduce the brittleness of PLLA significantly more than (PBA), (PHA), and (PDEA) but reduced the mechanical strength. The elongation at break was increased approximately 65x over neat PLLA. The thermal stability of PLLA also showed a significant increase with an increasing concentration of PEA. PEA has also been shown to increase the plasticity and flexibility of the terpolymer maleic anhydride-styrene-methyl methacrylate (MAStMMA). Observing the changes in the thermal expansion coefficient allowed the increase in plasticity of this copolymer blend to be determined. Mending capabilities Self-healing polymers offer an effective method of healing microcracks caused by an accumulation of stress. Diels-Alder (DA) bonds can be incorporated into a polymer, allowing microcracks to occur preferentially along these weaker bonds. 
Furyl-telechelic poly(ethylene adipate) (PEAF2) and tris-maleimide (M3) can be combined through a DA reaction in order to bring about self-healing capabilities in PEAF2. PEAF2M3 was found to have some healing capabilities after 5 days at 60 °C, although significant evidence of the original cut remained and the original mechanical properties were not fully restored. Microcapsules for drug delivery PEA microbeads intended for drug delivery can be made through water/oil/water double emulsion methods. By blending PEA with poly(ε-caprolactone), beads can be given membrane porosity. Microbeads were placed into a variety of solutions including a synthetic stomach acid, pancreatin, Hank's buffer, and newborn calf serum. The degradation of the microcapsules, and therefore the release of the drug, was greatest in newborn calf serum, followed by pancreatin, then synthetic stomach acid, and lastly Hank's buffer. The enhanced degradation in newborn calf serum and pancreatin was attributed to the presence of enzyme activity enabling simple ester hydrolysis. Additionally, an increase in pH is correlated with higher degradation rates. References Polymers Adipate esters Glycol esters
Poly(ethylene adipate)
[ "Chemistry", "Materials_science" ]
2,063
[ "Polymers", "Polymer chemistry" ]
41,443,948
https://en.wikipedia.org/wiki/Georg%20Seelig
Georg Seelig is a Swiss computer scientist, bioengineer, and synthetic biologist. He is an associate professor of Electrical Engineering and Computer Science & Engineering at the University of Washington. He is a researcher in the field of DNA nanotechnology. Life He graduated from the University of Basel with a Diploma in Physics in 1998 and received his PhD in condensed matter physics from the University of Geneva in 2003. He was a postdoctoral associate in the lab of Professor Erik Winfree at the California Institute of Technology between 2003 and 2009. He won the NSF CAREER Award in 2010, the Alfred P. Sloan Research Fellowship in 2011, and the DARPA Young Faculty Award in 2012. He is part of the Molecular Programming Project. References University of Washington Paul G. Allen School of Computer Science & Engineering faculty Living people Swiss computer scientists Swiss biologists University of Basel alumni University of Geneva alumni Synthetic biologists DNA nanotechnology people Year of birth missing (living people)
Georg Seelig
[ "Biology" ]
195
[ "Synthetic biology", "Synthetic biologists" ]
41,444,371
https://en.wikipedia.org/wiki/Social%20rationality
In behavioural sciences, social rationality is a type of decision strategy used in social contexts, in which a set of simple rules is applied in complex and uncertain situations. Definition Social rationality is a form of bounded rationality applied to social contexts, where individuals make choices and predictions under uncertainty. While game theory deals with well-defined situations, social rationality explicitly deals with situations in which not all alternatives, consequences, and event probabilities can be foreseen. The idea is that, similar to non-social environments, individuals rely, and should rely, on fast and frugal heuristics in order to deal with complex and genuinely uncertain social environments. This emphasis on simple rules in an uncertain world contrasts with the view that the complexity of social situations requires highly sophisticated mental strategies, as has been assumed in primate research and neuroscience, among others. A descriptive and normative program Social rationality is both a descriptive program and a normative program. The descriptive program studies the repertoire of heuristics an individual or organization uses, that is, their adaptive toolbox. The normative program studies the environmental conditions to which a heuristic is adapted, that is, where it performs better than other decision strategies. This approach is called the study of the ecological rationality of social heuristics. It assumes that social heuristics are domain- and problem-specific. Applications Heuristics can be applied to social and non-social decision tasks (also called social games and games against nature), judgments, or categorizations. They can use social or non-social input. Social rationality is thus about three of the four possible combinations, excluding the case of heuristics using non-social input for non-social tasks. 
'Games against nature' comprise situations where individuals face environmental uncertainty and need to predict or outwit nature, e.g., harvest food or master hard-to-predict or unpredictable hazards. 'Social games' include situations where the decision outcome depends on the choices of others, e.g., in cooperation, competition, mate search and even in morally significant situations. Social rationality has been studied in a number of fields other than human decision-making, e.g. in evolutionary social learning, and social learning in animals. Examples Imitate-the-majority heuristic An example of a heuristic that is not necessarily social but that requires social input is the imitate-the-majority heuristic, where in a situation of uncertainty, individuals follow the actions or choices of the majority of their peers regardless of their social status. The domain of pro-environmental behavior provides numerous illustrations of this strategy, such as littering behavior in public places, the reuse of towels in hotel rooms, and changes in private energy consumption in response to information about the consumption of the majority of neighbors. 1/N (Equality heuristic) Following the equality heuristic (sometimes called the 1/N rule), people divide and invest their resources equally in a number of N different options. These options can be both social (e.g., time spent with children) and nonsocial entities (e.g., financial investments or natural resources). For example, many parents invest their limited resources, such as affection, time, and money (e.g., for education), equally into their offspring. In highly uncertain environments with large numbers of assets and only a few opportunities to learn, the equality heuristic can outperform optimizing strategies and yield better performance on various measures of success than optimal asset allocation strategies. Social heuristics Adapted from Hertwig & Herzog, 2009. 
Imitate-the-majority heuristic Social circle heuristic Averaging heuristic Tit-for-tat Generous tit-for-tat (or tit-for-two-tat) Status tree Regret matching heuristic Mirror heuristic 1/N (Equality heuristic) Group recognition heuristic White coat heuristic/ Trust your doctor heuristic Imitate-the-successful heuristic Plurality vote-based lexicographic heuristic See also Social heuristics Ecological rationality Optimization Risk Uncertainty Max Planck Institute for Human Development Notes References Cialdini, R. B., Reno, R. R., & Kallgren, C. A. (1990). A focus theory of normative conduct: Recycling the concept of norms to reduce littering in public places. Journal of Personality and Social Psychology, 58(6), 1015–1026. DeMiguel, V., Garlappi, L., & Uppal, R. (2009). Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy? The Review of Financial Studies, 22(5), 1915–1953. Gigerenzer, G. (2010). Moral satisficing: Rethinking moral behavior as bounded rationality. Topics in Cognitive Science, 2(3), 528–554. doi:10.1111/j.1756-8765.2010.01094.x Gigerenzer, G., Todd, P., & the ABC Research Group (1999). Simple heuristics that make us smart. New York: Oxford University Press. Hertwig, R., & Herzog, S. M. (2009). Fast and frugal heuristics: tools of social rationality. Social Cognition, 27(5), 661–698. Retrieved from http://guilfordjournals.com/doi/abs/10.1521/soco.2009.27.5.661 Hertwig, R., Hoffrage, U., & the ABC Research Group (2012). Simple heuristics in a social world. New York: Oxford University Press. Rieucau, G., & Giraldeau, L.-A. (2011). Exploring the costs and benefits of social information use: An appraisal of current experimental evidence. Philosophical Transactions of the Royal Society B, 366(1567), 949–957. doi:10.1098/rstb.2010.0325 Seymour, B., & Dolan, R. (2008). Emotion, decision making, and the amygdala. Neuron, 58, 662–671. Schultz, P. W., Nolan, J. M., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2007). 
The constructive, destructive, and reconstructive power of social norms. Psychological Science, 18(5), 429–434. Simon, Herbert A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129–138. Behavioral economics Game theory Rational choice theory
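The two example heuristics described earlier (imitate-the-majority and the 1/N equality heuristic) are simple enough to state directly in code; a minimal sketch (illustrative only, not taken from any cited source):

```python
from collections import Counter

def imitate_the_majority(peer_choices):
    """Adopt the most common choice among peers, regardless of their status."""
    return Counter(peer_choices).most_common(1)[0][0]

def equality_heuristic(total, options):
    """1/N rule: divide a resource equally over N options."""
    share = total / len(options)
    return {option: share for option in options}

# A majority of hotel guests reuse their towels -> imitate them
choice = imitate_the_majority(["reuse", "reuse", "discard"])

# A fixed budget divided equally over three assets
allocation = equality_heuristic(9000, ["stocks", "bonds", "cash"])
```

Both rules ignore most available information by design, which is what makes them robust when probabilities and outcomes cannot all be foreseen.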
Social rationality
[ "Mathematics", "Biology" ]
1,452
[ "Game theory", "Behavioral economics", "Behavior", "Behaviorism" ]
41,444,886
https://en.wikipedia.org/wiki/Homotopy%20group%20with%20coefficients
In topology, a branch of mathematics, for $i \ge 2$, the i-th homotopy group with coefficients in an abelian group G of a based space X is the pointed set of homotopy classes of based maps from the Moore space of type $(G, i)$ to X, and is denoted by $\pi_i(X; G)$. For $i \ge 3$, $\pi_i(X; G)$ is a group. The groups $\pi_i(X; \mathbb{Z})$ are the usual homotopy groups of X. References Algebraic topology Homotopy theory
Homotopy group with coefficients
[ "Mathematics" ]
108
[ "Topology stubs", "Fields of abstract algebra", "Topology", "Algebraic topology" ]
41,445,100
https://en.wikipedia.org/wiki/White-box%20cryptography
In cryptography, the white-box model refers to an extreme attack scenario, in which an adversary has full unrestricted access to a cryptographic implementation, most commonly of a block cipher such as the Advanced Encryption Standard (AES). A variety of security goals may be posed (see the section below), the most fundamental being "unbreakability", requiring that any (bounded) attacker should not be able to extract the secret key hardcoded in the implementation, while at the same time the implementation must be fully functional. In contrast, the black-box model only provides an oracle access to the analyzed cryptographic primitive (in the form of encryption and/or decryption queries). There is also a model in-between, the so-called gray-box model, which corresponds to additional information leakage from the implementation, more commonly referred to as side-channel leakage. White-box cryptography is the practice and study of techniques for designing and attacking white-box implementations. It has many applications, including digital rights management (DRM), pay television, protection of cryptographic keys in the presence of malware, mobile payments, and cryptocurrency wallets. Examples of DRM systems employing white-box implementations include CSS, Widevine. White-box cryptography is closely related to the more general notions of obfuscation, in particular, to Black-box obfuscation, proven to be impossible, and to Indistinguishability obfuscation, constructed recently under well-founded assumptions but so far being infeasible to implement in practice. As of January 2023, there are no publicly known unbroken white-box designs of standard symmetric encryption schemes. On the other hand, there exist many unbroken white-box implementations of dedicated block ciphers designed specifically to achieve incompressibility (see the security goals below). Security goals Depending on the application, different security goals may be required from a white-box implementation. 
Specifically, for symmetric-key algorithms the following are distinguished: Unbreakability is the most fundamental goal requiring that a bounded attacker should not be able to recover the secret key embedded in the white-box implementation. Without this requirement, all other security goals are unreachable since a successful attacker can simply use a reference implementation of the encryption scheme together with the extracted key. One-wayness requires that a white-box implementation of an encryption scheme can not be used by a bounded attacker to decrypt ciphertexts. This requirement essentially turns a symmetric encryption scheme into a public-key encryption scheme, where the white-box implementation plays the role of the public key associated to the embedded secret key. This idea was proposed already in the famous work of Diffie and Hellman in 1976 as a potential public-key encryption candidate. Code lifting security is an informal requirement on the context, in which the white-box program is being executed. It demands that an attacker can not extract a functional copy of the program. This goal is particularly relevant in the DRM setting. Code obfuscation techniques are often used to achieve this goal. A commonly used technique is to compose the white-box implementation with so-called external encodings. These are lightweight secret encodings that modify the function computed by the white-box part of an application. It is required that their effect is canceled in other parts of the application in an obscure way, using code obfuscation techniques. Alternatively, the canceling counterparts can be applied on a remote server. Incompressibility requires that an attacker can not significantly compress a given white-box implementation. 
This can be seen as a way to achieve code lifting security (see above), since exfiltrating a large program from a constrained device (for example, an embedded or a mobile device) can be time-consuming and may be easy to detect by a firewall. Examples of incompressible designs include SPACE cipher, SPNbox, WhiteKey and WhiteBlock. These ciphers use large lookup tables that can be pseudorandomly generated from a secret master key. Although this makes the recovery of the master key hard, the lookup tables themselves play the role of an equivalent secret key. Thus, unbreakability is achieved only partially. Traceability (Traitor tracing) requires that each distributed white-box implementation contains a digital watermark allowing identification of the guilty user in case the white-box program is being leaked and distributed publicly. History The white-box model with initial attempts of white-box DES and AES implementations were first proposed by Chow, Eisen, Johnson and van Oorshot in 2003. The designs were based on representing the cipher as a network of lookup tables and obfuscating the tables by composing them with small (4- or 8-bit) random encodings. Such protection satisfied a property that each single obfuscated table individually does not contain any information about the secret key. Therefore, a potential attacker has to combine several tables in their analysis. The first two schemes were broken in 2004 by Billet, Gilbert, and Ech-Chatbi using structural cryptanalysis. The attack was subsequently called "the BGE attack". The numerous consequent design attempts (2005-2022) were quickly broken by practical dedicated attacks. In 2016, Bos, Hubain, Michiels and Teuwen showed that an adaptation of standard side-channel power analysis attacks can be used to efficiently and fully automatically break most existing white-box designs. 
This result created a new research direction about generic attacks (correlation-based, algebraic, fault injection) and protections against them. Competitions Four editions of the WhibOx contest were held in 2017, 2019, 2021 and 2024 respectively. These competitions invited white-box designers both from academia and industry to submit their implementation in the form of (possibly obfuscated) C code. At the same time, everyone could attempt to attack these programs and recover the embedded secret key. Each of these competitions lasted for about 4-5 months. WhibOx 2017 / CHES 2017 Capture the Flag Challenge targeted the standard AES block cipher. Among 94 submitted implementations, all were broken during the competition, with the strongest one staying unbroken for 28 days. WhibOx 2019 / CHES 2019 Capture the Flag Challenge again targeted the AES block cipher. Among 27 submitted implementations, 3 programs stayed unbroken throughout the competition, but were broken after 51 days since the publication. WhibOx 2021 / CHES 2021 Capture the Flag Challenge changed the target to ECDSA, a digital signature scheme based on elliptic curves. Among 97 submitted implementations, all were broken within at most 2 days. WhibOx 2024 / CHES 2024 Capture the Flag Challenge again targeted ECDSA. Among 47 submitted implementations, all were broken during the competition, with the strongest one staying unbroken for almost 5 days. See also Black-box obfuscation, a stronger form of obfuscation proven to be impossible Indistinguishability obfuscation, a more formal theoretic notion of obfuscation Obfuscation (software), non-cryptographic code obfuscation Digital rights management, a widely used application of white-box cryptography External links WhibOx Contests References Cryptography
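Returning to the incompressibility goal described under the security goals: the core idea of expanding a short master key into a large lookup table can be sketched as follows (a toy illustration only; real designs such as SPACE and SPNbox generate their tables with dedicated cipher-based constructions, and this SHA-256-based stand-in is not a secure white-box scheme):

```python
import hashlib

def derive_table(master_key: bytes, entries: int = 65536) -> list:
    """Expand a short master key into a large pseudorandom lookup table.
    Recovering the key from the table is hard (it requires inverting a
    hash), but the table itself acts as an equivalent secret key, which
    is why unbreakability is only partially achieved."""
    return [hashlib.sha256(master_key + i.to_bytes(4, "big")).digest()
            for i in range(entries)]

table = derive_table(b"master-secret", entries=1024)
# The white-box program ships the (large, hard-to-compress) table;
# the master key itself never appears in the deployed code.
```

The security argument of the real constructions is that exfiltrating the full table from a constrained device is slow and detectable, exactly as described above.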
White-box cryptography
[ "Mathematics", "Engineering" ]
1,474
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
41,445,293
https://en.wikipedia.org/wiki/Quillen%27s%20theorems%20A%20and%20B
In topology, a branch of mathematics, Quillen's Theorem A gives a sufficient condition for the classifying spaces of two categories to be homotopy equivalent. Quillen's Theorem B gives a sufficient condition for a square consisting of classifying spaces of categories to be homotopy Cartesian. The two theorems play central roles in Quillen's Q-construction in algebraic K-theory and are named after Daniel Quillen. The precise statements of the theorems are as follows. Theorem A states that if $f \colon C \to D$ is a functor such that the classifying space $B(f/d)$ of the comma category $f/d$ is contractible for every object $d$ of $D$, then $f$ induces a homotopy equivalence $BC \to BD$. Theorem B states that if $f \colon C \to D$ is a functor such that every morphism $d \to d'$ in $D$ induces a homotopy equivalence $B(f/d) \to B(f/d')$, then for every object $d$ of $D$ the square obtained by applying $B$ to the commutative square formed by the functors $f/d \to C$, $f/d \to D/d$, $C \to D$, and $D/d \to D$ is homotopy Cartesian; since $B(D/d)$ is contractible, $B(f/d)$ then computes the homotopy fiber of $Bf$ over $d$. In general, the homotopy fiber of $Bf \colon BC \to BD$ is not naturally the classifying space of a category: there is no natural category $Ff$ such that $B(Ff)$ is the homotopy fiber of $Bf$. Theorem B constructs such a category in a case when $f$ is especially nice. References Theorems in topology
Quillen's theorems A and B
[ "Mathematics" ]
147
[ "Theorems in topology", "Topology stubs", "Topology", "Mathematical problems", "Mathematical theorems" ]
41,445,839
https://en.wikipedia.org/wiki/Poly%28hexamethylene%20carbonate%29
Poly(hexamethylene carbonate) (PHC) is an organic polymer. It can be biodegraded to form adipic acid and di(6-hydroxyhexyl) carbonate by Roseateles depolymerans 61A. PHC can be synthesized to terminate in primarily hydroxyl groups or methyl carbonate groups depending on the concentrations of monomers during synthesis. PHC with hydroxyl end groups has less thermal stability than PHC with methyl carbonate end groups. The hydroxyl end groups allow an unzipping reaction to take place in which the polymer chain bends back on itself and the hydroxyl group reacts with an acetyl group mid-chain, resulting in a shorter chain and a looped molecule. This type of degradation quickly shortens the PHC chains. References Organic polymers Polycarbonates
Poly(hexamethylene carbonate)
[ "Chemistry" ]
176
[ "Organic compounds", "Polymer stubs", "Organic polymers", "Organic chemistry stubs" ]
41,446,565
https://en.wikipedia.org/wiki/Ministry%20of%20Petroleum%20Resources%20Development
The Ministry of Energy (Sinhala: බලශක්ති අමාත්‍යාංශය Balashakthi Amathyanshaya; Tamil: பெற்றோலிய வள அபிவிருத்தி அமைச்சு) is the cabinet ministry of the Government of Sri Lanka responsible for oversight of the country's energy supply via crude oil import, storage and refining (carried out at the nation's sole refinery at Sapugaskanda), as well as sale (through the Ceylon Petroleum Corporation) of processed petroleum products. It is thus responsible for the maintenance of (and upgrades to) petroleum and petroleum product storage and transport facilities as well as for developing the country's natural gas and crude oil reserves. In 2020 the minister was Udaya Gammanpila. The ministry's secretary is KDR Olga. References External links Ministry of Petroleum Resources Development Government of Sri Lanka https://ceypetco.gov.lk/ https://www.cpstl.lk/cpstl/ Petroleum Resources Development Petroleum Resources Development Energy in Sri Lanka Energy ministries
Ministry of Petroleum Resources Development
[ "Engineering" ]
203
[ "Energy organizations", "Energy ministries" ]
41,447,656
https://en.wikipedia.org/wiki/Albrecht%20Schrauf
Albrecht Schrauf (14 December 1837, Vienna – 29 November 1897, Vienna) was an Austrian mineralogist and crystallographer. Biography Schrauf studied mathematics, physics and mineralogy at the University of Vienna, where one of his instructors was Wilhelm Josef Grailich. Several years later, he became "custos-adjunct" at the "Imperial Hofmineralien Cabinet" in Vienna. In 1867 he was named first curator of the mineral cabinet, and in 1874 he was appointed professor and director of the mineralogical museum at the University of Vienna. Known for his investigations in the field of crystallography, he was a proponent of the crystallographic index developed by William Hallowes Miller. In the mid-1860s, he published his best-known works, "Atlas der Krystallformen des Mineralreiches" and an award-winning textbook titled "Lehrbuch der physikalischen Mineralogie". In Vienna, he collaborated with Gustav Tschermak on the publication of the journal "Mineralogische Mitteilungen". A rare mineral known as albrechtschraufite is named in his honor. In 1896 Schrauf lost the sight in his left eye due to sudden exposure to sunlight while performing crystallographic measurements. Principal works Atlas der krystall-formen des mineralreiches, 1865 - Atlas of crystal forms. Lehrbuch der physikalischen Mineralogie, 1866 - Textbook of physical mineralogy. Physikalische Studien. Die gesetzmässigen Beziehungen von Materie und Licht, mit specieller Berucksichtigung der Molecular-constitution organischer Reihen und Krystallisirter Körper, 1867 - Physics studies: the law-governed relationships of matter and light, with special consideration of the molecular constitution of organic series and crystallized bodies. Handbuch der Edelsteinkunde, 1869 - Handbook of gemology. References 1837 births 1897 deaths Scientists from Vienna Geologists from Austria-Hungary Crystallographers Austrian mineralogists Academic staff of the University of Vienna
Albrecht Schrauf
[ "Chemistry", "Materials_science" ]
427
[ "Crystallographers", "Crystallography" ]
41,449,061
https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20condensation%20of%20quasiparticles
Bose–Einstein condensation can occur in quasiparticles, particles that are effective descriptions of collective excitations in materials. Some have integer spins and can be expected to obey Bose–Einstein statistics like traditional particles. Conditions for condensation of various quasiparticles have been predicted and observed. The topic continues to be an active field of study. Properties BECs form when low temperatures cause nearly all particles to occupy the lowest quantum state. Condensation of quasiparticles occurs in ultracold gases and materials. The lower masses of material quasiparticles relative to atoms lead to higher BEC temperatures. An ideal Bose gas has a phase transition when the inter-particle spacing approaches the thermal de Broglie wavelength: λ_dB = h/√(2π m k_B T). The critical concentration is then n_c ≈ ζ(3/2)/λ_dB^3, leading to a critical temperature: T_c = (2πħ^2/(m k_B)) (n/ζ(3/2))^(2/3). The particles obey the Bose–Einstein distribution, n(ε) = 1/(exp((ε − μ)/(k_B T)) − 1), and below T_c a macroscopic fraction occupies the ground state. The Bose gas can also be considered in a harmonic trap, V(r) = m ω^2 r^2/2, with the ground-state occupancy fraction as a function of temperature: N_0/N = 1 − (T/T_c)^3. This can be achieved by cooling and magnetic or optical control of the system. Spectroscopy can detect shifts in peaks indicating thermodynamic phases with condensation. Quasiparticle BECs can be superfluids. Signs of such states include spatial and temporal coherence and polarization changes. Condensation of excitons in solids was observed in 2005, and of magnons in materials and polaritons in microcavities in 2006. Graphene is another important solid-state system for studies of condensed matter including quasiparticles; it is a 2D electron gas, similar to other thin films. Excitons Excitons are electron-hole pairs. Similar to helium-4 superfluidity at the λ-point (2.17 K), a condensate was proposed by Böer et al. in 1961. Experimental phenomena were predicted, leading to various pulsed-laser searches that failed to produce evidence. Signs were first seen by Fuzukawa et al.
in 1990, but definite detection was published later in the 2000s. Condensed excitons are a superfluid and will not interact with phonons. While normal exciton absorption is broadened by phonons, in the superfluid the absorption degenerates to a line. Theory Excitons result from photons exciting electrons and creating holes, which attract each other and can form bound states. The 1s paraexciton and orthoexciton are possible. The 1s triplet spin state, 12.1 meV below the degenerate orthoexciton states (lifetime ~ns), is decoupled from them and has a long lifetime against optical decay. Dilute gas densities (n ~ 10^14 cm^−3) are possible, but paraexciton generation scales poorly, so significant heating occurs in creating high densities (10^17 cm^−3), preventing BECs. Assuming a thermodynamic phase occurs when the separation reaches the de Broglie wavelength (λ_dB) gives: T_c = (2πħ^2/(m k_B)) (n/ζ(3/2))^(2/3), where n is the exciton density, m the effective mass (of electron-mass order), and ħ and k_B are the Planck and Boltzmann constants. Density depends on the optical generation rate G and lifetime τ as: n = Gτ. Tuned lasers create excitons which efficiently self-annihilate at a rate proportional to the square of the density, preventing a high-density paraexciton BEC. A potential well limits diffusion, damps exciton decay, and lowers the critical number, yielding an improved critical temperature versus the T^(3/2) scaling of free particles: in a trap the critical number scales as N_c ∝ T^3. Experiments In an ultrapure Cu2O crystal, τ = 10 s. For an achievable T = 0.01 K, a manageable optical pumping rate of 10^5/s should produce a condensate. More detailed calculations by L. V. Keldysh and later by D. Snoke et al. started a large number of experimental searches into the 1990s that failed to detect signs. Pulsed methods led to overheating, preventing condensate states. Helium cooling allows millikelvin setups, and continuous-wave optics improves on pulsed searches. Relaxation explosion of a condensate at lattice temperature 354 mK was seen by Yoshioka et al. in 2011. Recent experiments by Stolz et al.
using a potential trap have given more evidence at the ultralow temperature of 37 mK. In a parabolic trap with exciton temperature 200 mK and lifetime broadened to 650 ns, the dependence of luminescence on laser intensity has a kink which indicates condensation. The theory of a Bose gas is extended to a mean-field interacting gas by a Bogoliubov approach to predict the exciton spectrum; the kink is considered a sign of transition to BEC. Signs were seen for a dense gas BEC in a GaAs quantum well. Magnons Magnons, electron spin waves, can be controlled by a magnetic field. Densities from the limit of a dilute gas to a strongly interacting Bose liquid are possible. Magnetic ordering is the analog of superfluidity. The condensate appears as the emission of monochromatic microwaves, which are tunable with the applied magnetic field. In 1999 condensation was demonstrated in antiferromagnetic TlCuCl3, at temperatures as large as 14 K. The high transition temperature (relative to atomic gases) is due to the small mass (near that of an electron) and greater density. In 2006, condensation in a ferromagnetic yttrium-iron-garnet thin film was seen even at room temperature with optical pumping. Condensation was reported in gadolinium in 2011. Magnon BECs have been considered as qubits for quantum computing. Polaritons Polaritons, caused by light coupling to excitons, occur in optical cavities, and condensation of exciton-polaritons in an optical microcavity was first published in Nature in 2006. Semiconductor cavity polariton gases transition to ground-state occupation at 19 K. Bogoliubov excitations were seen in polariton BECs in 2008. The signatures of BEC were observed at room temperature for the first time in 2013, in a semiconductor device with large exciton energy and in a polymer microcavity. Other quasiparticles Rotons, an elementary excitation in superfluid 4He introduced by Landau, were discussed by Feynman and others. Rotons condense at low temperature.
Experiments have been proposed and the expected spectrum has been studied, but roton condensates have not been detected. A phonon condensate was first observed in 2004, driven by ultrashort pulses in a bismuth crystal at 7 K. See also Bose–Einstein condensate Bose-Einstein condensation of polaritons Important publications References Bose–Einstein condensates Quasiparticles
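The ideal-gas critical-temperature estimate discussed in the Properties and Theory sections can be checked numerically. This is a back-of-envelope sketch using the standard textbook formula T_c = (2πħ²/(m·k_B))·(n/ζ(3/2))^(2/3); the electron-mass exciton and the 10^14–10^17 cm^−3 densities are taken from the figures quoted in the text, not from the cited calculations:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant (J s)
KB = 1.380649e-23       # Boltzmann constant (J/K)
ME = 9.1093837015e-31   # electron mass (kg)
ZETA_3_2 = 2.6124       # Riemann zeta(3/2)

def bec_critical_temperature(n_per_m3, mass_kg):
    """Ideal-gas BEC critical temperature in kelvin:
    T_c = (2*pi*hbar^2 / (m*k_B)) * (n / zeta(3/2))^(2/3)."""
    return (2 * math.pi * HBAR**2 / (mass_kg * KB)) * (n_per_m3 / ZETA_3_2) ** (2 / 3)

# Exciton-like parameters: effective mass of electron-mass order,
# densities between ~1e14 and ~1e17 cm^-3 (1 cm^-3 = 1e6 m^-3).
tc_dense = bec_critical_temperature(1e17 * 1e6, ME)
tc_dilute = bec_critical_temperature(1e14 * 1e6, ME)
print(f"T_c at 1e17 cm^-3: {tc_dense:.2f} K")   # a few kelvin
print(f"T_c at 1e14 cm^-3: {tc_dilute:.3f} K")  # tens of millikelvin
```

With these inputs the dense case comes out at a few kelvin and the dilute case at tens of millikelvin, consistent with the sub-kelvin temperatures the exciton experiments in the article target.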
Bose–Einstein condensation of quasiparticles
[ "Physics", "Chemistry", "Materials_science" ]
1,407
[ "Bose–Einstein condensates", "Phases of matter", "Subatomic particles", "Condensed matter physics", "Quasiparticles", "Matter" ]
41,449,427
https://en.wikipedia.org/wiki/Heinz%20Billing
Heinz Billing (7 April 1914 – 4 January 2017) was a German physicist and computer scientist, widely considered a pioneer in the construction of computer systems and computer data storage, who built a prototype laser interferometric gravitational wave detector. Biography Billing was born in Salzwedel, in Saxony-Anhalt, Germany. After studying mathematics and physics in University of Göttingen he received his doctorate in 1938 in Munich at the age of 24. During the Second World War he worked in the Aerodynamics Research Institute in Göttingen. On 3 October 1943 he married Anneliese Oetker. Billing has three children: Heiner Erhard Billing (born 18 November 1944 in Salzwedel), Dorit Gerda Gronefeld Billing (born 27 June 1946 in Göttingen) and Arend Gerd Billing (born 19 September 1954 in Göttingen). He turned 100 in April 2014 and died on 4 January 2017 at the age of 102. Advanced LIGO detected the fourth gravitational wave event GW170104 on the same day. Computer science Billing worked at the Aerodynamic Research Institute in Göttingen, where he developed a magnetic drum memory. According to Billing's memoirs, published by Genscher, Düsseldorf (1997), there was a meeting between Alan Turing and Konrad Zuse. It took place in Göttingen in 1947. The interrogation had the form of a colloquium. Participants were Womersley, Turing, Porter from England and a few German researchers like Zuse, Walther, and Billing. (For more details see Herbert Bruderer, Konrad Zuse und die Schweiz). After a brief stay at the University of Sydney, Billing returned to join the Max Planck Institute for Physics in 1951. From 1952 through 1961 the group under Billing's direction constructed a series of four digital computers: the G1, G2, G1a, and G3. He is the designer of the first German sequence-controlled electronic digital computer as well as of the first German stored-program electronic digital computer. 
Gravitational wave detector After transistors had been firmly established, when microelectronics arrived, after scientific computers were slowly overshadowed by commercial applications and computers were mass-produced in factories, Heinz Billing left the computer field in which he had been a pioneer for nearly 30 years. In 1972, Billing returned to his original field of physics, at the Max Planck Institute's new location at Garching near Munich. Beginning in 1972, Heinz Billing became involved in gravitational physics, when he tried to verify the detection claims made by American physicist Joseph Weber. Weber's results were considered to be proven wrong by these experiments. In 1975, Billing acted on a proposal by Rainer Weiss from the Massachusetts Institute of Technology (MIT) to use laser interferometry to detect gravitational waves. He and colleagues built a 3 m prototype Michelson interferometer using optical delay lines. From 1980 onward Billing commissioned the development and construction at MPA in Garching of a laser interferometer with an arm length of 30 m. Without the knowledge gained from this prototype, the LIGO project would not have been started when it did. Awards and honors In 1987, Heinz Billing received the Konrad Zuse Medal for the invention of magnetic drum storage. In 2015 he received the Order of Merit of the Federal Republic of Germany. In 1993, the annual Heinz Billing prize for "outstanding contributions to computational science" was established by the Max Planck Society in his honor, with a prize amount of 5,000 euros. Selected publications Heinz Billing: Ein Interferenzversuch mit dem Lichte eines Kanalstrahles. J. A. Barth, Leipzig 1938. Heinz Billing, Wilhelm Hopmann: Mikroprogramm-Steuerwerk. In: Elektronische Rundschau. Heft 10, 1955. Heinz Billing, Albrecht Rüdiger: Das Parametron verspricht neue Möglichkeiten im Rechenmaschinenbau. In: eR – Elektronische Rechenanlagen. Band 1, Heft 3, 1959. Heinz Billing: Lernende Automaten.
Oldenbourg Verlag, München 1961. Heinz Billing: Die im MPI für Physik und Astrophysik entwickelte Rechenanlage G3. In: eR – Elektronische Rechenanlagen. Band 5, Heft 2, 1961. Heinz Billing: Magnetische Stufenschichten als Speicherelemente. In: eR – Elektronische Rechenanlagen. Band 5, Heft 6, 1963. Heinz Billing: Schnelle Rechenmaschinenspeicher und ihre Geschwindigkeits- und Kapazitätsgrenzen. In: eR – Elektronische Rechenanlagen. Band 5, Heft 2, 1963. Heinz Billing, Albrecht Rüdiger, Roland Schilling: BRUSH – Ein Spezialrechner zur Spurerkennung und Spurverfolgung in Blasenkammerbildern. In: eR – Elektronische Rechenanlagen. Band 11, Heft 3, 1969. Heinz Billing: Zur Entwicklungsgeschichte der digitalen Speicher. In: eR – Elektronische Rechenanlagen. Band 19, Heft 5, 1977. Heinz Billing: A wide-band laser interferometer for the detection of gravitational radiation. progress report, Max-Planck-Institut für Physik und Astrophysik, München 1979. Heinz Billing: Die Göttinger Rechenmaschinen G1, G2, G3. In: Entwicklungstendenzen wissenschaftlicher Rechenzentren, Kolloquium, Göttingen. Springer, Berlin 1980, . Heinz Billing: The Munich gravitational wave detector using laser interferometry. Max-Planck-Institut für Physik und Astrophysik, München 1982. Heinz Billing: Die Göttinger Rechenmaschinen G1, G2 und G3. In: MPG-Spiegel. 4, 1982. Heinz Billing: Meine Lebenserinnerungen. Selbstverlag, 1994. Heinz Billing: Ein Leben zwischen Forschung und Praxis. Selbstverlag F. Genscher, Düsseldorf 1997. Heinz Billing: Fast memories for computers and their limitations regarding speed and capacity (Schnelle Rechenmaschinen- speicher und ihre Geschwindigkeits- und Kapazitätsgrenzen). In: IT – Information Technology. Band 50, Heft 5, 2008. References External links Tracking down the gentle tremble at Max-Planck-Gesellschaft's website on account history of GEO600 with Heinz Billing. 
1914 births 2017 deaths People from Salzwedel Scientists from the Province of Saxony German computer scientists 20th-century German physicists Gravitational-wave astronomy Max Planck Society people German men centenarians Officers Crosses of the Order of Merit of the Federal Republic of Germany Max Planck Institute directors
Heinz Billing
[ "Physics", "Astronomy" ]
1,464
[ "Astronomical sub-disciplines", "Gravitational-wave astronomy", "Astrophysics" ]
41,450,066
https://en.wikipedia.org/wiki/Jaime%20Imitola
Jaime Imitola is an American neuroscientist, neurologist and immunologist. Imitola's clinical and research program focuses on progressive multiple sclerosis and the molecular and cellular mechanisms of neurodegeneration and repair in humans. His research includes the translational neuroscience of bringing neural stem cells to patients. Imitola is known for his discoveries on the intrinsic immunology of neural stem cells, the impact of inflammation on endogenous neural stem cells in multiple sclerosis, and the ethical implications of stem cell tourism in neurological diseases. Early life and education Imitola earned his M.D. degree from the University of Cartagena in 1993. He went on to receive postdoctoral training at Harvard University: Imitola completed postdoctoral fellowships at Harvard Medical School in 2005 with Samia J. Khoury, in collaboration with and under the guidance of Evan Y. Snyder and Christopher A. Walsh in stem cell biology and neuroimmunology, and later that year joined the faculty at Harvard Medical School as an instructor in neurology. He trained at the Ann Romney Center for Neurologic Diseases at the Brigham and Women's Hospital at Harvard Medical School. Here, he studied the molecular biology of neural stem cells (NSCs) and neuroimmunology. As a faculty member at Harvard University, and affiliate faculty of the Harvard Stem Cell Institute (HSCI), he established novel imaging techniques to study the immunology of neural stem cells and microglia, which led to the discovery of the mechanisms of migration of neural stem cells in stroke and of the alteration of neural stem cell self-renewal capacity by microglia activation in models of multiple sclerosis. Imitola has authored more than 100 publications, abstracts, and book chapters in scholarly journals. His discovery of the molecular mechanisms of neural stem cell responses to CNS injury has been replicated by additional groups. Imitola is highly cited for his work on neural stem cell migration.
Academic career The mechanisms of how neural stem cells migrate to injury are critical to understanding repair. The role of chemokines in the migration of stem cells was demonstrated in 1997, when it was discovered that bone marrow stem cells could migrate to the chemokine SDF-1 alpha. However, the migration of stem cells in the brain to injury was less understood. In 2004, Imitola and his colleagues demonstrated an inflammation-dependent mechanism for the responses of NSCs to CNS injury by astrocytes. They showed that the inflammatory chemokine stromal cell-derived factor 1 alpha, released by astrocytes during stroke, was responsible for the directed migration of human and mouse NSCs to areas of injury in mice, creating "injury-induced stem cell niches", a term proposed by Professor Evan Y. Snyder to denote the regenerative micro-environments created after CNS damage, which can be visualized using stem cells expressing reporter genes (e.g. LacZ). This discovery paved the way for the study of endogenous neural stem cell migration in regeneration in other neurological diseases. The work has been extensively cited and reproduced by multiple labs, and firmly established chemokines as important modulators of the migration of neural stem cells not only in CNS development but also in repair. Imitola has received awards for his research in stem cells, including the John N. Whitaker, MD Award for multiple sclerosis research References External links Jaime Imitola at Google Scholar Harvard Stem Cell Institute Harvard catalyst Marquis Inflammation’s Other Face: Repairing Injury to the Brain Awards, Honors & Grants John N. Whitaker, MD (1940-2001). jamanetwork.com. Retrieved 2024-08-30. Faculty Directory. Jaime Imitola, M.D.. facultydirectory.uchc.edu. Retrieved 2024-08-30.
American neuroscientists Colombian neuroscientists Harvard University faculty Living people Stem cell researchers American geneticists Year of birth missing (living people) American people of Colombian descent
Jaime Imitola
[ "Biology" ]
833
[ "Stem cell researchers", "Stem cell research" ]
41,450,867
https://en.wikipedia.org/wiki/Centre%20for%20Ecology%20%26%20Rural%20Development
The Centre for Ecology & Rural Development (CERD) is an Indian organisation that is part of the Pondicherry Science Forum. It was formed to take up interventions in Health, Sanitation, Natural Resource Management, Energy, Watershed Management and Information Communication Technology. CERD was set up in 1994 by the Pondicherry Science Forum and Tamil Nadu Science Forum to advance science and technology-based development initiatives improving rural livelihoods. Earlier works included interventions in sericulture, vegetable leather tanning, and fish aggregation devices. CERD has a field station at Bahoor called the Kalanjiyam (granary in Tamil) that acts as a hub of agriculture and technology options for the surrounding area. CERD has a full-time structure with a team of scientists working on areas including women's technology, science communication, continuing education, participatory irrigation management through local democratic people's institutions, women's microcredit networks, etc. The latest projects include the AICP Project on BIOFARM, a watershed development project in Sedappatti Block of Madurai funded by NABARD, and the Tank Rehabilitation Project-Pondicherry. Accomplishments/Objectives Soil fertility management Research and development on alternate soil fertility management strategies and systems for irrigated and dryland crops. Developed a Decision Support System (DSS) for soil fertility management. Bioresource integrated farming Reduction of external inputs, increasing internal resource flows in the farming system, and ensuring the nutritional security of the agriculture system. Watershed development Initiated programs in Madurai district. Participatory planning, implementation and management of the watershed through people's organizations. Outlined the impacts of major watershed development programs in terms of biophysical impacts, environmental impacts, socio-economic impacts and overall economic impacts.
Participatory irrigation management Pilot work on irrigation tanks in Pondicherry. Evolved guidelines for sustainable institutional structures. Stakeholder participation was ensured including women and Dalits of landless communities. Large scale encroachment eviction through participatory approach leading to Participatory Irrigation Mgmt. Worked in low energy technologies for water system development (Oorani – drinking water pond) in Ramanathapuram Wasteland reclamation Evolved models for participatory wasteland reclamation through a coalition between landless Self-help Group (SHG) women and farmers’ groups. Sustainable income for landless SHG women through collective farming. ICT for rural development Established the first successful model of Village Information Centre known as Samadhan Kendra through unique content creation in the local language. Software creation for local planning and primary production in local language including DSS. Fuel-efficient stoves CERD plans to expand its activities in this area by constructing more stoves and by expanding the works to areas such as Tamil Nadu. CERD construction of a fuel-efficient tawa for making dosa/parotta. Sought subsidies from the Renewable Energy Agency for the Tawa stove. Biomass-based biogas units With technical collaboration from IISc. Bangalore, CERD planned to construct biogas units. The units are based on biomass decomposition and tapping the biogas for cooking purposes. Organic farming Formed an Organic Farmers’ Association CERD plans to extend support to this network, including establishing organic certification processes for farmers and for arranging marketing linkages for their produce. Backward and forward links connect seeds, plant protection, harvest and post-harvest options. Nutrition-based kitchen gardening systems This is women-focused, especially for women under SHGs since malnutrition levels in Pondicherry women approach 80%. 
This programme plans to provide complete backward and forward linkages from seeds, bio-manures, biopesticides, processing, and marketing. Micro-enterprises CERD plans to support viable village-level enterprises for improved livelihood options for women, Dalits and other weaker sections by providing necessary science and technology inputs and data processing skills. References External links CERD Hot cooking style Member of Water Conflict Forum Ecology Organisations based in Puducherry Environmental organisations based in India 1994 establishments in Pondicherry Organizations established in 1994
Centre for Ecology & Rural Development
[ "Biology" ]
822
[ "Ecology" ]
41,451,915
https://en.wikipedia.org/wiki/Solid%20acid
Solid acids are acids that are insoluble in the reaction medium. They are often used as heterogeneous catalysts. Many solid acids are zeolites. A variety of techniques are used to quantify the strength of solid acids. Examples Examples of inorganic solid acids include silico-aluminates (zeolites, alumina, silico-aluminophosphate), and sulfated zirconia. Many transition metal oxides are acidic, including titania, zirconia, and niobia. Such acids are used in cracking. Many solid Brønsted acids are also employed industrially, including polystyrene sulfonate, solid phosphoric acid, niobic acid, and heteropolyoxometallates. Applications Solid acids are used in catalysis in many industrial chemical processes, from large-scale catalytic cracking in petroleum refining to the synthesis of various fine chemicals. One large scale application is alkylation, e.g., the combination of benzene and ethylene to give ethylbenzene. Another application is the rearrangement of cyclohexanone oxime to caprolactam. Many alkylamines are prepared by amination of alcohols, catalyzed by solid acids. Acylations are also catalyzed by solid acids. Solid acids can be used as electrolytes in fuel cells. References Acids Acid–base chemistry Acid catalysts
Solid acid
[ "Chemistry" ]
315
[ "Acid–base chemistry", "Acids", "Acid catalysts", "Equilibrium chemistry", "nan" ]
41,452,299
https://en.wikipedia.org/wiki/Non-road%20engine
Non-road engines (or non-road mobile machinery in the European Union) are internal combustion engines used for purposes other than powering a motor vehicle on a public roadway. The term is commonly used by regulators to classify engines in order to control their emissions. Non-road engines are used in a wide range of applications, which may include machinery and non-road vehicles. In many jurisdictions, the term non-road engine is assumed to refer to engines that have mobility or portability, as distinct from the term stationary engine. The definition of non-road engine may explicitly exclude certain non-road vehicles such as aircraft, locomotives, and ocean-going marine vessels. Classifications There are many classifications of non-road engines, depending on the jurisdiction. The following are common classifications: lawn mowers, chainsaws, string trimmers and garden equipment snowmobiles, dirt bikes, monster trucks and off-road vehicles cold chain transport vehicles forklifts, generators and compressors using gasoline or propane boats, yachts and personal watercraft heavy equipment and agricultural machinery such as backhoes and tractors. Other equipment is included, such as ground support equipment, forklifts, generators, compressors and pumps that use diesel engines. marine diesel engines locomotives and multiple units aircraft engines In certain jurisdictions, stationary engines that are diesel-powered may be classified as non-road engines. United States and Europe The rationale for establishing emission standards for non-road engines is that they are a significant source of pollution. The engines of on-road vehicles have advanced emission controls which are not found on non-road engines. Non-road engines also emit air pollution particles at much higher rates. The emission standards are based on the engine classifications and vary across jurisdictions.
The main model regulations used by many countries are those of the United States Environmental Protection Agency, through section 213 of the Clean Air Act (42 U.S.C. 7547), and the directives of the European Commission (the "mother" Directive 97/68/EC, the amendments Directive 2002/88/EC, Directive 2004/26/EC, Directive 2006/105/EC, Directive 2011/88/EU and the last amendment Directive 2012/46/EU). The directives cover diesel engines, spark-ignition engines, constant-speed engines, railcars, locomotives and inland waterway vessels. In Europe, the term "non-road mobile machinery" (NRMM) is used to clarify that the definition refers to non-road engines that are capable of self-propulsion. In the European Union, in 2023, the Commission and the Council proposed to harmonize road safety requirements to make it easier for non-road mobile machinery (such as lawn mowers, harvesters or bulldozers) to circulate on public roads, replacing local European Union member-state regulations. This would only apply to machines with a maximum speed greater than 6 km/h (around 4 miles per hour). The next legislative step would be in the European Parliament. Other countries The standards for non-road diesel engines are more harmonized. Many countries adopt emission standards derived from either the US or the European models. Canada adopted the US standards in 1999. Korea modeled its Tier 2 standards on the US Tier 2. Russia adopted the European Stage I standards. Turkey adopted the European standards but with different implementation dates. China adopted the European Stage I/II standards in 2007. India introduced its own standards in 2006, called Bharat (CEV) Stage II (based in part on European Stage I) and Bharat (CEV) Stage III (based on US Tier 2/3). Japan introduced its own standards that are similar but not harmonized with the US Tier 3 and Europe Stage III A. Brazil adopted a resolution in 2011 to set emission standards that are equivalent to US Tier 3 and European Stage III A.
In Australia, the definition includes some stationary engines such as electric generators and pumps. See also Small engine References External links Article on Small SI Engines. Article on Compact Diesel Engines. Internal combustion engine Emission standards
Non-road engine
[ "Technology", "Engineering" ]
826
[ "Internal combustion engine", "Combustion engineering", "Engines" ]
41,452,537
https://en.wikipedia.org/wiki/A%20Slower%20Speed%20of%20Light
A Slower Speed of Light is a freeware video game developed by MIT Game Lab that demonstrates the effects of special relativity by gradually slowing down the speed of light to a walking pace. The game runs on the Unity engine using its open-source OpenRelativity toolkit. Gameplay In A Slower Speed of Light, the player controls the ghost of a young child who was killed in an unspecified accident. The child wants to "become one with light", but the speed of light is too fast for the child. This is solved through the use of magic orbs which, as each is collected, slow down the speed of light, until by the end it is at walking speed. These orbs are spread throughout the level. At the beginning of the game, walking around and collecting these orbs is easy; however, as the game progresses, the effects of special relativity become apparent. This gradually increases the difficulty of the game. After collecting all 100 orbs, a portal (as seen in the poster) appears. Entering the portal will open a tab explaining the effects of special relativity, and will also show the completion time. The completion time is displayed both as the actual real-world time and as the time the player's character experienced; the two differ due to the simulated effects of special relativity.
These effects include the Doppler Effect (red/blue-shifting of visible light and the shifting of ultraviolet and infrared into the visible spectrum), the Searchlight Effect (increased brightness in the direction of travel), Time Dilation (difference between the passage of time perceived by the player and the outside world), Length Contraction and Terrell Rotation (the perceived warping of the environment at near-light speeds), and the runtime effect (seeing objects in the past because of the speed of light). OpenRelativity OpenRelativity is a toolkit designed for use with the proprietary Unity game engine. It was developed by MIT Game Lab during the development of A Slower Speed of Light. The toolkit allows for the accurate simulation of a 3D environment when light is slowed down. It is hosted on GitHub and has been published under the permissive MIT license. Use in education A Slower Speed of Light was developed in hopes of being used as an educational tool to explain special relativity in an easy-to-understand fashion. The game is meant to be used as an interactive learning tool for those interested in physics. See also Numerical relativity Special relativity References External links Unofficial Speedrun page Official game page 2012 video games Educational games Freeware games Linux games MacOS games Windows games Special relativity Video games developed in the United States
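The effects listed above follow directly from the standard special-relativity formulas, which is what makes slowing light to a walking pace so visually dramatic. A minimal sketch of the underlying arithmetic (textbook formulas only; this is not code from the game or the OpenRelativity toolkit):

```python
import math

def lorentz_gamma(beta):
    """Lorentz factor for speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

def proper_time(coordinate_time, beta):
    """Time dilation: time experienced by the moving player
    per `coordinate_time` elapsed in the outside world."""
    return coordinate_time / lorentz_gamma(beta)

def doppler_factor(beta):
    """Relativistic Doppler shift for head-on approach:
    observed frequency = factor * emitted frequency (blueshift for beta > 0)."""
    return math.sqrt((1.0 + beta) / (1.0 - beta))

# Walking at 80% of the (slowed) speed of light:
print(lorentz_gamma(0.8))      # ~1.667: lengths contract, time dilates
print(proper_time(60.0, 0.8))  # 36 s experienced per 60 s of world time
print(doppler_factor(0.6))     # 2.0: light ahead is blueshifted twofold
```

At 80% of light speed the Lorentz factor is already 5/3, so a minute of world time passes in 36 seconds for the player, and light from objects approached at 0.6c is blueshifted by a factor of two, matching the Doppler and time-dilation effects the game renders.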
A Slower Speed of Light
[ "Physics" ]
572
[ "Special relativity", "Theory of relativity" ]
41,452,559
https://en.wikipedia.org/wiki/Approximate%20tangent%20space
In geometric measure theory an approximate tangent space is a measure-theoretic generalization of the concept of a tangent space for a differentiable manifold. Definition In differential geometry the defining characteristic of a tangent space is that it approximates the smooth manifold to first order near the point of tangency. Equivalently, if we zoom in more and more at the point of tangency the manifold appears to become straighter and straighter, asymptotically tending to approach the tangent space. This turns out to be the correct point of view in geometric measure theory. Definition for sets Definition. Let Σ ⊂ R^n be a set that is measurable with respect to m-dimensional Hausdorff measure H^m, and such that the restriction measure H^m ⌞ Σ is a Radon measure. We say that an m-dimensional subspace P ⊂ R^n is the approximate tangent space to Σ at a certain point x, denoted T_x Σ = P, if (H^m ⌞ Σ)_{x,λ} → H^m ⌞ P as λ → 0 in the sense of Radon measures. Here for any measure μ we denote by μ_{x,λ} the rescaled and translated measure: μ_{x,λ}(A) := λ^(−m) μ(x + λA). Certainly any classical tangent space to a smooth submanifold is an approximate tangent space, but the converse is not necessarily true. Multiplicities The parabola {(x, x^2) : x ∈ R} is a smooth 1-dimensional submanifold of R^2. Its tangent space at the origin is the horizontal line {y = 0}. On the other hand, if we incorporate the reflection along the x-axis, Σ = {(x, ±x^2) : x ∈ R}, then Σ is no longer a smooth 1-dimensional submanifold, and there is no classical tangent space at the origin. On the other hand, by zooming in at the origin the set is approximately equal to two straight lines that overlap in the limit. It would be reasonable to say it has an approximate tangent space {y = 0} with multiplicity two. Definition for measures One can generalize the previous definition and proceed to define approximate tangent spaces for certain Radon measures, allowing for multiplicities as explained in the section above. Definition. Let μ be a Radon measure on R^n.
We say that an m-dimensional subspace is the approximate tangent space to at a point with multiplicity , denoted with multiplicity , if as in the sense of Radon measures. The right-hand side is a constant multiple of m-dimensional Hausdorff measure restricted to . This definition generalizes the one for sets as one can see by taking for any as in that section. It also accounts for the reflected parabola example above because for we have with multiplicity two. Relation to rectifiable sets The notion of approximate tangent spaces is very closely related to that of rectifiable sets. Loosely speaking, rectifiable sets are precisely those for which approximate tangent spaces exist almost everywhere. The following lemma encapsulates this relationship: Lemma. Let be measurable with respect to m-dimensional Hausdorff measure. Then is m-rectifiable if and only if there exists a positive locally -integrable function such that the Radon measure has approximate tangent spaces for -almost every . References, particularly Chapter 3, Section 11 "Basic Notions, Tangent Properties." Geometry Measure theory
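The weak-convergence conditions in the two definitions above lost their formulas in extraction. A standard statement (notation follows common geometric measure theory usage, e.g. the rescaling maps η_{x,λ}; this is a reconstruction, not the article's original markup):

```latex
% Rescaling/translation map and the approximate tangent space of a set M at x:
% P is the approximate tangent space if the blown-up measures converge weakly
% to m-dimensional Hausdorff measure restricted to P.
\[
  \eta_{x,\lambda}(y) := \frac{y - x}{\lambda}, \qquad
  (\eta_{x,\lambda})_{\#}\bigl(\mathcal{H}^m \llcorner M\bigr)
  \;\rightharpoonup\; \mathcal{H}^m \llcorner P
  \quad \text{as } \lambda \downarrow 0 .
\]
% For a Radon measure \mu, the version with multiplicity \theta > 0 reads:
\[
  (\eta_{x,\lambda})_{\#}\,\mu
  \;\rightharpoonup\; \theta\, \mathcal{H}^m \llcorner P
  \quad \text{as } \lambda \downarrow 0 .
\]
```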
Approximate tangent space
[ "Mathematics" ]
611
[ "Geometry" ]
41,452,598
https://en.wikipedia.org/wiki/Wilhelm%20Josef%20Grailich
Wilhelm Joseph Grailich (16 February 1829, in Pressburg – 13 September 1859, in Vienna) was an Austrian physicist, mineralogist and crystallographer. Education From 1847, Grailich studied sciences at the polytechnic institute in Vienna. Career Grailich served as an assistant to Andreas von Ettingshausen in the institute of physics at the University of Vienna. In 1856 he became an assistant at the Hofmineraliencabinett, where soon afterwards, he succeeded Gustav Adolf Kenngott as "kustos-adjunkt". In 1857 he became an associate professor of higher physics at the university, and in 1859, was chosen as a member of the Vienna Academy of Sciences. In 1910, a thoroughfare in the Landstrasse district of Vienna was named Grailichgasse in his honor. Known for his work in crystal optics and crystal physics, he was the author of numerous scientific papers in the field of crystallography. In 1856, he translated William Hallowes Miller's textbook of crystallography into German as "Lehrbuch der Kristallographie". He explained the phenomenon of fluorescence in crystals, and is credited with making improvements to Wheatstone's vibration apparatus (in German: "Schwingungsapparat"). Personal life His full name was Andreas Wilhelm Joseph Grailich; he was the son of Friedrich Joseph Grailich (a teacher at the Lutheran School in Pressburg) and Carolina Neidherr. His grandfather was Andreas Grailich. On September 30, 1859, Grailich died in Vienna at the age of 30. His wife was Carolina Augusta von Ettingshausen, daughter of Andreas von Ettingshausen. His daughter was Auguste Grailich, mother of Rudolf Allers. Selected works Untersuchungen über den Ein- und Zweiaxigen Glimmer, 1853 - Studies on uniaxial and biaxial mica. Das Sklerometer, ein Apparat zur genaueren Messung der Härte der Krystalle, 1854 - (with F Pekárek) - The sclerometer, an instrument for accurate measurement of the hardness of crystals.
Untersuchungen über die physikalische Verhältnisse krystallisirter Körper, 1858 (with Viktor von Lang) - Studies on the physical conditions of crystallized bodies. Der Römerit, ein neues Mineral aus dem Rammelsberge, nebst Bemerkungen über die Bleiglätte, 1858 - Romerite, a new mineral from the Rammelsberg. Krystallographisch-optische Untersuchungen, 1858 - Crystallographic-optical studies. References 1829 births 1859 deaths Academic staff of the University of Vienna Scientists from the Austrian Empire Mineralogists Crystallographers Scientists from Bratislava Austrian physicists
Wilhelm Josef Grailich
[ "Chemistry", "Materials_science" ]
596
[ "Crystallographers", "Crystallography" ]
41,453,139
https://en.wikipedia.org/wiki/Zeta2%20Muscae
{{DISPLAYTITLE:Zeta2 Muscae}} Zeta2 Muscae, Latinized from ζ2 Muscae, is a star in the southern constellation of Musca. Its apparent magnitude is 5.16. This is a white main sequence star of spectral type A5V around 330 light-years distant from Earth. Like several other stars in the constellation, it is a member of the Lower Centaurus–Crux subgroup of the Scorpius–Centaurus association, a group of predominantly hot blue-white stars that share a common origin and proper motion across the galaxy. It is part of a triple star system with faint companions at 0.5 and 32.4 arc seconds distance. The former is an infrared source, the latter has a visual magnitude of 10.7. References Lower Centaurus Crux Musca Muscae, Zeta2 4703 060320 107566 BD-66 1747 A-type main-sequence stars
Zeta2 Muscae
[ "Astronomy" ]
200
[ "Musca", "Constellations" ]
53,038,301
https://en.wikipedia.org/wiki/Night%20box
Night boxes were a service of post office boxes offered by the British Post Office at some major sorting offices during the inter-war years. Such boxes allowed callers to collect mail during the night hours, when post office boxes were normally closed to callers, at double the rent of normal boxes. Letters to such boxes had to be sent in red envelopes clearly marked "Special Private Box Night Delivery". References Postal systems
Night box
[ "Technology" ]
86
[ "Transport systems", "Postal systems" ]
53,042,084
https://en.wikipedia.org/wiki/Surfactant%20leaching
Surfactant leaching of acrylic (latex) paints, also known as exudate staining, streak staining, streaking, weeping, exudation, etc., occurs when the freshly painted surface becomes wet and water-soluble components of the paint (dispersants, surfactants, thickeners, glycols, etc.) leach out of the paint in sticky brown streaks. This may happen, e.g., due to rain or dew for exterior surfaces, or water vapor condensation on interior ones. On exterior surfaces the streaks will normally weather off in several weeks, and removing them before that time is impractical, especially because it may damage the paint before it is completely cured. The streaking phenomenon may also be observed for some silicone sealants. The leaching effect should be taken into account by manufacturers when formulating latex paints. A common approach is replacing water-soluble ingredients with volatile organic compounds (VOCs), which are not environmentally safe. See also Exudate References Paints Solid-solid separation
Surfactant leaching
[ "Chemistry" ]
226
[ "Solid-solid separation", "Coatings", "Paints", "Separation processes by phases" ]
53,043,110
https://en.wikipedia.org/wiki/Anisophylly
Anisophylly is when leaves of a pair differ from one another, either in size or in shape. When a horizontal stem (plagiotropic shoot) also exhibits anisophylly, the photosynthetic leaf surfaces interfere less with light from above, and rotation of the leaf or the petiole can enhance that effect. The phenomenon is relatively common in some tropical plant families with decussate leaf arrangement, such as Melastomataceae, Gesneriaceae and Urticaceae as well as in certain species of other families. References Plant morphology Leaves
Anisophylly
[ "Biology" ]
118
[ "Plant morphology", "Plants" ]
53,043,810
https://en.wikipedia.org/wiki/Silicon%20Mountain%20%28Denver%29
Silicon Mountain, also known as the "Silicon Flatirons" is a nickname given to the tech hub in the Denver, Colorado metropolitan area and Colorado Springs, Colorado metropolitan area. The name is analogous to Silicon Valley, but refers to the Rocky Mountains beyond the skyline. Denver startups raised $401 million in 2015, and Boulder startups raised $183 million in 2015. Startups SolidFire Zayo Group Dot Hill Systems AlchemyAPI Venture capital Incubators The Founder Institute Techstars Innovation Pavilion Fortune 500 Companies Ball Corporation CH2M DaVita Dish Network Envision Healthcare Level 3 Communications Newmont Qurate Retail Group Western Union See also Denver Tech Center Northern Colorado Economic Development Corporation List of companies with Denver area operations List of places with "Silicon" names References High-technology business districts in the United States Information technology places
Silicon Mountain (Denver)
[ "Technology" ]
169
[ "Information technology", "Information technology places" ]
53,043,861
https://en.wikipedia.org/wiki/Terraform%20%28software%29
Terraform is an infrastructure-as-code software tool created by HashiCorp. Users define and provide data center infrastructure using a declarative configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. Design Terraform manages external resources (such as public cloud infrastructure, private cloud infrastructure, network appliances, software as a service, and platform as a service) with "providers". HashiCorp maintains an extensive list of official providers, and can also integrate with community-developed providers. Users can interact with Terraform providers by declaring resources or by calling data sources. Rather than using imperative commands to provision resources, Terraform uses declarative configuration to describe the desired final state. Once a user invokes Terraform on a given resource, Terraform will perform CRUD actions on the user's behalf to accomplish the desired state. The infrastructure as code can be written as modules, promoting reusability and maintainability. Terraform supports a number of cloud infrastructure providers such as Amazon Web Services, Cloudflare, Microsoft Azure, IBM Cloud, Serverspace, Selectel, Google Cloud Platform, DigitalOcean, Oracle Cloud Infrastructure, Yandex.Cloud, VMware vSphere, and OpenStack. HashiCorp maintains a Terraform Module Registry, launched in 2017. In 2019, HashiCorp introduced a paid version, Terraform Enterprise, for larger organizations. License change Terraform was previously free software available under version 2.0 of the Mozilla Public License (MPL). On August 10, 2023, HashiCorp announced that all products produced by the company would be relicensed under the Business Source License (BUSL), with HashiCorp prohibiting commercial use of the community edition by those who offer "competitive services". The last MPL-licensed version of Terraform was forked as "OpenTofu", which is backed by the Linux Foundation.
In April 2024, HashiCorp sent a cease and desist notice to the OpenTofu project, stating that it had incorporated code from a BUSL-licensed version of Terraform without permission and "incorrectly re-labeled HashiCorp's code to make it appear as if it was made available by HashiCorp originally under a different license." OpenTofu denied the allegation, stating that the code cited had originated from an MPL-licensed version of Terraform. References External links Cross-platform software Cloud infrastructure Systems engineering Orchestration software Software using the Business Source License
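The declarative model described above can be illustrated with a minimal configuration sketch. The provider, resource type, region, and names below are illustrative placeholders chosen for the example, not details from the article:

```hcl
# Pin a provider, then declare the desired state of one resource.
# Running `terraform apply` makes Terraform perform the CRUD calls
# needed to reach (and later maintain) this state.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1" # illustrative region
}

resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # hypothetical bucket name
}
```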
Terraform (software)
[ "Technology", "Engineering" ]
515
[ "Cloud infrastructure", "Systems engineering", "IT infrastructure" ]
53,045,318
https://en.wikipedia.org/wiki/Cyanobacterial%20RNA%20thermometer
The first cyanobacterial RNA thermometer (RNAT) Hsp17 was found in the 5'UTR of Synechocystis heat shock hsp17 mRNA. Further study demonstrated that cyanobacteria commonly use RNATs to control the translation of their heat shock genes. HspA is a homolog of Hsp17 in thermophilic Thermosynechococcus elongatus. Two more thermometers were found in the 5'UTRs of mesophilic cyanobacteria A. variabilis and Nostoc sp. The first RNAT called avashort was shown to regulate translation by masking the AUG translation start site. The second RNAT called avalong, as it has an extended initial hairpin, might involve tertiary interactions and has similarities to the ROSE element. References Non-coding RNA Cis-regulatory RNA elements
Cyanobacterial RNA thermometer
[ "Chemistry" ]
187
[ "Biochemistry stubs", "Molecular and cellular biology stubs" ]
53,046,273
https://en.wikipedia.org/wiki/Continuing%20airworthiness%20management%20organization
Continuing airworthiness management organisation (CAMO) is a civil aviation organization authorized to schedule and control continuing airworthiness activities on aircraft and their parts. The scope of the CAMO is to organise and manage all documents and publications for Part 145 and Part M approved maintenance organisations, such as the development and management of aircraft maintenance programmes. A CAMO must also provide record keeping of maintenance performed. In other words, a CAMO is responsible to the Air Operator Certificate (AOC) holder. EASA can also grant a CAMO second privileges, though not in all cases. These second privileges allow the CAMO to conduct airworthiness reviews on aircraft, issue (or recommend for issue) Airworthiness Review Certificates and issue 'permit to fly' for maintenance check flights. General requirements to be met by a CAMO are facilities (offices and documentation storage), a Continuing Airworthiness Management Exposition (CAME), which must be approved by the competent authority of the country or EASA, and company procedures (to comply with Part M requirements). A CAMO can also be the operator of the aircraft. Personnel required to be employed in a CAMO are the Accountable Manager (who can be the same person for CAMO and operator), the Quality Manager (to ensure all EASA requirements are in compliance) and appropriately qualified staff for airworthiness management. These personnel must be mentioned in the CAME. In the case of second privileges, Airworthiness Review Staff must be employed. Like any other aviation organisation, a CAMO is audited by authorities and must fulfill all requirements. Findings in audits are categorized in levels. A Level 1 finding is a serious hazard to flight safety, and the approval to operate can be revoked until a satisfactory correction is made. A Level 2 finding is not a serious hazard to flight safety, but must be addressed because it can lead to a Level 1 finding.
References Aircraft maintenance Aviation safety organizations Civil aviation
Continuing airworthiness management organization
[ "Engineering" ]
384
[ "Aircraft maintenance", "Aerospace engineering" ]
53,046,631
https://en.wikipedia.org/wiki/Multilinear%20multiplication
In multilinear algebra, applying a map that is the tensor product of linear maps to a tensor is called a multilinear multiplication. Abstract definition Let be a field of characteristic zero, such as or . Let be a finite-dimensional vector space over , and let be an order-d simple tensor, i.e., there exist some vectors such that . If we are given a collection of linear maps , then the multilinear multiplication of with is defined as the action on of the tensor product of these linear maps, namely Since the tensor product of linear maps is itself a linear map, and because every tensor admits a tensor rank decomposition, the above expression extends linearly to all tensors. That is, for a general tensor , the multilinear multiplication is where with is one of 's tensor rank decompositions. The validity of the above expression is not limited to a tensor rank decomposition; in fact, it is valid for any expression of as a linear combination of pure tensors, which follows from the universal property of the tensor product. It is standard to use the following shorthand notations in the literature for multilinear multiplications:andwhere is the identity operator. Definition in coordinates In computational multilinear algebra it is conventional to work in coordinates. Assume that an inner product is fixed on and let denote the dual vector space of . Let be a basis for , let be the dual basis, and let be a basis for . The linear map is then represented by the matrix . Likewise, with respect to the standard tensor product basis , the abstract tensoris represented by the multidimensional array . Observe that where is the jth standard basis vector of and the tensor product of vectors is the affine Segre map . It follows from the above choices of bases that the multilinear multiplication becomes The resulting tensor lives in . Element-wise definition From the above expression, an element-wise definition of the multilinear multiplication is obtained. 
Indeed, since is a multidimensional array, it may be expressed as where are the coefficients. Then it follows from the above formulae that where is the Kronecker delta. Hence, if , then where the are the elements of as defined above. Properties Let be an order-d tensor over the tensor product of -vector spaces. Since a multilinear multiplication is the tensor product of linear maps, we have the following multilinearity property (in the construction of the map): Multilinear multiplication is a linear map: It follows from the definition that the composition of two multilinear multiplications is also a multilinear multiplication: where and are linear maps. Observe specifically that multilinear multiplications in different factors commute, if Computation The factor-k multilinear multiplication can be computed in coordinates as follows. Observe first that Next, since there is a bijective map, called the factor-k standard flattening, denoted by , that identifies with an element from the latter space, namely where is the jth standard basis vector of , , and is the factor-k flattening matrix of whose columns are the factor-k vectors in some order, determined by the particular choice of the bijective map In other words, the multilinear multiplication can be computed as a sequence of d factor-k multilinear multiplications, which themselves can be implemented efficiently as classic matrix multiplications. Applications The higher-order singular value decomposition (HOSVD) factorizes a tensor given in coordinates as the multilinear multiplication , where are orthogonal matrices and . Further reading Tensors Multilinear algebra
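The sequence of factor-k multiplications described above can be sketched in NumPy. The function names and the row-major column ordering of the factor-k flattening are assumptions made for the example (the flattening map is only determined up to a choice of bijection):

```python
import numpy as np

def mode_k_product(A, U, k):
    """Factor-k multilinear multiplication: unfold A along mode k,
    left-multiply by the matrix U, then fold the result back."""
    Ak = np.moveaxis(A, k, 0).reshape(A.shape[k], -1)      # factor-k flattening
    out_shape = [U.shape[0]] + [d for i, d in enumerate(A.shape) if i != k]
    return np.moveaxis((U @ Ak).reshape(out_shape), 0, k)  # fold back

def multilinear_multiply(A, Us):
    """Apply the tensor product of the linear maps Us to the tensor A
    as a sequence of factor-k products (the factors commute)."""
    for k, U in enumerate(Us):
        A = mode_k_product(A, U, k)
    return A

# Order-2 sanity check: (U1 tensor U2) applied to a matrix M is U1 @ M @ U2.T
M = np.arange(6.0).reshape(2, 3)
U1 = np.array([[1.0, 2.0]])
U2 = np.arange(12.0).reshape(4, 3)
print(np.allclose(multilinear_multiply(M, [U1, U2]), U1 @ M @ U2.T))  # True
```

The same loop can be cross-checked against a single `einsum` contraction, which provides an independent definition of the multilinear product in coordinates.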
Multilinear multiplication
[ "Engineering" ]
735
[ "Tensors" ]
53,047,506
https://en.wikipedia.org/wiki/Rovalpituzumab%20tesirine
Rovalpituzumab tesirine (Rova-T) is an experimental antibody-drug conjugate targeting the protein DLL3 on tumor cells. It was originally developed by Stemcentrx and was purchased by AbbVie. It was tested for use in small-cell lung cancer, but development was terminated after unsuccessful phase III trials. Development In 2018, an Independent Data Monitoring Committee found that in the TAHOE phase III trial, Rova-T shortened survival of lung cancer patients compared to the standard-of-care chemotherapy topotecan, prompting termination of trial enrollment. Another phase III trial (MERU) demonstrated no survival benefit over placebo. A phase II trial using the drug as a third-line treatment for relapsed or refractory lung cancer showed an objective response rate of just 16%. Chemical structure The "tesirine" portion consists of a pyrrolobenzodiazepine-type dimer, which is the actual anti-cancer agent, a Val–Ala structure that can be cleaved by an enzyme to detach the anti-cancer agent from the antibody, a polyethylene glycol spacer, and a maleimide linker which is attached to a cysteine in the antibody's (rovalpituzumab's) peptide backbone. Each rovalpituzumab molecule has an average of two such attachments. See also Vadastuximab talirine, with a similar cytotoxin References Experimental cancer drugs Antibody-drug conjugates Orphan drugs Monoclonal antibodies for tumors
Rovalpituzumab tesirine
[ "Biology" ]
339
[ "Antibody-drug conjugates" ]
53,048,220
https://en.wikipedia.org/wiki/Tinnunculite
Tinnunculite is a naturally occurring dihydrate of uric acid. It should not be confused with a proposed mineral species with the identical name 'Tinnunculite', which forms when droppings from a European kestrel react with the burning dumps of coal mines and quarries. The name tinnunculite is derived from the kestrel's binomial name, "Falco tinnunculus", which is itself derived from the Latin word tinnunculus, meaning "kestrel", from tinnulus, meaning "shrill". Naturally occurring tinnunculite has the same type of origin. The mineral is a dihydrate of uric acid and is visually very similar to uricite. Tinnunculite is chemically similar to other organic minerals: guanine, uricite; also acetamide, kladnoite. A new mineral proposal with the same name but slightly different formula (C10H12N8O8) was submitted by Chesnokov & Shcherbakova and ultimately rejected by the International Mineralogical Association (IMA) on the basis of being of anthropogenic origin. Localities Russia: Mount Rasvumchorr, Khibiny Massif, Kola Peninsula, Murmanskaja Oblast, Northern Region. References See also Organic compounds (minerals) Hyraceum Organic minerals
Tinnunculite
[ "Chemistry" ]
289
[ "Organic compounds", "Organic minerals" ]
53,048,287
https://en.wikipedia.org/wiki/Sapanisertib
Sapanisertib (also known as MLN0128, INK128 and TAK-228) is an experimental small molecule inhibitor of mTOR which is administered orally. It targets both mTORC1 and mTORC2. It was developed by Millennium Pharmaceuticals and is in phase II clinical trials for breast cancer, endometrial cancer, glioblastoma, renal cell carcinoma, and thyroid cancer. The drug has been well tolerated by patients with advanced solid tumours in Phase I trials. References Experimental cancer drugs Benzoxazoles Isopropyl compounds Amines
Sapanisertib
[ "Chemistry" ]
124
[ "Amines", "Bases (chemistry)", "Functional groups" ]
53,048,910
https://en.wikipedia.org/wiki/P%20Leonis
The Bayer designation p Leonis (p Leo) is shared by five star systems in the constellation Leo: p1 Leonis (HD 94402) p2 Leonis (61 Leonis) p3 Leonis (62 Leonis) p4 Leonis (65 Leonis) p5 Leonis (69 Leonis) Not to be confused with: π Leonis ρ Leonis Leonis, p Leo (constellation)
P Leonis
[ "Astronomy" ]
89
[ "Leo (constellation)", "Constellations" ]
53,049,031
https://en.wikipedia.org/wiki/Jin%20Akiyama
Jin Akiyama (; born 1946) is a Japanese mathematician, known for his appearances on Japanese prime-time television (NHK) presenting magic tricks with mathematical explanations. He is director of the Mathematical Education Research Center at the Tokyo University of Science, and professor emeritus at Tokai University. Akiyama studied mathematics at the Tokyo University of Science, where one of his mentors was Takashi Hamada. He completed a graduate degree at Sophia University under the supervision of Mitio Nagumo, in differential equations, but soon shifted his interests to graph theory. He planned to take a position in Ghana, but after conflict there caused it to be cancelled he joined the faculty at Nippon Ika University, and then moved to the U.S. for 1978 and 1979 to work with Frank Harary at the University of Michigan. In the 1990s, his interests shifted again, from graph theory to discrete geometry. Akiyama is a founder of the Japan Conference on Discrete and Computational Geometry, Graphs, and Games (JCDCG3), the founding managing editor of Graphs and Combinatorics, and the author of the books A Day's Adventure in Math Wonderland (with Mari-Jo Ruiz, World Scientific, 2008), Factors and Factorizations of Graphs (with Mikio Kano, Lecture Notes in Mathematics 2031, Springer, 2011), and Treks Into Intuitive Geometry: The World of Polygons and Polyhedra (with Kiyoko Matsunaga, Springer, 2015). He is also the namesake of a Nintendo DS game, Master Jin Jin's IQ Challenge. Akiyama's lectures sometimes also include musical performances by him, on accordion or xylophone. References External links Home page Year of birth missing (living people) Living people Japanese mathematicians Sophia University alumni Tokyo University of Science alumni Academic staff of Tokai University Graph theorists University of Michigan people
Jin Akiyama
[ "Mathematics" ]
384
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
53,050,024
https://en.wikipedia.org/wiki/Granny%20dumping
Granny dumping (informal) is a form of modern senicide. The term was introduced in the early 1980s by professionals in the medical and social work fields. Granny dumping is defined by the Oxford English Dictionary as "the abandonment of an elderly person in a public place such as a hospital or nursing home, especially by a relative". It may be carried out by family members who are unable or unwilling to continue providing care due to financial problems, burnout, lack of resources (such as home health or assisted living options), or stress. However, instances of institutional granny dumping, by hospitals and care facilities, have also been known to occur. The "dumping" may involve the literal abandonment of an elderly person, who is taken to a location such as a hospital waiting area or emergency room and then left, or the refusal to return to collect an elderly person after the person is discharged from a hospital visit or hotel stay. While leaving an elderly person in a hospital or nursing facility is a common form of the practice, there have been incidents of elderly people being "dumped" in other locations, such as the side of a public street. Historical background, causes, and costs A practice known as ubasute has existed in Japanese legend for centuries, involving senile elders who were brought to mountaintops by poor citizens unable to look after them. The widespread economic and demographic problems facing Japan have seen the practice on the rise, with relatives dropping off seniors at hospitals or charities. An estimated 70,000 elderly Americans (male and female in equal numbers) were abandoned in 1992, according to a report issued by the American College of Emergency Physicians. In this same study, ACEP received informal surveys from 169 hospital Emergency Departments and reported an average of 8 "granny dumping" abandonments per week.
According to the New York Times, 1 in 5 people are now caring for an elderly parent, and people are spending more time caring for an elderly parent than for their own children. Social workers have said that this may be the result of millions of people being near the breaking point of looking after elderly parents who are in poor health. In the US, granny dumping is more likely to happen in states such as Florida, Texas, and California, where there are large populations of retirement communities. Congress has attempted to step in by mandating that emergency departments see all patients. In some US states, and some other countries, the practice is illegal, or is subject to efforts to declare it illegal. However, Medicaid is covering less and less of patients' medical bills through reimbursement (in 1989, it was 78%, but that number is decreasing) and reduced eligibility. In some cases, hospitals may not want to take the risk of having a patient who cannot pay, so they will attempt to transfer that patient's care to another hospital. According to the Consolidated Omnibus Budget Reconciliation Act of 1985, signed into law by Ronald Reagan, a hospital can transfer at the patient's request, or providers must sign a document stating why they believe a patient's care would be better served at another facility. With 40% of revenue coming from Medicaid and Medicare, a hospital must earn 8 cents per dollar to compensate for the loss of 7 cents per Medicaid/Medicare patient. Hospitals had to pay an additional 2 billion dollars to private payers to cover costs for Medicare/Medicaid patients in 1989. By caregivers In cases where granny dumping is practiced by family members or caregivers, the dumping falls into two categories: temporary or permanent. Temporary abandonment of elderly persons is generally due to the inability or expense of finding temporary care for a person with complex medical needs.
Needing a break, or wishing to go on a holiday, the normal caregivers will take their elderly patient to a hospital emergency room, or possibly a hotel, and then leave, with the plan to return once the vacation is over. Incidents of granny dumping often happen before long weekends and may peak before Christmas when families head off on holidays. Caregivers in both Australia and New Zealand report that old people without acute medical problems are dropped off at hospitals. As a result, hospitals and care facilities have to carry an extra burden on their limited resources. In Poland, the practice of dumping elderly persons before Christmas or Easter is known among emergency and ambulance personnel as Babka Świąteczna, i.e. Holiday Granny, the phrase also meaning 'Holiday pie' Caregivers may also intend the abandonment to be permanent. In such cases, the caregivers will refuse to return to collect the elderly person, even when contacted by officials. Caregivers may go to great lengths to abandon the elderly person in a place far from their home location to prevent being tracked down and having the elderly person returned to their care. Permanent abandonment might be done because the caregiver is mentally, physically, or financially unable to continue to provide care, or conscientiously as a tool and method of forcing institutions and government assistance to step in and provide placement and support which would otherwise be unavailable or denied to the caregiver or elderly person. Caregivers who abandon their elderly charges may face criminal charges or legal repercussions for doing so, dependent on their local laws. Institutional A hospital or care facility's legal obligation in such cases can be complicated. The protocols to handle a permanently abandoned elderly person are unclear and vary between institutions. 
However, the expense of providing emergency or long-term care to an abandoned elderly person can represent a considerable burden on a facility's budget, capacity, and manpower. This has led to institutional granny-dumping, where a hospital or nursing facility likewise abandon the elderly person to avoid the expense of their care. Hospitals generally seek to place an abandoned elderly person with a long-term care or nursing facility, but such facilities may have no capacity, or may refuse to take the patient, who may have no ability to pay. When this occurs, hospitals are faced with the dilemma of either providing care themselves at great expense, or similarly dumping the patient by taking them off of hospital property and leaving them. Nursing homes may similarly abandon low-income residents by evicting them and leaving them in hotels, homeless shelters, or on the street. Nursing homes may refuse to readmit residents after a trip home. In a granny dumping practice also called hospital dumping, residents may be sent to a hospital for temporary treatment and not permitted to return. Another form of institutional granny dumping may occur when a nursing home closes, and staff abandon residents in the facility, or leave them in hotels, homeless shelters, or similar. During the COVID-19 pandemic, institutional granny dumping by nursing homes became a widespread problem in the United States as above average numbers of care facilities closed with no alternatives to provide care for the displaced residents. References Gerontology Health care
Granny dumping
[ "Biology" ]
1,384
[ "Gerontology" ]
53,050,926
https://en.wikipedia.org/wiki/Schwarz%20triangle%20function
In complex analysis, the Schwarz triangle function or Schwarz s-function is a function that conformally maps the upper half plane to a triangle in the upper half plane having lines or circular arcs for edges. The target triangle is not necessarily a Schwarz triangle, although that is the most mathematically interesting case. When that triangle is a non-overlapping Schwarz triangle, i.e. a Möbius triangle, the inverse of the Schwarz triangle function is a single-valued automorphic function for that triangle's triangle group. More specifically, it is a modular function. Formula Let πα, πβ, and πγ be the interior angles at the vertices of the triangle in radians. Each of α, β, and γ may take values between 0 and 1 inclusive. Following Nehari, these angles are in clockwise order, with the vertex having angle πα at the origin and the vertex having angle πγ lying on the real line. The Schwarz triangle function can be given in terms of hypergeometric functions as: where a = (1−α−β−γ)/2, b = (1−α+β−γ)/2, c = 1−α, a′ = a − c + 1 = (1+α−β−γ)/2, b′ = b − c + 1 = (1+α+β−γ)/2, and c′ = 2 − c = 1 + α. This function maps the upper half-plane to a spherical triangle if α + β + γ > 1, or a hyperbolic triangle if α + β + γ < 1. When α + β + γ = 1, then the triangle is a Euclidean triangle with straight edges: a = 0, , and the formula reduces to that given by the Schwarz–Christoffel transformation. Derivation Through the theory of complex ordinary differential equations with regular singular points and the Schwarzian derivative, the triangle function can be expressed as the quotient of two solutions of a hypergeometric differential equation with real coefficients and singular points at 0, 1 and ∞. By the Schwarz reflection principle, the reflection group induces an action on the two dimensional space of solutions. 
On the orientation-preserving normal subgroup, this two-dimensional representation corresponds to the monodromy of the ordinary differential equation and induces a group of Möbius transformations on quotients of hypergeometric functions. Singular points This mapping has regular singular points at z = 0, 1, and ∞, corresponding to the vertices of the triangle with angles πα, πγ, and πβ respectively. At these singular points, where is the gamma function. Near each singular point, the function may be approximated as where is big O notation. Inverse When α, β, and γ are rational, the triangle is a Schwarz triangle. When each of α, β, and γ are either the reciprocal of an integer or zero, the triangle is a Möbius triangle, i.e. a non-overlapping Schwarz triangle. For a Möbius triangle, the inverse is a modular function. In the spherical case, that modular function is a rational function. For Euclidean triangles, the inverse can be expressed using elliptic functions. Ideal triangles When α = 0 the triangle is degenerate, lying entirely on the real line. If either of β or γ are non-zero, the angles can be permuted so that the positive value is α, but that is not an option for an ideal triangle having all angles zero. Instead, a mapping to an ideal triangle with vertices at 0, 1, and ∞ is given by in terms of the complete elliptic integral of the first kind: . This expression is the inverse of the modular lambda function. Extensions The Schwarz–Christoffel transformation gives the mapping from the upper half-plane to any Euclidean polygon. The methodology used to derive the Schwarz triangle function earlier can be applied more generally to arc-edged polygons. However, for an n-sided polygon, the solution has n−3 additional parameters, which are difficult to determine in practice. See for more details. Applications L. P. Lee used Schwarz triangle functions to derive conformal map projections onto polyhedral surfaces.
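The explicit formula elided in the Formula section can be reconstructed from the parameter definitions given there. In Nehari's normalization, the triangle function is the quotient of the two local solutions of the hypergeometric equation at z = 0 (a sketch of the standard expression, using exactly the a, b, c, a′, b′, c′ defined in the text, not the article's original typesetting):

```latex
s(z) \;=\; z^{\alpha}\,
  \frac{{}_2F_1\!\left(a',\, b';\, c';\, z\right)}
       {{}_2F_1\!\left(a,\, b;\, c;\, z\right)}
```

Near z = 0 this behaves like z^α, which reproduces the interior angle πα at the corresponding image vertex.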
References Sources Complex analysis Hyperbolic geometry Conformal mappings Modular forms Spherical geometry Automorphic forms
Schwarz triangle function
[ "Mathematics" ]
873
[ "Modular forms", "Number theory" ]
53,051,490
https://en.wikipedia.org/wiki/Marysville%20Cotton%20Mill
The Marysville Cotton Mill, now known as Marysville Place, is an industrial building in Marysville, New Brunswick, that is a National Historic Site of Canada. It was built by Alexander Gibson in the mid 1880s as he expanded his industrial operations into textile manufacturing at the company town he had established. Since 1986, it has been used by the Government of New Brunswick as an office building and houses the Marysville Data Centre, a data centre used by government departments. Background Alexander Gibson moved to what is now Marysville from Lepreau, Charlotte County in late 1862. For £7,300, he purchased a property that included a gristmill, a blacksmith shop, a general store, sawmills, a farm, "a number of houses well suited for workmen", and a of woodland. The sawmill operated on the Nashwaak River, on which he had acquired the rights to float logs and rafts to its mouth at the Saint John River. The flow of water on the river was controlled by dams Gibson had built, ensuring he could transport logs along it throughout the year. When the government offered a grant of per of railway track built in the province, Gibson funded the construction of a narrow-gauge railway line to Chatham, for which he received a total grant of . He sold the railway for $800,000. His new property had poor sanitation, with the "buildings filthy" and typhoid fever endemic. He had the site cleared, then built a model village named Marysville to house the workers and their families with the funds from the railway sale. These were located on the east side of the river near the cotton mill. On the west side of the river were built mansions on hills for Gibson and the managers. A footbridge across the Nashwaak River connected the 24 duplex houses, known as "White Row", to the nearby mills. Gibson also established a brickyard to manufacture bricks, instead of purchasing them from elsewhere, which was used for the cotton mill, the tenement buildings, and other buildings in the town. 
Mill The mill's construction began in 1883 and was completed in 1885. Its design was influenced by the mill designs of New England, and used a brick pier foundation. The building was designed by Lockwood, Greene & Company, an engineering firm based in Providence, Rhode Island, and built by contractor Albert H. Kelsey. Along with the operation in Milltown (now part of St. Stephen), Marysville Cotton Mill was the largest and most isolated of the mills in the Maritimes. The company built a church and a school, and operated a company store that deducted its bills and housing rents from employees' pay. Employees were paid once a month, which, together with the company housing and requisite family labour, would "maximize dependence and discourage sudden resignations". Employees were provided land for kitchen gardens and to use as pasture, and received free firewood. The industrious Gibson was well-respected by his employees, who constituted the bulk of the town's population. Workers were awakened in the morning by a steam whistle sounded from the factory, which was also sounded to dismiss them after a ten-hour work-day. Description Marysville Cotton Mill is a large, brick building on the east bank of the Nashwaak River, at the intersection of Bridge Street and Rue McGloin. It is in Marysville, now the most northeasterly suburb of Fredericton, with which it was amalgamated in 1973. Each storey of the building has a row of identical multi-pane mullion windows. The four storey structure is long and wide. It was the first building in Fredericton to have electric lighting, and had a sprinkler system. Most of the materials were obtained locally, with the exception of the southern hard pine used for the posts and beams. National Historic Site The cotton mill was designated a National Historic Site of Canada on 16 June 1986.
The neighbourhood of Marysville was declared a national historic district on 20 November 1993, and on 8 June 2007, Alexander Gibson was designated a Person of National Historic Significance. The railway line was converted into a hiking trail. Use The mill manufactured textiles until its closing in the 1970s. In 1985, the Government of New Brunswick undertook a project to restore the building, and when complete its first tenant became the Department of Tourism, Recreation and Heritage. Today, the Government of New Brunswick uses it as an office building, and it is known as Marysville Place. It was used as the site of the Marysville Data Centre up until 2016, a data centre used by a number of the government's departments, among them the Department of Finance, Department of Health, Department of Justice and Attorney General, Department of Public Safety, and Department of Social Development. It has since been repurposed as an office building for several government departments. Notes References Further reading External links Marysville Cotton Mill National Historic Site of Canada at Parks Canada Buildings and structures in Fredericton Cotton mills Textile mills in Canada National Historic Sites in New Brunswick Industrial buildings completed in 1885 1880s establishments in New Brunswick Data centers Historic buildings and structures in New Brunswick
Marysville Cotton Mill
[ "Technology" ]
1,036
[ "Data centers", "Computers" ]
53,054,355
https://en.wikipedia.org/wiki/Thorium%28IV%29%20nitrate
Thorium(IV) nitrate is a chemical compound, a salt of thorium and nitric acid with the formula Th(NO3)4. A white solid in its anhydrous form, it can form tetra- and pentahydrates. As a salt of thorium it is weakly radioactive. Preparation Thorium(IV) nitrate hydrate can be prepared by the reaction of thorium(IV) hydroxide and nitric acid: Different hydrates are produced by crystallizing under different conditions. When a solution is very dilute, the nitrate is hydrolysed. Although various hydrates have been reported over the years, and some suppliers even claim to stock them, only the tetrahydrate and pentahydrate actually exist. What is called a hexahydrate, crystallized from a neutral solution, is probably a basic salt. The pentahydrate is the most common form. It is crystallized from dilute nitric acid solution. The tetrahydrate, Th(NO3)4•4H2O, is formed by crystallizing from a stronger nitric acid solution. Concentrations of nitric acid from 4 to 59% result in the tetrahydrate forming. The thorium atom is 12-coordinate, with four bidentate nitrate groups and four water molecules attached to each thorium atom. To obtain anhydrous thorium(IV) nitrate, thermal decomposition of Th(NO3)4·2N2O5 is required. The decomposition occurs at 150–160 °C. Properties Anhydrous thorium nitrate is a white substance. It is covalently bound, with a low melting point of 55 °C. The pentahydrate Th(NO3)4•5H2O crystallizes as clear colourless crystals in the orthorhombic system. The unit cell size is a=11.191 b=22.889 c=10.579 Å. Each thorium atom is connected twice to each of four bidentate nitrate groups, and to three water molecules via their oxygen atoms. In total the thorium is eleven-coordinated. There are also two other water molecules in the crystal structure. The water is hydrogen bonded to other water, or to nitrate groups. The density is 2.80 g/cm3.
The vapour pressure of the pentahydrate at 298 K is 0.7 torr; it increases to 1.2 torr at 315 K and to 10.7 torr at 341 K. At 298.15 K the heat capacity is about 114.92 cal K−1 mol−1. This heat capacity decreases greatly at cryogenic temperatures. The entropy of formation of thorium nitrate pentahydrate at 298.15 K is −547.0 cal K−1 mol−1. The standard Gibbs energy of formation is −556.1 kcal mol−1. Thorium nitrate can dissolve in several different organic solvents including alcohols, ketones, esters and ethers. This can be used to separate different metals such as the lanthanides. With ammonium nitrate in the aqueous phase, thorium nitrate prefers the organic liquid, and the lanthanides stay with the water. Thorium nitrate dissolved in water lowers its freezing point. The maximum freezing point depression is −37 °C at a concentration of 2.9 mol/kg. At 25 °C a saturated solution of thorium nitrate contains 4.013 moles per liter. At this concentration the vapour pressure of water in the solution is 1745.2 Pa, compared to 3167.2 Pa for pure water. Reactions When thorium nitrate pentahydrate is heated, nitrates with less water are produced; however, the compounds also lose some nitrate. At 140 °C a basic nitrate, ThO(NO3)2, is produced. When strongly heated, thorium dioxide is produced. A polymeric peroxynitrate is precipitated when hydrogen peroxide combines with thorium nitrate in solution with dilute nitric acid. Its formula is Th6(OO)10(NO3)4•10H2O. The hydrolysis of thorium nitrate solutions produces the basic nitrates Th2(OH)4(NO3)4•H2O and Th2(OH)2(NO3)6•8H2O. In crystals of Th2(OH)2(NO3)6•8H2O a pair of thorium atoms are connected by two bridging oxygen atoms. Each thorium atom is surrounded by three bidentate nitrate groups and three water molecules, bringing the coordination number to 11. When oxalic acid is added to a thorium nitrate solution, insoluble thorium oxalate precipitates.
Other organic acids added to a thorium nitrate solution also produce precipitates: an organic salt with citric acid, and basic salts with tartaric acid, adipic acid, malic acid, gluconic acid, phenylacetic acid, and valeric acid. Precipitates are also formed from sebacic acid and azelaic acid. Double salts Hexanitratothorates with the generic formula M2Th(NO3)6 or MTh(NO3)6•8H2O are made by mixing other metal nitrates with thorium nitrate in dilute nitric acid solution. In MTh(NO3)6•8H2O, M can be Mg, Mn, Co, Ni, or Zn; in M2Th(NO3)6, M can be Cs, (NO)+ or (NO2)+. Crystals of the divalent metal thorium hexanitrate octahydrates have a monoclinic form with similar unit cell dimensions: β=97°, a=9.08, b=8.75–8.78, c=12.61–12.63 Å. Pentanitratothorates with the generic formula MTh(NO3)5•H2O are known for M being Na or K. K3Th(NO3)7 and K3H3Th(NO3)10•4H2O are also known. Complexed salts Thorium nitrate also crystallizes with other ligands and organic solvates including ethylene glycol diethyl ether, tri(n-butyl) phosphate, butylamine, dimethylamine, and trimethylphosphine oxide. References Notes 1. Bogus hydrates include 12, 6, 5.5, 2 and 1 water molecules Thorium(IV) compounds Nitrates Deliquescent materials
Thorium(IV) nitrate
[ "Chemistry" ]
1,360
[ "Oxidizing agents", "Salts", "Nitrates", "Deliquescent materials" ]
53,055,349
https://en.wikipedia.org/wiki/Axiom%20of%20finite%20choice
In mathematics, the axiom of finite choice is a weak version of the axiom of choice which asserts that if is a family of non-empty finite sets, then (set-theoretic product). If every set can be linearly ordered, the axiom of finite choice follows. Applications An important application is that when is a measure space where is the counting measure and is a function such that , then for at most countably many . References Axioms of set theory Axiom of choice
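The product statement elided above can be written out explicitly; this is a reconstruction of the standard formulation of the axiom (not the article's original typesetting):

```latex
\bigl(\forall i \in I:\ X_i \ \text{is finite and}\ X_i \neq \emptyset\bigr)
\;\Longrightarrow\;
\prod_{i \in I} X_i \neq \emptyset
```

That is, a choice function exists for any family of non-empty finite sets, even when the index set I is infinite.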
Axiom of finite choice
[ "Mathematics" ]
103
[ "Axiom of choice", "Axioms of set theory", "Mathematical axioms" ]
70,096,665
https://en.wikipedia.org/wiki/Membrane%20scaling
Membrane scaling is when one or more sparingly soluble salts (e.g., calcium carbonate, calcium phosphate, etc.) precipitate and form a dense layer on the membrane surface in reverse osmosis (RO) applications. Figures 1 and 2 show scanning electron microscopy (SEM) images of the RO membrane surface without and with scaling, respectively. Membrane scaling, like other types of membrane fouling, increases energy costs due to higher operating pressure, and reduces permeate water production. Furthermore, scaling may damage and shorten the lifetime of membranes due to frequent membrane cleanings, and it is therefore a major operational challenge in RO applications. Membrane scaling can occur when sparingly soluble salts in RO concentrate become supersaturated, meaning their concentrations exceed their equilibrium (solubility) levels. In RO processes, the increased concentration of sparingly soluble salts in the concentrate is primarily caused by the withdrawal of permeate water from the feedwater. The ratio of permeate water to feedwater is known as recovery, which is directly related to membrane scaling. Recovery needs to be as high as possible in RO installations to minimize specific energy consumption. However, at high recovery rates, the concentration of sparingly soluble salts in the concentrate can increase dramatically. For example, for 80% and 90% recovery, the concentration of salts in the concentrate can reach 5 and 10 times their concentration in the feedwater, respectively. If the calcium and phosphate concentrations in the RO feedwater are 200 mg/L and 5 mg/L, respectively, the concentrations in the RO concentrate will be 2000 mg/L and 50 mg/L at 90% recovery, exceeding the calcium phosphate solubility limit and resulting in calcium phosphate scaling. It is important to note that membrane scaling is not only dependent on supersaturation but also on crystallization kinetics, i.e., nucleation and crystal growth.
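The recovery arithmetic above follows from a simple mass balance. A minimal sketch (the function name is mine, and complete salt rejection by the membrane is assumed, so this is an upper-bound estimate rather than a method from the article):

```python
def concentration_factor(recovery: float) -> float:
    """Concentrate-to-feed concentration ratio for an RO stage,
    assuming the membrane rejects all of the dissolved salt."""
    if not 0.0 <= recovery < 1.0:
        raise ValueError("recovery must be in [0, 1)")
    return 1.0 / (1.0 - recovery)

# The factors stated in the text: roughly 5x at 80% and 10x at 90% recovery.
for r in (0.80, 0.90):
    print(f"{r:.0%} recovery -> {concentration_factor(r):.1f}x")

# Feedwater example: 200 mg/L calcium and 5 mg/L phosphate at 90% recovery.
cf = concentration_factor(0.90)
ca_conc, po4_conc = 200.0 * cf, 5.0 * cf
```

With these inputs the concentrate reaches roughly 2000 mg/L calcium and 50 mg/L phosphate, consistent with the 10-fold factor stated for 90% recovery.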
Scaling compounds encountered in RO The most common salts that cause scaling in RO processes are: Calcium carbonate Calcium sulfate Silica/metal silicates Barium sulfate Calcium phosphate Scaling prediction methods There are a number of indices available to determine the scaling tendency of sparingly soluble salts in a water solution. These indices indicate whether a given scale-forming species is undersaturated, saturated, or supersaturated. Scaling does not occur when a compound is undersaturated, while it will take place sooner or later when a compound is supersaturated. The most commonly used indices to predict scaling in RO applications are: Saturation index (SI) SI = log10(IAP/Ksp), where IAP and Ksp are the ion activity product and solubility product of the sparingly soluble salt, respectively. For instance, SI for calcium sulphate can be calculated as follows: SI = log10(γCa2+[Ca2+] × γSO42−[SO42−] / Ksp), where γ is the activity coefficient, and [Ca2+] and [SO42−] are the calcium and sulphate concentrations in mol/L, respectively. Supersaturation ratio (Sr) Sr = IAP/Ksp, where IAP and Ksp are the ion activity product and solubility product of the sparingly soluble salt, respectively. For instance, Sr for calcium sulphate can be calculated as follows: Sr = γCa2+[Ca2+] × γSO42−[SO42−] / Ksp, where γ is the activity coefficient, and [Ca2+] and [SO42−] are the calcium and sulphate concentrations in mol/L, respectively. Langelier saturation index (LSI) LSI = pH − pHs, where pH is the measured pH and pHs is the pH at calcium carbonate saturation. LSI is used only for calcium carbonate scaling. On the other hand, SI and Sr are applicable for all compounds. A positive value of SI or LSI indicates that scaling may occur in RO, whereas a negative value implies that scaling will not occur. Similarly, scaling may occur when Sr>1, but not when Sr<1. Scaling control in RO applications There are several methods for preventing scaling in RO applications, including acidification of RO feed, lowering RO system recovery, and antiscalant addition. Acidification of RO feedwater was one of the first methods for tackling calcium carbonate scaling in RO processes.
However, due to the risks associated with the use of acid, this method is becoming less common. Furthermore, acidification may not be effective for all types of scales; for example, it is very effective in preventing calcium carbonate scaling but not calcium sulphate scaling. Another method of preventing scaling is to operate RO at low recovery (the ratio of permeate water to the feedwater). In this approach, the recovery of the RO application is reduced to bring the supersaturation level of the concentrate water down to undersaturated conditions. Low recovery also reduces the adverse effect of concentration polarization because there is less solute concentration on the membrane surface, reducing the potential for scale formation. This approach, however, is not very appealing or economical because it results in high specific energy consumption. Furthermore, the large amount of concentrate disposal is a problem. Antiscalant addition to the RO feed is one of the most widely applied scale-control strategies. Antiscalants can be used to increase the recovery of the RO process; they are primarily organic compounds containing sulphonate, phosphonate, or carboxylic acid functional groups. The addition of antiscalants hinders the crystallization process, i.e., the nucleation and/or growth phase of scaling compounds. Antiscalants prevent scale formation by three mechanisms, namely threshold inhibition, crystal modification, and dispersion. Threshold inhibition is when antiscalant molecules adsorb on crystal nuclei and halt their nucleation process, whereas crystal modification and dispersion refer to the ability of antiscalants to stop the growth and/or agglomeration of crystals and particles. For silica scale, antiscalants have an additional function of preventing polymerisation of silica monomers, hence preventing the growth of silica polymers. Various commercial antiscalants are available from suppliers such as Kurita, Avista, and BASF.
In RO applications, antiscalants are chosen based on the composition of the feedwater, and their doses are usually calculated using computer programs created by antiscalant manufacturers. For example, Avista has chemical dosing software called AdvisorCI™, which is used to compute accurate dosing of chemicals in RO systems. References Water treatment Fouling Membrane technology
Membrane scaling
[ "Chemistry", "Materials_science", "Engineering", "Environmental_science" ]
1,274
[ "Separation processes", "Water treatment", "Water pollution", "Membrane technology", "Environmental engineering", "Water technology", "Materials degradation", "Fouling" ]
70,098,161
https://en.wikipedia.org/wiki/V1005%20Orionis
V1005 Orionis is a young flare star in the equatorial constellation of Orion. It has the identifier GJ 182 in the Gliese–Jahreiß catalogue; V1005 Ori is its variable star designation. This star is too faint to be visible to the naked eye, having a mean apparent visual magnitude of 10.1. It is located at a distance of 79.6 light years from the Sun and is drifting further away with a radial velocity of 19.2 km/s. The star is a possible member of the IC 2391 supercluster. Flare activity was first reported for this star by N. I. Shakhovskaya in 1974. B. W. Bopp found anomalously strong lithium lines in the spectrum of GJ 182, a rarity for stars of this class and a possible indicator of a very young star. Together with F. Espenak, in 1977 Bopp demonstrated the star showed periodic variations similar to BY Draconis. In 1984, Byrne and associates found a preliminary rotation period of 4.55 days and showed the star had a normal flare rate. The stellar classification of V1005 Ori is M0Ve, indicating this is an M-type main-sequence star (a "red dwarf") with emission lines (e) in its spectrum. It is classified as a BY Draconis and UV Ceti variable, which means it is a magnetically active star that exhibits rotational modulation of star spots and undergoes sudden increases in brightness from flares. Because of this activity, the star displays a low level of X-ray emission. The surface magnetic field strength is and the magnetic field has multiple poles. It shows a possible activity cycle with a period of 38 years and an amplitude of 0.13 in magnitude. This star is an estimated 25 million years old and is currently about a half magnitude above the main sequence. However, the high lithium content suggests it may be as young as 10–15 million years, as this element is typically expected to be depleted after 20 million years. 
It is spinning with a projected rotational velocity of ~9 km/s, and a rotation period of 4.4 days suggests it is being viewed from close to the equatorial plane. The star has less mass, a smaller radius, and a lower luminosity compared to the Sun. V1005 Ori is surrounded by a circumstellar disk of dust that indicates planetary formation is under way. This disk has a radius of , a mean temperature of , and a dust mass equal to 3.35 times the mass of the Moon. A candidate sub-stellar companion was identified in 2001, but this was determined to be a background object. References Further reading M-type main-sequence stars BY Draconis variables Flare stars Circumstellar disks Orion (constellation) 0182 023200 Orionis, V1005
V1005 Orionis
[ "Astronomy" ]
593
[ "Constellations", "Orion (constellation)" ]
70,098,410
https://en.wikipedia.org/wiki/Agrafka%20Creative%20Workshop
The Agrafka Creative Workshop is a design studio founded by Ukrainian artists Romana Romanyshyn and Andriy Lesiv. The studio is specialized in graphics, painting, and design. It has won international awards in the field of book illustration, including the Biennial International Award for Illustration and the 2018 Bologna Ragazzi Award. History Romanyshyn and Lesiv began working in book illustration while they were still students at the Lviv State College of Decorative and Applied Arts from 1999 to 2003. After they graduated, they received a proposal from the Lviv-based publishing house Літопис ("Chronicle") to create cover art for the novel Naive by Erlend Loe. The duo continued to design for Chronicle, producing general designs for two poetry collections: "Withered Leaves" by Ivan Franko in 2006 and "Three Rings" by Bohdan Ihor Antonych in 2008. Their work on "Withered Leaves" was especially important, as it represented their first complete work which encompassed design, layout, and graphics. Romanyshyn and Lesiv continued their studies in book design during an internship in Krakow in 2010 as part of the Polish Minister of Culture's "Gaude Polonia" scholarship. During this period they worked on the last piece by Polish Nobel Prize laureate Wislawa Szymborska, a collection of poems called "Może To Wszystko". The collection was published by BoSz and received praise from the author. In 2011, Agrafka undertook its first children's book project. Produced alongside the Bogdan Textbook publishing house, the book, a Ukrainian folk tale, was recognized as the best book at the Lviv International Children's Festival, won the Grand Prix Children's Book Prize, and was recognized in the 2012 edition of White Ravens, an international catalog of children's books. Working again with Bogdan Textbook, Agrafka illustrated another Ukrainian folk tale called "Turnip" in 2012. The book won the Lion's Children's Book Award for Best Art and was included in White Ravens in 2013. 
In the following years, Agrafka cooperated closely with the Old Lion Publishing House. The publishing house put out four books illustrated by Agrafka: "Antomies" and "Stars and Poppies" in 2014, and "My Home and Things in It" and "The War that Changed Rondo" in 2015. In 2015, Old Lion, Agrafka, and authors O. Dumanska and G. Tereshchuk produced the first book in a series of alphabet encyclopedias which won Best Book of the Publishers' Forum. The next year, they designed a supplement to the series which also won Best Book of the Publishers' Forum and earned the All-Ukrainian title of Book of the Year 2016. Awards 2006: 13th Publishers' Forum in Lviv Prize for "Withered Leaves" 2009: International Renaissance Foundation Awards for design of world humanitarian classics 2011: Children's Book Prize Grand Prix for the Ukrainian folk tale "Glove" 2011: 18th Publishers' Forum in Lviv Book of the Year for the Ukrainian folk tale "Glove" 2011: 23rd International Biennial Award for Illustration for the Ukrainian folk tale "Glove" 2013: Included in the White Ravens annual catalog for "Turnip" 2014: International Children's Book Fair Bologna Award (Opera Prima category) for "Stars and Poppies" 2015: International Children's Book Fair Bologna Special Award (New Horizons category) for "The War That Changed Rondo" 2015: Publishers' Forum Best Book for the alphabet encyclopedia "Sheptytsky from A to Z" 2016: Frankfurt Book Fair Global Illustration Award (Cover Illustration category) for "George's Secret Key to the Universe" 2018: Bologna Ragazzi Award (children's non-fiction category) for the original books "Loud, Quiet, Whispers" and "I See So" 2018: Book Arsenal Grand Prix (Best Book Design category) for "I See So" 2019: South Korea Nami Concours Winner for "Farewell" 2019: 38th Andersen Prize Winner for "Loud, Quiet, Whispers" 2020: ED-Awards Gold Medal (Book and Publishing Illustration category) for "Optics of God" References Design companies Studios
Agrafka Creative Workshop
[ "Engineering" ]
865
[ "Design", "Engineering companies", "Design companies" ]
70,099,461
https://en.wikipedia.org/wiki/Tetraacetylethane
Tetraacetylethane is the organic compound with the nominal formula [CH(C(O)CH3)2]2. It is a white solid that has attracted interest as a precursor to heterocycles and metal complexes. It is prepared by oxidation of sodium acetylacetonate: I2 + 2 NaCH(C(O)CH3)2 → [CH(C(O)CH3)2]2 + 2 NaI Reminiscent of the case of acetylacetone, tetraacetylethane exists as the enol, as established by X-ray crystallography. The two C3O2H rings are twisted with a dihedral angle near 90°. Many metal complexes have been prepared from the conjugate base of this ligand. One example is diruthenium(III) derivative [Ru(acac)2]2[C(C(O)CH3)2]2, which is closely related to ruthenium(III) acetylacetonate. References Diketones Chelating agents Ligands 3-Hydroxypropenals
Tetraacetylethane
[ "Chemistry" ]
231
[ "Chelating agents", "Ligands", "Coordination chemistry", "Process chemicals" ]
70,099,871
https://en.wikipedia.org/wiki/Sodium%20acetylacetonate
Sodium acetylacetonate is an organic compound with the nominal formula Na[CH(C(O)CH3)2]. This white, water-soluble solid is the conjugate base of acetylacetone. Preparation The compound is prepared by deprotonation of acetylacetone: NaOH + CH2(C(O)CH3)2 → NaCH(C(O)CH3)2 + H2O The anhydrous compound is produced by deprotonation with sodium hydride in an aprotic solvent such as THF: NaH + CH2(C(O)CH3)2 → NaCH(C(O)CH3)2 + H2 Reactions Oxidation of the salt gives tetraacetylethane. With metal salts, it reacts to give metal acetylacetonate complexes. Alkylation of sodium acetylacetonate can result in both O-alkylation and C-alkylation. The former gives the enol ether and the latter gives 3-substituted derivative of acetylacetone. Structure The structure of the monohydrate has been established by X-ray crystallography. The sodium cation is bonded to the enolate oxygen centers. References Diketones Chelating agents Ligands Acetylacetonate complexes 3-Hydroxypropenals
Sodium acetylacetonate
[ "Chemistry" ]
282
[ "Chelating agents", "Ligands", "Coordination chemistry", "Process chemicals" ]
70,101,493
https://en.wikipedia.org/wiki/Federation%20of%20Textile%2C%20Leather%2C%20Chemical%20and%20Allied%20Industries
The Federation of Textile, Leather, Chemical and Allied Industries (, FITEQA) was a trade union representing workers in manufacturing industries in Spain. The union was founded in 1994, when the National Federation of Textiles and Leather merged with the National Federation of Chemicals. Like both its predecessors, the union affiliated to the Workers' Commissions, and by the end of the year, it had 51,053 members. In 2014, it merged with the Federation of Industry, to form a new Federation of Industry. References Chemical industry trade unions Textile and clothing trade unions Trade unions established in 1994 Trade unions disestablished in 2014 Trade unions in Spain
Federation of Textile, Leather, Chemical and Allied Industries
[ "Chemistry" ]
129
[ "Chemical industry trade unions" ]
70,102,727
https://en.wikipedia.org/wiki/Syntrophales
The Syntrophales are an order of Gram-negative Thermodesulfobacteriota. It is the only order in the monotypic class Syntrophia. Acetate is converted by Syntrophales into acetyl-CoA, which can be used as a source of carbon and energy. Because genes involved in fermentation are missing, this acetyl-CoA might instead be channeled into gluconeogenesis. See also List of bacterial orders List of bacteria genera References 2. Langwig, M.V., De Anda, V., Dombrowski, N. et al. Large-scale protein level comparison of Deltaproteobacteria reveals cohesive metabolic groups. ISME J 16, 307–320 (2022). https://doi.org/10.1038/s41396-021-01057-y Thermodesulfobacteriota Bacteria orders
Syntrophales
[ "Biology" ]
195
[ "Bacteria stubs", "Bacteria" ]
70,102,926
https://en.wikipedia.org/wiki/Desulfonatronovibrionaceae
Desulfonatronovibrionaceae is a family of bacteria belonging to the phylum Thermodesulfobacteriota. See also List of bacterial orders List of bacteria genera References Desulfovibrionales Bacteria families
Desulfonatronovibrionaceae
[ "Biology" ]
49
[ "Bacteria stubs", "Bacteria" ]
70,103,350
https://en.wikipedia.org/wiki/Geobacterales
The Geobacterales are an order within the Thermodesulfobacteriota. See also List of bacterial orders List of bacteria genera References Thermodesulfobacteriota Bacteria orders
Geobacterales
[ "Biology" ]
42
[ "Bacteria stubs", "Bacteria" ]
70,103,474
https://en.wikipedia.org/wiki/Additive%20effect
Additive effect in pharmacology describes the situation when the combined effects of two drugs equal the sum of the effects of the two drugs acting independently. The concept of additive effect is derived from the concept of synergy. It was introduced by scientists in the pharmacology and biochemistry fields over the past century, in the course of understanding synergistic interactions between drugs and chemicals. Additive effect often occurs when two similar drugs are taken together to achieve the same degree of therapeutic effect while reducing the specific adverse effect of one particular drug. For example, aspirin, paracetamol, and caffeine are formulated together to treat pain caused by tension headaches and migraine. Additive effect can be used to detect synergy, as it can be considered the baseline effect in methods determining whether drugs have a synergistic effect. A synergistic effect is similar to an additive effect but has a combined effect greater than the additive one; it can produce an effect of 2+2 > 4 when two drugs are used together. Additive effects can also be found in a majority of combination therapies, although synergistic effects are more common. If the combination of two drugs in combination therapy has an effect lower than the sum of the effects of the two drugs acting independently (known as an antagonistic effect), the drugs will seldom be prescribed together in the same therapy. Drug or chemical combinations with additive effects can cause adverse effects. For example, co-administration of non-steroidal anti-inflammatory drugs (NSAIDs) and glucocorticoids increases the risk of gastric bleeding. History The concept of additive effect is derived from the concept of drug synergy. Thus, the origin of additive effect dates back to the early twentieth century when the search for synergy started. During the search for synergy, the models of Loewe additivity and Bliss independence were proposed.
These models are capable of measuring the effects of drug combinations. Hence, Loewe additivity and Bliss independence were developed to determine whether an effect of a drug combination is synergistic or antagonistic. During the construction of these models, the concept of additive effect was introduced as the baseline for the determination of synergy and antagonism. Types of Additive Effect Additive effects can occur between drugs with equivalent or overlapping actions, or with independent actions. Equivalent or overlapping actions Many drugs in the same class exert additive effects because they have a similar therapeutic mechanism of action. For example, calcium carbonate, magnesium, and aluminium salts are all antacids that use a basic anion to neutralize the acid in the stomach. The antacids have no interaction between them, so they would be considered to have an additive effect when taken together. Drugs that are in the same class, but do not have the same target, may also act additively by interacting with different targets in the same pathway. For example, propofol and sevoflurane can both produce anesthetic effects. Propofol can potentiate the activity of the GABAA receptor and act on its α, β and γ subunits, while sevoflurane enhances the response of the GABAA receptor to endogenous GABA by binding to the α1-subunit. Using the Dixon up-and-down method, a trial showed that the interaction between propofol and sevoflurane in producing anesthesia is additive. Independent actions Two drugs having different targets in unrelated pathways that ultimately produce the desired therapeutic result are considered to have additive effects with independent actions. For example, artemisinin and curcumin both exert antimalarial effects. Artemisinin works by being metabolized in the body into active metabolites. These metabolites then create reactive oxygen species (ROS) that damage the parasites and kill them. 
The mechanism of action of curcumin remains largely unknown, but its antiparasitic effect is believed to be associated with the potentiation of innate and adaptive immunological responses. Artemisinin and curcumin thus each contribute to the death of the parasites via different mechanisms, and their combined effect has been shown to be additive by fractional inhibitory concentrations. Drugs acting at different sites on the same target that produce additive effects are also considered independent actions. For example, doxorubicin and trabectedin can both produce an anticancer effect. Doxorubicin is a DNA intercalator that preferentially binds to AT regions, while trabectedin forms guanine adducts in DNA to disrupt the DNA repair system. A recent study has shown that doxorubicin and trabectedin do not hinder each other and can produce an additive anticancer effect. Common misconceptions The concept of additive effect is analogous to the concept of simple addition in mathematics. However, the additive effect is not simply the arithmetic summation of the effects of two (or more) drugs in most cases. For an additive inhibition effect, drug A and drug B could each inhibit 20% individually, but the additive effect is not 40% (it is 36%). The effect cannot be a simple arithmetic sum: two drugs that each inhibit 60% cannot theoretically exert an inhibitory effect of 120%. With a 60% inhibitory effect each, the remaining function would be (1 − 60%) × (1 − 60%) = 16%, meaning the additive inhibitory effect would be 84%. Since additive effects are commonly encountered in clinical practice, avoiding these misconceptions is crucial to understanding their clinical significance. Clinical Significance Detection of synergy One of the typical uses of additive effect is to detect synergy. Additive effect can be considered the baseline effect in methods of determining the presence of a synergistic effect between two or more drugs. 
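The fractional calculation above is the Bliss independence formula. A minimal sketch (the function name is illustrative):

```python
def bliss_additive_inhibition(f_a, f_b):
    """Expected additive inhibition of two independently acting drugs.
    The function remaining after both drugs act is the product of the
    fractions each drug leaves behind, so the combined (additive) inhibition
    is 1 - (1 - f_a) * (1 - f_b), not f_a + f_b."""
    return 1.0 - (1.0 - f_a) * (1.0 - f_b)

# Two drugs that each inhibit 60% leave 0.4 * 0.4 = 16% of function,
# giving an additive inhibition of about 84% -- not an impossible 120%.
# Likewise, two 20% inhibitors give about 36%, not 40%.
```

A measured combination effect above this baseline would be scored synergistic; below it, antagonistic.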
A synergistic effect differs from an additive effect only in that the combined effect is greater than additive; in brief, synergy can produce an effect of 2 + 2 > 4 when drugs are used in combination. The combination of the angiotensin II receptor antagonist (ARB) candesartan cilexetil and the angiotensin-converting enzyme inhibitor (ACEI) ramipril demonstrates a synergistic effect in reducing systolic blood pressure. Detection of antagonism The other use of additive effect is to detect antagonism. Similarly, additive effect can be considered the baseline effect in methods of determining the presence of an antagonistic effect between drugs. Pharmacists can confirm the presence of antagonism when the combined effect of the drugs is less than the additive effect. The combination of acetylsalicylic acid and ibuprofen demonstrates an antagonistic effect in relieving pain and inflammation. Combination therapy The most common clinical usage of additive effect in pharmacology is combination therapy. Two or more therapeutic agents are used in combination therapy to treat a single disease. Different drugs in the same combination therapy act on different biological and biochemical pathways in the body to produce an additive effect. An example of combination therapy demonstrating additive effect is the use of β-2 adrenergic receptor agonists together with inhaled corticosteroids. This is a treatment for two commonly seen pulmonary diseases, asthma and chronic obstructive pulmonary disease. β-2 adrenergic receptor agonists act as bronchodilators, inducing bronchodilation to relieve bronchoconstriction; inhaled corticosteroids act as anti-inflammatory drugs to decrease the inflammatory response. The two drugs act on different sites in the body. The corticosteroids also reverse and restore the function and number of β-2 adrenergic receptors in patients' lungs in vivo. 
Meanwhile, the combined activity of the two drugs resolves the problem of reduced sensitivity towards inhaled corticosteroids in some patients with chronic obstructive pulmonary disease. A common product exemplifying this combination is Seretide®, containing a long-acting β-2 adrenergic receptor agonist, salmeterol, and a corticosteroid, fluticasone. Additive interaction can also be found in combination therapy for treating hypertension. The combination of angiotensin II receptor blockers (ARBs) and calcium channel blockers (CCBs) is one of the suggested antihypertensive therapies. ARBs inhibit the action of angiotensin II to decrease fluid retention and blood volume to decrease blood pressure, reduce vasoconstriction to decrease peripheral vascular resistance, and prevent vascular fibrosis to decrease vascular stiffness. CCBs are vasodilators inhibiting L-type voltage-operated calcium channels in the blood vessels to alleviate vasoconstriction, resulting in a decrease in peripheral vascular resistance. The two types of drugs act on different pathways to produce an additive effect on lowering blood pressure without any increase in adverse effects. This combination, with the ARB valsartan and the CCB amlodipine, is a common treatment in high-risk hypertensive patients, especially the elderly. The treatment for another common disease, primary hypercholesterolemia, also demonstrates an additive effect. Plant sterol-ester margarine and a common type of antihyperlipidaemic drug, cerivastatin, have an additive effect in reducing LDL cholesterol, without significant interaction between the two drugs. Another drug combination with an additive effect for hypercholesterolemia is niacin (vitamin B3) and simvastatin. This drug combination is also known commercially as Simcor. Niacin can reduce the secretion of LDL cholesterol and very-low-density lipoprotein cholesterol (VLDL cholesterol). 
On the other hand, simvastatin can reduce the synthesis of LDL cholesterol and triglycerides, and increase the level of high-density lipoprotein cholesterol (HDL cholesterol). Together, niacin and simvastatin reduce the level of LDL cholesterol and increase the level of HDL cholesterol, thereby managing hypercholesterolemia effectively. Optimal dosing Additive interactions can be found in the treatment of the majority of common diseases. The combination of drugs with different effects has the benefit of using each drug at its optimal dose. This decreases the possibility of using a higher dose of a single medication if the previous dose is ineffective in treating the disease or relieving symptoms. The significance of using drugs at their optimal doses is lowering the occurrence of intolerable side effects, adverse reactions, and possible drug toxicity in the patient's body. This increases the safe use of drugs and increases patient compliance with the therapy. One example is the combined use of a calcium channel blocker and a beta-blocker. Both drug classes can be used to treat stable angina, and both can decrease the frequency of angina attacks, aiming to relieve the symptoms of angina. Controlled, double-blind clinical trials and studies involving patients with preserved left ventricular function have demonstrated that the combination of a calcium channel blocker and a beta-blocker has additive cardiodepressant effects compared with either drug class alone. The combination therapy is used when a single medication fails to produce a therapeutic effect. Choosing the optimal doses of the two medications in the combination therapy prevents the use of an extremely high dose of a single medication alone, which could lead to adverse effects. Adverse Effects Drug combinations with additive effects have the potential to cause adverse effects. Adverse effects induced by drug combinations are not uncommon. 
The risk of adverse effects is increased when the drugs in a combination with additive effect share the same adverse effect. Thus, some drug combinations with additive effects are avoided. Below are commonly seen drug combinations with additive effects causing adverse effects. ACEI and potassium-sparing diuretics An example demonstrating how a drug combination with additive effect can cause adverse effects is the co-administration of ACEIs and potassium-sparing diuretics. Despite having different mechanisms of action, both drugs reduce potassium excretion from the body. Hence, both ACEIs and potassium-sparing diuretics have the side effect of hyperkalemia. When the two drugs are used together, the risk of hyperkalemia is doubled. Since hyperkalemia has the potential to cause arrhythmia and metabolic acidosis, the combination of ACEIs and potassium-sparing diuretics is avoided. NSAIDs and glucocorticoids Another example is the combination of non-steroidal anti-inflammatory drugs (NSAIDs) and glucocorticoids. Although NSAIDs and glucocorticoids have different mechanisms of action, both drugs diminish the protective effect of the gastric mucosa against gastric acid. As a result, the concomitant use of NSAIDs and glucocorticoids increases the risk of gastric bleeding and worsens peptic ulcer disease. Consequently, the combination of NSAIDs and glucocorticoids is not recommended. See also Antibiotic synergy References Pharmacology
Additive effect
[ "Chemistry" ]
2,729
[ "Pharmacology", "Medicinal chemistry" ]
70,105,084
https://en.wikipedia.org/wiki/Coral%20Barbas
María del Coral Barbas Arribas (or Arriba) is a professor at the Universidad CEU San Pablo in Madrid, Spain who is known for her research on metabolomics and integration of chemical data. Education and career Barbas has a Ph.D. from Complutense University of Madrid. From 2005 until 2006 she was a Marie Curie fellow at King's College London. As of 2022 she is a professor of analytical chemistry at the Universidad CEU San Pablo and is the president of the Madrid section of the Spanish Royal Society of Chemistry. Research Barbas is known for her research on metabolomics, a field she was first introduced to while she was a Marie Curie fellow. Her early research centered on the analysis of vitamins and development of chemical methods to analyze compounds such as caffeine. Her subsequent research has developed methods to analyze organic compounds in pharmaceutical drugs and foods, and defined biomarkers for diseases such as leukemia and Parkinson's disease. She is also known for defining quality assurance protocols for metabolomics data analysis and establishing workflows to analyze metabolomics data. Selected publications Awards and honors The Analytical Scientist named Barbas to their 2016 Power List in recognition of her contributions to chemistry. In 2017, she was honored by Acta Sanitaria for her chemical research linking diabetes and obesity. In 2018, she received the International Award of the Belgian Society of Pharmaceutical Sciences. References External links Analytical chemists Women chemists Living people Spanish scientists Year of birth missing (living people)
Coral Barbas
[ "Chemistry" ]
312
[ "Analytical chemists" ]
70,105,459
https://en.wikipedia.org/wiki/Cosmic%20Dust%20Analyzer
The Cosmic Dust Analyzer (CDA) on the Cassini mission is a large-area (0.1 m2 total sensitive area) multi-sensor dust instrument that includes a chemical dust analyzer (time-of-flight mass spectrometer), a highly reliable impact ionization detector, and two high rate polarized polyvinylidene fluoride (PVDF) detectors. During 6 years en route to Saturn the CDA analysed the interplanetary dust cloud, the stream of interstellar dust, and Jupiter dust streams. During 13 years in orbit around Saturn the CDA studied the E ring, dust in the plumes of Enceladus, and dust in Saturn's environment. Overview The Cosmic Dust Analyzer, CDA was the seventh dust instrument from the Max Planck Institute for Nuclear Physics (MPIK), Heidelberg (Germany) following the dust detectors on the HEOS 2 satellite and dust detectors on the Galileo and Ulysses space probes and the more complex dust analyzers on the Helios spacecraft, the Giotto and VeGa spacecraft to Halley's Comet. The new dust analyzer system was developed by a team of scientists led by Eberhard Grün and engineers led by Dietmar Linkert to analyze dust in the Saturn system on board the Cassini spacecraft. This instrument employs a larger sensitive area (0.1 m2) impact detector, a smaller time-of-flight mass spectrometer chemical analyzer and two high rate polarized polyvinylidene fluoride (PVDF) detectors, in order to cope with the high fluxes during crossings of the E ring. The Max Planck Institute for Nuclear Physics in Heidelberg was responsible for the overall instrument development and test. Major contributions were provided by the DLR in Berlin-Adlershof (mechanics, cleanliness, thermal design, tests), Tony McDonnell from University of Canterbury (chemical analyzer, UK), Rutherford Appleton Laboratory (spectrometer electronics, UK) and G. Pahl (mechanical design, Munich, Ger). The PVDF detectors were provided by Tony Tuzzolino from the University of Chicago. 
The proposing Principal Investigator for CDA was Eberhard Grün. In 1990 the PI-ship was handed over to Ralf Srama from the Max Planck Institute for Nuclear Physics, who is now at the University of Stuttgart, Germany. Ralf Srama got his degree "Dr.-Ing." from the Technical University of Munich for his thesis (10 Nov. 2000, in German), "From the Cosmic-Dust-Analyzer to a model describing scientific spacecraft". The main sensor of CDA is an impact ionization detector (IID) like the Galileo and Ulysses Dust Detectors. In the center of the hemispherical target is the smaller (0.016 m2) Chemical Analyzer Target, CAT, at +1000 V electric potential. Three millimeters in front of the target is a grid at 0 V potential. Dust impacts onto CAT generate a plasma that is separated by the high electric field. Ions obtain an energy of ~1000 eV and are focused towards the center collector. Ions are partly collected by the semi-transparent grid at 230 millimeters distance and the center electron multiplier. The waveforms of the charge signals are measured, stored and transmitted to ground. The multiplier signal represents a time-of-flight mass spectrum of the released ions. Two of the four grids at the entrance of the analyzer pick up the electric charge of the dust particle. With these capabilities CDA can be considered a prototype dust telescope. CDA measured the micrometeoroid environment for 18 years, from 1999 until the last active seconds of Cassini in 2017, without major degradation. The instrument's fly-away cover had already been released on day 317 of 1997. Science planning and operations were managed by the Max Planck Institute for Nuclear Physics and later by the University of Stuttgart. The Cassini spacecraft was a three-axis stabilized spacecraft with the antenna occasionally pointing to Earth in order to download data and receive operational commands. In the meantime, Cassini's attitude was controlled by observations requested from one or more of the 12 instruments onboard. 
In order to obtain some more control of its pointing attitude, CDA employed a turntable between the spacecraft and the dust analyzer. Major discoveries and observations During interplanetary cruise From launch in 1997 until arrival at Saturn in 2004, Cassini–Huygens cruised interplanetary space from 0.7 to 10 AU. During this time there were long periods useful for observations of interplanetary and interstellar dust in the inner planetary system. Highlights were the detection of electrical charges of dust in interplanetary space and the determination of the composition of interplanetary dust particles. No measurements were possible during the crossing of the asteroid belt. During the Jupiter flyby in 2000 there was a chance to analyze nanometer-sized dust stream particles and demonstrate their compositional relation to Jupiter's moon Io, where they originate. On approach to Saturn in 2004, similar streams of submicron grains with speeds on the order of 100 km/s were detected. These particles originate mostly from the outer parts of the dense rings. They were ejected by Saturn's magnetic field until they became entrained in the solar wind magnetic field. The Saturn stream particles consist of silicate impurities of the primary icy ring particles. In Saturn orbit During Cassini's 292 orbits around Saturn (2004 to 2017) CDA measured several million dust impacts that characterize dust mostly in Saturn's E ring. In this process CDA found that the E ring extends about twice as far from Saturn as optically observed. Measurements of variable dust charges, which depend on the magnetospheric plasma conditions, allowed the definition of a dynamical dust model of Saturn's E ring describing the observed properties. In 2005, during Cassini's close flyby of Enceladus within 175 km of the surface, CDA together with two other Cassini instruments discovered active ice geysers located at the south pole of Saturn's moon Enceladus. 
Later, detailed compositional analyses of the water ice grains in the vicinity of Enceladus led to the discovery of large reservoirs of liquid water oceans below the icy crust of Enceladus. During the Cassini spacecraft's Grand Finale mission in 2017, it performed 22 traversals of the region between Saturn and its innermost D ring. Along this path, CDA detected dust from Saturn's dense rings. Most analyzed grains were a few tens of nanometers in size and had silicate and water-ice composition. For most of Cassini's orbital tour CDA observed a faint signature of interstellar dust in the largely dominant foreground of E ring water-ice particles. Mass spectra of the interstellar grains suggest the presence of magnesium-rich grains of silicate and oxide composition, some with iron inclusions. Major discoveries until 2011 were summarized in a dedicated paper. See also Galileo and Ulysses Dust Detectors References Spacecraft instruments Scientific instruments Space science experiments Cassini–Huygens
Cosmic Dust Analyzer
[ "Technology", "Engineering" ]
1,449
[ "Scientific instruments", "Measuring instruments" ]
70,105,931
https://en.wikipedia.org/wiki/Cole%E2%80%93Davidson%20equation
The Cole-Davidson equation is a model used to describe dielectric relaxation in glass-forming liquids. The equation for the complex permittivity is

$\hat{\varepsilon}(\omega) = \varepsilon_\infty + \dfrac{\varepsilon_s - \varepsilon_\infty}{(1 + i\omega\tau)^{\beta}},$

where $\varepsilon_\infty$ is the permittivity at the high frequency limit, $\varepsilon_s$ is the static, low frequency permittivity, and $\tau$ is the characteristic relaxation time of the medium. The exponent $\beta$ represents the exponent of the decay of the high frequency wing of the imaginary part, $\varepsilon''(\omega) \propto \omega^{-\beta}$. The Cole–Davidson equation is a generalization of the Debye relaxation keeping the initial increase of the low frequency wing of the imaginary part, $\varepsilon''(\omega) \propto \omega$. Because this is also a characteristic feature of the Fourier transform of the stretched exponential function it has been considered as an approximation of the latter, although nowadays an approximation by the Havriliak-Negami function or exact numerical calculation may be preferred. Because the slopes of the peak in $\varepsilon''(\omega)$ in double-logarithmic representation are different it is considered an asymmetric generalization in contrast to the Cole-Cole equation. The Cole–Davidson equation is the special case of the Havriliak-Negami relaxation with $\alpha = 1$. The real and imaginary parts are

$\varepsilon'(\omega) = \varepsilon_\infty + (\varepsilon_s - \varepsilon_\infty)\,\cos(\beta\varphi)\,\cos^{\beta}\varphi$

and

$\varepsilon''(\omega) = (\varepsilon_s - \varepsilon_\infty)\,\sin(\beta\varphi)\,\cos^{\beta}\varphi,$

with $\varphi = \arctan(\omega\tau)$. See also Debye relaxation Cole-Cole relaxation Havriliak–Negami relaxation Curie–von Schweidler law References Equations Glass Liquids Electric and magnetic fields in matter
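As a numerical illustration (a sketch only; the parameter values are arbitrary), the complex permittivity can be evaluated directly and checked against its limits:

```python
def cole_davidson(omega, eps_inf, eps_s, tau, beta):
    """Cole-Davidson complex permittivity:
    eps(omega) = eps_inf + (eps_s - eps_inf) / (1 + i*omega*tau)**beta.
    Python's complex exponentiation handles the fractional power directly."""
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau) ** beta

# The static limit (omega = 0) recovers eps_s exactly:
static = cole_davidson(0.0, eps_inf=2.0, eps_s=10.0, tau=1e-9, beta=0.5)  # -> (10+0j)
# At high frequency the real part approaches eps_inf, and the imaginary
# part of eps (in the physics convention eps = eps' - i*eps'') is negative.
```

Sweeping `omega` and plotting the real and imaginary parts on double-logarithmic axes would reproduce the asymmetric loss peak described above.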
Cole–Davidson equation
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
265
[ "Glass", "Unsolved problems in physics", "Phases of matter", "Electric and magnetic fields in matter", "Mathematical objects", "Homogeneous chemical mixtures", "Equations", "Materials science", "Condensed matter physics", "Amorphous solids", "Matter", "Liquids" ]
70,106,688
https://en.wikipedia.org/wiki/HD%2042618
HD 42618 is a well-studied star with an exoplanetary companion in the equatorial constellation of Orion. With an apparent visual magnitude of 6.85 it is too faint to be readily visible to the naked eye. This system is located at a distance of 79.6 light years from the Sun based on parallax measurements. It has a relatively high proper motion, traversing the celestial sphere at an angular rate of per year. HD 42618 is drifting closer with a radial velocity of −53.5 km/s and is predicted to come as near as in around 297,000 years. The stellar classification of HD 42618 is G4V, which shows it to be an ordinary G-type main-sequence star. It is considered a close solar analog, which means the physical properties of the star are particularly similar to those of the Sun. Seismic modelling indicates the star is older and more evolved than the Sun, with an age of about 5.5 billion years. It is spinning with a low projected rotational velocity of 1.8 km/s, with the rotation rate being consistent with the star's low activity level. The star has 92% of the mass of the Sun and 94% of the Sun's radius. The surface metallicity is lower than in the Sun, with the abundance patterns being consistent with a solar-type star. HD 42618 is radiating 92% of the luminosity of the Sun from its photosphere at an effective temperature of 5,765 K. In 2016, the discovery of a candidate exoplanet companion orbiting HD 42618 was announced. Designated HD 42618 b, it was found using the radial velocity method, which showed a periodicity of 149.6 days. The orbital elements have the planet orbiting at a distance of from the host star with an orbital eccentricity (ovalness) of 0.2 and a Neptune-like mass. A second signal with a period of 388 days was detected, but this is unconfirmed and may be false. A 4,850 day signal is likely the result of the star's magnetic activity cycle. 
References Further reading G-type main-sequence stars Solar analogs Planetary systems with one confirmed planet Orion (constellation) Durchmusterung objects 042618 029432
HD 42618
[ "Astronomy" ]
472
[ "Constellations", "Orion (constellation)" ]
70,107,821
https://en.wikipedia.org/wiki/List%20of%20Indian%20astronomical%20treatises
Ancient India was one of the most important seats of astronomical study. There were many scholars, philosophers and astronomers in ancient India who wrote treatises on experimental and mathematical astronomy. Most ancient Indian astronomical treatises were composed in the Sanskrit language. List of the Astronomical Treatises Vedanga Jyotisha Aryabhatiya Brahmasphuta-siddhanta Pañcasiddhāntikā Mahabhaskariya Laghubhaskariya Aryabhatiyabhashya Śisyadhīvrddhida Siddhāntatilaka Siddhāntaśiromani Karanakutūhala Siddhāntaśekhara Yantra-rāja Jyotirmimamsa Sphutanirnaya Karanottama Uparāgakriyākrama Śiṣyadhīvṛddhidatantra Brihat-Samhita Grahana-Maala Lilavati Shatapatha Brahmana Surya Siddhanta Makarandasarini Mahadevi (astronomy book) Rājamṛgāṅka (astronomy book) Jagadbhūṣaṇa Grahacaranibandhana and the lost text Mahamarganibandhana Tantrasamgraha Karanapaddhati Venvaroha References Astronomy Astronomy books Indian Astronomy in India Astronomy data and publications
List of Indian astronomical treatises
[ "Astronomy" ]
260
[ "Astronomy-related lists", "Astronomy books", "Works about astronomy", "nan", "Astronomy data and publications" ]
70,109,158
https://en.wikipedia.org/wiki/Tin-Lun%20Ho
Tin-Lun "Jason" Ho (born August 12, 1951) is a Chinese-American theoretical physicist, specializing in condensed matter theory, quantum gases, and Bose-Einstein condensates. He is known for the Mermin-Ho relation. Education and career Ho graduated in 1972 with a B.Sc. from Chung Chi College, Chinese University of Hong Kong. He was a graduate student for the academic year 1972–1973 at the University of Minnesota and in 1973 transferred to Cornell University. There he graduated in 1977 with a Ph.D. under the supervision of N. David Mermin. Ho was a postdoc from 1977 to 1980 under the supervision of Christopher J. Pethick at the University of Illinois, from 1978 to 1980 at NORDITA, and from 1980 to 1982 at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. At Ohio State University (OSU), he was an assistant professor from 1983 to 1989 and an associate professor from 1989 to 1996, when he became a full professor. Since 2002 he has been a Distinguished Professor of Mathematical and Physical Sciences at OSU. From 2007 to 2014 he was a member of the editorial board of the Journal of Low Temperature Physics. Ho was an Alfred P. Sloan Foundation Fellow for the academic year 1984–1985 and a Fellow of the John Simon Guggenheim Memorial Foundation for the academic year 1999–2000. In 2008 he received the Lars Onsager Prize for "his contributions to quantum liquids and dilute quantum gases, both multi-component and rapidly rotating, and for his leadership in unifying condensed matter and atomic physics research in this area." Ho was elected in 1999 a Fellow of the American Physical Society, in 2011 a Fellow of the American Association for the Advancement of Science, and in 2015 a Member of the American Academy of Arts and Sciences. Most recently, he has been working on Bose-Einstein condensates and optical lattices, for which he proposed a cooling mechanism in 2009. 
Selected publications References External links 20th-century Chinese physicists 21st-century Chinese physicists 20th-century American physicists 21st-century American physicists Theoretical physicists Condensed matter physicists Alumni of the University of Hong Kong Cornell University alumni Ohio State University faculty Fellows of the American Physical Society Fellows of the American Academy of Arts and Sciences 1951 births Living people
Tin-Lun Ho
[ "Physics", "Materials_science" ]
492
[ "Theoretical physics", "Condensed matter physicists", "Condensed matter physics", "Theoretical physicists" ]
70,110,126
https://en.wikipedia.org/wiki/HD%2030479
HD 30479 (HR 1531) is a solitary star in the southern circumpolar constellation Mensa. It has an apparent magnitude of 6.04, making it barely visible to the naked eye even under ideal conditions. It is located at a distance of 540 light years but is receding with a heliocentric radial velocity of . HD 30479 has a stellar classification of K2 III, indicating that it is an early K-type giant star and has an angular diameter of (after limb darkening correction). This yields a radius 17.99 times that of the Sun at its estimated distance. At present it has 1.28 times the mass of the Sun and radiates at 116 times the luminosity of the Sun at an effective temperature of 4,390 K from its enlarged photosphere, which gives it an orange glow. HD 30479 is believed to be one of the metal-deficient members of the young disk population with an iron abundance of 71% that of the Sun. Currently, it spins leisurely with a projected rotational velocity less than , common for giants. References K-type giants Mensa (constellation) Durchmusterung objects 030479 021611 1531 Mensae, 13
HD 30479
[ "Astronomy" ]
257
[ "Mensa (constellation)", "Constellations" ]
49,161,809
https://en.wikipedia.org/wiki/Journal%20of%20Guidance%2C%20Control%2C%20and%20Dynamics
The Journal of Guidance, Control, and Dynamics is a monthly peer-reviewed scientific journal published by the American Institute of Aeronautics and Astronautics. It covers the science and technology of the guidance, control, and dynamics of flight. The editor-in-chief is Ping Lu (San Diego State University). It was established in 1978 as the Journal of Guidance and Control, obtaining its current title in 1982. Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2017 impact factor of 2.024. History The journal was published bimonthly until it switched to monthly in 2015. Previous editors were Donald C. Fraser (1978–1992), Kyle T. Alfriend (1992–1996), and George T. Schmidt (1997–2013). References External links Aerospace engineering journals English-language journals Monthly journals Academic journals established in 1978
Journal of Guidance, Control, and Dynamics
[ "Engineering" ]
185
[ "Aerospace engineering journals", "Aerospace engineering" ]
49,162,176
https://en.wikipedia.org/wiki/AGCS%20family
Members of the Alanine or Glycine:Cation Symporter (AGCS) Family (TC# 2.A.25) transport alanine and/or glycine in symport with Na+ and/or H+. Structure and function Known proteins in the AGCS family are between 445 and 550 amino acyl residues in length and possess 8 to 12 putative transmembrane α-helical spanners. Members may possess 11 transmembrane segments (TMSs), as seems to be true for DagA (TC# 2.A.25.1.1) and AgcS (TC# 2.A.25.1.3), although Acp (TC# 2.A.25.1.2) has only 8 TMSs, perhaps the result of truncation. As of early 2016, no 3D crystal structure data were available for these proteins. Members of the AGCS family have been found in bacteria and archaea, such as the extremophile halotolerant cyanobacterium Aphanothece halophytica and the thermophilic bacterium Bacillus PS3. As of 2015, only three members of the family had been functionally characterized. These proteins show limited sequence similarity to members of the APC family (TC# 2.A.3). High-resolution structures of AgcS from Methanococcus maripaludis were obtained using X-ray crystallography and released in 2019; they show structural homology to other members of the Amino acid-Polyamine-Organocation superfamily of transporters. Transport reaction The generalized transport reaction catalyzed by the AGCS family is: alanine or glycine (out) + Na+ or H+ (out) → alanine or glycine (in) + Na+ or H+ (in). Proteins in the AGCS family There are currently 10 proteins belonging to the AGCS family. These proteins and their descriptions can be found in the Transporter Classification Database. References Further reading Protein families Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins
AGCS family
[ "Biology" ]
450
[ "Protein families", "Protein classification", "Membrane proteins" ]
49,162,954
https://en.wikipedia.org/wiki/Quick%20return%20mechanism
A quick return mechanism is an apparatus to produce a reciprocating motion in which the time taken for travel in return stroke is less than in the forward stroke. It is driven by a circular motion source (typically a motor of some sort) and uses a system of links with three turning pairs and a sliding pair. A quick-return mechanism is a subclass of a slider-crank linkage, with an offset crank. Quick return is a common feature of tools in which the action is performed in only one direction of the stroke, such as shapers and powered saws, because it allows less time to be spent on returning the tool to its initial position. History During the early-nineteenth century, cutting methods involved hand tools and cranks, which were often lengthy in duration. Joseph Whitworth changed this by creating the quick return mechanism in the mid-1800s. Using kinematics, he determined that the force and geometry of the rotating joint would affect the force and motion of the connected arm. From an engineering standpoint, the quick return mechanism impacted the technology of the Industrial Revolution by minimizing the duration of a full revolution, thus reducing the amount of time needed for a cut or press. Applications Quick return mechanisms are found throughout the engineering industry in different machines: Shaper Screw press Power-driven saw Mechanical actuator Revolver mechanisms Design The disc influences the force of the arm, which makes up the frame of reference of the quick return mechanism. The frame continues to an attached rod, which is connected to the circular disc. Powered by a motor, the disc rotates and the arm follows in the same direction (linear and left-to-right, typically) but at a different speed. When the disc nears a full revolution, the arm reaches its furthest position and returns to its initial position at a quicker rate, hence its name. Throughout the cut, the arm has a constant velocity. 
Upon returning to its initial position after reaching its maximum horizontal displacement, the arm reaches its highest velocity. The quick return mechanism was modeled after the crank and slider (arm), and this is present in its appearance and function; however, the crank is usually hand powered and the arm has the same rate throughout an entire revolution, whereas the arm of a quick return mechanism returns at a faster rate. The "quick return" allows for the arm to function with less energy during the cut than the initial cycle of the disc. Specifications When using a machine that involves this mechanism, it is very important to not force the machine into reaching its maximum stress capacity; otherwise, the machine will break. The durability of the machine is related to the size of the arm and the velocity of the disc, where the arm might not be flexible enough to handle a certain speed. Creating a graphical layout for a quick return mechanism involves all inversions and motions, which is useful in determining the dimensions for a functioning mechanism. A layout would specify the dimensions of the mechanism by highlighting each part and its interaction among the system. These interactions would include torque, force, velocity, and acceleration. By relating these concepts to their respective analyses (kinematics and dynamics), one can comprehend the effect each part has on another. Mechanics In order to derive the force vectors of these mechanisms, one must approach a mechanical design consisting of both kinematic and dynamic analyses. Kinematic Analysis Breaking the mechanism up into separate vectors and components allows us to create a kinematic analysis that can solve for the maximum velocity, acceleration, and force the mechanism is capable of in three-dimensional space. Most of the equations involved in the quick return mechanism setup originate from Hamilton's principle. 
The position of the arm can be found at different times using the substitution of Euler's formula, e^(iθ) = cos θ + i sin θ, into the different components that have been pre-determined, according to the setup. This substitution can solve for various radii and components of the displacement of the arm at different values. Trigonometry is needed for the complete understanding of the kinematic analyses of the mechanism, where the entire design can be transcribed onto a plane layout, highlighting all of the vector components. An important concept for the analysis of the velocity of the disc relative to the arm is the angular velocity of the disc, ω = dθ/dt. If one desires to calculate the velocity, one must derive the angles of interaction at a single moment of time, making this equation useful. Dynamic Analysis In addition to the kinematic analysis of a quick return mechanism, there is a dynamic analysis present. At certain lengths and attachments, the arm of the mechanism can be evaluated and then adjusted to certain preferences. For example, the differences in the forces acting upon the system at an instant can be represented by D'Alembert's principle. Depending on the structural design of the quick return mechanism, the law of cosines can be used to determine the angles and displacements of the arm. The ratio between the working stroke and the return stroke can be simplified through the manipulation of these concepts. Despite similarities between quick return mechanisms, there are many different possibilities for the outline of all forces, speeds, lengths, motions, functions, and vectors in a mechanism. See also References Mechanisms (engineering) Mechanical power transmission
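The stroke-time ratio described above can be illustrated numerically. The sketch below uses the common crank-and-slotted-lever layout, where the crank pin's two tangent positions relative to the lever pivot split one revolution into a slow cutting arc of (180° + 2ψ) and a fast return arc of (180° − 2ψ), with sin ψ = r/d. The function name and the example dimensions are illustrative, not from the article.

```python
from math import asin, degrees

def quick_return_time_ratio(crank_radius, pivot_distance):
    """Time ratio (cutting stroke : return stroke) for a crank-and-
    slotted-lever quick return mechanism.  The tangent positions of
    the crank pin split one crank revolution into a cutting arc of
    (180 + 2*psi) degrees and a return arc of (180 - 2*psi) degrees,
    where sin(psi) = crank_radius / pivot_distance."""
    if not 0 < crank_radius < pivot_distance:
        raise ValueError("crank radius must be smaller than the pivot distance")
    psi = degrees(asin(crank_radius / pivot_distance))
    cutting_arc = 180 + 2 * psi
    return_arc = 180 - 2 * psi
    return cutting_arc / return_arc

# Example: a 100 mm crank on a frame whose pivots are 200 mm apart.
# sin(psi) = 0.5, so psi = 30 deg and the ratio is 240/120.
print(round(quick_return_time_ratio(100, 200), 6))  # -> 2.0
```

A ratio of 2.0 means the return stroke takes half the time of the cutting stroke at constant crank speed.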
Quick return mechanism
[ "Physics", "Engineering" ]
1,061
[ "Mechanical power transmission", "Mechanics", "Mechanical engineering", "Mechanisms (engineering)" ]
49,163,572
https://en.wikipedia.org/wiki/Calcium%20hydroxyphosphate
Calcium hydroxyphosphate (calcium phosphate tribasic, tribasic calcium phosphate, hydroxyapatite, HAp) is an inorganic chemical compound that is made up of calcium, hydrogen, oxygen and phosphorus. Its formula is Ca5(OH)(PO4)3. It is found in the body and as the mineral hydroxyapatite. References Calcium compounds
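As a small worked example of the formula given above, the molar mass of Ca5(OH)(PO4)3 can be tallied from conventional standard atomic weights (the weights and rounding below are standard reference values, not from the article):

```python
# Molar mass of Ca5(OH)(PO4)3 from standard atomic weights (g/mol).
ATOMIC_WEIGHTS = {"Ca": 40.078, "P": 30.974, "O": 15.999, "H": 1.008}

# Element counts in Ca5(OH)(PO4)3: 5 Ca, 3 P, 12 + 1 = 13 O, 1 H.
composition = {"Ca": 5, "P": 3, "O": 13, "H": 1}

molar_mass = sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())
print(round(molar_mass, 1))  # ≈ 502.3 g/mol
```

This is half the value often quoted for the doubled hydroxyapatite unit Ca10(PO4)6(OH)2.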
Calcium hydroxyphosphate
[ "Chemistry" ]
79
[ "Inorganic compounds", "Inorganic compound stubs" ]
49,166,255
https://en.wikipedia.org/wiki/Ellis%20wormhole
The Ellis wormhole is the special case of the Ellis drainhole in which the 'ether' is not flowing and there is no gravity. What remains is a pure traversable wormhole comprising a pair of identical twin, nonflat, three-dimensional regions joined at a two-sphere, the 'throat' of the wormhole. As seen in the image shown, two-dimensional equatorial cross sections of the wormhole are catenoidal 'collars' that are asymptotically flat far from the throat. There being no gravity in force, an inertial observer (test particle) can sit forever at rest at any point in space, but if set in motion by some disturbance will follow a geodesic of an equatorial cross section at constant speed, as would also a photon. This phenomenon shows that in space-time the curvature of space has nothing to do with gravity (the 'curvature of time', one could say). As a special case of the Ellis drainhole, itself a 'traversable wormhole', the Ellis wormhole dates back to the drainhole's discovery in 1969 (date of first submission) by H. G. Ellis, and independently at about the same time by K. A. Bronnikov. Ellis and Bronnikov derived the original traversable wormhole as a solution of the Einstein vacuum field equations augmented by inclusion of a scalar field minimally coupled to the geometry of space-time with coupling polarity opposite to the orthodox polarity (negative instead of positive). Some years later M. S. Morris and K. S. Thorne manufactured a duplicate of the Ellis wormhole to use as a tool for teaching general relativity, asserting that existence of such a wormhole required the presence of 'negative energy', a viewpoint Ellis had considered and explicitly refused to accept, on the grounds that arguments for it were unpersuasive. The wormhole solution The wormhole metric has the proper-time form c²dτ² = c²dt² − dρ² − r²(ρ) dΩ², where r(ρ) = √(ρ² + n²) and n is the drainhole parameter that survives after the parameter m of the Ellis drainhole solution is set to 0 to stop the ether flow and thereby eliminate gravity.
If one goes further and sets n to 0, the metric becomes that of Minkowski space-time, the flat space-time of the special theory of relativity. In Minkowski space-time every timelike and every lightlike (null) geodesic is a straight 'world line' that projects onto a straight-line geodesic of an equatorial cross section of a time slice of constant t as, for example, the one on which t = 0 and θ = π/2, the metric of which is that of euclidean two-space in polar coordinates (ρ, φ), namely, ds² = dρ² + ρ² dφ². Every test particle or photon is seen to follow such an equatorial geodesic at a fixed coordinate speed, which could be 0, there being no gravitational field built into Minkowski space-time. These properties of Minkowski space-time all have their counterparts in the Ellis wormhole, modified, however, by the fact that the metric and therefore the geodesics of equatorial cross sections of the wormhole are not straight lines, rather are the 'straightest possible' paths in the cross sections. It is of interest, therefore, to see what these equatorial geodesics look like. Equatorial geodesics of the wormhole The equatorial cross section of the wormhole defined by t = 0 and θ = π/2 (representative of all such cross sections) bears the metric ds² = dρ² + (ρ² + n²) dφ². When the cross section with this metric is embedded in euclidean three-space the image is the catenoid shown above, with ρ measuring the distance from the central circle at the throat, of radius n, along a curve on which φ is fixed (one such being shown). In cylindrical coordinates the equation r = n cosh(z/n) has this catenoid as its graph. After some integrations and substitutions the equations for a geodesic of the cross section parametrized by arc length s reduce to (ρ² + n²) dφ/ds = h and (dρ/ds)² + h²/(ρ² + n²) = 1, where h is a constant. If then and and vice versa. Thus every 'circle of latitude' ( constant) is a geodesic.
If on the other hand is not identically 0, then its zeroes are isolated and the reduced equations can be combined to yield the orbital equation There are three cases to be considered: which implies that thus that the geodesic is confined to one side of the wormhole or the other and has a turning point at or which entails that so that the geodesic does not cross the throat at but spirals onto it from one side or the other; which allows the geodesic to traverse the wormhole from either side to the other. The figures exhibit examples of the three types. If is allowed to vary from to the number of orbital revolutions possible for each type, latitudes included, is unlimited. For the first and third types the number rises to infinity as for the spiral type and the latitudes the number is already infinite. That these geodesics can bend around the wormhole makes clear that the curvature of space alone, without the aid of gravity, can cause test particles and photons to follow paths that deviate significantly from straight lines and can create lensing effects. Dynamic Ellis wormhole There is a dynamic version of the Ellis wormhole that is a solution of the same field equations that the static Ellis wormhole is a solution of. Its metric is where being a positive constant. There is a 'point singularity' at but everywhere else the metric is regular and curvatures are finite. Geodesics that do not encounter the point singularity are complete; those that do can be extended beyond it by proceeding along any of the geodesics that encounter the singularity from the opposite time direction and have compatible tangents (similarly to geodesics of the graph of that encounter the singularity at the origin). 
For a fixed nonzero value of the equatorial cross section on which has the metric This metric describes a 'hypercatenoid' similar to the equatorial catenoid of the static wormhole, with the radius of the throat (where ) now replaced by and in general each circle of latitude of geodesic radius having circumferential radius . For the metric of the equatorial cross section is which describes a 'hypercone' with its vertex at the singular point, its latitude circles of geodesic radius having circumferences Unlike the catenoid, neither the hypercatenoid nor the hypercone is fully representable as a surface in euclidean three-space; only the portions where (thus where or equivalently ) can be embedded in that way. Dynamically, as advances from to the equatorial cross sections shrink from hypercatenoids of infinite radius to hypercones (hypercatenoids of zero radius) at then expand back to hypercatenoids of infinite radius. Examination of the curvature tensor reveals that the full dynamic Ellis wormhole space-time manifold is asymptotically flat in all directions timelike, lightlike, and spacelike. Applications Scattering by an Ellis wormhole Gravitational lensing in the Ellis wormhole Microlensing by the Ellis wormhole Wave effect in lensing by the Ellis wormhole Image centroid displacements due to microlensing by the Ellis wormhole Exact lens equation for the Ellis wormhole Lensing by wormholes References Wormhole theory Exact solutions in general relativity
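The three geodesic cases described above can be checked numerically. The sketch below integrates the geodesic equation of the standard Ellis equatorial metric ds² = dρ² + (ρ² + n²) dφ², using the conserved quantity h = (ρ² + n²) dφ/ds; whether |h| is below or above the throat radius n decides between traversal and a turning point. The function name, starting radius, and step sizes are illustrative assumptions, not from the article.

```python
from math import sqrt

def integrate_geodesic(n, h, rho0=5.0, steps=200000, ds=1e-3):
    """Integrate a unit-speed geodesic of ds^2 = drho^2 + (rho^2 + n^2) dphi^2,
    starting far out on one side and heading toward the throat.
    The conserved h = (rho^2 + n^2) dphi/ds decides the outcome:
    |h| < n -> the geodesic traverses the throat (returns True),
    |h| > n -> it turns around at rho = sqrt(h^2 - n^2) (returns False)."""
    rho = rho0
    v = -sqrt(max(0.0, 1.0 - h**2 / (rho**2 + n**2)))  # inbound drho/ds
    crossed = False
    for _ in range(steps):
        a = h**2 * rho / (rho**2 + n**2) ** 2  # radial geodesic acceleration
        v += a * ds
        rho += v * ds
        if rho < 0:
            crossed = True
        if abs(rho) > rho0:  # escaped to large radius on either side
            break
    return crossed

n = 1.0
print(integrate_geodesic(n, h=0.5))  # True: traverses the wormhole
print(integrate_geodesic(n, h=1.5))  # False: turns back before the throat
```

The spiral case |h| = n sits on the boundary between the two outcomes and is numerically unstable, as the text's description of geodesics spiraling onto the throat suggests.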
Ellis wormhole
[ "Astronomy", "Mathematics" ]
1,464
[ "Exact solutions in general relativity", "Astronomical hypotheses", "Mathematical objects", "Equations", "Wormhole theory" ]
49,167,113
https://en.wikipedia.org/wiki/Pocket%20FM
Pocket FM is a small, low-powered radio transmitter designed for use in areas with tightly controlled or undeveloped communications infrastructure. The devices are portable and have the appearance of a receiver rather than a transmitter, making them more practical for citizen use and harder for authorities to detect when used subversively in pirate radio networks. It was designed by Germany-based non-profit organization Media in Cooperation and Transition (MiCT) in 2013 and has been deployed in Syria to create the radio network called Syrnet. Development MiCT led the project as an extension of its work on media projects to empower people in areas of conflict and crisis. The device's design is a result of the organization's collaboration with German design firm IXDS. Radio, as an analog medium, is more difficult than Internet and phone networks to shut down, requires less physical infrastructure, and its use is less dependent on a functional electric grid. For these reasons radio is a common medium among resistance and other independent groups. The early versions' shoebox-sized appearance, as described by The Local, was "a black box about thirty centimeters in length and twenty wide. It is smooth and light with a corrugated surface." Its design emphasizes portability and an appearance unlike typical transmitters. Version 3, introduced at the Global Media Forum in June 2016, is smaller, measuring 20 x 20 x 13 cm, with an aluminum case. The cost of version 3 at release was 3,000. The Pocket FM was a finalist for 2016 Siemens Stiftung Empowering People Award. Technical overview Pocket FM is intended for use in areas with poor or unreliable broadcasting infrastructure, and incorporates several design elements to support its operation in challenging situations. Radio stations typically use large transmitters to produce strong signals broadcast across large areas. 
However, large transmitters are very expensive, challenging to maintain, and provide highly visible targets for those wishing to sabotage, raid, hijack, or otherwise interrupt communications. The idea behind Pocket FM is to instead create a network of many small transmitters that can blanket an area. The first two versions had a range of , with version 3 up to , using only a antenna. The device has the ability to change frequencies in case the default frequency is jammed or otherwise unavailable, and it broadcasts a signature using the Radio Data System (RDS) protocol to allow listeners to find the new station if it changes. RDS can also be used to broadcast other text-based messages to users with compatible tuners. The device runs on 10-15 volts of electricity, capable of being used with a standard power adapter, solar power, or through a car's cigarette lighter receptacle. It is capable of operating autonomously for extended periods when given a steady power supply, and can be passcode-protected to prevent unauthorized transmission. The computational foundation of the device is Raspberry Pi, an inexpensive, customizable computer platform about the size of a credit card. The use of Raspberry Pi helps to keep the cost of the units low, but its simple, open design also allows for flexibility and experimentation with different configurations and upgrades. Pocket FM can broadcast material connected through basic analog audio inputs. Importantly, it also comes with a built-in satellite receiver to download audio or connect to a live feed over the Internet where there is otherwise no Internet connection available. It uses Airtime Pro software by Sourcefabric to compile the content stream and provide the streaming links for both public internet streams and towards MiCT's satellite provider. 
The third version of the device has GSM, 3G, and Wi-fi capabilities, creating several ways of accessing and operating it remotely, depending on available technology: SMS text message, a web browser, or directly via wireless network. Implementations Pocket FM was first deployed in Syria in September 2013. Since then MiCT has also started a project in Sierra Leone and smaller initiatives in Yemen and Tanzania. MiCT has produced about two dozen units. Syria Pocket FM was developed for use in Syria. Throughout the Syrian Civil War, president Bashar Assad has exerted control over communications infrastructure, frequently disabling Internet, phone networks, and electricity, and making frequent use of propaganda and misinformation in state-run mass media. But when other modes of communication become unavailable and unbiased reporting is inaccessible, most Syrians still have access to a radio receiver, either as a battery-powered stand-alone device or as part of a cell phone. Syrnet, a radio network enabled by Pocket FM, launched in September 2013 to provide information that would be censored or manipulated if reported on by state-run media. , Syrnet operates nine stations in different areas of the country, including areas controlled by the Islamic State of Iraq and the Levant (ISIS). In addition to allowing for reports to be shared with other regions where the situation may be very different, the use of a structure with multiple centers of operation ensures that a raid on one does not pose an existential threat to the broadcast. The units change hands multiple times in order to get them into the country, ending up with a local resident who has identified a potential location to hide the transmitter, as far from civilian homes as possible without losing the signal. 
Given the political climate, the frequency with which journalists and activists have been detained or even killed, and the ease with which any radio transmitter can be triangulated even if not visually identified, precautions are necessary. According to MiCT project manager Philip Hochleichter, none of the units have been discovered , and only one has been lost, due to bombing in Kobanî. The network combines local and external content, which is broadcast throughout the country via Pocket FM as well as through NileSat and Internet downloads. The radio network approach to broadcasting allows Syrian citizens access to censored outside broadcasts, creates a sustainable system for communications between and about different parts of the country and the dissemination of stories to a global audience. To help local stations and reporters, MiCT employs a team of professionals from the journalism field to assist with production. Sierra Leone MiCT is working with Culture Radio, a Freetown media organization, to disseminate information about Ebola to communities in Sierra Leone who have not had access to other campaigns about the disease and its prevention. See also Community radio Low-power broadcasting References External links Media in Cooperation and Transition Radio technology Mass media in Syria Community radio Pirate radio Censorship of broadcasting
Pocket FM
[ "Technology", "Engineering" ]
1,298
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
49,167,470
https://en.wikipedia.org/wiki/Match%20Analysis
Match Analysis is a US company with headquarters in Emeryville, California. The company employs 70 staff in their offices and data collection facilities in California and Mexico City, Mexico. The company provides video analysis tools and digital library archiving services supplying performance and physical tracking data to football (soccer) coaches, teams, and players. The objective is to improve individual and team performance and/or analyze opposition patterns of play to give tactical advantage. Match Analysis records and verifies over 2,500 distinct events per football match with every touch by every player catalogued, synchronized against video feeds, and stored in a searchable video database. History Match Analysis was founded in 2000 by Mark Brunkhart, its current President, after he developed a system to help amateur football players see the game objectively. The system evolved from a collection of printed reports and info graphics into video analysis software and statistical data tools supplied to professional and amateur football teams, governing bodies/professional organizations and media partners around the world. Match Analysis is one of the pioneers of statistical analysis in football. In 2002, the company released Mambo Studio, the first video editing and retrieval system for football. In 2004, Tango Online was launched to replace printed reports with the first instant access online video database of a complete league. In May 2012, Match Analysis acquired Edinburgh based Spinsight Ltd purchasing the intellectual property and other assets relating to its K2 Panoramic Video Camera System. Match Analysis signed strategic alliances with Major League Soccer and Liga MX in 2013. In addition Match Analysis's K2 Panoramic Video Camera System was implemented in every stadium across Major League Soccer and Liga MX in the summer of 2013. 
During November 2015, Match Analysis participated in discussions with IFAB and FIFA at their headquarters in Zürich, Switzerland to advise on global standards for electronic performance and tracking systems. In May 2016, Match Analysis announced the introduction of Tango VIP their new foundational technology platform for their extensive online presence. Products Match Analysis tools and services provide video indexing and archiving, statistical analysis, live data collection, player tracking, fitness reports, and performance analysis. The company's product range includes Mambo Studio, K2 Panoramic Video, TrueView Visualizations, Tango Online, Tango Live, Tango ToGo, Player Tracking and Fitness Reports. Clients The company has worked with eight different national teams including Germany, the United States, and Mexico and has relationships with over 50 professional clubs. Match Analysis currently supports league-wide deals with Major League Soccer and Liga MX. Over the past decade, Match Analysis has worked with almost every major professional club in North America and media outlets including the New York Times World Cup coverage. Current Match Analysis clients include all 18 Liga MX clubs in Mexico, 17 MLS clubs, the Mexico national team, PRO Professional Referee Organization and a wide array of college and amateur sides. References External links Association football equipment Motion in computer vision Tracking
Match Analysis
[ "Physics", "Technology" ]
582
[ "Physical phenomena", "Wireless locating", "Tracking", "Motion (physics)", "Motion in computer vision" ]
49,168,255
https://en.wikipedia.org/wiki/Planet%20Nine
Planet Nine is a hypothetical ninth planet in the outer region of the Solar System. Its gravitational effects could explain the peculiar clustering of orbits for a group of extreme trans-Neptunian objects (ETNOs), bodies beyond Neptune that orbit the Sun at distances averaging more than 250 times that of the Earth i.e. over 250 astronomical units (AU). These ETNOs tend to make their closest approaches to the Sun in one sector, and their orbits are similarly tilted. These alignments suggest that an undiscovered planet may be shepherding the orbits of the most distant known Solar System objects. Nonetheless, some astronomers question this conclusion and instead assert that the clustering of the ETNOs' orbits is due to observational biases, resulting from the difficulty of discovering and tracking these objects during much of the year. Based on earlier considerations, this hypothetical super-Earth-sized planet would have had a predicted mass of five to ten times that of the Earth, and an elongated orbit 400–800 AU. The orbit estimation was refined in 2021, resulting in a somewhat smaller semimajor axis of This was shortly thereafter updated to and to in 2025. Batygin & Brown suggested that Planet Nine may be the core of a giant planet that was ejected from its original orbit by Jupiter during the genesis of the Solar System. Others proposed that the planet was captured from another star, was once a rogue planet, or that it formed on a distant orbit and was pulled into an eccentric orbit by a passing star. Although sky surveys such as Wide-field Infrared Survey Explorer (WISE) and Pan-STARRS did not detect Planet Nine, they have not ruled out the existence of a Neptune-diameter object in the outer Solar System. The ability of these past sky surveys to detect Planet Nine was dependent on its location and characteristics. Further surveys of the remaining regions are ongoing using NEOWISE and the 8 meter Subaru Telescope. 
Unless Planet Nine is observed, its existence remains purely conjectural. Several alternative hypotheses have been proposed to explain the observed clustering of trans-Neptunian objects (TNOs). History Following the discovery of Neptune in 1846, there was considerable speculation that another planet might exist beyond its orbit. The best-known of these theories predicted the existence of a distant planet that was influencing the orbits of Uranus and Neptune. After extensive calculations, Percival Lowell predicted the possible orbit and location of the hypothetical trans-Neptunian planet and began an extensive search for it in 1906. He called the hypothetical object Planet X, a name previously used by Gabriel Dallet. Clyde Tombaugh continued Lowell's search and in 1930 discovered Pluto, but it was soon determined to be too small to qualify as Lowell's Planet X. After Voyager 2's flyby of Neptune in 1989, the difference between Uranus' predicted and observed orbit was determined to have been due to the use of a previously inaccurate mass of Neptune. Attempts to detect planets beyond Neptune by indirect means such as orbital perturbation date to before the discovery of Pluto. Among the first was George Forbes, who postulated the existence of two trans-Neptunian planets in 1880. One would have an average distance from the Sun, or semi-major axis, of 100 AU, 100 times that of the Earth. The second would have a semi-major axis of 300 AU. His work is considered similar to more recent Planet Nine theories in that the planets would be responsible for a clustering of the orbits of several objects, in this case the clustering of aphelion distances of periodic comets near 100–300 AU. This is similar to how the aphelion distances of Jupiter-family comets cluster near its orbit. The discovery in 2004 of Sedna, a dwarf planet with a highly peculiar orbit, led to speculation that it had encountered a massive body other than one of the known planets.
Sedna's orbit is detached, with a perihelion distance of 76 AU that is too large to be due to gravitational interactions with Neptune. Several authors proposed that Sedna entered this orbit after encountering a massive body such as an unknown planet on a distant orbit, a member of the open cluster that formed with the Sun, or another star that later passed near the Solar System. The announcement in March 2014 of the discovery of a second sednoid with a perihelion distance of 80 AU, , in a similar orbit led to renewed speculation that an unknown super-Earth remained in the distant Solar System. At a conference in 2012, Rodney Gomes proposed that an undetected planet was responsible for the orbits of some ETNOs with detached orbits and the large semi-major axis Centaurs, small Solar System bodies that cross the orbits of the giant planets. The proposed Neptune-massed planet would be in a distant eccentric and steeply inclined orbit. Like Planet Nine it would cause the perihelia of objects with semi-major axes greater than 300 AU to oscillate, delivering some into planet-crossing orbits and others into detached orbits like that of Sedna. An article by Gomes, Soares, and Brasser was published in 2015, detailing their arguments. In 2014, astronomers Chad Trujillo and Scott S. Sheppard noted the similarities in the orbits of Sedna and and several other ETNOs. They proposed that an unknown planet in a circular orbit between 200 and 300 AU was perturbing their orbits. Later that year, Raúl and Carlos de la Fuente Marcos argued that two massive planets in orbital resonance were necessary to produce the similarities of so many orbits, 13 known at the time. Using a larger sample of 39 ETNOs, they estimated that the nearer planet had a semi-major axis in the range of 300–400 AU, a relatively low eccentricity, and an inclination of nearly 14°. 
Batygin and Brown hypothesis In early 2016, California Institute of Technology's Batygin and Brown described how the similar orbits of six ETNOs could be explained by Planet Nine and proposed a possible orbit for the planet. This hypothesis could also explain ETNOs with orbits perpendicular to the inner planets and others with extreme inclinations, and had been offered as an explanation of the tilt of the Sun's axis. Orbit Planet Nine was initially hypothesized to follow an elliptical orbit around the Sun with an eccentricity of , and its semi-major axis was estimated to be , roughly 13–26 times the distance from Neptune to the Sun. It would take the planet between to make one full orbit around the Sun, and its inclination to the ecliptic, the plane of the Earth's orbit, was projected to be . The aphelion, or farthest point from the Sun, would be in the general direction of the constellation of Taurus, whereas the perihelion, the nearest point to the Sun, would be in the general direction of the southerly areas of Serpens (Caput), Ophiuchus, and Libra. Brown thinks that if Planet Nine exists, a probe could reach it in as little as 20 years by using a powered slingshot trajectory around the Sun. Mass and radius The planet is estimated to have 5–10 times the mass and 2–4 times the radius of the Earth. Brown thinks that if Planet Nine exists, its mass is sufficient to clear its orbit of large bodies in 4.5 billion years, the age of the Solar System, and that its gravity dominates the outer edge of the Solar System, which is sufficient to make it a planet by current definitions. Astronomer Jean-Luc Margot has also stated that Planet Nine satisfies his criteria and would qualify as a planet if and when it is detected. Later simulations by Amir Siraj and colleagues in 2025 proposed that Planet Nine's mass would instead be 4.4 ± 1.1 times that of Earth. 
Internal composition Given a hypothesized ~10 Earth masses and using a theory of exoplanet sizes in the Kepler-454 system, Esther Linder and Christoph Mordasini assumed that Planet Nine's radius would be 3.66 times Earth's (23,300 km versus 6,378 km), and that its internal composition would be similar to Uranus and Neptune's: Planet Nine would likely have a hydrogen-helium atmosphere with a temperature averaging 47 K, with a core composed of iron and a mantle filled with magnesium silicate and water ice. However, Siraj et al. (2025) suggest that Planet Nine's mass and orbital characteristics would render its composition closer to that of a rocky planet like Earth. Origin Several possible origins for Planet Nine have been examined, including its ejection from the neighborhood of the known giant planets, capture from another star, and in situ formation. In their initial article, Batygin and Brown proposed that Planet Nine formed closer to the Sun and was ejected into a distant eccentric orbit following a close encounter with Jupiter or Saturn during the nebular epoch. Then, either the gravity of a nearby star or drag from the gaseous remnants of the Solar nebula reduced the eccentricity of its orbit. This process raised its perihelion, leaving it in a very wide but stable orbit beyond the influence of the other planets. The odds of this occurring have been estimated at a few percent. If it had not been flung into the Solar System's farthest reaches, Planet Nine could have accreted more mass from the proto-planetary disk and developed into the core of a gas giant or ice giant. Instead, its growth was halted early, leaving it with a lower mass than Uranus or Neptune. Dynamical friction from a massive belt of planetesimals also could have enabled Planet Nine's capture into a stable orbit. Recent models propose that a disk of planetesimals could have formed as the gas was cleared from the outer parts of the proto-planetary disk.
As Planet Nine passed through this disk, its gravity would alter the paths of the individual objects in a way that reduced Planet Nine's velocity relative to it. This would lower the eccentricity of Planet Nine and stabilize its orbit. If this disk had a distant inner edge, at 100–200 AU, a planet encountering Neptune would have a 20% chance of being captured in an orbit similar to that proposed for Planet Nine, with the observed clustering more likely if the inner edge is at 200 AU. Unlike the gas nebula, the planetesimal disk is likely to have been long-lived, potentially allowing a later capture.

An encounter with another star could also alter the orbit of a distant planet, shifting it from a circular to an eccentric orbit. The in situ formation of a planet at this distance would require a very massive and extensive disk, or the outward drift of solids in a dissipating disk forming a narrow ring from which the planet accreted over a billion years. If a planet formed at such a great distance while the Sun was in its original cluster, the probability of it remaining bound to the Sun in a highly eccentric orbit is roughly 10%. However, while the Sun remained in the open cluster where it formed, any extended disk would have been subject to gravitational disruption by passing stars and by mass loss due to photoevaporation.

Planet Nine could have been captured from outside the Solar System during a close encounter between the Sun and another star. If a planet was in a distant orbit around this star, three-body interactions during the encounter could alter the planet's path, leaving it in a stable orbit around the Sun. A planet originating in a system without Jupiter-mass planets could remain in a distant eccentric orbit for a longer time, increasing its chances of capture. The wider range of possible orbits would reduce the odds of its capture in a relatively low inclination orbit to 1–2%.
Amir Siraj and Avi Loeb found that the odds of the Sun capturing Planet Nine increase by a factor of 20 if the Sun once had a distant, equal-mass binary companion. This process could also occur with rogue planets, but the likelihood of their capture is much smaller, with only 0.05–0.10% being captured in orbits similar to that proposed for Planet Nine.

Evidence

The gravitational influence of Planet Nine would explain four peculiarities of the Solar System: the clustering of the orbits of ETNOs; the high perihelia of objects like Sedna that are detached from Neptune's influence; the high inclinations of ETNOs with orbits roughly perpendicular to the orbits of the eight known planets; and high-inclination trans-Neptunian objects (TNOs) with semi-major axes less than 100 AU.

Planet Nine was initially proposed to explain the clustering of orbits, via a mechanism that would also explain the high perihelia of objects like Sedna. The evolution of some of these objects into perpendicular orbits was unexpected, but found to match objects previously observed. The orbits of some objects with perpendicular orbits were later found to evolve toward smaller semi-major axes when the other planets were included in simulations. Although other mechanisms have been offered for many of these peculiarities, the gravitational influence of Planet Nine is the only one that explains all four. However, the gravity of Planet Nine would also increase the inclinations of other objects that cross its orbit, which could leave the scattered disk objects (bodies orbiting beyond Neptune with semi-major axes greater than 50 AU) and short-period comets with a broader inclination distribution than is observed. Planet Nine was previously hypothesized to be responsible for the 6° tilt of the Sun's axis relative to the orbits of the planets, but recent updates to its predicted orbit and mass limit this shift to ≈1°.
Observations: Orbital clustering of high perihelion objects

The clustering of the orbits of TNOs with large semi-major axes was first described by Trujillo and Sheppard, who noted similarities between the orbits of Sedna and . Without the presence of Planet Nine, these orbits should be distributed randomly, without preference for any direction. Upon further analysis, Trujillo and Sheppard observed that the arguments of perihelion of 12 TNOs with perihelia greater than and semi-major axes greater than were clustered near 0°, meaning that they rise through the ecliptic when they are closest to the Sun. Trujillo and Sheppard proposed that this alignment was caused by a massive unknown planet beyond Neptune via the Kozai mechanism. For objects with similar semi-major axes the Kozai mechanism would confine their arguments of perihelion near either 0° or 180°. This confinement allows objects with eccentric and inclined orbits to avoid close approaches to the planet, because they would cross the plane of the planet's orbit at their closest and farthest points from the Sun, and cross the planet's orbit when they are well above or below it. Trujillo and Sheppard's hypothesis about how the objects would be aligned by the Kozai mechanism has since been supplanted by further analysis and evidence.

Batygin and Brown, looking to refute the mechanism proposed by Trujillo and Sheppard, also examined the orbits of the TNOs with large semi-major axes. After eliminating the objects in Trujillo and Sheppard's original analysis that were unstable due to close approaches to Neptune or were affected by Neptune's mean-motion resonances, Batygin and Brown determined that the arguments of perihelion for the remaining six objects (Sedna, , 474640 Alicanto, , , and ) were clustered around . This finding did not agree with how the Kozai mechanism would tend to align orbits with arguments of perihelion at 0° or 180°.
Batygin and Brown also found that the orbits of the six ETNOs with semi-major axes greater than 250 AU and perihelia beyond 30 AU (Sedna, , Alicanto, , , and ) were aligned in space with their perihelia in roughly the same direction, resulting in a clustering of their longitudes of perihelion, the location where they make their closest approaches to the Sun. The orbits of the six objects were also tilted with respect to the ecliptic and approximately coplanar, producing a clustering of their longitudes of ascending nodes, the directions where they each rise through the ecliptic. They determined that there was only a 0.007% likelihood that this combination of alignments was due to chance.

These six objects had been discovered by six different surveys on six telescopes, which made it less likely that the clumping was due to an observational bias such as pointing a telescope at a particular part of the sky. The observed clustering should be smeared out in a few hundred million years, because the locations of the perihelia and the ascending nodes change, or precess, at differing rates owing to their varied semi-major axes and eccentricities. This indicates that the clustering could not be due to an event in the distant past, for example a passing star, and is most likely maintained by the gravitational field of an object orbiting the Sun. Two of the six objects ( and Alicanto) also have very similar orbits and spectra. This has led to the suggestion that they were a binary object disrupted near aphelion during an encounter with a distant object. The disruption of a binary would require a relatively close encounter, which becomes less likely at large distances from the Sun. In a later article Trujillo and Sheppard noted a correlation between the longitude of perihelion and the argument of perihelion of the TNOs with semi-major axes greater than 150 AU.
Those with a longitude of perihelion of 0–120° have arguments of perihelion between 280° and 360°, and those with longitude of perihelion between ° and ° have arguments of perihelion between ° and °. The statistical significance of this correlation was 99.99%. They suggested that the correlation is due to the orbits of these objects avoiding close approaches to a massive planet by passing above or below its orbit.

A 2017 article by Carlos and Raúl de la Fuente Marcos noted that the distribution of the distances to the ascending nodes of the ETNOs, and those of centaurs and comets with large semi-major axes, may be bimodal. They suggest it is due to the ETNOs avoiding close approaches to a planet with a semi-major axis of 300–400 AU. With more data (40 objects), the distribution of mutual nodal distances of the ETNOs shows a statistically significant asymmetry between the shortest mutual ascending and descending nodal distances that may not be due to observational bias but is likely the result of external perturbations.

Simulations: Observed clustering reproduced

The clustering of the orbits of ETNOs and the raising of their perihelia is reproduced in simulations that include Planet Nine. In simulations conducted by Batygin and Brown, swarms of scattered disk objects with semi-major axes up to 550 AU that began with random orientations were sculpted into roughly collinear and coplanar groups of spatially confined orbits by a massive distant planet in a highly eccentric orbit. This left most of the objects' perihelia pointed in similar directions and the objects' orbits with similar tilts. Many of these objects entered high-perihelion orbits like Sedna's and, unexpectedly, some entered perpendicular orbits that Batygin and Brown later noticed had been previously observed.
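The chance-alignment probabilities quoted above can be estimated with a simple Monte Carlo experiment. The sketch below is not Batygin and Brown's actual statistic (their 0.007% figure combined perihelion-direction clustering with orbital-pole clustering); it just asks how often six uniformly random longitudes of perihelion would all happen to fall within a single arc, here an assumed 100° window:

```python
import random

def fits_in_arc(angles_deg, window=100.0):
    """True if all angles fit inside some arc of `window` degrees."""
    a = sorted(x % 360.0 for x in angles_deg)
    n = len(a)
    # Try each angle as the start of the arc; going forward around the
    # circle, the arc must reach the "previous" sorted angle.
    return any((a[(i - 1) % n] - a[i]) % 360.0 <= window for i in range(n))

def chance_clustering(n=6, window=100.0, trials=100_000, seed=1):
    """Fraction of trials in which n uniform random angles cluster."""
    rng = random.Random(seed)
    hits = sum(
        fits_in_arc([rng.uniform(0.0, 360.0) for _ in range(n)], window)
        for _ in range(trials)
    )
    return hits / trials

print(chance_clustering())  # about 0.01: clustering this tight is rare by chance
```

The analytic answer for n angles in an arc of width w (with w ≤ 180°) is n·(w/360)^(n−1), about 1% here; the much smaller published figure comes from requiring several independent alignments at once.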
In their original analysis Batygin and Brown found that the distribution of the orbits of the first six ETNOs was best reproduced in simulations using a planet in the following orbit:

semi-major axis ≈  (orbital period ≈ )
eccentricity ≈ 0.6 (perihelion ≈ , aphelion ≈ )
inclination ≈ 30° to the ecliptic
longitude of the ascending node Ω ≈ 
argument of perihelion ω ≈ 140°
longitude of perihelion ϖ ≡ ω + Ω ≈ 

These parameters for Planet Nine produce different simulated effects on TNOs. Objects with semi-major axes greater than 250 AU are strongly anti-aligned with Planet Nine, with perihelia opposite Planet Nine's perihelion. Objects with semi-major axes between 150 and 250 AU are weakly aligned with Planet Nine, with perihelia in the same direction as Planet Nine's perihelion. Little effect is found on objects with semi-major axes less than 150 AU. The simulations also revealed that objects with semi-major axes greater than could have stable, aligned orbits if they had lower eccentricities. These objects have yet to be observed.

Other possible orbits for Planet Nine were also examined, with semi-major axes between and , eccentricities up to 0.8, and a wide range of inclinations. These orbits yield varied results. Batygin and Brown found that orbits of the ETNOs were more likely to have similar tilts if Planet Nine had a higher inclination, but anti-alignment also decreased. Simulations by Becker et al. showed that their orbits were more stable if Planet Nine had a smaller eccentricity, but that anti-alignment was more likely at higher eccentricities. Lawler et al. found that the population captured in orbital resonances with Planet Nine was smaller if it had a circular orbit, and that fewer objects reached high-inclination orbits. Investigations by Cáceres et al. showed that the orbits of the ETNOs were better aligned if Planet Nine had a lower-perihelion orbit, but its perihelion would need to be higher than 90 AU. Later investigations by Batygin et al.
found that higher eccentricity orbits reduced the average tilts of the ETNOs' orbits. While there are many possible combinations of orbital parameters and masses for Planet Nine, none of the alternative simulations were better at predicting the observed alignment of the original ETNOs. The discovery of additional distant Solar System objects would allow astronomers to make more accurate predictions about the orbit of the hypothesized planet. These may also provide further support for, or refutation of, the Planet Nine hypothesis.

Simulations that included the migration of the giant planets resulted in a weaker alignment of the ETNOs' orbits. The direction of alignment also switched, from more aligned to anti-aligned with increasing semi-major axis, and from anti-aligned to aligned with increasing perihelion distance. The latter would result in the sednoids' orbits being oriented opposite most of the other ETNOs.

Dynamics: How Planet Nine modifies the orbits of ETNOs

Planet Nine modifies the orbits of ETNOs via a combination of effects. On very long timescales Planet Nine exerts a torque on the orbits of the ETNOs that varies with the alignment of their orbits with Planet Nine's. The resulting exchanges of angular momentum cause the perihelia to rise, placing them in Sedna-like orbits, and later fall, returning them to their original orbits after several hundred million years. The motion of their directions of perihelion also reverses when their eccentricities are small, keeping the objects anti-aligned or aligned. On shorter timescales, mean-motion resonances with Planet Nine provide phase protection, which stabilizes their orbits by slightly altering the objects' semi-major axes, keeping their orbits synchronized with Planet Nine's and preventing close approaches. The gravity of Neptune and the other giant planets, and the inclination of Planet Nine's orbit, weaken this protection.
This results in a chaotic variation of semi-major axes as objects hop between resonances, including high-order resonances such as 27:17, on million-year timescales. The mean-motion resonances may not be necessary for the survival of ETNOs if they and Planet Nine are both on inclined orbits. The orbital poles of the objects precess around, or circle, the pole of the Solar System's Laplace plane. At large semi-major axes the Laplace plane is warped toward the plane of Planet Nine's orbit. This causes the orbital poles of the ETNOs, on average, to be tilted toward one side and their longitudes of ascending nodes to be clustered.

In 2024, Brown and Batygin completed a simulation which showed that the presence of Planet Nine would, over time, increase the eccentricities of a significant subset of objects with semi-major axes above 100 AU until their perihelia dropped below 30 AU, meaning that their orbits would cross that of Neptune. They also conducted a survey of Neptune-crossing objects with inclinations below 40 degrees and semi-major axes between 100 and 1000 AU and argued that the results aligned with the presence of Planet Nine, which would produce a ratio of Neptune-crossers to objects with perihelia beyond Neptune's orbit of 3%, compared to 0.5% in the absence of Planet Nine.

Objects in perpendicular orbits with large semi-major axis

Planet Nine can deliver ETNOs into orbits roughly perpendicular to the ecliptic. Several objects with high inclinations, greater than 50°, and large semi-major axes, above 250 AU, have been observed. These orbits are produced when some low-inclination ETNOs enter a secular resonance with Planet Nine upon reaching low-eccentricity orbits. The resonance causes their eccentricities and inclinations to increase, delivering the ETNOs into perpendicular orbits with low perihelia, where they are more readily observed.
The ETNOs then evolve into retrograde orbits with lower eccentricities, after which they pass through a second phase of high-eccentricity perpendicular orbits, before returning to low-eccentricity, low-inclination orbits. The secular resonance with Planet Nine involves a linear combination of the orbits' arguments and longitudes of perihelion. Unlike the Kozai mechanism, this resonance causes objects to reach their maximum eccentricities when in nearly perpendicular orbits. In simulations conducted by Batygin and Morbidelli this evolution was relatively common, with 38% of stable objects undergoing it at least once. The arguments of perihelion of these objects are clustered near or opposite Planet Nine's, and their longitudes of ascending node are clustered around 90° in either direction from Planet Nine's when they reach low perihelia. This is in rough agreement with observations, with the differences attributed to distant encounters with the known giant planets.

Orbits of high-inclination objects

A population of high-inclination TNOs with semi-major axes less than 100 AU may be generated by the combined effects of Planet Nine and the other giant planets. The ETNOs that enter perpendicular orbits have perihelia low enough for their orbits to intersect those of Neptune or the other giant planets. An encounter with one of these planets can lower an ETNO's semi-major axis to below 100 AU, where the object's orbit is no longer controlled by Planet Nine, leaving it in an orbit like . The predicted orbital distribution of the longest-lived of these objects is nonuniform. Most would have orbits with perihelia ranging from 5 AU to 35 AU and inclinations below 110°; beyond a gap with few objects would be others with inclinations near 150° and perihelia near 10 AU. Previously it was proposed that these objects originated in the Oort cloud, a theoretical cloud of icy planetesimals surrounding the Sun at distances of 2,000 to 200,000 AU.
However, in simulations without Planet Nine an insufficient number are produced from the Oort cloud relative to observations. A few of the high-inclination TNOs may become retrograde Jupiter Trojans.

Oort cloud and comets

Planet Nine would alter the source regions and the inclination distribution of comets. In simulations of the migration of the giant planets described by the Nice model, fewer objects are captured in the Oort cloud when Planet Nine is included. Other objects would be captured in a cloud of objects dynamically controlled by Planet Nine. This Planet Nine cloud, made up of the ETNOs and the perpendicular objects, would extend from semi-major axes of and contain roughly . When the perihelia of objects in the Planet Nine cloud drop low enough for them to encounter the other planets, some would be scattered into orbits that enter the inner Solar System, where they could be observed as comets. If Planet Nine exists, these would make up roughly one third of the Halley-type comets. Interactions with Planet Nine would also increase the inclinations of the scattered disk objects that cross its orbit. This could result in more objects with moderate inclinations of 15–30° than are observed. The inclinations of the Jupiter-family comets derived from that population would also have a broader distribution than is observed. Recent estimates of a smaller mass and eccentricity for Planet Nine would reduce its effect on these inclinations.

2019 estimate

In February 2019, the total number of ETNOs that fit the original hypothesis of having semi-major axes of over 250 AU had increased to fourteen objects. The orbital parameters for Planet Nine favored by Batygin and Brown after an analysis using these objects were:

semi-major axis of 400–500 AU
orbital eccentricity of 0.15–0.3
orbital inclination around 20°
mass of about 
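The 2019 ranges translate directly into perihelion, aphelion, and orbital-period ranges via q = a(1 − e), Q = a(1 + e), and Kepler's third law P = a^(3/2) (P in years, a in AU, for a solar-mass primary). A quick sketch of that conversion, using the endpoints quoted above:

```python
def perihelion_au(a_au, e):
    return a_au * (1.0 - e)

def aphelion_au(a_au, e):
    return a_au * (1.0 + e)

def period_years(a_au):
    # Kepler's third law for an orbit around one solar mass
    return a_au ** 1.5

# Endpoints of the 2019 estimate: a = 400-500 AU, e = 0.15-0.3
for a, e in [(400.0, 0.15), (500.0, 0.30)]:
    print(f"a={a} AU, e={e}: q={perihelion_au(a, e):.0f} AU, "
          f"Q={aphelion_au(a, e):.0f} AU, P={period_years(a):,.0f} yr")
```

Under these assumptions the planet would complete one orbit in roughly 8,000–11,000 years, a notably smaller and closer orbit than the original 2016 proposal.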
2021 estimate

In August 2021, Batygin and Brown reanalyzed the data from ETNO observations while accounting for observational biases, finding that observations were more likely in some directions than others. They stated that the observed orbital clustering "remains significant at a 99.6% confidence level". Combining observational biases with numerical simulations, they predicted the characteristics of Planet Nine:

semi-major axis of (300–520 AU)
perihelion of (240–385 AU)
orbital inclination of (11°–21°)
mass of 6.2 Earth masses

Reception

Batygin was cautious in interpreting the results of the simulation developed for his and Brown's research article, saying, "Until Planet Nine is caught on camera it does not count as being real. All we have now is an echo." In 2016, Brown put the odds for the existence of Planet Nine at about 90%. Greg Laughlin, one of the few researchers who knew in advance about this article, gave an estimate of 68.3%. Other skeptical scientists demand more data in the form of additional KBOs to be analyzed, or final evidence through photographic confirmation. Brown, though conceding the skeptics' point, still thinks that there is enough data to mount a search for a new planet.

The Planet Nine hypothesis is supported by several astronomers and academics. In January 2016 Jim Green, director of NASA's Science Mission Directorate, said, "the evidence is stronger now than it's been before". But Green also cautioned about the possibility of other explanations for the observed motion of distant ETNOs and, quoting Carl Sagan, he said, "extraordinary claims require extraordinary evidence." Massachusetts Institute of Technology professor Tom Levenson concluded that, for now, Planet Nine seems the only satisfactory explanation for everything now known about the outer regions of the Solar System.
Astronomer Alessandro Morbidelli, who reviewed the research article for The Astronomical Journal, concurred, saying, "I don't see any alternative explanation to that offered by Batygin and Brown." Astronomer Renu Malhotra remains agnostic about Planet Nine, but noted that she and her colleagues have found that the orbits of ETNOs seem tilted in a way that is difficult to otherwise explain. "The amount of warp we see is just crazy," she said. "To me, it's the most intriguing evidence for Planet Nine I've run across so far."

Other experts have varying degrees of skepticism. American astrophysicist Ethan Siegel, who previously speculated that planets may have been ejected from the Solar System during an early dynamical instability, is skeptical of the existence of an undiscovered planet in the Solar System. In a 2018 article discussing a survey that did not find evidence of clustering of the ETNOs' orbits, he suggests the previously observed clustering could have been the result of observational bias, and claims most scientists think Planet Nine does not exist. Planetary scientist Hal Levison thinks that the chance of an ejected object ending up in the inner Oort cloud is about 2%, and speculates that many objects must have been thrown past the Oort cloud if one has entered a stable orbit.

Further skepticism about the Planet Nine hypothesis arose in 2020, based on results from the Outer Solar System Origins Survey (OSSOS) and the Dark Energy Survey (DES), with the OSSOS documenting over 800 trans-Neptunian objects and the DES discovering 316 new ones. Both surveys adjusted for observational bias and concluded that, among the objects observed, there was no evidence for clustering. The authors further argue that practically all of the objects' orbits can be explained by physical phenomena rather than a ninth planet as proposed by Brown and Batygin.
An author of one of the studies, Samantha Lawler, said the hypothesis of Planet Nine proposed by Brown and Batygin "does not hold up to detailed observations", pointing to the much larger sample size of 800 objects compared to the original 14, and arguing that conclusive studies based on the smaller sample were "premature". She suggested that these extreme orbits could instead be the result of gravitational perturbations from Neptune as it migrated outwards early in the Solar System's history.

Alternative hypotheses

Nice Planet #5

Planet Nine has been proposed as a potential remnant of the early Solar System's evolution. According to the Five-planet Nice model, the early Solar System contained five giant planets: Jupiter, Saturn, Uranus, Neptune, and a fifth, now-missing ice giant. Simulations of the Nice model suggest that gravitational interactions among these planets, coupled with interactions with a disk of planetesimals, led to the ejection of the fifth giant from the Solar System approximately 4 billion years ago. Some researchers propose that Planet Nine could be this fifth giant, lingering in a distant, eccentric orbit far beyond Neptune instead of having been entirely ejected from the Solar System. This hypothesis aligns with observations suggesting Planet Nine's orbit would be stable over the Solar System's lifetime, supporting its survival as an outer-system object.

The hypothesis that Planet Nine may be the fifth giant is bolstered by its proposed mass and orbital characteristics, which are consistent with those of an ice giant. Numerical simulations of the Nice model show that the ejection of the fifth giant often leaves a gravitational signature in the form of altered orbits for the remaining planets and small bodies. The observed clustering of certain trans-Neptunian objects (TNOs) has been cited as indirect evidence of Planet Nine's gravitational influence, possibly originating from its early interactions with the outer Solar System.
Temporary or coincidental clustering

The results of the Outer Solar System Origins Survey (OSSOS) suggest that the observed clustering is the result of a combination of observational bias and small-number statistics. OSSOS, a well-characterized survey of the outer Solar System with known biases, observed eight objects with large semi-major axes whose orbits were oriented in a wide range of directions. After accounting for the observational biases of the survey, no evidence for the clustering of arguments of perihelion () identified by Trujillo and Sheppard was seen, and the orientation of the orbits of the objects with the largest semi-major axes was statistically consistent with being random. Pedro Bernardinelli and his colleagues also found that the orbital elements of the ETNOs found by the Dark Energy Survey showed no evidence of clustering. However, they also noted that the sky coverage and the number of objects found were insufficient to show that there was no Planet Nine. A similar result was found when these two surveys were combined with a survey by Trujillo and Sheppard.

These results differed from an analysis of discovery biases in the previously observed ETNOs by Mike Brown. He found that, after observational biases were accounted for, the clustering of longitudes of perihelion of the 10 known ETNOs would be observed only 1.2% of the time if their actual distribution were uniform. When combined with the odds of the observed clustering of the arguments of perihelion, the probability was 0.025%. A later analysis of the discovery biases of the fourteen ETNOs used by Brown and Batygin determined the probability of the observed clustering of the longitudes of perihelion and the orbital pole locations to be 0.2%. Simulations of 15 known objects evolving under the influence of Planet Nine also revealed differences from observations.
Cory Shankman and his colleagues included Planet Nine in a simulation of many clones (objects with similar orbits) of 15 objects with semi-major axis and perihelion . While they observed alignment of the orbits opposite that of Planet Nine's for the objects with semi-major axes greater than 250 AU, clustering of the arguments of perihelion was not seen. Their simulations also showed that the perihelia of the ETNOs rose and fell smoothly, leaving many with perihelion distances between 50 and 70 AU, where none had been observed, and predicted that there would be many other unobserved objects. These included a large reservoir of high-inclination objects that would have been missed because most observations are made at small inclinations, and a large population of objects with perihelia so distant that they would be too faint to observe. Many of the objects were also ejected from the Solar System after encountering the other giant planets. The large unobserved populations and the loss of many objects led Shankman et al. to estimate that the mass of the original population was tens of Earth masses, requiring that a much larger mass had been ejected during the early Solar System. Shankman et al. concluded that the existence of Planet Nine is unlikely and that the currently observed alignment of the existing ETNOs is a temporary phenomenon that will disappear as more objects are detected.

Inclination instability in a massive disk

Ann-Marie Madigan and Michael McCourt postulate that an inclination instability in a distant massive belt, hypothetically termed a Zderic–Madigan (ZM) belt, is responsible for the alignment of the arguments of perihelion of the ETNOs. An inclination instability could occur in such a disk of particles with high-eccentricity orbits around a central body, such as the Sun.
The self-gravity of this disk would cause its spontaneous organization, increasing the inclinations of the objects and aligning the arguments of perihelion, forming it into a cone above or below the original plane. This process would require an extended time and a significant disk mass, on the order of a billion years for a 1–10 Earth-mass disk. Ann-Marie Madigan argues that some already discovered trans-Neptunian objects, like Sedna and 2012 VP113, may be members of this disk; if this is the case, there would likely be thousands of similar objects in the region. Mike Brown considers Planet Nine a more probable explanation, noting that current surveys have not revealed a scattered disk large enough to produce an inclination instability. In Nice model simulations of the Solar System that included the self-gravity of the planetesimal disk, an inclination instability did not occur. Instead, the simulation produced a rapid precession of the objects' orbits, and most of the objects were ejected on too short a timescale for an inclination instability to occur. Madigan and colleagues have shown that the inclination instability would require 20 Earth masses in a disk of objects with semi-major axes of a few hundred AU. An inclination instability in this disk could also reproduce the observed gap in the perihelion distances of the extreme TNOs and, given sufficient time, the observed apsidal alignment following the instability. Simulations show that the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) should be able to supply strong evidence for or against the ZM belt.

Shepherding by a massive disk

Antranik Sefilian and Jihad Touma propose that a massive disk of moderately eccentric TNOs is responsible for the clustering of the longitudes of perihelion of the ETNOs. This disk would contain 10 Earth masses of TNOs with aligned orbits and eccentricities that increase with semi-major axis, ranging from zero to 0.165.
The gravitational effects of the disk would offset the forward precession driven by the giant planets so that the orbital orientations of its individual objects are maintained. The orbits of objects with high eccentricities, such as the observed ETNOs, would be stable and have roughly fixed orientations, or longitudes of perihelion, if their orbits were anti-aligned with this disk. Although Brown thinks the proposed disk could explain the observed clustering of the ETNOs, he finds it implausible that the disk could survive over the age of the Solar System. Batygin thinks that there is insufficient mass in the Kuiper belt to explain the formation of the disk, and asks, "why would the protoplanetary disk end near 30 AU and restart beyond 100 AU?"

Planet in lower eccentricity orbit

The Planet Nine hypothesis includes a set of predictions about the mass and orbit of the planet; an alternative hypothesis instead predicts a planet with different orbital parameters. Renu Malhotra, Kathryn Volk, and Xianyu Wang have proposed that the four detached objects with the longest orbital periods, those with perihelia beyond and semi-major axes greater than , are in n:1 or n:2 mean-motion resonances with a hypothetical planet. Two other objects with semi-major axes greater than are also potentially in resonance with this planet. Their proposed planet could be on a lower-eccentricity, low-inclination orbit, with eccentricity e < 0.18 and inclination i ≈ 11°. The eccentricity is limited in this case by the requirement that close approaches to the planet be avoided. If the ETNOs are in periodic orbits of the third kind, with their stability enhanced by the libration of their arguments of perihelion, the planet could be in a higher-inclination orbit, with i ≈ 48°. Unlike Batygin and Brown, Malhotra, Volk and Wang do not specify that most of the distant detached objects would have orbits anti-aligned with the massive planet.
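The n:1 / n:2 resonance idea rests on Kepler's third law: the period ratio between the hypothetical planet and a TNO is (a_planet / a_TNO)^(3/2), and the hypothesis is that this ratio lands near a small-integer ratio. A sketch using illustrative numbers only (the 665 AU planet semi-major axis and the approximate TNO semi-major axes below are assumptions for the example, not values from this text):

```python
def period_ratio(a_planet_au, a_tno_au):
    # Kepler's third law: P is proportional to a^(3/2),
    # so the period ratio follows directly from the semi-major axes
    return (a_planet_au / a_tno_au) ** 1.5

PLANET_A_AU = 665.0  # illustrative semi-major axis for the hypothetical planet

for name, a_tno in [("Sedna", 506.0), ("2012 VP113", 261.0)]:
    print(f"{name}: period ratio = {period_ratio(PLANET_A_AU, a_tno):.2f}")
```

With these assumed values the ratios come out near 3:2 and 4:1, the kind of near-commensurability the resonance hypothesis looks for.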
Alignment due to the Kozai mechanism In 2014, Trujillo and Sheppard argued that a massive planet in a circular orbit with an average distance between and was responsible for the clustering of the arguments of perihelion of twelve TNOs with large semi-major axes. They identified a clustering near zero degrees in the arguments of perihelion of the orbits of twelve TNOs with perihelia greater than and semi-major axes greater than . After numerical simulations showed that the arguments of perihelion should circulate at varying rates, leaving them randomized after billions of years, they suggested that a massive planet in a circular orbit at a few hundred astronomical units was responsible for this clustering. This massive planet would cause the arguments of perihelion of the TNOs to librate about 0° or 180° via the Kozai mechanism, so that their orbits crossed the plane of the planet's orbit near perihelion and aphelion, the closest and farthest points from the planet. In numerical simulations including a 2–15 Earth-mass body in a circular low-inclination orbit between and , the arguments of perihelion of Sedna and librated around 0° for billions of years (although the lower-perihelion objects did not) and underwent periods of libration with a Neptune-mass object in a high-inclination orbit at 1,500 AU. Another process, such as a passing star, would be required to account for the absence of objects with arguments of perihelion near 180°. These simulations demonstrated the basic idea of how a single large planet can shepherd the smaller TNOs into similar types of orbits. They were proof-of-concept simulations that did not obtain a unique orbit for the planet, as the authors state there are many possible orbital configurations the planet could have. Thus they did not fully formulate a model that incorporated all the clustering of the ETNOs with a specific orbit for the planet. 
They were, however, the first to notice the clustering in the orbits of the TNOs and to suggest that an unknown massive distant planet was the most likely cause. Their work parallels the way Alexis Bouvard noticed that Uranus's motion was peculiar and suggested that it was likely due to gravitational forces from an unknown eighth planet, which led to the discovery of Neptune. Raúl and Carlos de la Fuente Marcos proposed a similar model but with two distant planets in resonance. An analysis by Carlos and Raúl de la Fuente Marcos with Sverre J. Aarseth confirmed that the observed alignment of the arguments of perihelion could not be due to observational bias. They speculated that instead it was caused by an object with a mass between that of Mars and Saturn that orbited at some from the Sun. Like Trujillo and Sheppard, they theorized that the TNOs are kept bunched together by a Kozai mechanism and compared their behavior to that of Comet 96P/Machholz under the influence of Jupiter. They also struggled to explain the orbital alignment using a model with only one unknown planet, and therefore suggested that this planet is itself in resonance with a more-massive world about from the Sun. In their article, Brown and Batygin noted that alignment of arguments of perihelion near 0° or 180° via the Kozai mechanism requires a ratio of the semi-major axes nearly equal to one, indicating that multiple planets with orbits tuned to the data set would be required, making this explanation too unwieldy. Primordial black hole In 2019, Jakub Scholtz and James Unwin proposed that a primordial black hole was responsible for the clustering of the orbits of the ETNOs. Their analysis of OGLE gravitational lensing data revealed a population of planetary-mass objects in the direction of the galactic bulge more numerous than the local population of stars. They propose that instead of being free-floating planets, these objects are primordial black holes. 
Since their estimate of the size of this population is greater than the population of free-floating planets estimated from planetary formation models, they argue that the capture of a hypothetical primordial black hole would be more probable than the capture of a free-floating planet. This could also explain why an object responsible for perturbing the orbits of the ETNOs, if it exists, has yet to be seen. The paper also proposed a detection method: the black hole itself is too cold to be detected against the cosmic microwave background, but its interaction with surrounding dark matter would produce gamma rays detectable by the Fermi Large Area Telescope (Fermi-LAT). Konstantin Batygin commented on this, saying that while it is possible for Planet Nine to be a primordial black hole, there is currently not enough evidence to make this idea more plausible than any other alternative. Edward Witten proposed a fleet of probes accelerated by radiation pressure that could discover a Planet Nine primordial black hole's location; however, Thiem Hoang and Avi Loeb showed that any signal would be dominated by noise from the interstellar medium. Amir Siraj and Avi Loeb proposed a method for the Vera C. Rubin Observatory to detect flares from any low-mass black hole in the outer Solar System, including a possible Planet Nine primordial black hole. Modified Newtonian dynamics In 2023, it was shown that a gravity theory known as modified Newtonian dynamics (MOND), which can explain galactic rotation without invoking dark matter, can provide an alternative explanation using secular approximations. It predicts that the major axes of the KBO orbits will be aligned with the direction toward the Galactic Center and that the orbits cluster in phase space, in agreement with observations. Detection attempts Visibility and location Due to its extreme distance from the Sun, Planet Nine would reflect little sunlight, potentially evading telescope sightings. 
It is expected to have an apparent magnitude fainter than 22, making it at least 600 times fainter than Pluto. If Planet Nine exists and is close to perihelion, astronomers could identify it based on existing images. At aphelion, the largest telescopes would be required, but if the planet is currently located in between, many observatories could spot Planet Nine. Statistically, the planet is more likely to be close to its aphelion at a distance greater than 600 AU. This is because objects move more slowly when near their aphelion, in accordance with Kepler's second law. A 2019 study estimated that Planet Nine, if it exists, may be smaller and closer than originally thought. This would make the hypothetical planet brighter and easier to spot, with an apparent magnitude of 21–22. Observation and analysis of the orbital dynamics of Kuiper Belt objects constrain the possible orbital parameters of a Planet Nine, and at the current rate of new observations, University of Michigan professor Fred Adams believes enough data will have been gathered to pinpoint Planet Nine or rule out its existence by 2035. Searches of existing data The search of databases of stellar objects by Batygin and Brown has already excluded much of the sky along Planet Nine's predicted orbit. The remaining regions include the direction of its aphelion, where it would be too faint to be spotted by these surveys, and near the plane of the Milky Way, where it would be difficult to distinguish from the numerous stars. This search included the archival data from the Catalina Sky Survey to magnitude 21–22, Pan-STARRS to magnitude 21.5, and infrared data from the Wide-field Infrared Survey Explorer (WISE) satellite. In 2021, they also searched the first three years of data from the Zwicky Transient Facility (ZTF) without identifying Planet Nine. The search of the ZTF data alone has ruled out 56% of the parameter space for possible Planet Nine positions. 
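The statistical argument can be illustrated numerically: for a Keplerian orbit, the fraction of the orbital period spent beyond a given distance follows from Kepler's equation. The sketch below uses assumed, illustrative orbital elements (a = 600 AU, e = 0.6, roughly in the range discussed for Planet Nine), not the hypothesis's exact parameters.

```python
import math

# Assumed, illustrative orbital elements (not the exact Planet Nine values):
a, e = 600.0, 0.6   # semi-major axis in AU, eccentricity
r0 = 600.0          # distance threshold in AU

# Orbit radius r = a(1 - e*cos E), so r > r0 whenever cos E < (1 - r0/a)/e.
E0 = math.acos((1.0 - r0 / a) / e)   # eccentric anomaly where r = r0
M0 = E0 - e * math.sin(E0)           # Kepler's equation gives the mean anomaly
frac_beyond = 1.0 - M0 / math.pi     # fraction of the period with r > r0

print(f"fraction of time beyond {r0:.0f} AU: {frac_beyond:.0%}")  # ~69%
```

An object on such an orbit spends roughly two-thirds of each period farther out than its semi-major axis distance, which is why the planet is statistically more likely to be found near aphelion.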
Because the excluded positions mostly corresponded to orbits with smaller semi-major axes, the expected orbit of Planet Nine was pushed slightly farther away. Other researchers have been conducting searches of existing data. David Gerdes, who helped develop the camera used in the Dark Energy Survey, claims that software designed to identify distant Solar System objects such as could find Planet Nine if it was imaged as part of that survey, which covered a quarter of the southern sky. Michael Medford and Danny Goldstein, graduate students at the University of California, Berkeley, are also examining archived data using a technique that combines images taken at different times. Using a supercomputer, they will offset the images to account for the calculated motion of Planet Nine, allowing many exposures of a faint moving object to be combined into a brighter image. A search combining multiple images from WISE and NEOWISE data has also been conducted without detecting Planet Nine. This search covered regions of the sky away from the galactic plane at the "W1" wavelength (the 3.4 μm wavelength used by WISE) and is estimated to be able to detect a 10-Earth-mass object out to 800–900 AU. Malena Rice and Gregory Laughlin applied a targeted shift-stacking search algorithm to analyze data from TESS sectors 18 and 19, looking for Planet Nine and candidate outer Solar System objects. Their search generated no serious evidence for the presence of a distant planet, but it produced 17 new outer Solar System body candidates located at geocentric distances in the range 80–200 AU that need follow-up observations with ground-based telescopes for confirmation. Early results from a survey with the William Herschel Telescope (WHT) aimed at recovering these distant TNO candidates have failed to confirm two of them. By 2022, a comparison between IRAS and AKARI data had yielded no Planet Nine detection. 
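The shift-and-stack idea behind these searches can be sketched with synthetic data: each individual frame hides a source below the noise, but shifting the frames by the predicted motion and co-adding them makes the source stand out, since the noise averages down roughly as the square root of the number of frames. Everything here (frame size, motion rate, flux) is an invented toy setup, not real survey data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, size = 50, 64
vx = 1        # assumed motion of the source: 1 pixel per frame, along x
flux = 1.0    # per-frame source flux, equal to the per-frame noise level

frames = []
for t in range(n_frames):
    img = rng.normal(0.0, 1.0, (size, size))   # pure noise background
    img[32, 10 + vx * t] += flux               # faint moving source
    frames.append(img)

# Shift each frame back by the predicted motion, then co-add: the source
# lines up at one pixel while the noise averages down as ~1/sqrt(N).
stacked = np.mean([np.roll(f, -vx * t, axis=1)
                   for t, f in enumerate(frames)], axis=0)

y, x = np.unravel_index(np.argmax(stacked), stacked.shape)
print(y, x)  # recovered source position
```

Per frame, the source sits at about 1σ and is invisible; after 50 stacked frames the noise drops to roughly 0.14σ, so the co-added source stands out at about 7σ and the brightest stacked pixel is the source's starting position.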
It was noted that far-infrared data over much of the sky are heavily contaminated by emission from galactic nebulae, making detection of Planet Nine's thermal emission problematic close to the galactic plane or bulge. Ongoing searches Because the planet is predicted to be visible in the Northern Hemisphere, the primary search is expected to be carried out using the Subaru Telescope, which has both an aperture large enough to see faint objects and a wide field of view to shorten the search. Two teams of astronomers—Batygin and Brown, as well as Trujillo and Sheppard—are undertaking this search together, and both teams expect the search to take up to five years. Brown and Batygin initially narrowed the search for Planet Nine to roughly 2,000 square degrees of sky near Orion, a swath of space that Batygin thinks could be covered in about 20 nights by the Subaru Telescope. Subsequent refinements by Batygin and Brown have reduced the search space to 600–800 square degrees of sky. In December 2018, they spent four half-nights and three full nights observing with the Subaru Telescope. Because the hypothetical planet is so elusive, a variety of detection methods have been proposed for finding a super-Earth-mass planet, ranging from different telescopes to multiple spacecraft. In late April and early May 2020, Scott Lawrence and Zeeve Rogoszinski proposed the latter method, as multiple spacecraft would have advantages that ground-based telescopes lack. Radiation Although a distant planet such as Planet Nine would reflect little light, its large mass means it would still be radiating the heat from its formation as it cools. At its estimated temperature of , the peak of its emissions would be at infrared wavelengths; its apparent magnitude in the V filter (540 nm wavelength) would be 21.7. 
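The link between the planet's temperature and the wavelength of its emission peak follows Wien's displacement law. The temperature used below (~40 K) is an assumed illustrative value for a cold distant giant, not the figure from the source.

```python
# Wien's displacement law: lambda_peak = b / T
b = 2.898e-3   # Wien's displacement constant, metre-kelvins
T = 40.0       # K -- assumed illustrative temperature, not the source's value
lam_peak_um = b / T * 1e6   # peak wavelength in micrometres

print(f"peak thermal emission near {lam_peak_um:.0f} um")  # ~72 um, far infrared
```

A peak near 70 μm sits in the far infrared, which is why thermal-emission searches focus on infrared, submillimetre, and microwave-background instruments rather than visible light.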
This radiation signature could be detected by Earth-based submillimeter telescopes, such as ALMA, and a search could be conducted by cosmic microwave background experiments operating at mm wavelengths. A search of part of the sky using archived data of the Atacama Cosmology Telescope has not detected Planet Nine. Jim Green of NASA's Science Mission Directorate is optimistic that it could be observed by the James Webb Space Telescope, the successor to the Hubble Space Telescope. Citizen science The Zooniverse "Catalina Outer Solar System Survey" project, which operated from August 2020 to April 2023, used archived data from the Catalina Sky Survey to search for TNOs. Attempts to predict location Measurements of Saturn's orbit by the Cassini probe Precise observations of Saturn's orbit using data from Cassini suggest that Planet Nine could not be in certain sections of its proposed orbit, because its gravity would cause a noticeable effect on Saturn's position. This data neither proves nor disproves that Planet Nine exists. An initial analysis by Fienga, Laskar, Manche, and Gastineau, using Cassini data to search for Saturn's orbital residuals (small differences from its predicted orbit due to the Sun and the known planets), was inconsistent with Planet Nine being located at a true anomaly (the location along its orbit relative to perihelion) of −130° to −110° or −65° to 85°. The analysis, using Batygin and Brown's orbital parameters for Planet Nine, suggests that the lack of perturbations to Saturn's orbit is best explained if Planet Nine is located at a true anomaly of . At this location, Planet Nine would be approximately from the Sun, with right ascension close to 2h and declination close to −20°, in Cetus. In contrast, if the putative planet is near aphelion it would be located near right ascension 3.0h to 5.5h and declination −1° to 6°. 
A later analysis of Cassini data by astrophysicists Matthew Holman and Matthew Payne tightened the constraints on possible locations of Planet Nine. Holman and Payne developed a more efficient model that allowed them to explore a broader range of parameters than the previous analysis. The parameters identified using this technique to analyze the Cassini data were then intersected with Batygin and Brown's dynamical constraints on Planet Nine's orbit. Holman and Payne concluded that Planet Nine is most likely to be located within 20° of RA = 40°, Dec = −15°, in an area of the sky near the constellation Cetus. William Folkner, a planetary scientist at the Jet Propulsion Laboratory (JPL), has stated that the Cassini spacecraft was not experiencing unexplained deviations in its orbit around Saturn. An undiscovered planet would affect the orbit of Saturn, not Cassini. This could produce a signature in the measurements of Cassini, but JPL has seen no unexplained signatures in Cassini data. Analysis of Pluto's orbit An analysis in 2016 of Pluto's orbit by Holman and Payne found perturbations much larger than predicted by Batygin and Brown's proposed orbit for Planet Nine. Holman and Payne suggested three possible explanations: systematic errors in the measurements of Pluto's orbit; an unmodeled mass in the Solar System, such as a small planet in the range of 60– (potentially explaining the Kuiper cliff); or a planet more massive or closer to the Sun than the planet predicted by Batygin and Brown. Orbits of nearly parabolic comets An analysis of the orbits of comets with nearly parabolic orbits identified five new comets with hyperbolic orbits that approach the nominal orbit of Planet Nine described in Batygin and Brown's initial article. If these orbits are hyperbolic due to close encounters with Planet Nine, the analysis estimates that Planet Nine is currently near aphelion with a right ascension of 83–90° and a declination of 8–10°. 
Scott Sheppard, who is skeptical of this analysis, notes that many different forces influence the orbits of comets. Occultations by Jupiter trojans Malena Rice and Gregory Laughlin have proposed that a network of telescopes be built to detect occultations by Jupiter trojans. The timing of these occultations would provide precise astrometry of these objects, enabling their orbits to be monitored for variations due to the tidal pull of Planet Nine. Possible encounter with interstellar meteor In May 2022, it was suggested that the peculiar meteor CNEOS 2014-01-08 may have entered an Earth-crossing orbit after a swing-by of Planet Nine. If that hypothesis is true, back-tracing the trajectory of CNEOS 2014-01-08 suggests that Planet Nine may currently be located in the constellation Aries, at right ascension 53° and declination 9.2°. Attempts to predict the semi-major axis An analysis by Sarah Millholland and Gregory Laughlin identified a pattern of commensurabilities (ratios between the orbital periods of pairs of objects consistent with both being in resonance with another object) among the ETNOs. They identify five objects that would be near resonances with Planet Nine if it had a semi-major axis of 654 AU: Sedna (3:2), 474640 Alicanto (3:1), (4:1), (5:1), and (5:1). They identify this planet as Planet Nine but propose a different orbit with an eccentricity e ≈ 0.5, inclination i ≈ 30°, argument of perihelion ω ≈ 150°, and longitude of ascending node Ω ≈ 50° (the last differs from Brown and Batygin's value of 90°). Carlos and Raúl de la Fuente Marcos also note commensurabilities among the known ETNOs similar to those of the Kuiper belt, where accidental commensurabilities occur due to objects in resonances with Neptune. They find that some of these objects would be in 5:3 and 3:1 resonances with a planet that had a semi-major axis of ≈700 AU. Three objects with smaller semi-major axes near 172 AU (, and (594337) 2016 QU89) have also been proposed to be in resonance with Planet Nine. 
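The resonant locations quoted above follow directly from Kepler's third law (T ∝ a^(3/2)): an object whose orbital period is q/p times the planet's period sits at a_obj = a_planet · (q/p)^(2/3). A quick check using Millholland and Laughlin's proposed 654 AU semi-major axis:

```python
a_planet = 654.0  # AU, Millholland & Laughlin's proposed semi-major axis

# For a p:q commensurability (planet period : object period = p : q),
# Kepler's third law gives a_obj = a_planet * (q/p)**(2/3).
for p, q in [(3, 2), (3, 1), (4, 1), (5, 1)]:
    a_obj = a_planet * (q / p) ** (2.0 / 3.0)
    print(f"{p}:{q} resonance -> a = {a_obj:.0f} AU")
```

The 3:2 location comes out near 499 AU, close to Sedna's semi-major axis of roughly 506 AU, which is the sense in which Sedna is "near" that resonance.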
These objects would be in resonance with, and anti-aligned to, Planet Nine if it had a semi-major axis of 315 AU, below the range proposed by Batygin and Brown. Alternatively, they could be in resonance with Planet Nine but have orbital orientations that circulate instead of being confined by Planet Nine if it had a semi-major axis of 505 AU. A later analysis by Elizabeth Bailey, Michael Brown, and Konstantin Batygin found that if Planet Nine is in an eccentric and inclined orbit, the capture of many of the ETNOs in higher-order resonances and their chaotic transfer between resonances prevent the identification of Planet Nine's semi-major axis using current observations. They also determined that the odds of the first six objects observed being in N/1 or N/2 period ratios with Planet Nine are less than 5% if it has an eccentric orbit. A 2025 study by Amir Siraj, Christopher F. Chyba, and Scott Tremaine, using an expanded sample of 51 ETNOs to inform 300 simulations in the REBOUND program, proposed new orbital characteristics for Planet Nine: a semi-major axis of 290 ± 30 AU, an eccentricity of 0.29 ± 0.13, and an inclination of roughly 6°. The authors noted that this would put Planet Nine in the field of view of the Rubin Observatory's early observations. In late 2020 it was determined that HD 106906 b, a candidate exoplanet, had an eccentric orbit that took it outside the debris disk of its binary host stars. Its orbit appears to be similar to the predictions made for Planet Nine's semi-major axis, and it may serve as a proxy for Planet Nine that helps explain how such planetary orbits evolve, although this exoplanet is well over ten times as massive as Jupiter. Naming Planet Nine does not have an official name and will not receive one unless its existence is confirmed via imaging. Only two planets, Uranus and Neptune, have been discovered in the Solar System during recorded history. 
However, many minor planets, including dwarf planets such as Pluto, asteroids, and comets, have been discovered and named. Consequently, there is a well-established process for naming newly discovered Solar System objects. If Planet Nine is observed, the International Astronomical Union will certify a name, with priority usually given to a name proposed by its discoverers. It is likely to be a name chosen from Roman or Greek mythology. In their original article, Batygin and Brown simply referred to the object as "perturber", and only in later press releases did they use "Planet Nine". They have also used the names "Jehoshaphat" and "George" (a reference to William Herschel's proposed name for Uranus) for Planet Nine. Brown has stated: "We actually call it Phattie when we're just talking to each other." In 2018, Batygin also informally suggested, based on a petition on Change.org, naming the planet after singer David Bowie and naming any potential moons of the planet after characters from Bowie's song catalogue, such as Ziggy Stardust and Major Tom. Jokes have been made connecting "Planet Nine" to Ed Wood's 1959 science-fiction horror film Plan 9 from Outer Space. In connection with the Planet Nine hypothesis, the film title has also found its way into academic discourse. In 2016, an article titled Planet Nine from Outer Space about the hypothesized planet in the outer region of the Solar System was published in Scientific American. Several conference talks since then have used the same word play, as did a lecture given by Mike Brown in 2019. Persephone, the wife of the deity Pluto, had been a popular name in science fiction for a planet beyond Neptune, most notably in the works of Arthur C. Clarke and Larry Niven. However, it is unlikely that Planet Nine or any other conjectured planet beyond Neptune will be given the name Persephone once its existence is confirmed, as it is already the name of the asteroid 399 Persephone. 
In 2017, physicist Lorenzo Iorio informally suggested naming the hypothetical planet "Telisto", from the ancient Greek word "τήλιστος" for "farthest" or "most remote". Another classical mythological name, suggested by Jet Propulsion Laboratory physicist Makan Mohageg, is Chronos, after the Greek personification of time; Mohageg's method of finding Planet Nine would revolve around precision timing. In 2018, planetary scientist Alan Stern objected to the name Planet Nine, saying, "It is an effort to erase Clyde Tombaugh's legacy and it's frankly insulting", suggesting the name Planet X until its discovery. He signed a statement with 34 other scientists saying, "We further believe the use of this term [Planet Nine] should be discontinued in favor of culturally and taxonomically neutral terms for such planets, such as Planet X, Planet Next, or Giant Planet Five." According to Brown, "'Planet X' is not a generic reference to some unknown planet, but a specific prediction of Lowell's which led to the (accidental) discovery of Pluto. Our prediction is not related to this prediction." See also Hypothetical planets of the Solar System Nemesis (hypothetical star) Planets beyond Neptune Tyche (hypothetical planet) Five-planet Nice model Notes References External links The Search for Planet Nine – Blog by Brown and Batygin Hypothetical Planet X – NASA Planetary Science Division 2016 in outer space Astronomical events of the Solar System Hypothetical planets Hypothetical trans-Neptunian objects Solar System
Planet Nine
[ "Astronomy" ]
13,375
[ "Astronomical hypotheses", "Outer space", "Astronomical events", "Astronomical myths", "Hypothetical astronomical objects", "Astronomical events of the Solar System", "Astronomical objects", "Solar System" ]
49,168,357
https://en.wikipedia.org/wiki/Kilim%20motifs
Many motifs are used in traditional kilims, handmade flat-woven rugs, each with many variations. In Turkish Anatolia in particular, village women wove themes significant for their lives into their rugs, whether before marriage or during married life. Some motifs represent desires, such as for happiness and children; others, for protection against threats such as wolves (to the flocks) and scorpions, or against the evil eye. These motifs were often combined when woven into patterns on kilims. With the fading of tribal and village cultures in the 20th century, the meanings of kilim patterns have also faded. In these tribal societies, women wove kilims at different stages of their lives, choosing themes appropriate to their own circumstances. Some of the motifs used are widespread across Anatolia and sometimes across other regions of West Asia, but patterns vary between tribes and villages, and rugs often expressed personal and social meaning. Context A Turkish kilim is a flat-woven rug from Anatolia. Although the name kilim is sometimes used loosely in the West to include all types of rug, such as cicim, palaz, soumak and zili (in fact, any type other than pile carpets), the name kilim properly denotes a specific weaving technique. Cicim, palaz, soumak and zili are made using three groups of threads, namely longitudinal warps, crossing wefts, and wrapping coloured threads. The wrapping threads give these rugs additional thickness and strength. Kilims, in contrast, are woven flat, using only warp and weft threads. Kilim patterns are created by winding the coloured weft threads backwards and forwards around pairs of warp threads, leaving the resulting weave completely flat. Kilims are therefore called flatweave or flat-woven rugs. To create a sharp pattern, weavers usually end each pattern element at a particular thread, winding the coloured weft threads back around the same warps, leaving a narrow gap or slit. 
Such slit-woven kilims are prized by collectors for the crispness of their decoration. The motifs on kilims woven in this way are constrained to be somewhat angular and geometric. In tribal societies, kilims were woven by women at different stages of their lives: before marriage, in readiness for married life; while married, for their children; and finally, for their own funerals, to be given to the mosque. Kilims thus had strong personal and social significance in tribal and village cultures, being made for personal and family use. Feelings of happiness or sorrow, hopes and fears were expressed in the weaving motifs. Many of these represent familiar household and personal objects, such as a hairband, a comb, an earring, a trousseau chest, a jug, or a hook. Meanings The meanings expressed in kilims derive both from the individual motifs used and from their pattern and arrangement in the rug as a whole. A few symbols are widespread across Anatolia as well as other regions including Persia and the Caucasus; others are confined to Anatolia. An especially widely used motif is the elibelinde (hands on hips): an Anatolian symbol of the mother goddess, mother with child in womb, fertility, and abundance. Other motifs express the tribal weavers' desires for protection of their families' flocks from wolves with the wolf's mouth or the wolf's foot motif (), or for safety from the sting of the scorpion (). Several protective motifs, such as those for the dragon (), scorpion, and spider (sometimes called the crab or tortoise by carpet specialists), share the same basic diamond shape with a hooked or stepped boundary, often making them very difficult to distinguish. Several motifs hope for the safety of the weaver's family from the evil eye (, also used as a motif), which could be divided into four with a cross symbol (), or averted with the symbol of a hook (), a human eye (), or an amulet (; often, a triangular package containing a sacred verse). 
The carpet expert Jon Thompson explains that such an amulet woven into a rug is not a theme: to the weaver, it actually is an amulet, conferring protection by its presence. In his words, to people in the village and tribal cultures that wove kilims, "the device in the rug has a materiality, it generates a field of force able to interact with other unseen forces and is not merely an intellectual abstraction." Other motifs symbolised fertility, as with the trousseau chest motif (), or the explicit fertility () motif. The motif for running water () similarly depicts the resource literally. The desire to tie a family or lovers together could be depicted with a fetter motif (). Similarly, a tombstone motif may indicate not simply death, but the desire to die rather than to part from the beloved. Several motifs represented the desire for good luck and happiness, as for instance the bird () and the star or Solomon's seal (). The oriental symbol of Yin/Yang is used for love and unison (). Among the motifs used late in life, the Tree of Life () symbolizes the desire for immortality. Many of the plants used to represent the Tree of Life can also be seen as symbols of fruitfulness, fertility, and abundance. Thus the pomegranate, a tree whose fruits carry many seeds, implies the desire for many children. Symbols are often combined, as when the feminine elibelinde and the masculine ram's horn are each drawn twice, overlapping at the centre, forming a figure (some variants of the or fertility motif) of the sacred union of the principles of the sexes. Motifs All these motifs can vary considerably in appearance according to the weaver. Colours, sizes and shapes can all be chosen according to taste and the tradition in a given village or tribe; further, motifs are often combined, as illustrated in the photographs above. To give some idea of this variability, a few alternative forms are shown in the table. 
See also Islamic geometric patterns References External links Border motifs in oriental carpets Textiles in folklore Culture of Turkey Visual motifs Textile patterns Turkish rugs and carpets
Kilim motifs
[ "Mathematics" ]
1,253
[ "Symbols", "Visual motifs" ]
49,168,382
https://en.wikipedia.org/wiki/EsyN
esyN (Easy Networks) is a bioinformatics web tool for visualizing, building and analysing molecular interaction networks. esyN is based on cytoscape.js, and its aim is to make it easy for everybody to perform network analysis. esyN is connected to a number of databases, specifically PomBase, FlyBase, most InterMine data warehouses, DrugBank, and BioGRID, from which it is possible to download the protein–protein or genetic interactions for any protein or gene in a number of different organisms. Networks published in esyN can be embedded in other websites using an <iframe>. Usage As of January 2016, esyN was being viewed by 1,500 unique users a day (about 16,000 a month), according to Google Analytics. The embedding capabilities of esyN are used by a number of databases to display their interaction data: FlyBase, FlyMine, HumanMine, and PomBase. See also Computational genomics Metabolic network modelling Protein–protein interaction prediction References External links Bioinformatics software Metabolomic databases Proteomics Science and technology in Cambridgeshire South Cambridgeshire District
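An embed along these lines would look like the following; the URL path, network identifier, and dimensions are illustrative placeholders, not esyN's documented URL scheme.

```html
<!-- Hypothetical embed snippet: the src path and id below are
     placeholders, not a real esyN network address. -->
<iframe src="https://www.esyn.org/app/viewer?id=EXAMPLE_NETWORK_ID"
        width="600" height="400" style="border:0"></iframe>
```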
EsyN
[ "Biology" ]
228
[ "Bioinformatics", "Bioinformatics software" ]
49,172,235
https://en.wikipedia.org/wiki/Camp%20Thomas%20A.%20Scott
Camp Thomas A. Scott, located in Fort Wayne, Indiana, was a Railway Operating Battalion training center for the Pennsylvania Railroad from 1942 to 1944 and a prisoner of war camp during World War II. It was named for Thomas A. Scott, who served as the fourth president of the Pennsylvania Railroad from 1874 to 1880. As the United States Assistant Secretary of War in 1861, Scott was instrumental in using railroads for military purposes during the American Civil War. Pennsylvania Railroad Training Center Camp Scott was built in August 1942 as a training camp for U.S. Army Railway Operating Battalions. The location was a natural choice: Fort Wayne was a major hub for the Pennsylvania Railroad, and Camp Scott was constructed adjacent to Pennsylvania Railroad lines. The 717th, the 730th, and the 750th Railway Operating Battalions were all trained on Pennsylvania Railroad lines in Fort Wayne. The last battalion was deployed from Camp Scott in mid-1944. Prisoner of war camp Camp Scott was a branch camp of Camp Perry in Ohio. Camp Scott housed approximately 600 prisoners of war. Most of these prisoners were German and had served in the Afrika Korps, although some were Italians captured at the Battle of Anzio and the Battle of Monte Cassino in Italy. Like the rest of the United States, Fort Wayne suffered labor shortages due to wartime enlistment, and prisoners from Camp Scott were put to work in Fort Wayne and surrounding areas of Allen County, Indiana. Prisoners weeded and harvested potatoes for local farmers, cleared snow from Fort Wayne streets, and set pins at a local bowling alley. Following VE Day, the prisoners were gradually repatriated, and Camp Scott officially closed on November 16, 1945. Uses after 1945 Camp Scott sat dormant until January 1946, when the Fort Wayne Housing Authority began the process of converting camp buildings into much-needed housing for returning American veterans and their families. 
In the years following, more housing was built in Fort Wayne, and the families living at Camp Scott gradually relocated to other homes. Camp Scott served as a temporary housing facility until August 1949. Over the next decades, the buildings were torn down, with the last building being demolished in 1977. The City of Fort Wayne converted some of the land on which Camp Scott stood into a constructed wetland. It also serves as a facility for storing and treating stormwater run-off. References Eastes, Erick E. "'A By-Product of War': A History of Camp Thomas A. Scott 1942-1949" Old Fort News 49.2 (1986). Hawfield, Michael. "World War II Camp Had Impact on City" The News-Sentinel 15 December 1990. Camp Thomas A. Scott - Fort Wayne, Indiana - World War II Prisoner of War Camps on Waymarking.com http://explorepahistory.com/story.php?storyId=1-9-10&chapter=1 World War II prisoner-of-war camps in the United States Constructed wetlands 1942 establishments in Indiana 1945 disestablishments in Indiana Buildings and structures in Fort Wayne, Indiana
Camp Thomas A. Scott
[ "Chemistry", "Engineering", "Biology" ]
615
[ "Bioremediation", "Constructed wetlands", "Environmental engineering" ]
49,172,614
https://en.wikipedia.org/wiki/Tac-Promoter
The Tac-Promoter (abbreviated as Ptac), or tac vector is a synthetically produced DNA promoter, produced from the combination of promoters from the trp and lac operons. It is commonly used for protein production in Escherichia coli. Two hybrid promoters functional in Escherichia coli were constructed. These hybrid promoters, tacI and tacII, were derived from sequences of the trp and the lac UV5 promoters. In the first hybrid promoter (tacI), the DNA upstream of position –20 with respect to the transcriptional start site was derived from the trp promoter. The DNA downstream of position –20 was derived from the lac UV5 promoter. In the second hybrid promoter (tacII), the DNA upstream of position –11 at the Hpa I site within the Pribnow box was derived from the trp promoter. The DNA downstream of position –11 is a 46-base-pair synthetic DNA fragment that specifies part of the hybrid Pribnow box and the entire lac operator. It also specifies a Shine–Dalgarno sequence flanked by two unique restriction sites (portable Shine–Dalgarno sequence). The tacI and the tacII promoters respectively direct transcription approximately 11 and 7 times more efficiently than the derepressed parental lac UV5 promoter and approximately 3 and 2 times more efficiently than the trp promoter in the absence of the trp repressor. Both hybrid promoters can be repressed by the lac repressor and both can be derepressed with isopropyl-beta-D-thiogalactoside. Consequently, these hybrid promoters are useful for the controlled expression of foreign genes at high levels in E. coli. In contrast to the trp and the lac UV5 promoters, the tacI promoter has not only a consensus –35 sequence but also a consensus Pribnow box sequence. This may explain the higher efficiency of this hybrid promoter with respect to either one of the parental promoters. About The tac promoter is used to control and increase the expression levels of a target gene and is used in the over-expression of recombinant proteins. 
The tac promoter is named after the two promoters which comprise its sequence: the 'trp' and the 'lac' promoters. Bacterial promoters consist of two parts, the '–35' region and the '–10' region (the Pribnow box). These two regions bind the sigma factor of RNA polymerase, which then initiates transcription of the downstream gene. The tac promoter consists of the '–35' region of the trp promoter and the '–10' region of the lac promoter (and differs from the related trc promoter by 1 bp). The tac promoter is, therefore, inducible by IPTG (isopropyl β-D-1-thiogalactopyranoside), whilst also allowing higher maximum gene expression than either the lac or trp promoters. This makes it suitable for high-efficiency production of a recombinant protein. The strong repression of expression in the 'off' state is important, since foreign proteins can be toxic to the host cell. Applications The tac promoter finds various applications. The tac promoter/operator (dubbed PTAC) is one of the most widely used expression systems. PTAC is a strong hybrid promoter composed of the –35 region of the trp promoter and the –10 region of the lacUV5 promoter/operator. Expression from PTAC is repressed by the LacI protein. The lacIq allele is a promoter mutation that increases the intracellular concentration of the LacI repressor, resulting in strong repression of PTAC. The addition of the inducer IPTG inactivates the LacI repressor. Thus, the amount of expression from PTAC is proportional to the concentration of IPTG added: low concentrations of IPTG result in relatively low expression from PTAC, and high concentrations of IPTG result in high expression from PTAC. By varying the IPTG concentration, the amount of product from a gene cloned downstream of PTAC can be varied over several orders of magnitude. For example, the PTAC system is used for fusion protein expression in the pMAL-c2X expression vector.
References See also Human artificial chromosome Yeast artificial chromosome Bacterial artificial chromosome Biotechnology
Tac-Promoter
[ "Biology" ]
892
[ "nan", "Biotechnology" ]
49,174,103
https://en.wikipedia.org/wiki/Sarcodon%20rimosus
Sarcodon rimosus, commonly known as the cracked hydnum, is a species of tooth fungus in the family Bankeraceae. Found in the Pacific Northwest region of North America, it was described as new to science in 1964 by mycologist Kenneth A. Harrison, who initially called it Hydnum rimosum. He transferred it to the genus Sarcodon in 1984. Fruit bodies of S. rimosus have convex to somewhat depressed caps that are in diameter. The surface becomes scaly with age, often developing conspicuous cracks and fissures. It is brown with violet tints. The flesh lacks any distinctive taste or odor. Underneath the cap cuticle, the flesh turns a bluish-green color when tested with a solution of potassium hydroxide. The brownish-pink spines on the cap underside are typically 2.5–7 mm long, extending decurrently onto the stipe. Spores are roughly spherical with fine warts on the surface, and measure 5–6.5 by 4.5–5 μm. The hyphae do not have clamp connections. Sarcodon rimosus is common in the states of Idaho, Oregon, and Washington, where it fruits in groups under pines or in coniferous forest. Fruiting occurs in late summer and autumn. References External links Fungi described in 1964 Fungi of the United States rimosus Fungi without expected TNC conservation status Fungus species
Sarcodon rimosus
[ "Biology" ]
293
[ "Fungi", "Fungus species" ]
49,174,237
https://en.wikipedia.org/wiki/Sarcodon%20lanuginosus
Sarcodon lanuginosus is a species of tooth fungus in the family Bankeraceae. It was described as new to science in 1961 by mycologist Kenneth A. Harrison, who initially called it Hydnum lanuginosum. He transferred it to the genus Sarcodon in 1984. It is found in Nova Scotia, Canada, where it fruits on the ground singly or in groups under spruce and fir. The type collection was made in Cape Split, Kings County. The fungus has fruit bodies with irregularly shaped, shaggy caps measuring in diameter, supported by a smooth, greyish stipe. Conditions of high humidity can result in reddish or pinkish drops appearing on the stipe. The spores of S. lanuginosus are roughly spherical, covered in small warts (tubercules), and measure 4.5–6 by 4.5–5 μm. References External links Fungi described in 1961 Fungi of Canada lanuginosus Fungi without expected TNC conservation status Fungus species
Sarcodon lanuginosus
[ "Biology" ]
206
[ "Fungi", "Fungus species" ]
49,174,301
https://en.wikipedia.org/wiki/Sarcodon%20cyanellus
Sarcodon cyanellus is a species of tooth fungus in the family Bankeraceae. Found in the Pacific Northwest region of North America, where it associates with Pinaceae, it was described as new to science in 1964 by mycologist Kenneth A. Harrison, who initially called it Hydnum cyanellum. He transferred it to the genus Sarcodon in 1984. It has a vinaceous-violet to bluish-black cap. References External links Herbarium of the University of Michigan Photo of holotype collection Fungi described in 1984 Fungi of North America cyanellus Fungus species
Sarcodon cyanellus
[ "Biology" ]
121
[ "Fungi", "Fungus species" ]
49,174,350
https://en.wikipedia.org/wiki/Sarcodon%20calvatus
Sarcodon calvatus, commonly known as the robust hedgehog, is a species of tooth fungus in the family Bankeraceae. It was described as new to science in 1964 by mycologist Kenneth A. Harrison, who initially called it Hydnum calvatum. He transferred it to the genus Sarcodon in 1984. It is found in North America. References External links Fungi described in 1964 Fungi of North America calvatus Fungus species
Sarcodon calvatus
[ "Biology" ]
93
[ "Fungi", "Fungus species" ]
49,175,209
https://en.wikipedia.org/wiki/Hydnellum%20martioflavum
Hydnellum martioflavum is a species of tooth fungus in the family Bankeraceae, found in Europe and North America. It was first described by Wally Snell, Kenneth A. Harrison, and Henry Jackson in 1962 as Hydnum martioflavum. Rudolph Arnold Maas Geesteranus transferred it to the genus Sarcodon in 1964. He considered his Sarcodon armeniacus, described a year earlier, to be a synonym. The fungus was originally described from collections made in Quebec and Nova Scotia, Canada, growing under spruce and balsam fir. It is considered vulnerable in Switzerland. References External links Fungi described in 1962 Fungi of Europe Fungi of North America martioflavum Fungus species
Hydnellum martioflavum
[ "Biology" ]
148
[ "Fungi", "Fungus species" ]
49,176,464
https://en.wikipedia.org/wiki/Glass%20mosaic
In Myanmar culture, glass mosaic () is a traditional form of glasswork in which pieces of glass are used to embellish decorative art, structures, and furniture. Glass mosaic is typically divided into two subcategories, hman gyan si () and hman nu si (). The former is typically used to decorate the walls and ceilings of pagodas, while the latter is used to embellish furniture and accessories. The art form originated in the 1500s during the Nyaungyan era. Glass mosaic is often studded with gems and semi-precious stones. History The art form dates to the 1500s, during the Nyaungyan era. The National Museum of Myanmar exhibits hundreds of glass mosaic pieces, such as dolls, animal figures, and chairs. Notable artists Isaiah Zagar Boris Anrep Miksa Róth Materials Glass Gems Glue Grout Sponge See also Mosaic Art of Myanmar Tiffany Glass and Decorating Company Āina-kāri, a similar element in Persian architecture References External links Glass Mosaics of Burma, 1901 Burmese art Mosaic Architectural elements Glass art
Glass mosaic
[ "Technology", "Engineering" ]
311
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
68,604,778
https://en.wikipedia.org/wiki/DashO%20%28software%29
DashO is a code obfuscator, compactor, optimizer, watermarker, and encryptor for Java, Kotlin and Android applications. It aims to achieve little or no performance loss even as the code complexity increases. DashO can also statically analyze the code to find unused types, methods, and fields, and delete them, thereby making the application smaller. DashO can also remove method calls that are not needed in published applications, such as debugging and logging calls. See also Dotfuscator — a code obfuscator for .NET. ProGuard (software) — a code obfuscator for Java. References Software obfuscation Java development tools Android (operating system) development software
DashO (software)
[ "Technology", "Engineering" ]
155
[ "Cybersecurity engineering", "Software obfuscation" ]
68,604,844
https://en.wikipedia.org/wiki/Mars%20and%20the%20Mind%20of%20Man
Mars and the Mind of Man is a non-fiction book chronicling a public symposium held at the California Institute of Technology on November 12, 1971. The panel consisted of five luminaries of science, literature, and journalism: Ray Bradbury, Arthur C. Clarke, Bruce C. Murray, Carl Sagan, and Walter Sullivan; these five are the authors of the book. The symposium occurred shortly before the Mariner 9 space probe entered orbit around Mars. The book was published in 1973 by Harper and Row of New York. About the book The book is a record of the November 1971 discussion among the five distinguished panel members mentioned above. The symposium marked a remarkable milestone: Mariner 9 was to be the first Earth spacecraft inserted into orbit around another planet. As noted, "...Caltech Planetary Science professor Bruce Murray summoned [the] formidable panel of thinkers to discuss the implications of this historic event." The discussion was moderated by Walter Sullivan, science editor of The New York Times. The panelists offered varied perspectives on the Mariner 9 mission; the red planet itself; the interrelationship of humans and the cosmos; the priorities of space exploration; and the future of civilization. Also included in the book are the first photos sent to Earth by the Mariner 9 space probe and "...a selection of 'afterthoughts' by the panelists, looking back on the historic achievement." Bradbury's poem In several minutes of archived footage released by NASA, Bradbury is shown engaging in witty banter with other panel members at the November 1971 panel discussion. The footage was released in 2012 to honor a newly named site on the red planet, "Bradbury Landing". The footage also shows Bradbury reading his poem "If Only We Had Taller Been" (the poem begins at 2:20). At the time, this was "...one of several unpublished poems he shared at the event."
Before reading the poem, Bradbury is recorded saying, "I don't know what in the hell I'm doing here. I'm the least scientific of all the people up on the platform here today... I was hoping, that during the last few days, as we got closer to Mars and the dust cleared, that we'd see a lot of Martians standing there with huge signs saying, 'Bradbury was right.'" References External links Exploration of the Planets. A short 1971 NASA film. US National Archives. YouTube. American non-fiction books 1973 non-fiction books Astronomy books Popular physics books Popular science books California Institute of Technology NASA space probes Harper & Row books Works by Carl Sagan
Mars and the Mind of Man
[ "Astronomy" ]
566
[ "Astronomy books", "Works about astronomy" ]
68,605,357
https://en.wikipedia.org/wiki/Anogon
In Greek mythology, Anogon (Ancient Greek: Ἀνώγων means 'command, exhortation') was the son of Castor, one of the Dioscuri, and Hilaeira, daughter of Leucippus of Messenia. He was also called Anaxias. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Sextus Propertius, Elegies from Charm. Vincent Katz. trans. Los Angeles. Sun & Moon Press. 1995. Online version at the Perseus Digital Library. Latin text available at the same website. Mythological Messenians Castor and Pollux
Anogon
[ "Astronomy" ]
309
[ "Castor and Pollux", "Astronomical myths" ]
68,605,473
https://en.wikipedia.org/wiki/Mnesileus
In Greek mythology, Mnesileus (Ancient Greek: Μνησίλεως Mnesileos) or Mnasinous (Μνασίνους) was the son of Polydeuces, one of the Dioscuri, and Phoebe, daughter of Leucippus of Messenia. The temple of the Dioscuri at Argos contained also the statues of these two sons of the Dioscuri, Anaxias and Mnasinous, and on the throne of Amyclae both were represented riding on horseback. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Sextus Propertius, Elegies from Charm. Vincent Katz. trans. Los Angeles. Sun & Moon Press. 1995. Online version at the Perseus Digital Library. Latin text available at the same website. Mythological Messenians Castor and Pollux
Mnesileus
[ "Astronomy" ]
364
[ "Castor and Pollux", "Astronomical myths" ]
68,607,146
https://en.wikipedia.org/wiki/Miriani%20Griselda%20Pastoriza
Miriani Griselda Pastoriza (born 1939) is an Argentine-born Brazilian astronomer, a tenured professor in the Department of Astronomy of the Institute of Physics at the Federal University of Rio Grande do Sul, and a member of the Brazilian Academy of Sciences. Biography Miriani Griselda Pastoriza was born in 1939 in Villa San Martín Loreto, Santiago del Estero Province, Argentina. One of her main scientific contributions was the discovery and characterization, together with the Argentine astronomer José Luis Sérsic, of the so-called Sérsic–Pastoriza galaxies (also known as galaxies with peculiar nuclei). In 1970, she determined that the spectrum of the galaxy NGC 1566 is variable, a surprising discovery that changed the direction of the field. Continuing this line of research, Pastoriza, in collaboration with international researchers, carried out work on light variability in other galaxies, which allowed mapping of the structure and size of the central regions of galaxies where supermassive black holes are hosted. Pastoriza was the scientific advisor for many Brazilian astronomers who are now leading international scientists, including Thaisa Storchi Bergmann and Eduardo Luiz Damiani Bica. Pastoriza is also an active advocate for gender equality in science. She collaborates with the Latin American Association of Women Astronomers, and participates in a program in Brazil called "Girls in Science". Pastoriza is the representative of Brazil on the International Scientific Committee of the Gemini telescopes. She also represents Brazil on the SOAR Telescope International Board of Directors and belongs to the Board of Directors of the National Observatory of Rio de Janeiro. She was appointed a member of the Board of Directors of the National Astrophysics Laboratory of São Paulo. Since 2014, she has been an Emeritus Professor at the Federal University of Rio Grande do Sul. Pastoriza is a naturalized Brazilian.
Awards and honours In 1995 she was included in a list of the 170 most productive researchers in Brazil across all areas of science, published by Folha de S.Paulo, one of the newspapers with the largest circulation in Brazil. She has reached the highest category for a researcher in Brazil, classified as 1A within the CNPq. In 2007, she was named a member of the Brazilian Academy of Sciences. In 2008, she was awarded the Medal of Commendation of the National Order of Scientific Merit of Brazil, one of the highest honours to which a scientist in that country can aspire, for her relevant contributions to science and technology. On October 24, 2018, the National University of Córdoba awarded her the title of Doctor Honoris Causa for her contributions to the field of astronomy. Legacy The "Miriani Pastoriza Award", recognizing outstanding contributions in astronomical research, was named in her honor by the board of directors of the Brazilian Astronomical Society. References 1939 births Living people People from Santiago del Estero Province Argentine astronomers Brazilian astronomers Argentine academics Academic staff of the Federal University of Rio Grande do Sul Argentine emigrants to Brazil Women astronomers
Miriani Griselda Pastoriza
[ "Astronomy" ]
666
[ "Women astronomers", "Astronomers" ]
68,607,634
https://en.wikipedia.org/wiki/Neeff%27s%20wheel
Neeff's wheel, also known as the Blitzrad (German: "lightning wheel" or "spark wheel"), is a historical electrical apparatus. It is a kind of contact breaker, designed to interrupt an electrical circuit at periodic intervals, producing visible sparks. It was first presented in the 1830s by the German scientist Christian Ernst Neeff (1782–1849). The arrangement consists of a toothed wheel against which a conductive wire is pressed by a spring, somewhat like that of a mousetrap. Electrical current flows through the wheel into the wire. When the gear wheel is turned, each tooth causes the wire to ride up and then briefly drop down, losing contact with the wheel and generating a spark. The gear wheel can be driven by a hand crank. Instead of air, the gaps between the teeth of the gear wheel may be filled with a solid electrical insulator such as ebony wood; Neeff credited this innovation to his colleague Johann Philipp Wagner. Neeff's wheel was a forerunner of the modern contact breaker. References Automotive electrics Historical scientific instruments
Neeff's wheel
[ "Engineering" ]
217
[ "Electrical engineering", "Automotive electrics" ]