| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
42,054,240 | https://en.wikipedia.org/wiki/Metreleptin | Metreleptin, sold under the brand name Myalept among others, is a synthetic analog of the hormone leptin used to treat various forms of dyslipidemia. It has been approved in Japan for metabolic disorders including lipodystrophy and in the United States as replacement therapy to treat the complications of leptin deficiency, in addition to diet, in patients with congenital generalized or acquired generalized lipodystrophy.
The most common side effects include hypoglycaemia (low blood glucose) and weight loss.
It was approved for medical use in Canada in January 2024.
Medical uses
In the European Union, metreleptin is indicated in addition to diet to treat lipodystrophy, where people have a loss of fatty tissue under the skin and a build-up of fat elsewhere in the body such as in the liver and muscles. It is used in adults and children above the age of two years with generalised lipodystrophy (Berardinelli-Seip syndrome and Lawrence syndrome); and in adults and children above the age of twelve years with partial lipodystrophy (including Barraquer-Simons syndrome), when standard treatments have failed.
In the United States, it is indicated as an adjunct to diet as replacement therapy to treat the complications of leptin deficiency in people with congenital or acquired generalized lipodystrophy.
Research
Metreleptin is being investigated for the treatment of diabetes and/or hypertriglyceridemia in patients with rare forms of lipodystrophy, syndromes characterized by abnormalities in adipose tissue distribution and severe metabolic abnormalities. The FDA approved metreleptin injection for treating complications of leptin deficiency in February 2014.
In a three-year study of metreleptin in patients with lipodystrophy organized by the National Institute of Diabetes and Digestive and Kidney Diseases at the National Institutes of Health, metreleptin treatment was associated with a significant decrease in blood glucose (A1c decreased from 9.4% at baseline to 7.0% at study end) and triglyceride concentration (from 500 mg/dl at baseline to 200 mg/dl at study end). Metreleptin is effective in most patients with generalized lipodystrophy, where circulating leptin levels are extremely low. Analogous to insulin replacement for patients with type 1 diabetes, metreleptin restores the function of a deficient hormone. However, in patients with partial lipodystrophy, where there is only a relative leptin deficiency, the response to metreleptin is not universal; whether this variability is due to anti-leptin antibodies remains unclear.
Metreleptin is undergoing research for its potential benefit in the treatment of anorexia nervosa. It is hypothesized that the gradual loss of body fat mass, and more specifically the ensuing low leptin levels, escalate the preexisting drive for thinness into an obsessive-compulsive-like and addictive-like state. Short-term metreleptin treatment of patients with anorexia nervosa was shown to have a rapid onset of beneficial cognitive, emotional, and behavioral effects. Among other things, depression, drive for activity, repetitive thoughts of food, inner restlessness, and weight phobia decreased rapidly. Whether metreleptin (or another leptin analogue) is a suitable treatment for anorexia nervosa remains to be seen. Potential side effects are weight loss and the development of anti-metreleptin antibodies.
In a clinical study, metreleptin treatment improved non-alcoholic steatohepatitis (fatty liver disease) both in patients with partial lipodystrophy and in those with relative leptin deficiency. Both steatosis and hepatic injury scores decreased. Metreleptin reduces body weight in overweight people with low leptin levels.
Although leptin is not very effective as a weight-loss drug, leptin levels are lowered in people who have lost weight, and it is hypothesized that supplemental leptin could help them maintain that weight loss. However, there is no regulatory pathway for drug approval for this indication.
References
Leptin receptor agonists
Systemic hormonal preparations
Drugs developed by AstraZeneca
Orphan drugs
Pharmacology | Metreleptin | Chemistry | 908 |
41,062 | https://en.wikipedia.org/wiki/Double-ended%20synchronization | For two connected exchanges in a communications network, a double-ended synchronization (also called double-ended control) is a synchronization control scheme in which the phase error signals used to control the clock at one telephone exchange are derived by comparison with the phase of the incoming digital signal and the phase of the internal clocks at both exchanges.
References
Telecommunications techniques
Synchronization | Double-ended synchronization | Engineering | 80 |
26,633,993 | https://en.wikipedia.org/wiki/One-male%20group | One-male groups are a type of social organization where one male interacts with a group of females and their immature offspring. Offspring of both sexes are evicted from the group upon reaching puberty. It can be seen in many species of primates, including the gelada baboon, the patas monkey, savanna baboon, sun-tailed monkey, golden snub-nosed monkey, and the hamadryas baboon. There are costs and benefits for individuals living in one-male groups, and individuals interact both within their own one-male group and with members of other one-male groups.
Origin
A study of savanna baboons (Papio ursinus) indicates that the one-male groups in this species are formed by fissioning. For example, a 100-month-old male entered a multi-male/multi-female (MM) group and then formed a one-male group with eight of the adult females in the MM group. Juveniles of the species, suspected to be young of the eight adult females, also joined the new one-male group. However, when a new male successfully enters a one-male group, the social hierarchy changes depending on the previously determined ranking of the newly entered male. The previous resident male of the one-male group may be out-ranked and therefore placed lower in the hierarchy of males.
Costs
Infanticide
One of the costs of living in one-male groups is the killing of unweaned young by conspecific adult males. This is known as infanticide, and it mostly occurs when adult males or coalitions of males take over the group and kill the resident male. This increases the reproductive success of the intervening males, because the females are then more likely to mate with them in order to produce new offspring. While infanticide is an obvious cost to females, it is beneficial to the infanticidal males. Infanticide in one-male groups has been studied in the Virungas population of mountain gorillas.
Inbreeding
Another cost of living in one-male social groups is a high occurrence of inbreeding, meaning that closely related individuals mate and produce offspring. This decreases genetic diversity in subsequent generations of the species. For example, inbreeding has been studied in one-male groups of sun-tailed monkeys (Cercopithecus solatus). In this study, the time between two births for females increased when an inbred offspring was born, suggesting that there could be increased maternal costs in giving birth to and rearing an inbred offspring compared to a non-inbred offspring. Inbreeding depression resulted from the decreased genetic diversity within this population, meaning that the population as a whole experienced a decrease in fitness (i.e. reproductive success). Unlike infanticide, the high occurrence of inbreeding in one-male groups is disadvantageous to both the females and the males in the group.
Benefits
Feeding advantages
Experiments involving the hamadryas baboon species (Papio hamadryas) provide evidence of feeding advantages for male and female members of one-male groups. However, the findings of feeding advantages were only evident when these one-male groups formed clans. It has been shown that males from single one-male groups did not approach males that were part of clans to compete for food sources. Additionally, it was found that males from smaller clans did not approach males from bigger clans (i.e. those with more one-male groups) to compete for food. Ultimately, these feeding advantages of decreased competition were seen between one-male groups, not for males within the same groups or clans. In addition, males and females in a clan have feeding advantages compared to males and females in single one-male groups, because it has been shown that clan members gain access to clumped food sources earlier than those in single one-male groups and spend more time with clumped food sources than the single groups.
Within-group interactions
Female-female interactions
Studies of social interactions among golden snub‐nosed monkeys (Rhinopithecus roxellana) reveal that adult females tend to interact with each other, but they do not form strong social bonds with other females in the same one-male group.
Female-male interactions
It has been shown that adult female golden snub-nosed monkeys do not form strong social relationships with the resident male in the one-male group. However, the adult females tended to interact more with other adult females instead of the resident male when they were looking for social interaction.
Patterns of social relationships
While researchers have found that individuals in one-male groups of hamadryas baboons exhibit a pattern of social relationships called a star-shaped relationship, gelada baboon (Theropithecus gelada) individuals in one-male groups exhibit a net-shaped relationship pattern. Individuals in the snub-nosed monkey species exhibit a pattern of social relationships different from both baboon species.
Between-group interactions
Allomaternal nursing
In a study of social relationships among a clan (i.e. multiple one-male groups) of Yunnan snub-nosed monkeys (Rhinopithecus bieti), it was determined that the adult females of one-male groups sometimes care for the young of other one-male groups. For example, when a mother and her young offspring were accidentally separated, a mother belonging to a different one-male group cared for the young. The separated young was nursed by the adoptive mother (who also nursed her own offspring) and tolerated by the resident male of the one-male group that the offspring was now temporarily a part of.
Affiliative interactions
Affiliative interactions between individuals of one-male groups include sitting near, grooming in front of, and handling the infants of other one-male groups. The most prevalent type of affiliative interaction seen in a study involving Sichuan snub-nosed monkeys (Rhinopithecus roxellana) is infant handling. This infant handling can form gatherings of multiple one-male units that forage together. This type of social structure is called a band.
See also
Multi-male group
References
Ethology | One-male group | Biology | 1,301 |
3,677,332 | https://en.wikipedia.org/wiki/Paraxanthine | Paraxanthine, also known as 1,7-dimethylxanthine, is an isomer of theophylline and theobromine, two well-known stimulants found in coffee, tea, and chocolate, where these xanthines occur mainly in the form of caffeine. It is a member of the xanthine family of alkaloids, which includes theophylline, theobromine, and caffeine.
Production and metabolism
Paraxanthine is not known to be produced by plants but is observed in nature as a metabolite of caffeine in animals and some species of bacteria.
Paraxanthine is the primary metabolite of caffeine in humans and other animals, such as mice. Shortly after ingestion, roughly 84% of caffeine is metabolized into paraxanthine by hepatic cytochrome P450, which removes a methyl group from the N3 position of caffeine. After formation, paraxanthine can be broken down to 7-methylxanthine by demethylation of the N1 position; 7-methylxanthine is subsequently demethylated into xanthine. Alternatively, paraxanthine is oxidized by CYP2A6 and CYP1A2 into 1,7-dimethyluric acid. In another pathway, paraxanthine is broken down into 5-acetylamino-6-formylamino-3-methyluracil by N-acetyl-transferase 2, which is then broken down into 5-acetylamino-6-amino-3-methyluracil by non-enzymatic decomposition. In yet another pathway, paraxanthine is metabolized by CYP1A2, forming 1-methylxanthine, which can then be metabolized by xanthine oxidase to form 1-methyluric acid.
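The branching just described can be summarized as a small directed graph. The sketch below is only an illustrative data structure transcribing the pathway as stated in this section; the enzyme labels follow the text, and metabolites with no outgoing edges are terminal only within this summary, not biologically.

```python
# Caffeine/paraxanthine pathway as described above: each metabolite
# maps to a list of (enzyme or mechanism, product) edges.
pathway = {
    "caffeine": [("hepatic cytochrome P450, N3-demethylation", "paraxanthine")],
    "paraxanthine": [
        ("N1-demethylation", "7-methylxanthine"),
        ("CYP2A6/CYP1A2 oxidation", "1,7-dimethyluric acid"),
        ("N-acetyl-transferase 2", "5-acetylamino-6-formylamino-3-methyluracil"),
        ("CYP1A2", "1-methylxanthine"),
    ],
    "7-methylxanthine": [("demethylation", "xanthine")],
    "5-acetylamino-6-formylamino-3-methyluracil": [
        ("non-enzymatic decomposition", "5-acetylamino-6-amino-3-methyluracil"),
    ],
    "1-methylxanthine": [("xanthine oxidase", "1-methyluric acid")],
}

def end_products(start, graph):
    """Depth-first walk listing metabolites with no further edges."""
    stack, ends = [start], set()
    while stack:
        node = stack.pop()
        edges = graph.get(node, [])
        if not edges:
            ends.add(node)
        stack.extend(product for _, product in edges)
    return sorted(ends)

print(end_products("caffeine", pathway))
```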
Certain proposed synthetic pathways of caffeine make use of paraxanthine as a bypass intermediate. However, its absence in plant alkaloid assays implies that it is infrequently, if ever, directly produced by plants.
Pharmacology and physiological effects
Like caffeine, paraxanthine is a psychoactive central nervous system (CNS) stimulant.
Pharmacodynamics
Studies indicate that, similar to caffeine, simultaneous antagonism of adenosine receptors is responsible for paraxanthine's stimulatory effects. Paraxanthine's adenosine receptor binding affinity (21 μM for A1, 32 μM for A2A, 4.5 μM for A2B, and >100 μM for A3) is similar to or slightly stronger than that of caffeine, but weaker than that of theophylline.
Paraxanthine is a selective inhibitor of cGMP-preferring phosphodiesterase (PDE9) activity and is hypothesized to increase glutamate and dopamine release by potentiating nitric oxide signaling. Activation of a nitric oxide-cGMP pathway may be responsible for some of the behavioral effects of paraxanthine that differ from those associated with caffeine.
Paraxanthine is a competitive nonselective phosphodiesterase inhibitor which raises intracellular cAMP, activates PKA, inhibits TNF-alpha and leukotriene synthesis, and reduces inflammation and innate immunity.
Unlike caffeine, paraxanthine acts as an enzymatic effector of Na+/K+ ATPase. As a result, it is responsible for increased transport of potassium ions into skeletal muscle tissue. Similarly, the compound also stimulates increases in calcium ion concentration in muscle.
Pharmacokinetics
The pharmacokinetic parameters for paraxanthine are similar to those for caffeine, but differ significantly from those for theobromine and theophylline, the other major caffeine-derived methylxanthine metabolites in humans.
Uses
Paraxanthine is a phosphodiesterase type 9 (PDE9) inhibitor and is sold as a research molecule for this purpose.
Toxicity
Paraxanthine is believed to exhibit a lower toxicity than caffeine and the caffeine metabolite, theophylline. In a mouse model, intraperitoneal paraxanthine doses of 175 mg/kg/day did not result in animal death or overt signs of stress; by comparison, the intraperitoneal LD50 for caffeine in mice is reported at 168 mg/kg. In in vitro cell culture studies, paraxanthine is reported to be less harmful than caffeine and the least harmful of the caffeine-derived metabolites in terms of hepatocyte toxicity.
As with other methylxanthines, paraxanthine is reported to be teratogenic when administered in high doses; but it is a less potent teratogen as compared to caffeine and theophylline. A mouse study on the potentiating effects of methylxanthines coadministered with mitomycin C on teratogenicity reported the incidence of birth defects for caffeine, theophylline, and paraxanthine to be 94.2%, 80.0%, and 16.9%, respectively; additionally, average birth weight decreased significantly in mice exposed to caffeine or theophylline when coadministered with mitomycin C, but not for paraxanthine coadministered with mitomycin C.
Paraxanthine was reported to be significantly less clastogenic compared to caffeine or theophylline in an in vitro study using human lymphocytes.
References
External links
Adenosine receptor antagonists
Animal metabolites
Human drug metabolites
Phosphodiesterase inhibitors
Stimulants
Wakefulness-promoting agents
Xanthines | Paraxanthine | Chemistry | 1,236 |
13,276,879 | https://en.wikipedia.org/wiki/Racetrack%20memory | Racetrack memory or domain-wall memory (DWM) is an experimental non-volatile memory device under development at IBM's Almaden Research Center by a team led by physicist Stuart Parkin. It is a current topic of active research at the Max Planck Institute of Microstructure Physics in Dr. Parkin's group. In early 2008, a 3-bit version was successfully demonstrated. If it were to be developed successfully, racetrack memory would offer storage density higher than comparable solid-state memory devices like flash memory.
Description
Racetrack memory uses a spin-coherent electric current to move magnetic domains along a nanoscopic permalloy wire about 200 nm across and 100 nm thick. As current is passed through the wire, the domains pass by magnetic read/write heads positioned near the wire, which alter the domains to record patterns of bits. A racetrack memory device is made up of many such wires and read/write elements. In general operational concept, racetrack memory is similar to the earlier bubble memory of the 1960s and 1970s. Delay-line memory, such as mercury delay lines of the 1940s and 1950s, are a still-earlier form of similar technology, as used in the UNIVAC and EDSAC computers. Like bubble memory, racetrack memory uses electrical currents to "push" a sequence of magnetic domains through a substrate and past read/write elements. Improvements in magnetic detection capabilities, based on the development of spintronic magnetoresistive sensors, allow the use of much smaller magnetic domains to provide far higher bit densities.
In production, it was expected that the wires could be scaled down to around 50 nm. There were two arrangements considered for racetrack memory. The simplest was a series of flat wires arranged in a grid with read and write heads arranged nearby. A more widely studied arrangement used U-shaped wires arranged vertically over a grid of read/write heads on an underlying substrate. This would allow the wires to be much longer without increasing its 2D area, although the need to move individual domains further along the wires before they reach the read/write heads results in slower random access times. Both arrangements offered about the same throughput performance. The primary concern in terms of construction was practical; whether or not the three dimensional vertical arrangement would be feasible to mass-produce.
Comparison to other memory devices
Projections in 2008 suggested that racetrack memory would offer performance on the order of 20-32 ns to read or write a random bit. This compared to about 10,000,000 ns for a hard drive, or 20-30 ns for conventional DRAM. The primary authors discussed ways to improve the access times with the use of a "reservoir" to about 9.5 ns. Aggregate throughput, with or without the reservoir, would be on the order of 250-670 Mbit/s for racetrack memory, compared to 12800 Mbit/s for a single DDR3 DRAM, 1000 Mbit/s for high-performance hard drives, and 1000 to 4000 Mbit/s for flash memory devices. The only current technology that offered a clear latency benefit over racetrack memory was SRAM, on the order of 0.2 ns, but it came at a higher cost and a larger feature size "F" of about 45 nm (as of 2011), with a cell area of about 140 F².
Racetrack memory is one among several emerging technologies that aim to replace conventional memories such as DRAM and Flash, and potentially offer a universal memory device applicable to a wide variety of roles. Other contenders included magnetoresistive random-access memory (MRAM), phase-change memory (PCRAM) and ferroelectric RAM (FeRAM). Most of these technologies offer densities similar to flash memory, in most cases worse, and their primary advantage is the lack of write-endurance limits like those in flash memory. Field-MRAM offers excellent performance as high as 3 ns access time, but requires a large 25-40 F² cell size. It might see use as an SRAM replacement, but not as a mass storage device. The highest densities from any of these devices is offered by PCRAM, with a cell size of about 5.8 F², similar to flash memory, as well as fairly good performance around 50 ns. Nevertheless, none of these can come close to competing with racetrack memory in overall terms, especially density. For example, 50 ns allows about five bits to be operated in a racetrack memory device, resulting in an effective cell size of 20/5=4 F², easily exceeding the performance-density product of PCM. On the other hand, without sacrificing bit density, the same 20 F² area could fit 2.5 2-bit 8 F² alternative memory cells (such as resistive RAM (RRAM) or spin-torque transfer MRAM), each of which individually operating much faster (~10 ns).
In most cases, memory devices store one bit in any given location, so they are typically compared in terms of "cell size", a cell storing one bit. Cell size itself is given in units of F², where "F" is the feature size design rule, usually representing the metal line width. Flash and racetrack both store multiple bits per cell, but the comparison can still be made. For instance, hard drives appeared to be reaching theoretical limits around 650 nm²/bit, defined primarily by the capability to read and write to specific areas of the magnetic surface. DRAM has a cell size of about 6 F², while SRAM is much less dense at 120 F². NAND flash memory is currently the densest form of non-volatile memory in widespread use, with a cell size of about 4.5 F², but storing three bits per cell for an effective size of 1.5 F². NOR flash memory is slightly less dense, at an effective 4.75 F², accounting for 2-bit operation on a 9.5 F² cell size. In the vertical orientation (U-shaped) racetrack, roughly 10-20 bits are stored per cell, which itself would have a physical size of at least about 20 F². In addition, bits at different positions on the "track" would take different times (from ~10 to ~1000 ns, or 10 ns/bit) to be accessed by the read/write sensor, because the "track" moves the domains at a fixed rate of ~100 m/s past the read/write sensor.
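The density and latency arithmetic quoted above is simple enough to check directly. The following sketch merely restates the figures given in this section and introduces no new measurements.

```python
# Back-of-envelope check of the figures quoted above.
cell_area_f2 = 20.0     # physical racetrack footprint, in units of F^2
bits_per_window = 5     # bits shifted past the head in ~50 ns at ~10 ns/bit

print(f"effective racetrack cell: {cell_area_f2 / bits_per_window} F^2")  # 4.0

# The same 20 F^2 footprint could instead hold 2.5 conventional 8 F^2
# cells storing 2 bits each (the RRAM / STT-MRAM comparison above).
print(f"bits in same area, 8 F^2 2-bit cells: {(cell_area_f2 / 8.0) * 2}")  # 5.0

# Access-time spread along the track: domains move at ~100 m/s, so a
# bit 1 um from the head arrives in ~10 ns, and 100 um in ~1000 ns.
velocity = 100.0  # m/s
for distance_um in (1, 10, 100):
    t_ns = (distance_um * 1e-6) / velocity * 1e9
    print(f"{distance_um:>3} um from head -> {t_ns:.0f} ns")
```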
Development challenges
One limitation of the early experimental devices was that the magnetic domains could be pushed only slowly through the wires, requiring current pulses on the order of microseconds to move them successfully. This was unexpected, and led to performance roughly equal to that of hard drives, as much as 1000 times slower than predicted. Recent research has traced this problem to microscopic imperfections in the crystal structure of the wires which led to the domains becoming "stuck" at these imperfections. Using an X-ray microscope to directly image the boundaries between the domains, researchers found that domain walls would be moved by pulses as short as a few nanoseconds when these imperfections were absent. This corresponds to a macroscopic performance of about 110 m/s.
The voltage required to drive the domains along the racetrack would be proportional to the length of the wire. The current density must be sufficiently high to push the domain walls (as in electromigration). A difficulty for racetrack technology arises from the need for high current density (>10⁸ A/cm²); a 30 nm × 100 nm cross-section would require >3 mA. The resulting power draw becomes higher than that required for other memories, e.g., spin-transfer torque memory (STT-RAM) or flash memory.
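The drive-current figure follows directly from the quoted current density and wire cross-section; a quick check using only the numbers in this paragraph:

```python
# I = J * A for the racetrack drive requirement quoted above.
J = 1e8                  # required current density, A/cm^2
width_cm = 30e-7         # 30 nm expressed in cm
thickness_cm = 100e-7    # 100 nm expressed in cm
area_cm2 = width_cm * thickness_cm      # 3e-11 cm^2
print(f"required drive current: {J * area_cm2 * 1e3:.1f} mA")  # 3.0 mA
```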
Another challenge associated with racetrack memory is the stochastic nature in which the domain walls move, i.e., they move and stop at random positions. There have been attempts to overcome this challenge by producing notches at the edges of the nanowire. Researchers have also proposed staggered nanowires to pin the domain walls precisely, and experimental investigations have shown the effectiveness of staggered domain wall memory. More recently, researchers have proposed non-geometrical approaches, such as local modulation of magnetic properties through composition modification, using techniques such as annealing-induced diffusion and ion implantation.
See also
Giant magnetoresistance (GMR) effect
Magnetoresistive random-access memory (MRAM)
Spintronics
Spin transistor
References
External links
Redefining the Architecture of Memory
IBM Moves Closer to New Class of Memory (YouTube video)
IBM Racetrack Memory Project
Computer memory
Non-volatile memory
IBM storage devices
Spintronics | Racetrack memory | Physics,Materials_science | 1,757 |
26,165,831 | https://en.wikipedia.org/wiki/HD%20129445%20b | HD 129445 b is an eccentric Jupiter gas giant exoplanet orbiting the star HD 129445 which was discovered by the Magellan Planet Search Program in 2010. Its minimum mass is 1.6 times Jupiter's, and it takes 5 years to complete one orbit around HD 129445, a G-type star approximately 219 light years away. In 2023, the inclination and true mass of HD 129445 b were determined via astrometry.
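Kepler's third law gives a rough orbital distance from the five-year period. In the sketch below, the host star is assumed to be about one solar mass, a plausible value for a G-type star but an assumption made only for illustration; this section does not state the stellar mass.

```python
# Kepler's third law in convenient units: a^3 = M * P^2,
# with a in AU, M in solar masses, and P in years.
M_star = 1.0    # assumed roughly solar mass (illustrative)
P_years = 5.0   # orbital period given above
a_au = (M_star * P_years**2) ** (1 / 3)
print(f"approximate semi-major axis: {a_au:.1f} AU")  # ~2.9 AU
```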
References
Exoplanets discovered in 2010
Exoplanets detected by radial velocity
Giant planets
Circinus
Exoplanets detected by astrometry | HD 129445 b | Astronomy | 125 |
61,956,186 | https://en.wikipedia.org/wiki/Consumer%20green%20energy%20program | A consumer green energy program is a program that enables households to buy energy from renewable sources. By allowing consumers to purchase renewable energy, it simultaneously diverts the utilization of fossil fuels and promotes the use of renewable energy sources such as solar and wind.
In several countries with common carrier arrangements, electricity retailing arrangements make it possible for consumers to purchase "green" electricity from either their utility or a green power provider. Electricity is considered to be green if it is produced from a source that produces relatively little pollution, and the concept is often considered equivalent to renewable energy. Although electricity is the most common green energy, biomethane is sold as "green gas" in some locations.
In many countries, green energy currently provides a very small amount of electricity, generally contributing less than 2 to 5% of the overall pool of electricity offered by most utility companies, electric companies, or state power pools. In some U.S. states, local governments have formed regional power purchasing pools using Community Choice Aggregation and Solar Bonds to achieve a 51% renewable mix or higher, such as in the City of San Francisco.
By participating in a green energy program, a consumer may affect the energy sources used and ultimately help to promote and expand the use of green energy. They are also making a statement to policy makers that they are willing to pay a price premium to support renewable energy. Green energy consumers either obligate the utility companies to increase the amount of green energy that they purchase from the pool (so decreasing the amount of non-green energy they purchase), or directly fund the green energy through a green power provider. If insufficient green energy sources are available, the utility must develop new ones or contract with a third-party energy supplier to provide green energy, causing more to be built. However, there is no way for the consumer to verify whether the electricity bought is actually "green".
In some countries such as the Netherlands, electricity companies guarantee to buy an equal amount of 'green power' as is being used by their green power customers. The Dutch government exempts green power from pollution taxes, which means green power is hardly any more expensive than other power.
Green energy and labeling by region
European Union
Directive 2004/8/EC of the European Parliament and of the Council of 11 February 2004 on the promotion of cogeneration based on a useful heat demand in the internal energy market includes Article 5 (Guarantee of origin of electricity from high-efficiency cogeneration).
European environmental NGOs have launched an ecolabel for green power. The ecolabel is called EKOenergy. It sets criteria for sustainability, additionality, consumer information and tracking. Only part of electricity produced by renewables fulfills the EKOenergy criteria.
United Kingdom
The Green Energy Supply Certification Scheme was launched in 2010: it implements guidelines from the Energy Regulator, Ofgem, and sets requirements on transparency, the matching of sales by renewable energy supplies, and additionality. Green electricity in the United Kingdom is widespread, and green gas is supplied to over a million homes.
United States
The United States Department of Energy (DOE), the Environmental Protection Agency (EPA), and the Center for Resource Solutions (CRS) recognize the voluntary purchase of electricity from renewable energy sources (also called renewable electricity or green electricity) as green power.
The most popular way to purchase renewable energy as revealed by NREL data is through purchasing Renewable Energy Certificates (RECs). According to a Natural Marketing Institute (NMI) survey 55 percent of American consumers want companies to increase their use of renewable energy.
DOE selected six companies for its 2007 Green Power Supplier Awards, including Constellation NewEnergy; 3Degrees; Sterling Planet; SunEdison; Pacific Power and Rocky Mountain Power; and Silicon Valley Power. The combined green power provided by those six winners equals more than 5 billion kilowatt-hours per year, which is enough to power nearly 465,000 average U.S. households. In 2014, Arcadia Power made RECs available to homes and businesses in all 50 states, allowing consumers to use "100% green power" as defined by the EPA's Green Power Partnership.
The U.S. Environmental Protection Agency (USEPA) Green Power Partnership is a voluntary program that supports the organizational procurement of renewable electricity by offering expert advice, technical support, tools, and resources. This can help organizations lower the transaction costs of buying renewable power, reduce their carbon footprint, and communicate their leadership to key stakeholders.
Throughout the country, more than half of all U.S. electricity customers now have an option to purchase some type of green power product from a retail electricity provider. Roughly one-quarter of the nation's utilities offer green power programs to customers, and voluntary retail sales of renewable energy in the United States totaled more than 12 billion kilowatt-hours in 2006, a 40% increase over the previous year.
In the United States, one of the main problems with purchasing green energy through the electrical grid is the current centralized infrastructure that supplies the consumer's electricity. This infrastructure has led to increasingly frequent brownouts and blackouts, high CO2 emissions, higher energy costs, and power quality issues. An additional $450 billion will be invested to expand this system over the next 20 years to meet increasing demand. In addition, this centralized system is now being further overtaxed with the incorporation of renewable energies such as wind, solar, and geothermal energies. Renewable resources, due to the amount of space they require, are often located in remote areas where there is lower energy demand. The current infrastructure would make transporting this energy to high-demand areas, such as urban centers, highly inefficient and in some cases impossible. In addition, regardless of the amount of renewable energy produced or the economic viability of such technologies, only about 20 percent can be incorporated into the grid. To have a more sustainable energy profile, the United States must move towards implementing changes to the electrical grid that will accommodate a mixed-fuel economy.
Several initiatives are being proposed to mitigate distribution problems. First and foremost, the most effective way to reduce the USA's CO2 emissions and slow global warming is through conservation efforts. Opponents of the current US electrical grid have also advocated for decentralizing the grid. This system would increase efficiency by reducing the amount of energy lost in transmission. It would also be economically viable, as it would reduce the number of power lines that will need to be constructed in the future to keep up with demand. Merging heat and power in this system would create added benefits and help to increase its efficiency to 80-90%, a significant increase over current fossil fuel plants, which have an efficiency of only about 34%.
Asia
India
India's Ministry of Power notified the Electricity (Promoting Renewable Energy Through Green Energy Open Access) Rules, 2022 on 6 June 2022 to accelerate its ambitious renewable energy programmes, with provisions that enable common consumers to obtain green power at reasonable rates.
Small-scale green energy systems
Those not satisfied with the third-party grid approach to green energy via the power grid can install their own locally based renewable energy system. Renewable energy electrical systems, from solar to wind to even local hydro-power in some cases, are some of the many types of renewable energy systems available locally. Additionally, for those interested in heating and cooling their dwelling via renewable energy, geothermal heat pump systems that tap the constant temperature of the earth, which is around 7 to 15 degrees Celsius a few feet underground and increases dramatically at greater depths, are an option over conventional natural gas and petroleum-fueled heating approaches. Also, in geographic locations where the Earth's crust is especially thin, or near volcanoes (as is the case in Iceland), there exists the potential to generate even more electricity than would be possible at other sites, thanks to a more significant temperature gradient at these locales.
The advantage of this approach in the United States is that many states offer incentives to offset the cost of installation of a renewable energy system. In California, Massachusetts and several other U.S. states, a new approach to community energy supply called Community Choice Aggregation has provided communities with the means to solicit a competitive electricity supplier and use municipal revenue bonds to finance development of local green energy resources. Individuals are usually assured that the electricity they are using is actually produced from a green energy source that they control. Once the system is paid for, the owner of a renewable energy system will be producing their own renewable electricity for essentially no cost and can sell the excess to the local utility at a profit.
In household power systems, organic matter such as cow dung and other spoilable organic material can be converted to biochar. To eliminate emissions, carbon capture and storage is then used.
References
Sustainable energy
Emissions reduction | Consumer green energy program | Chemistry | 1,788 |
1,619,958 | https://en.wikipedia.org/wiki/Transportation%20Safety%20Board%20of%20Canada | The Transportation Safety Board of Canada (TSB), officially the Canadian Transportation Accident Investigation and Safety Board, is the agency of the Government of Canada responsible for advancing transportation safety in Canada. It is accountable to Parliament directly through the President of the King's Privy Council and the Minister of Intergovernmental and Northern Affairs and Internal Trade. The independent agency investigates accidents and makes safety recommendations in four modes of transportation: aviation, rail, marine and pipelines.
Agency history
Prior to 1990, Transport Canada's Aircraft Accident Investigation Branch (1960–1984) and its successor, the Canadian Aviation Safety Board or CASB (1984–1990), were responsible for investigating air incidents. During this period, investigations and follow-up actions were handled by Transport Canada, and even after 1984 Transport Canada was not required to respond to CASB findings.
The TSB was created under the Canadian Transportation Accident Investigation and Safety Board Act, which received royal assent in June 1989 and came into force March 29, 1990. It was formed in response to a number of high-profile accidents, following which the Government of Canada identified the need for an independent, multi-modal investigation agency. The headquarters are located in Place du Centre in Gatineau, Quebec.
The provisions of the Canadian Transportation Accident Investigation and Safety Board Act were written to establish an independent relationship between the agency and the Government of Canada.
This agency's first major test came with the crash of Swissair Flight 111 on September 2, 1998, the largest single aviation accident on Canadian territory since the 1985 crash of Arrow Air Flight 1285R. The TSB delivered its report on the accident on March 27, 2003, some 4½ years after the accident and at a cost of $57 million, making it the most complex and costly accident investigation in Canadian history to that date.
From 2005 to 2010, the TSB concluded a number of investigations into high-profile accidents, including:
the crash of Air France Flight 358;
the Cheakamus River derailment;
the sinking of Queen of the North;
the loss overboard of a crewmember of Picton Castle;
the Burnaby pipeline rupture;
the crash of Cougar Helicopters Flight 91;
the sinking of Concordia.
To increase the uptake of its recommendations and address accident patterns, the TSB launched its Watchlist in 2010, which points to nine critical safety issues troubling Canada's transportation system.
On 3 December 2013, in the wake of the Lac-Mégantic rail disaster the previous July, it was reported that the number of runaway trains was triple the number documented by the TSB.
In August 2014, the TSB released the report on its investigation into the July 2013 Lac-Mégantic derailment. In a news conference, then TSB chair Wendy Tadros described how eighteen factors played a role in the disaster including a "weak safety culture" at the now-defunct Montreal, Maine & Atlantic Railways with "a lack of standards, poor training and easily punctured tanks." The TSB also blamed Transport Canada, the regulator, for not doing thorough safety audits often enough on railways "to know how those companies were really managing, or not managing, risk." The TSB report called for "physical restraints, such as wheel chocks, for parked trains." Prior to the accident TSB had called for "new and more robust wagons for flammable liquids" but as of August 2014, little progress had been made in implementing this.
On February 4, 2019, the TSB deployed to the derailment of Canadian Pacific Railway (CP) train 301-349. Ninety-nine cars and two locomotives derailed at Mile 130.6 of the CP Laggan Subdivision, near Field, British Columbia (BC) while proceeding westward to Vancouver, BC. The three train crewmembers – a locomotive engineer, a conductor, and a conductor trainee – died as a result.
During the course of its investigation into the derailment, the organization issued two safety advisories to Transport Canada on April 11, 2019. The first called attention to the need for effective safety procedures to be applied to all trains stopped in emergency on both "heavy grades" and "mountain grades", and the second highlighted the need to review the efficacy of the inspection and maintenance procedures for grain hopper cars used in CP's unit grain train operations (and for other railways as applicable), and to ensure that these cars can be operated safely at all times.
In January 2020, the Senior Investigator was reassigned in order to protect the integrity and objectivity of the investigation after voicing an opinion implying civil or criminal liability. The TSB labelled the comments made to The Fifth Estate journalists as "completely inappropriate", as the mandate of the TSB is to make findings as to the causes and contributing factors of a transportation occurrence, but not to assign fault or determine civil or criminal liability. The CBC documentary pointed out what seemed to be a problem, in that the private police service of CP Rail investigated the accident. A CPPS officer also resigned over these circumstances. As of June 2020, the investigation is ongoing.
Mandate and direction
The Transportation Safety Board's mandate is to
conduct independent investigations, including public inquiries when necessary, into selected transportation occurrences in order to make findings as to their causes and contributing factors;
identify safety deficiencies, as evidenced by transportation occurrences;
make recommendations designed to eliminate or reduce any such safety deficiencies; and
report publicly on its investigations and on the related findings
The TSB may assist other transportation safety boards in their investigations. This may happen when:
an incident or accident occurs involving a Canadian-registered aircraft in commercial or air transport use;
an incident or accident occurs involving a Canadian-built aircraft (or an aircraft with Canadian-built engines, propellers, or other vital components) in commercial or air transport use;
a country without the technical ability to conduct a full investigation asks for the TSB's assistance (especially in the field of reading and analyzing the content of flight recorders).
Provincial and territorial governments may call upon the TSB to investigate occurrences. However, it is up to the TSB whether or not to proceed with an investigation. Public reports are published following class one, class two, class three and class four investigations. Recommendations made by the TSB are not legally binding upon the Government of Canada, nor any of its ministers or departments. However, when a recommendation is made to a federal department, a formal response must be presented to the TSB within 90 days.
The TSB reports to the Parliament of Canada through the President of the King's Privy Council for Canada.
Board membership
As of August 2024, the Board was composed of the following four members:
Chair Yoan Marier
Ken Potter
Paul Dittmann
Leo Donatti
Facilities
The TSB Engineering Laboratory, which has the facilities for investigating transport accidents and incidents, is in Ottawa, adjacent to Ottawa International Airport.
List of chairs
John W. Stants 1990–1996
Benoît Bouchard 1996–2001
Charles H. Simpson 2001–2002 (acting)
Camille Thériault 2002–2004
Charles H. Simpson 2004–2005 (acting)
Wendy A. Tadros 2005–2006 (acting)
Wendy A. Tadros 2006–2014
Kathleen Fox 2014–2024
Yoan Marier 2024–present
See also
Aviation safety
References
External links
Rail accident investigators
Organizations investigating aviation accidents and incidents
Aviation authorities
Transport safety organizations
Federal departments and agencies of Canada
Aviation in Canada
1990 establishments in Quebec
History of transport in Canada
Railway safety
Organizations based in Gatineau
Transport organizations based in Canada
Canadian transport law | Transportation Safety Board of Canada | Technology | 1,531 |
38,462,710 | https://en.wikipedia.org/wiki/LRLL%2054361 | LRLL 54361, also known as L54361, is thought to be a binary protostar producing strobe-like flashes, located in the constellation Perseus in the star-forming region IC 348, about 950 light-years away.
The object may offer insight into a star's early stages of formation, when large masses of gas and dust are falling into a newly forming binary star, a process described by a pulsed accretion model. LRLL 54361 emits a burst of light at regular intervals of 25.34 days, increasing in infrared luminosity by an order of magnitude over a span of a week and then gradually dimming until the next pulse. This behavior may be caused by repeated close approaches between the two component stars, which are gravitationally linked in an eccentric orbit. The flashes may be the result of large amounts of matter falling onto the growing protostars. Since the stars are obscured by the dense disk and envelope of dust surrounding them, direct observation is difficult. This process of star birth has been witnessed in its later stages, but to date has not been seen in such a young system, nor with such intensity and regularity. The pair of stars are thought to be only a few hundred thousand years old.
LRLL 54361 was first detected by the Spitzer Space Telescope as a variable object inside the star-forming region IC 348. The Hubble Space Telescope confirmed the Spitzer observations and revealed the detailed structure around the protostar. Hubble images show two large, clear-swept regions in the disk around the stars. The monitoring of LRLL 54361 continues using other instruments, including the Herschel Space Observatory, and astronomers hope to obtain more direct measurements of the binary star and its orbit.
References
Perseus (constellation)
Protostars | LRLL 54361 | Astronomy | 368 |
66,122,548 | https://en.wikipedia.org/wiki/Polymateria | Polymateria Ltd is a British technology company developing biodegradable plastic alternatives. In 2020, the privately owned company was the first to achieve certified biodegradation of the most commonly littered forms of plastic packaging in real-world conditions, in less than a year and without creating microplastics.
History
Polymateria was founded in 2015 at Imperial College London by Jonathan Sieff and Lee Davy-Martin. Between 2016 and 2017, it was based at the Imperial White City Incubator, and since 2017 has been headquartered at the nearby Translation & Innovation Hub (I-HUB). In January 2018, Niall Dunne became CEO, and in March 2018 the company brought its first product to market.
Prince Charles visited the Polymateria laboratories in March 2019.
In October 2019, Polymateria announced a partnership with specialty chemical company Clariant to bring its new Biotransformation technology to the Southeast Asian market.
A subsequent partnership agreement between Polymateria, Clariant and the Indian Ministry of Chemicals and Fertilizers announced in January 2020 aims to bring Biotransformation to India.
In July 2020, the impact investing platform Planet First Partners (PFP) invested £15 million in Polymateria. Alongside the investment, several people joined the Polymateria board, including PFP head Frédéric de Mévius and former Marks & Spencer CEO Marc Bolland as chairman. The same month, it was reported that Puma would be the first company to use Polymateria's technology in the 160 million plastic bags it used each year, starting September 2020 in Southeast Asian markets, and in Britain in 2021.
The family of Hong Kong billionaire Silas Chou, whose daughter Veronica Chou was said to be pushing for more sustainability in the fashion industry, invested in Polymateria in 2020.
Two years after Polymateria CEO Niall Dunne announced his company's intention to become the "Tesla of plastics", former Tesla executive Steven Altmann-Richer joined Polymateria in November 2020 as head of public affairs and regulatory strategy. Also in November 2020, the company hinted that its product was already being tested in commercial food packaging in the UK, Spain, Portugal, Taiwan and Kenya, although it did not reveal which brands or products were involved.
In February 2021, clothing company Pour les Femmes announced that it would be using Polymateria's biodegradable plastic in its packaging. Electric racing series Extreme E revealed in March 2021 its partnership with Polymateria, which will supply cups and food packaging for the event, and later collect these for recycling.
In April 2021, FiberVisions and Avgol, two companies owned by Thai Indorama Ventures, partnered with Polymateria, planning to apply the technology to their nonwoven fabrics, which are used for products like face masks and diapers.
The company signed a deal in September 2021 with Taiwanese Formosa Plastics Corp, potentially worth US$100 million in license fees. By then, Polymateria's plastics were also used in some of the packaging of Taiwanese 7-Eleven stores.
The technology was demonstrated during the 2022 Chicago Marathon, on sugarcane-based recovery bags for the runners.
Since 2023, their technology has been branded as "Lyfecycle", and in that same year was applied to plastic bags from Indian fashion brand Doodlage.
In April 2023, Polymateria partnered with Toppan Specialty Films, an Indian plastic manufacturer based in the Punjab region. In May 2023, the company received another £20 million investment, while also signing a deal with a subsidiary of Lotte Chemical to develop products in Malaysia. After the £20 million investment, CEO Dunne announced expansion plans for the company, and also hinted that turnover was in the lower millions, and that the company had experienced growth of 300% between 2021 and 2022.
By January 2024 the company had introduced a biodegradeable baler twine which was produced by a Portuguese firm.
Biodegradable plastics
Biotransformation technology
The company has developed a technology called Biotransformation, which involves adding a masterbatch to plastics during production to aid their decomposition.
The technology is applicable to polyolefins, which include the most commonly littered types of plastics: polyethylene (e.g. plastic bags, packaging) and polypropylene (e.g. plastic cups, bottle caps).
Although these plastics can still be recycled, they will also decompose into a waxy substance in less than a year, provided they are exposed to environmental conditions such as sunlight, air and water. Ecotoxicity tests have shown that this intermediary wax is "non-harmful for contact with soil, plants and the aquatic environment". Bacteria and fungi will then digest the wax and break it down into carbon dioxide and water. It does not leave behind microplastics, a common problem of previous biodegradable products. According to Polymateria, this is achieved because the additives do not just break down the amorphous, but also the crystalline regions of the polymer. The resulting substance thus has a molecular weight of only around 600 to 1,000 daltons, compared to existing technologies which were unable to get below 5,000 daltons. At these lower levels, the polymer is broken down enough to become a waxy substance biologically available to microbes.
Under sub-optimal conditions, degradation might take slightly longer, with an experimental flowerpot taking up to two years to dissolve if "tossed in a ditch".
The company claims that the onset of biodegradation can be precisely time-controlled, so plastics won't deteriorate before recycling can happen. CEO Dunne said it was looking to apply "terms consumers understand" to the new packaging, such as "recycle-by dates or where recycling isn’t an option dispose-by dates".
Production of the additive in form of a masterbatch was done at a factory in Clermont-Ferrand in 2020, but the company was in talks for a larger facility in India. The technology is expected to increase the cost of packaging by 10 to 15 percent.
A study of Polymateria's plastic biodegradation performance was published in Polymers in July 2021.
BSI standard
In 2020, a new British standard for biodegradability named PAS 9017 was adopted by the BSI Group. Polymateria had sponsored the standard, which was reviewed by the Waste & Resources Action Programme (WRAP), the Department for Environment, Food and Rural Affairs and the Department for Business, Energy and Industrial Strategy. Polymateria's product became the first to reach the new benchmark. Ecologist Dannielle Green of Anglia Ruskin University, who was involved in validating the standard, called it a "step in the right direction" and praised the "interdisciplinary collaborative approach" taken by the BSI.
Criticism and rebuttal
The BSI standard was criticised on 22 October 2020 in an open letter by a group of 40 organizations, including Tesco, Aldi and the Environmental Services Association. The letter called upon the UK government to "follow the lead" of the European Union in banning oxo-degradable plastics, warning of the dangers of "microplastics [...] entering the food chain" and claiming that "degradable plastic alternatives will disrupt [Britain]'s recycling facilities". WRAP, a registered charity that was on the steering committee for the standard, responded to inquiries by declaring that its involvement should not be mistaken as an endorsement of the standard. However, WRAP maintained that littering was a "real issue" and that it would continue to encourage "developments in plastics technologies which have no negative impact on the ability for plastic to be effectively recycled and have no negative impacts to the natural environment". After a "small but significant anomaly" was found in the BSI consultation process, WRAP said in December 2020 that the committee was due to meet in January the next year to discuss details of the testing process for microplastics.
However, Polymateria's Biotransformation technology does not involve the oxo-degradable plastics criticised by the open letter, which are often confused with biodegradable plastics. It also does not produce microplastics (as required by the PAS 9017 standard), and the company insists its chemical additive has "no adverse impact on recycling streams".
Environmental organizations that have criticized the BSI standard have included the WWF and Keep Britain Tidy, which voiced concerns that degradable plastics would increase littering.
Polymateria CEO Dunne countered by declaring that the main problem were exports to non-EU countries where the plastic waste was "not being recycled and is winding up in unmanaged waste systems." The BSI has responded by calling littering "illegal" and a "complex behavioural issue", voicing doubts that any standard would be able to "control how a member of the public acts". The "recycle-by" date stamped on Polymateria's plastics is also meant to encourage consumers to recycle the product, instead of throwing it away.
See also
Biodegradable polymer
Circular economy
Notes
References
External links
Video interview with Polymateria CEO Niall Dunne by Dr. Miniya Chatterji
2015 establishments in England
Privately held companies based in London
Biodegradable waste management | Polymateria | Chemistry | 1,910 |
44,465,987 | https://en.wikipedia.org/wiki/Non-constructive%20algorithm%20existence%20proofs | The vast majority of positive results about computational problems are constructive proofs, i.e., a computational problem is proved to be solvable by showing an algorithm that solves it; a computational problem is shown to be in P by showing an algorithm that solves it in time that is polynomial in the size of the input; etc.
However, there are several non-constructive results, where an algorithm is proved to exist without showing the algorithm itself. Several techniques are used to provide such existence proofs.
Using an unknown finite set
In combinatorial game theory
A simple example of a non-constructive algorithm was published in 1982 by Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy, in their book Winning Ways for Your Mathematical Plays. It concerns the game of Sylver Coinage, in which players take turns specifying a positive integer that cannot be expressed as a sum of previously specified values, with a player losing when they are forced to specify the number 1. There exists an algorithm (given in the book as a flow chart) for determining whether a given first move is winning or losing: if it is a prime number greater than three, or one of a finite set of 3-smooth numbers, then it is a winning first move, and otherwise it is losing. However, the finite set is not known.
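For illustration, the characterization can be written down as code that is complete except for one unknown set literal. This is a sketch in Python; the names are invented here, and the empty placeholder set is exactly the non-constructive gap.

```python
def is_prime(n):
    """Trial-division primality test (adequate for illustration)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_3_smooth(n):
    """True if n has no prime factors other than 2 and 3."""
    for p in (2, 3):
        while n % p == 0:
            n //= p
    return n == 1

# Known to be finite, but its members are unknown -- this placeholder
# is precisely why the algorithm exists yet cannot be executed.
WINNING_3_SMOOTH_OPENINGS = frozenset()

def first_move_wins(n):
    """Decide whether n is a winning first move in Sylver Coinage,
    per the Berlekamp-Conway-Guy characterization described above."""
    if is_prime(n) and n > 3:
        return True
    return is_3_smooth(n) and n in WINNING_3_SMOOTH_OPENINGS

# With the placeholder empty, only primes > 3 are reported as winners;
# any 3-smooth winners are missing because the set is unknown.
print([n for n in range(2, 20) if first_move_wins(n)])
```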
In graph theory
Non-constructive algorithm proofs for problems in graph theory were studied beginning in 1988 by Michael Fellows and Michael Langston.
A common question in graph theory is whether a certain input graph has a certain property. For example:
Input: a graph G.
Question: Can G be embedded in a 3-dimensional space, such that no two disjoint cycles of G are topologically linked (as in links of a chain)?
There is a highly exponential algorithm that decides whether two cycles embedded in a 3-d space are linked, and one could test all pairs of cycles in the graph, but it is not obvious how to account for all possible embeddings in a 3-d space. Thus, it is a priori not clear at all whether the linkedness problem is decidable.
However, there is a non-constructive proof that shows that linkedness is decidable in polynomial time. The proof relies on the following facts:
The set of graphs for which the answer is "yes" is closed under taking minors. That is, if a graph G can be embedded linklessly in 3-d space, then every minor of G can also be embedded linklessly.
For every fixed graph H, it is possible to decide in polynomial time (in the size of G) whether H is a minor of a given graph G.
By the Robertson–Seymour theorem, any set of finite graphs contains only a finite number of minor-minimal elements. In particular, the set of "no" instances has a finite number of minor-minimal elements, and because the "yes" set is closed under minors, a graph is a "yes" instance exactly when it contains none of these forbidden minors.
Given an input graph G, the following "algorithm" solves the above problem:
For every forbidden minor H (a minor-minimal element of the "no" set):
If H is a minor of G then return "no".
return "yes".
The non-constructive part here is the Robertson–Seymour theorem. Although it guarantees that there is a finite number of minor-minimal elements, it does not tell us what these elements are. Therefore, we cannot really execute the "algorithm" mentioned above. But, we do know that an algorithm exists and that its runtime is polynomial.
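To make the structure of such an "algorithm" explicit, here is a minimal Python sketch. Both the finite obstruction set and a polynomial-time minor test are passed in as parameters, since the theorem guarantees they exist without exhibiting them; the names obstructions and is_minor are illustrative assumptions.

```python
# A sketch of the non-constructive minor-testing "algorithm". The finite
# obstruction set and a polynomial-time minor test are assumed to be given.

def has_minor_closed_property(G, obstructions, is_minor) -> bool:
    """G is a "yes" instance iff no forbidden minor is a minor of G."""
    for H in obstructions:
        if is_minor(H, G):
            return False  # return "no"
    return True           # return "yes"
```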
There are many more similar problems whose decidability can be proved in a similar way. In some cases, the knowledge that a problem can be proved in a polynomial time has led researchers to search and find an actual polynomial-time algorithm that solves the problem in an entirely different way. This shows that non-constructive proofs can have constructive outcomes.
The main idea is that a problem can be solved using an algorithm that uses, as a parameter, an unknown set. Although the set is unknown, we know that it must be finite, and thus a polynomial-time algorithm exists.
There are many other combinatorial problems that can be solved with a similar technique.
Counting the algorithms
Sometimes the number of potential algorithms for a given problem is finite. We can count the number of possible algorithms and prove that only a bounded number of them are "bad", so at least one algorithm must be "good".
As an example, consider the following problem.
I select a vector v composed of n elements which are integers between 0 and a certain constant d.
You have to guess v by asking sum queries, which are queries of the form: "what is the sum of the elements with indices i and j?". A sum query can relate to any subset of the indices 1 to n.
How many queries do you need? Obviously, n queries are always sufficient, because you can use n queries asking for the "sum" of a single element. But when d is sufficiently small, it is possible to do better. The general idea is as follows.
Every query can be represented as a 1-by-n vector whose elements are all in the set {0,1}. The response to the query is just the dot product of the query vector by v. Every set of k queries can be represented by a k-by-n matrix over {0,1}; the set of responses is the product of the matrix by v.
A matrix M is "good" if it enables us to uniquely identify v. This means that, for every vector v, the product M v is unique. A matrix M is "bad" if there are two different vectors, v and u, such that M v = M u.
Using some algebra, it is possible to bound the number of "bad" matrices. The bound is a function of d and k. Thus, for a sufficiently small d, there must be a "good" matrix with a small k, which corresponds to an efficient algorithm for solving the identification problem.
This proof is non-constructive in two ways: it is not known how to find a good matrix; and even if a good matrix is supplied, it is not known how to efficiently re-construct the vector from the query replies.
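For very small parameters, the definitions of "good" and "bad" matrices can be checked by brute force, as in the following illustrative Python sketch. The savings over n queries only appear for larger n, so this demonstrates the definitions rather than the bound; all names are assumptions of the sketch.

```python
# Brute-force illustration of "good" query matrices: M is good if the map
# v -> M v is injective on all vectors with entries in {0, ..., d}.
from itertools import product

def is_good(M, n: int, d: int) -> bool:
    seen = set()
    for v in product(range(d + 1), repeat=n):
        responses = tuple(sum(row[j] * v[j] for j in range(n)) for row in M)
        if responses in seen:
            return False  # two vectors give identical responses: M is "bad"
        seen.add(responses)
    return True

# Find the smallest k admitting a good k-by-n matrix for tiny n and d.
n, d = 3, 1
for k in range(1, n + 1):
    rows = list(product((0, 1), repeat=n))
    if any(is_good(M, n, d) for M in product(rows, repeat=k)):
        print("smallest k with a good matrix:", k)
        break
```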
There are many more similar problems which can be proved to be solvable in a similar way.
Additional examples
Some computational problems can be shown to be decidable by using the Law of Excluded Middle. Such proofs are usually not very useful in practice, since the problems involved are quite artificial.
An example from quantum complexity theory (related to quantum query complexity) is given in the references.
References
Credits
The references in this page were collected from Stack Exchange threads.
See also
Existence theorem#'Pure' existence results
Constructive proof#Non-constructive proofs
Computational complexity theory
Constructivism (mathematics) | Non-constructive algorithm existence proofs | Mathematics | 1,378 |
51,583,637 | https://en.wikipedia.org/wiki/Amanita%20basii | Amanita basii is a mushroom of the family Amanitaceae.
Description
Its cap is around wide, with a reddish brown to "cadmium orange" color becoming very intense red, "lake red", or brownish red in the center of the cap, somewhat faded by the sun; in spots it is red-orange, orange-yellow to deep orange at the margin, and yellow at the margin in maturity. The volva seen in the mushroom is absent in maturity or is present when young as small white patches. Its flesh ranges from butter yellow to yellowish under the cap skin, yellow in the center and near the margin, and from pale yellowish white to white elsewhere; the flesh is around thick above the stem, and it thins evenly to the margin. The gills are free, subcrowded, thickest close to the margin, and around 9–12 mm broad.
The stem is 124–137 × 16–23 mm (12.4–13.7 × 1.6–2.3 cm), pale yellowish to orange in the upper part with light yellow as the ground color, becoming brown to blackish with handling; it is stuffed, subcylindric to cylindrical, with irregular ragged patches and strands of orange-yellow felted to membranous material on the outer surface, and the stem decoration becomes more intensely orange when handled. The ring is attached in the upper part, subapical, skirt-like, copious, membranous, persistent, orange-yellow at first, becoming yellow-orange. The saccate volva is smooth, white, with yellow tints on the inner surface, dry, membranous, firmly attached to the stem. The flesh is white, staining light yellow, and stuffed with moderately dense material.
The spores measure approximately 9.0–11.8 (8.0–18.0) × 6.1–7.5 (5.5–9.0) μm and are broadly ellipsoid to elongate (rarely cylindric) and inamyloid. Clamps are common at the bases of basidia.
Similar species
Not to be confused with Amanita laurae, which grows under oaks, A. yema (under firs) and A. jacksonii, which grows in cloud forest.
Distribution and habitat
It occurs in pine forests in Mexico.
Uses
Though not as well known as other edible mushroom species, A. basii is considered to be edible and has a sweet taste. The odor is somewhat pleasantly fungoid.
See also
Amanita
List of Amanita species
References
basii
Edible fungi
Taxa named by Gastón Guzmán
Fungus species | Amanita basii | Biology | 673 |
3,326,836 | https://en.wikipedia.org/wiki/CALIPSO | CALIPSO was a joint NASA (US) and CNES (France) environmental satellite, built in the Cannes Mandelieu Space Center, which was launched atop a Delta II rocket on April 28, 2006. Its name stands for Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations. CALIPSO launched alongside CloudSat.
Passive and active remote sensing instruments on board the CALIPSO satellite monitored aerosols and clouds 24 hours a day. CALIPSO was part of the "C-Train" alongside CloudSat, orbiting on a similar track to the "A-Train." The mission ended on August 1, 2023 after over 17 years. Final passivation occurred on December 11, 2023.
Mission
Three instruments:
Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) - a lidar that provided high-resolution vertical profiles of aerosols and clouds.
Wide Field Camera (WFC) - a modified version of the commercial off-the-shelf Ball Aerospace CT-633 star tracker camera. It was selected to match band 1 of the MODIS instrument on the Aqua satellite.
Imaging Infrared Radiometer (IIR) - used to detect cirrus cloud emissivity and particle size. The CALIOP laser beam was aligned with the center of the IIR image to optimize joint CALIOP/IIR observations.
In February 2009, CALIPSO switched over to the redundant laser as scheduled. The primary laser achieved its mission goal of three years of successful operation, and the redundant laser performed beyond expectations.
The CALIPSO mission was granted extended mission status in June 2009. CALIPSO moved to the C-Train in 2020. The mission ended on August 1, 2023 due to lack of propellant.
See also
A-train (satellite constellation)
Earth Observing System
List of spaceflights (2006)
References
External links
CALIPSO Outreach
CALIPSO and the A Train
The CALIPSO page at NASA
The CALIPSO page at French National Centre for Space Studies (CNES)
CALIPSO Mission Profile by NASA's Solar System Exploration
CALIPSO results in five to ten years
CALIPSO specs at NASA
First Long-Duration Lidar Satellite Mission, CALIPSO, Ends (NASA): https://www.nasa.gov/feature/first-long-duration-lidar-satellite-mission-calipso-ends
Earth observation satellites of the United States
Environmental science
Satellites orbiting Earth
Satellite meteorology
Satellites of France
Spacecraft launched by Delta II rockets
Spacecraft launched in 2006
NASA satellites
2006 in France
Spacecraft decommissioned in 2023 | CALIPSO | Environmental_science | 520 |
47,289,026 | https://en.wikipedia.org/wiki/Testosterone%20acetate | Testosterone acetate (brand names Aceto-Sterandryl, Aceto-Testoviron, Amolisin, Androtest A, Deposteron, Farmatest, Perandrone A), or testosterone ethanoate, also known as androst-4-en-17β-ol-3-one 17β-acetate, is an androgen and anabolic steroid and a testosterone ester. The drug was first described in 1936 and was one of the first androgen esters and esters of testosterone to be synthesized.
See also
List of androgen esters
References
Acetate esters
Anabolic–androgenic steroids
Androstanes
Ketones
Testosterone esters | Testosterone acetate | Chemistry | 148 |
1,503,460 | https://en.wikipedia.org/wiki/McNeill%27s%20law | In human geography, McNeill's law is the process outlined in William H. McNeill's book Plagues and Peoples. The process described concerns the role of microbial disease in the conquering of people-groups. Particularly, it describes how diseases such as smallpox, measles, typhus, scarlet fever, and sexually-transmitted diseases have significantly reduced native populations so that they are unable to resist colonization.
Concept
According to McNeill's law, the microbiological aspect of conquest and invasion has been the deciding principle, or one of the deciding principles, in both the expansion of certain empires (as during the emigration to the Americas) and the containment of others (as during the crusades). The argument is that less civilized peoples were easily subjugated due to the immunological advantages of those coming from civilized countries. Evidence presented to support the hypothesis involves the manner in which diseases associated with Europeans were rebuffed in their forays into disease-experienced countries such as China and Japan.
McNeill's law also maintains that parasites are not only natural but also social, in the sense that these organisms are part of the social continuum and that human social evolution is inextricably linked with genetic transformations.
Instances in history
The first people-group fully wiped out due to European expansion (with the possible exception of the Arawaks) was the Guanches of the Canary Islands. Despite an inbred ferocity, superior knowledge of the land and even a possible tactical superiority, they were eventually wiped out through the concentrated efforts of the Spanish and Portuguese. McNeill's Law would place the deciding factor squarely on the introduction of deadly diseases and parasites from the mainland to the previously geographically isolated islanders.
This is the likely explanation, as what records still exist show numerous deaths by disease on the islands and a declining birth rate, leading eventually to the almost complete end of the Guanches as a race.
Other instances include the devastation of the Incas by smallpox.
References
Human geography
Epidemiology | McNeill's law | Environmental_science | 414 |
13,482,790 | https://en.wikipedia.org/wiki/List%20of%20psilocybin%20mushroom%20species | Psilocybin mushrooms are mushrooms which contain the hallucinogenic substances psilocybin, psilocin, baeocystin and norbaeocystin. The mushrooms are collected and grown as an entheogen and recreational drug, despite being illegal in many countries. Many psilocybin mushrooms are in the genus Psilocybe, but species across several other genera contain the drugs.
General
Conocybula
Galerina
Gymnopilus
Inocybe
Panaeolus
Pluteus
Psilocybe
Conocybula
Conocybe siligineoides R. Heim
Conocybula cyanopus (G.F. Atk.) T. Bau & H. B. Song
Conocybula smithii (Watling) T. Bau & H. B. Song (Galera cyanopes Kauffman, Conocybe smithii Watling)
Galerina
Galerina steglichii Besl
Gymnopilus
Gymnopilus aeruginosus (Peck) Singer (photo)
Gymnopilus braendlei (Peck) Hesler
Gymnopilus cyanopalmicola Guzm.-Dáv
Gymnopilus dilepis (Berk. & Broome) Singer
Gymnopilus dunensis H. Bashir, Jabeen & Khalid
Gymnopilus intermedius (Singer) Singer
Gymnopilus lateritius (Pat.) Murrill
Gymnopilus luteofolius (Peck) Singer (photo)
Gymnopilus luteoviridis Thiers (photo)
Gymnopilus luteus (Peck) Hesler (photo)
Gymnopilus palmicola Murrill
Gymnopilus purpuratus (Cooke & Massee) Singer (photo)
Gymnopilus subpurpuratus Guzmán-Davalos & Guzmán
Gymnopilus subspectabilis Hesler
Gymnopilus validipes (Peck) Hesler
Gymnopilus viridans Murrill
Inocybe
Inocybe aeruginascens Babos
Inocybe caerulata Matheny, Bougher & G.M. Gates
Inocybe coelestium Kuyper
Inocybe corydalina
Inocybe corydalina var. corydalina Quél.
Inocybe corydalina var. erinaceomorpha (Stangl & J. Veselsky) Kuyper
Inocybe haemacta (Berk. & Cooke) Sacc.
Inocybe tricolor Kühner
Most species in this genus are poisonous.
Panaeolus
Panaeolus affinis (E. Horak) Ew. Gerhardt
Panaeolus africanus Ola'h
Panaeolus axfordii Y. Hu, S.C. Karunarathna, P.E. Mortimer & J.C. Xu
Panaeolus bisporus (Malencon and Bertault) Singer and Weeks
Panaeolus cambodginiensis (Ola'h et Heim) Singer & Weeks. (Merlin & Allen, 1993)
Panaeolus chlorocystis (Singer & R.A. Weeks) Ew. Gerhardt
Panaeolus cinctulus (Bolton) Britzelm.
Panaeolus cyanescens (Berk. & Broome) Sacc.
Panaeolus fimicola (Fr.) Gillet
Panaeolus lentisporus Ew. Gerhardt
Panaeolus microsporus Ola'h & Cailleux
Panaeolus moellerianus Singer
Panaeolus olivaceus F.H. Møller
Panaeolus rubricaulis Petch (= Panaeolus campanuloides Guzmán & K. Yokoy.)
Panaeolus tirunelveliensis (Natarajan & Raman) Ew. Gerhardt
Panaeolus tropicalis Ola'h
Panaeolus venezolanus Guzmán (= Panaeolus annulatus Natarajan & Raman)
Pluteus
Pluteus albostipitatus (Dennis) Singer
Pluteus americanus (P. Banerjee & Sundb.) Justo, E.F. Malysheva & Minnis (2014)
Pluteus brunneidiscus Murrill (1917).
Pluteus cyanopus Quél.
Pluteus glaucus Singer
Pluteus glaucotinctus E. Horak
Pluteus nigroviridis Babos
Pluteus phaeocyanopus Minnis & Sundb.
Pluteus salicinus (Pers. : Fr.) P. Kumm.
Pluteus saupei Justo & Minnis
Pluteus velutinornatus G. Stev. 1962.
Pluteus villosus (Bull.) Quél.
Psilocybe
A
Psilocybe acutipilea (Speg.) Guzmán
Psilocybe allenii Borov., Rockefeller & P.G.Werner
Psilocybe alutacea Y.S. Chang & A.K. Mills
Psilocybe angulospora Yen W. Wang & S.S. Tzean
Psilocybe antioquiensis Guzmán, Saldarriaga, Pineda, García & Velázquez
Psilocybe araucariicola P. S. Silva & Ram.-Cruz
Psilocybe atlantis Guzmán, Hanlin & C. White
Psilocybe aquamarina (Pegler) Guzmán
Psilocybe armandii Guzmán & S.H. Pollock
Psilocybe aucklandiae Guzmán, C.C. King & Bandala
Psilocybe aztecorum
Psilocybe aztecorum var. aztecorum
Psilocybe aztecorum var. bonetii (Guzmán) Guzmán
Psilocybe azurescens Stamets & Gartz
B
Psilocybe baeocystis Singer & A.H. Sm. emend. Guzmán
Psilocybe banderillensis Guzmán
Psilocybe brasiliensis Guzmán
Psilocybe brunneocystidiata Guzmán & Horak
C
Psilocybe caeruleoannulata Singer ex Guzmán
Psilocybe caerulescens
Psilocybe caerulescens var. caerulescens Murrill
Psilocybe caerulescens var. ombrophila (R. Heim) Guzmán
Psilocybe caerulipes (Peck) Sacc.
Psilocybe callosa
Psilocybe carbonaria Singer
Psilocybe chuxiongensis T.Ma & K.D.Hyde
Psilocybe collybioides Singer & A.H. Sm.
Psilocybe columbiana Guzmán
Psilocybe congolensis Guzmán, S.C. Nixon & Cortés-Pérez
Psilocybe cordispora R. Heim
Psilocybe cubensis (Earle) Singer
Psilocybe cyanescens Wakef. (non-sensu Krieglsteiner)
Psilocybe cyanofibrillosa Guzmán & Stamets
D
Psilocybe dumontii Singer ex Guzmán
E
Psilocybe egonii Guzmán & T.J. Baroni
Psilocybe eximia E. Horak & Desjardin
F
Psilocybe fagicola
Psilocybe fagicola var. fagicola
Psilocybe fagicola var. mesocystidiata Guzmán
Psilocybe farinacea Rick ex Guzmán
Psilocybe fimetaria (P.D. Orton) Watling
Psilocybe fuliginosa (Murrill) A.H. Sm.
Psilocybe furtadoana Guzmán
G
Psilocybe galindoi Guzmán
Psilocybe gallaeciae Guzmán & M.L. Castro
Psilocybe graveolens Peck
Psilocybe guatapensis Guzmán, Saldarriaga, Pineda, García & Velázquez
Psilocybe guilartensis Guzmán, Tapia & Nieves-Rivera
H
Psilocybe heimii Guzmán
Psilocybe herrerae Guzmán
Psilocybe hispanica Guzmán
Psilocybe hoogshagenii
Psilocybe hoogshagenii var. hoogshagenii (= Psilocybe caerulipes var. gastonii Singer, Psilocybe zapotecorum R. Heim s. Singer)
Psilocybe hoogshagenii var. convexa Guzmán (= Psilocybe semperviva R. Heim & Cailleux)
Psilocybe hopii Guzmán & J. Greene
I
Psilocybe inconspicua Guzmán & Horak
Psilocybe indica Sathe & J.T. Daniel
Psilocybe ingeli B. van der Merwe, A. Rockefeller & K Jacobs
Psilocybe isabelae Guzmán
J
Psilocybe jacobsii Guzmán
Psilocybe jaliscana Guzmán
K
Psilocybe kumaenorum R. Heim
L
Psilocybe laurae Guzmán
Psilocybe lazoi Singer
Psilocybe liniformans
Psilocybe liniformans var. liniformans
Psilocybe liniformans var. americana Guzmán & Stamets
M
Psilocybe mairei Singer
Psilocybe makarorae Johnst. & Buchanan
Psilocybe maluti B. van der Merwe, A. Rockefeller & K. Jacobs
Psilocybe mammillata (Murrill) A.H. Sm.
Psilocybe medullosa (Bres.) Borovička
Psilocybe meridensis Guzmán
Psilocybe meridionalis Guzmán, Ram.-Guill. & Guzm.-Dáv.
Psilocybe mescaleroensis Guzmán, Walstad, E. Gándara & Ram.-Guill.
Psilocybe mexicana R. Heim
Psilocybe moseri Guzmán
Psilocybe muliercula Singer & A.H. Sm. (= Psilocybe wassonii R. Heim)
N
Psilocybe naematoliformis Guzmán
Psilocybe natalensis Gartz, Reid, Smith & Eicker
Psilocybe natarajanii Guzmán (= Psilocybe aztecorum var. bonetii (Guzmán) Guzmán s. Natarajan & Raman)
Psilocybe neorhombispora Guzmán & Horak
Psilocybe neoxalapensis Guzmán, Ram.-Guill. & Halling
Psilocybe ningshanensis X.L. He, W.Y. Huo, L.G. Zhang, Y. Liu & J.Z. Li
Psilocybe niveotropicalis Ostuni, Rockefeller, J. Jacobs & Birkebak
O
Psilocybe ovoideocystidiata Guzmán et Gaines
P
Psilocybe papuana Guzmán & Horak
Psilocybe paulensis (Guzmán & Bononi) Guzmán (= Psilocybe banderiliensis var. paulensis Guzmán & Bononi)
Psilocybe pelliculosa (A.H. Sm.) Singer & A.H. Sm.
Psilocybe pintonii Guzmán
Psilocybe pleurocystidiosa Guzmán
Psilocybe plutonia (Berk. & M.A. Curtis) Sacc.
Psilocybe portoricensis Guzmán, Tapia & Nieves-Rivera
Psilocybe pseudoaztecorum Natarajan & Raman
Psilocybe puberula Bas & Noordel.
Q
Psilocybe quebecensis Ola'h & R. Heim
R
Psilocybe rickii Guzmán & Cortez
Psilocybe rostrata (Petch) Pegler
Psilocybe rzedowskii Guzmán
S
Psilocybe samuiensis Guzmán, Bandala & Allen
Psilocybe schultesii Guzmán & S.H. Pollock
Psilocybe semilanceata (Fr. : Secr.) P. Kumm.
Psilocybe septentrionalis (Guzmán) Guzmán (= Psilocybe subaeriginascens Höhn. var. septentrionalis Guzmán)
Psilocybe serbica Moser & Horak (non ss. Krieglsteiner)
Psilocybe sierrae Singer (= Psilocybe subfimetaria Guzmán & A.H. Sm.)
Psilocybe silvatica (Peck) Singer & A.H. Sm.
Psilocybe singeri Guzmán
Psilocybe strictipes Singer & A.H. Sm.
Psilocybe stuntzii Guzman & Ott
Psilocybe subacutipilea Guzmán, Saldarriaga, Pineda, García & Velázquez
Psilocybe subaeruginascens Hohnel
Psilocybe subaeruginosa Cleland
Psilocybe subbrunneocystidiata P.S. Silva & Guzmán
Psilocybe subcaerulipes Hongo
Psilocybe subcubensis Guzmán
Psilocybe subpsilocybioides Guzmán, Lodge & S.A. Cantrell
Psilocybe subtropicalis Guzmán
T
Psilocybe tampanensis Guzmán & S.H. Pollock (photo)
Psilocybe tasmaniana Guzmán & Watling (1978)
Psilocybe thaiaerugineomaculans Guzmán, Karunarathna & Ram.-Guill.
Psilocybe thaicordispora Guzmán, Ram.-Guill. & Karun.
Psilocybe thaiduplicatocystidiata Guzmán, Karun. & Ram.-Guill.
U
Psilocybe uruguayensis Singer ex Guzmán
Psilocybe uxpanapensis Guzmán
V
Psilocybe venenata (S. Imai) Imaz. & Hongo (= Psilocybe fasciata Hongo; Stropharia caerulescens S. Imai)
W
Psilocybe wassoniorum Guzmán & S.H. Pollock
Psilocybe wayanadensis K.A. Thomas, Manim. & Guzmán
Psilocybe weilii Guzmán, Tapia & Stamets (photo)
Psilocybe weldenii Guzmán
Psilocybe weraroa Borovicka, Oborník & Noordel.
X
Psilocybe xalapensis Guzmán & A. López
Y
Psilocybe yungensis Singer & A.H. Sm.
Z
Psilocybe zapotecoantillarum Guzmán, T.J. Baroni & Lodge
Psilocybe zapotecocaribaea Guzmán, Ram.-Guill. & T.J. Baroni
Psilocybe zapotecorum
References
Entheogens
Psilocybin species
Psychoactive fungi
Psychedelic tryptamine carriers
Hallucinations | List of psilocybin mushroom species | Biology | 3,204 |
2,257,041 | https://en.wikipedia.org/wiki/Multireference%20configuration%20interaction | In quantum chemistry, the multireference configuration interaction (MRCI) method consists of a configuration interaction expansion of the eigenstates of the electronic molecular Hamiltonian in a set of Slater determinants which correspond to excitations of the ground state electronic configuration but also of some excited states. The Slater determinants from which the excitations are performed are called reference determinants. The higher excited determinants (also called configuration state functions (CSFs) or simply configurations) are then chosen either by the program according to some perturbation-theoretical ansatz with a threshold provided by the user, or simply by truncating excitations from these references to singly, doubly, ... excitations, resulting in MRCIS, MRCISD, etc.
For the ground state, using more than one reference configuration means better correlation and thus a lower energy. The problem of size inconsistency of truncated CI methods is not solved by taking more references.
As a result of an MRCI calculation one gets a more balanced correlation of the ground and excited states. For quantitatively good energy differences (excitation energies) one has to be careful in selecting the references. Taking only the dominant configuration of an excited state into the reference space leads to a correlated (lower) energy of the excited state. The generally too-high excitation energies of CIS or CISD are lowered. But usually excited states have more than one dominant configuration, and so the ground state is more correlated due to: a) now including some configurations with higher excitations (triply and quadruply in MRCISD); b) the neglect of other dominant configurations of the excited states, which are still uncorrelated.
Selecting the references can be done manually, automatically (all possible configurations within an active space of some orbitals), or semiautomatically (taking all configurations as references that have been shown to be important in a previous CI or MRCI calculation).
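A schematic sketch of threshold-based selection follows (Python, purely illustrative). The helpers generate_excitations and estimate_contribution stand in for a program's excitation generator and perturbative energy estimate; they are assumptions of this sketch and not any package's actual API.

```python
# Schematic selection of configurations for an MRCI expansion: keep the
# references plus every excited CSF whose estimated contribution exceeds
# a user-supplied threshold.

def select_csfs(references, generate_excitations, estimate_contribution,
                threshold: float, max_excitation: int = 2):
    selected = set(references)
    for ref in references:
        for csf in generate_excitations(ref, max_excitation):
            if estimate_contribution(csf, references) >= threshold:
                selected.add(csf)
    return selected
```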
This method has been implemented first by Robert Buenker and Sigrid D. Peyerimhoff in the seventies under the name Multi-Reference Single and Double Configuration Interaction (MRSDCI). MRCI was further streamlined in 1988 by Hans-Joachim Werner and Peter Knowles, which made previous MRCI procedures more generalizable.
The MRCI method can also be implemented in semi-empirical methods. An example for this is the OM2/MRCI method developed by Walter Thiel's group.
See also
Configuration interaction
References
Quantum chemistry | Multireference configuration interaction | Physics,Chemistry | 517 |
2,615,949 | https://en.wikipedia.org/wiki/Omega%20network | An Omega network is a network configuration often used in parallel computing architectures. It is an indirect topology that relies on the perfect shuffle interconnection algorithm.
Connection architecture
An 8x8 Omega network is a multistage interconnection network, meaning that processing elements (PEs) are connected using multiple stages of switches. Inputs and outputs are given addresses as shown in the figure. The outputs from each stage are connected to the inputs of the next stage using a perfect shuffle connection system. This means that the connections at each stage represent the movement of a deck of cards divided into 2 equal decks and then shuffled together, with each card from one deck alternating with the corresponding card from the other deck. In terms of binary representation of the PEs, each stage of the perfect shuffle can be thought of as a cyclic logical left shift; each bit in the address is shifted once to the left, with the most significant bit moving to the least significant bit.
At each stage, adjacent pairs of inputs are connected to a simple exchange element, which can be set either straight (pass inputs directly through to outputs) or crossed (send top input to bottom output, and vice versa). For N processing elements, an Omega network contains N/2 switches at each stage and log2(N) stages. The manner in which these switches are set determines the connection paths available in the network at any given time. Two such methods are destination-tag routing and XOR-tag routing, discussed in detail below.
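The perfect-shuffle wiring can be written as a one-line bit operation. The following Python sketch (names illustrative) implements the cyclic left shift of the address bits for an 8x8 network:

```python
# The perfect shuffle as a cyclic left shift of the PE address bits.

def shuffle(addr: int, n_bits: int) -> int:
    """Output wire fed by input `addr` between stages."""
    msb = (addr >> (n_bits - 1)) & 1
    return ((addr << 1) & ((1 << n_bits) - 1)) | msb

# For an 8x8 network (n_bits = 3): inputs 0..7 map to 0,2,4,6,1,3,5,7.
assert [shuffle(a, 3) for a in range(8)] == [0, 2, 4, 6, 1, 3, 5, 7]
```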
The Omega Network is highly blocking, though one path can always be made from any input to any output in a free network.
Destination-tag routing
In destination-tag routing, switch settings are determined solely by the message destination. The most significant bit of the destination address is used to select the output of the switch in the first stage; if the most significant bit is 0, the upper output is selected, and if it is 1, the lower output is selected. The next-most significant bit of the destination address is used to select the output of the switch in the next stage, and so on until the final output has been selected.
For example, if a message's destination is PE 001, the switch settings are: upper, upper, lower. If a message's destination is PE 101, the switch settings are: lower, upper, lower. These switch settings hold regardless of the PE sending the message.
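A minimal Python sketch of destination-tag routing follows (names illustrative); it reproduces the two worked examples above:

```python
# Destination-tag routing: stage i's switch output is chosen by the i-th
# most significant bit of the destination (0 = upper, 1 = lower).

def destination_tag_settings(dest: int, n_bits: int) -> list:
    """Per-stage output selections for a message headed to PE `dest`."""
    return ["upper" if (dest >> (n_bits - 1 - i)) & 1 == 0 else "lower"
            for i in range(n_bits)]

assert destination_tag_settings(0b001, 3) == ["upper", "upper", "lower"]
assert destination_tag_settings(0b101, 3) == ["lower", "upper", "lower"]
```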
XOR-tag routing
In XOR-tag routing, switch settings are based on (source PE) XOR (destination PE). This XOR-tag contains 1s in the bit positions that must be swapped and 0s in the bit positions that both source and destination have in common. The most significant bit of the XOR-tag is used to select the setting of the switch in the first stage; if the most significant bit is 0, the switch is set to pass-through, and if it is 1, the switch is crossed. The next-most significant bit of the tag is used to set the switch in the next stage, and so on until the final output has been selected.
For example, if PE 001 wishes to send a message to PE 010, the XOR-tag will be 011 and the appropriate switch settings are: A2 straight, B3 crossed, C2 crossed.
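The corresponding sketch for XOR-tag routing (again illustrative) reproduces the example above:

```python
# XOR-tag routing: the i-th most significant bit of (source XOR destination)
# sets stage i's switch (0 = straight, 1 = crossed).

def xor_tag_settings(src: int, dest: int, n_bits: int) -> list:
    tag = src ^ dest
    return ["straight" if (tag >> (n_bits - 1 - i)) & 1 == 0 else "crossed"
            for i in range(n_bits)]

# PE 001 -> PE 010: tag = 011, so: straight, crossed, crossed.
assert xor_tag_settings(0b001, 0b010, 3) == ["straight", "crossed", "crossed"]
```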
Applications
In multiprocessing, omega networks may be used as connectors between the CPUs and their shared memory, in order to decrease the probability that the CPU-to-memory connection becomes a bottleneck.
This class of networks has been built into the Illinois Cedar Multiprocessor, into the IBM RP3, and into the NYU Ultracomputer.
Examples
Omega network simulation in C
See also
Clos network
Cube-connected cycles
Nonblocking minimal spanning switch
Banyan switch
Delta network
Fat tree
Crossbar switch
Network coding
References
Network architecture | Omega network | Engineering | 785 |
78,923,697 | https://en.wikipedia.org/wiki/PKS%201335%E2%88%92127 | PKS 1335-127 is a blazar located in the constellation of Virgo with a redshift of (z) 0.539. It is a compact BL Lac object containing a radio source of extragalactic origin, discovered in 1970 during the continuum survey conducted by astronomers from Ohio State University. The object shows a flat radio spectrum, making it a flat-spectrum radio quasar (FSRQ), but it is also classified as a gigahertz-peaked source (GPS) with high polarization.
Description
PKS 1335-127 is considered to be variable across the electromagnetic spectrum. It is known to have produced a near-infrared flare, detected in February 2013, with an H-band flux value of 13.691 ± 0.08. Enhanced gamma-ray activity was observed from the object in May 2020, followed by an optical flare one month later. The object shows large-amplitude variability, and there is evidence of position angles showing different rotations at both low and high frequencies.
Radio imaging by the Very Long Baseline Array on arcsecond scales shows that the structure of PKS 1335-127 is mainly made up of a radio core and a radio jet that curves eastwards by 6.5" from the core. When imaged at 43 GHz, the jet becomes less defined, with a patch of weak diffuse radio emission located to the southeast. There is also an extended component at a position angle of 152° at a distance of 2.6 milliarcseconds. Earlier observations via a very-long-baseline interferometry (VLBI) map show the core as unresolved, while the jet has an orientation of 135°, indicating a perpendicular magnetic field.
Further observations also found that the circular polarization in PKS 1335-127 is stable. While images at 15 and 22 GHz show the presence of compact radio emission focused on the phase center, the image at 43 GHz shows PKS 1335-127 has a double structure with a much stronger southern component. There is also circular polarization towards the jet at its southwestern edge, polarized by 7.16 percent; however, this is a factor of 2 higher than the circular polarization in the jet of 3C 84 (NGC 1275).
References
External links
PKS 1335-127 on SIMBAD
PKS 1335-127 on NASA/IPAC Database
Blazars
Virgo (constellation)
Quasars
BL Lacertae objects
Active galaxies
2827642
Astronomical objects discovered in 1970 | PKS 1335−127 | Astronomy | 531 |
6,800,801 | https://en.wikipedia.org/wiki/Reichert%20value | The Reichert value (also Reichert-Meissl number, Reichert-Meissl-Wollny value or Reichert-Meissl-Wollny number) is a value determined when examining fats and oils. The Reichert value is an indicator of how much volatile fatty acid can be extracted from a particular fat or oil through saponification. It is equal to the number of millilitres of 0.1 normal hydroxide solution necessary for the neutralization of the water-soluble volatile fatty acids distilled and filtered from 5 grams of a given saponified fat. (The hydroxide solution used in such a titration is typically made from sodium hydroxide, potassium hydroxide, or barium hydroxide.)
This number is a useful indicator of non-fat compounds in edible fats, and is especially high in butter.
The value is named for the chemists who developed it, Emil Reichert and Emerich Meissl.
The Polenske value and Kirschner value are related numbers based on similar tests.
The Reichert–Meissl value for milk ranges between 28.5 and 33.
References
External links
Dimensionless numbers of chemistry
Edible oil chemistry | Reichert value | Chemistry | 248 |
4,501,325 | https://en.wikipedia.org/wiki/Plasma%20parameters | Plasma parameters define various characteristics of a plasma, an electrically conductive collection of charged and neutral particles of various species (electrons and ions) that responds collectively to electromagnetic forces. Such particle systems can be studied statistically, i.e., their behaviour can be described based on a limited number of global parameters instead of tracking each particle separately.
Fundamental
The fundamental plasma parameters in a steady state are
the number density n_s of each particle species s present in the plasma,
the temperature T_s of each species,
the mass m_s of each species,
the charge q_s of each species,
and the magnetic flux density B.
Using these parameters and physical constants, other plasma parameters can be derived.
Other
All quantities are in Gaussian (cgs) units except energy and temperature, which are in electronvolts. For the sake of simplicity, a single ionic species is assumed. The ion mass is expressed in units of the proton mass, μ = m_i/m_p, and the ion charge in units of the elementary charge e, Z = q_i/e (in the case of a fully ionized atom, Z equals the respective atomic number). The other physical quantities used are the Boltzmann constant k_B, the speed of light c, and the Coulomb logarithm ln Λ.
Frequencies
Lengths
Velocities
Dimensionless
number of particles in a Debye sphere
Alfvén speed to speed of light ratio
electron plasma frequency to gyrofrequency ratio
ion plasma frequency to gyrofrequency ratio
thermal pressure to magnetic pressure ratio, or beta, β
magnetic field energy to ion rest energy ratio
Collisionality
In the study of tokamaks, collisionality is a dimensionless parameter which expresses the ratio of the electron-ion collision frequency to the banana orbit frequency.
The plasma collisionality ν* is defined as
ν* = ν_ei (q R / ε^(3/2)) √(m_i / (k_B T_i)),
where ν_ei denotes the electron-ion collision frequency, R is the major radius of the plasma, ε is the inverse aspect-ratio, and q is the safety factor. The plasma parameters m_i and T_i denote, respectively, the mass and temperature of the ions, and k_B is the Boltzmann constant.
Electron temperature
Temperature is a statistical quantity whose formal definition is
T = (∂U/∂S) at constant V and N,
or the change in internal energy with respect to entropy, holding volume and particle number constant. A practical definition comes from the fact that the atoms, molecules, or whatever particles in a system have an average kinetic energy. The average means to average over the kinetic energy of all the particles in a system.
If the velocities of a group of electrons, e.g., in a plasma, follow a Maxwell–Boltzmann distribution, then the electron temperature is defined as the temperature of that distribution. For other distributions, not assumed to be in equilibrium or have a temperature, two-thirds of the average energy is often referred to as the temperature, since for a Maxwell–Boltzmann distribution with three degrees of freedom, the average energy is (3/2) k_B T.
The SI unit of temperature is the kelvin (K), but using the above relation the electron temperature is often expressed in terms of the energy unit electronvolt (eV). Each kelvin (1 K) corresponds to about 8.617×10⁻⁵ eV; this factor is the ratio of the Boltzmann constant to the elementary charge. Each eV is equivalent to about 11,605 kelvins, which can be calculated from the relation 1 eV / k_B ≈ 11,605 K.
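As a quick sanity check of these conversion factors, a short Python sketch (illustrative only):

```python
# Unit conversion between kelvin and electronvolts via k_B / e.

K_PER_EV = 11604.5            # kelvins per electronvolt
EV_PER_K = 1 / K_PER_EV       # ~8.62e-5 eV per kelvin

def kelvin_to_ev(T_K: float) -> float:
    return T_K * EV_PER_K

def ev_to_kelvin(T_eV: float) -> float:
    return T_eV * K_PER_EV

# A 1 keV electron temperature, common in fusion plasmas, is ~1.16e7 K.
assert abs(ev_to_kelvin(1000) - 1.16e7) / 1.16e7 < 0.01
```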
The electron temperature of a plasma can be several orders of magnitude higher than the temperature of the neutral species or of the ions. This is a result of two facts. Firstly, many plasma sources heat the electrons more strongly than the ions. Secondly, atoms and ions are much heavier than electrons, and energy transfer in a two-body collision is much more efficient if the masses are similar. Therefore, equilibration of the temperature happens very slowly, and is not achieved during the time range of the observation.
See also
Ball-pen probe
Langmuir probe
References
NRL Plasma Formulary – Naval Research Laboratory (2018)
Plasma parameters
Astrophysics | Plasma parameters | Physics,Astronomy | 762 |
1,236,616 | https://en.wikipedia.org/wiki/Hedorah | Hedorah, also known as the Smog Monster, is a fictional monster, or kaiju, who first appeared in Toho's 1971 film Godzilla vs. Hedorah. Hedorah was named for hedoro, the Japanese word for sludge, slime, vomit or chemical ooze.
Overview
Development
Whereas Godzilla was a symbol of Japanese concerns over nuclear weapons, Hedorah was envisioned as an embodiment of Yokkaichi asthma, caused by Japan's widespread smog and urban pollution at the time. Director Yoshimitsu Banno stated in an interview that his intention in creating Hedorah was to give Godzilla an adversary who was more than just a "giant lobster" and which represented "the most notorious thing in current society". He also stated that Hedorah's vertically tilted eyes were based on vaginas, which he joked were "scary". The monster was originally going to be named "Hedoron", though this changed once the TV series Spectreman introduced a character with an identical name.
The monster was realized via various props and a large sponge rubber suit donned by future Godzilla performer Kenpachiro Satsuma in his first acting role for Toho. Satsuma had been selected on account of his physical fitness, though he stated later that he had been disappointed to receive the role, as he had grown tired of taking non-speaking roles. In performing as Hedorah, Satsuma tried to emphasize Hedorah's otherworldly nature by making its movements seem more grotesque than animal-like. Several authors have noted that, unlike most Toho monsters, Hedorah's violent acts are graphically shown to claim human victims, and the creature shows genuine amusement at Godzilla's suffering. Banno wished to bring back Hedorah in a sequel set in Africa, but the project never materialized, as he was fired by producer and series co-creator Tomoyuki Tanaka, who allegedly accused him of ruining the Godzilla series. Complex listed the character as #8 on its "The 15 Most Badass Kaiju Monsters of All Time" list.
Banno had hoped to revisit Hedorah in his unrealized project Godzilla 3-D, which would have featured a similar monster named Deathla. Like its predecessor, Deathla would have been a shape-shifting extraterrestrial, though it would have fed on chlorophyll rather than gas emissions, and all of its forms would have incorporated a skull motif.
Shōwa era (1971)
In Godzilla vs. Hedorah, Hedorah originates from the Dark Gas Nebula in the constellation of Orion. It journeys to Earth via a passing comet, and lands in Suruga Bay as a monstrous tadpole-like creature, increasing in size as it feeds on the pollutants contaminating the water. It proceeds to rampage throughout Japan, killing thousands and feeding on gas emissions and toxic waste, gradually gaining power as it advances from a water stage, to a land stage, and finally a bipedal Perfect Form that it can switch out for a smaller, flying form at any time. Godzilla confronts Hedorah at Mount Fuji, but his atomic breath has no effect on Hedorah's amorphous, water-rich body. Hedorah rapidly overpowers Godzilla using a combination of its fearsome strength and incredible durability, and almost kills the King of the Monsters after hurling him into a pit and attempting to drown him under a deluge of chemical ooze. It is later discovered that Hedorah is vulnerable to temperatures high enough to dehydrate it, so the JSDF constructs a pair of gigantic electrodes on the battlefield to use against the alien. Hedorah and Godzilla continue to fight, and the former is subsequently killed when Godzilla uses his atomic breath to power the electrodes, which cripple Hedorah and allow Godzilla to fully dehydrate its body into dust.
Millennium era (2004)
Hedorah briefly reappears in Godzilla: Final Wars as one of several monsters under the Xiliens' control before it is destroyed by Godzilla alongside Ebirah in Tokyo.
Other
Hedorah also appears in Godzilla: Monster Apocalypse, the prequel novel to Godzilla: Planet of the Monsters, in which it was originally a colony of sludge-like microorganisms that lived off dissolved chemicals inside a mine in Hebei, China. After the Chinese government discovered it in 1999, they studied and modified the microorganisms to operate as a giant, mist-like bioweapon with red and yellow eyes called Hedorah. In 2005, the Chinese military commenced "Operation: Hedorah" to kill Anguirus and Rodan with the bio-weapon. Hedorah successfully took down the two monsters, but it attained sentience afterwards and went on a rampage, consumed the pollutants in the surrounding area, then disappeared, leaving an estimated 8.2 million casualties in its wake.
A 2021 short film, "Godzilla vs. Hedorah", created for the character's 50th anniversary, gives the Final Wars incarnations of Hedorah and Godzilla a daytime rematch at an oil refinery.
Hedorah also appears in Chibi Godzilla Raids Again.
Appearances
Films
Godzilla vs. Hedorah (1971)
Godzilla: Final Wars (2004)
Godzilla: Planet of the Monsters (2017)
Godzilla vs. Hedorah (2021, short film)
Television
Godzilla Island (1997-1998)
Godziban (2019–present)
Chibi Godzilla Raids Again (2023-2024)
Video games
Godzilla: Monster of Monsters (NES - 1988)
Godzilla / Godzilla-Kun: Kaijuu Daikessen (Game Boy - 1990)
Godzilla 2: War of the Monsters (NES - 1991)
Kaijū-ō Godzilla / King of the Monsters, Godzilla (Game Boy - 1993)
Godzilla: Battle Legends (Turbo Duo - 1993)
Godzilla Trading Battle (PlayStation - 1998)
Godzilla: Destroy All Monsters Melee (GCN, Xbox - 2002/2003)
Godzilla Unleashed: Double Smash (NDS - 2007)
Godzilla: The Game (PS3 - 2014; PS3/PS4 - 2015)
Godzilla Defense Force (2019)
Godzilla Battle Line (2021)
GigaBash (PS4, PS5, Steam, Epic Games - 2024)
Literature
Godzilla vs. Gigan and the Smog Monster (1996)
Godzilla at World’s End (1998)
Godzilla: Monster Apocalypse (2017)
Comics
Godzilla: Legends (comic - 2011–2012)
Godzilla: Ongoing (comic - 2012)
Godzilla: The Half-Century War (comic - 2012–2013)
Godzilla: Rulers of Earth (comic - 2013–2015)
Godzilla Rivals (comic - 2021)
Music
Hedorah appears on the album cover of Frank Zappa's Sleep Dirt.
Hedorah appears on the album cover of Dinosaur Jr's Sweep It Into Space.
References
Bibliography
Extraterrestrial supervillains
Fantasy film characters
Fictional amorphous creatures
Fictional characters with superhuman strength
Fictional mass murderers
Fictional parasite characters
Fictional superorganisms
Film characters introduced in 1971
Godzilla characters
Mothra characters
Horror film villains
Toho monsters | Hedorah | Biology | 1,461 |
6,165,852 | https://en.wikipedia.org/wiki/NRC%20Herzberg%20Astronomy%20and%20Astrophysics%20Research%20Centre | The NRC Herzberg Astronomy and Astrophysics Research Centre (NRC Herzberg, HAA) is the leading Canadian centre for astronomy and astrophysics. It is based in Victoria, British Columbia. The current director-general, as of 2021, is Luc Simard.
History
Named for the Nobel laureate Gerhard Herzberg, it was formed in 1975 as part of the National Research Council of Canada in Ottawa, Ontario. The NRC-HIA headquarters were moved to Victoria, British Columbia in 1995 to the site of the Dominion Astrophysical Observatory. In 2012, the organization was restructured and renamed NRC Herzberg Astronomy and Astrophysics.
Facilities
NRC-HAA also operates the Dominion Radio Astrophysical Observatory outside of Penticton, British Columbia and the Canadian Astronomy Data Centre, as well as managing Canadian involvement in the Canada-France-Hawaii Telescope, Gemini Observatory, Atacama Large Millimeter Array, the Square Kilometre Array, and the Thirty Meter Telescope.
The institute is also involved in the development and construction of instruments and telescopes.
Members of NRC-HAA are currently involved in the Next Generation Virgo Cluster Survey and the Pan-Andromeda Archaeological Survey.
Plaskett Fellowship
The Plaskett Fellowship is named after John Stanley Plaskett and is awarded to an outstanding, recent doctoral graduate in astrophysics or a closely related discipline. Fellows conduct independent research in a stimulating, collegial environment at the Dominion Astrophysical Observatory in Victoria, British Columbia, Canada. Expertise in observational astrophysics is the norm, but some theoreticians were also among this distinguished group of astronomers.
Covington Fellowship
The Covington Fellowship is named after Arthur Covington and is awarded to an outstanding, recent doctoral graduate in astrophysics or a closely related discipline. Fellows conduct independent research in a stimulating, collegial environment at the Dominion Radio Astrophysical Observatory in Penticton, BC. DRAO staff expertise is in observational radio astronomy and the development of instrumentation and technology for radio telescopes.
See also
Atacama Large Millimeter Array
Canada–France–Hawaii Telescope
Dominion Radio Astrophysical Observatory
Gemini Observatory
James Clerk Maxwell Telescope
Square Kilometre Array
Thirty Meter Telescope
James Webb Space Telescope
References
External links
National science infrastructure (NRC Herzberg)
Herzberg Astrophysics Researchers
National Research Council (Canada)
Research institutes in Canada
Astronomy institutes and departments
Astrophysics research institutes
1975 establishments in Ontario
Organizations based in Victoria, British Columbia | NRC Herzberg Astronomy and Astrophysics Research Centre | Physics,Astronomy | 525 |
16,702,705 | https://en.wikipedia.org/wiki/Dispersive%20adhesion | Dispersive adhesion, also called adsorptive adhesion, is a mechanism for adhesion which attributes attractive forces between two materials to intermolecular interactions between molecules of each material. This mechanism is widely viewed as the most important of the five mechanisms of adhesion due to its presence in every type of adhesive system and its relative strength.
Source of dispersive adhesion attractions
The source of adhesive forces, according to the dispersive adhesion mechanism, is the weak interactions that occur between molecules close together. These interactions include London dispersion forces, Keesom forces, Debye forces and hydrogen bonds. Individually, these attractions are not very strong, but when summed over the bulk of a material, they can become significant.
London dispersion
London dispersion forces arise from instantaneous dipoles between two nonpolar molecules close together. The random nature of electron orbit allows moments in which the charge distribution in a molecule is unevenly distributed, allowing an electrostatic attraction to another molecule with a temporary dipole. A larger molecule allows for a larger dipole, and thus will have stronger dispersion forces.
Keesom
Keesom forces, also known as dipole–dipole interactions, result from two molecules that have permanent dipoles due to electronegativity differences between atoms in the molecule. This dipole causes a coulombic attraction between the two molecules.
Debye
Debye forces, or dipole–induced dipole interactions, can also play a role in dispersive adhesion. These come about when a nonpolar molecule becomes temporarily polarized due to interaction with a nearby polar molecule. This "induced dipole" in the nonpolar molecule then is attracted to the permanent dipole, yielding a Debye attraction.
Hydrogen bonding
Sometimes grouped into the chemical mechanism of adhesion, hydrogen bonding can increase adhesive strength by the dispersive mechanism. Hydrogen bonding occurs between molecules with a hydrogen atom attached to a small, electronegative atom such as fluorine, oxygen or nitrogen. This bond is naturally polar, with the hydrogen atom gaining a slight positive charge and the other atom becoming slightly negative. Two molecules, or even two functional groups on one large molecule, may then be attracted to each other via Keesom forces.
Factors affecting adhesion strength
The strength of adhesion by the dispersive mechanism depends on a variety of factors, including the chemical structure of the molecules involved in the adhesive system, the degree to which coatings wet each other, and the surface roughness at the interface.
Chemical composition
The chemical structure of the materials involved in a given adhesive system plays a large role in the adhesion of the system as a whole because the structure determines the type and strength of the intermolecular interactions present. All things equal, larger molecules, which experience higher dispersion forces, will have a larger adhesive strength than smaller molecules of the same basic chemical fingerprint. Similarly, polar molecules will have Keesom and Debye forces not experienced by nonpolar molecules of similar size. Compounds which can hydrogen bond across the adhesive interface will have even greater adhesive strength.
Wetting
Wetting is a measure of the thermodynamic compatibility of two surfaces. If the surfaces are well-matched, the surfaces will "desire" to interact with each other, minimizing the surface energy of both phases, and the surfaces will come into close contact. Because the intermolecular attractions strongly correlate with distance, the closer the interacting molecules are together, the stronger the attraction. Thus, two materials that wet well and have a large amount of surface area in contact will have stronger intermolecular attractions and a larger adhesive strength due to the dispersive mechanism.
Roughness
Surface roughness can also affect the adhesive strength. Surfaces with roughness on the scale of 1–2 micrometres can yield better wetting because they have a larger surface area. Thus, more intermolecular interactions at closer distances can arise, yielding stronger attractions and larger adhesive strength. Once the roughness becomes larger, on the order of 10 micrometres, the coating can no longer wet effectively, resulting in less contact area and a smaller adhesive strength.
Macroscopic shape
Adhesive strength depends also on the size and macroscopic shape of the adhesive contact. When a rigid punch with a flat but oddly shaped face is carefully pulled off its soft counterpart, the detachment does not occur instantaneously. Instead, detachment fronts start at pointed corners and travel inwards until the final configuration is reached. The main parameter determining the adhesive strength of flat contacts appears to be the maximum linear size of the contact. The process of detachment, as observed experimentally, can be seen in the film.
Systems dominated by dispersive adhesion
All materials, even those not usually classified as adhesives, experience an attraction to other materials simply due to dispersion forces. In many situations, these attractions are trivial; however, dispersive adhesion plays a dominant role in various adhesive systems, especially when multiple forms of intermolecular attractions are present. It has been shown by experimental methods that the dispersive mechanism of adhesion plays a large role in the overall adhesion of polymeric systems in particular.
See also
Adhesion
Intermolecular force
Van der Waals forces
Hydrogen bonding
References
Intermolecular forces
Articles containing video clips | Dispersive adhesion | Chemistry,Materials_science,Engineering | 1,110 |
39,171,284 | https://en.wikipedia.org/wiki/Dirac%20equation%20in%20curved%20spacetime | In mathematical physics, the Dirac equation in curved spacetime is a generalization of the Dirac equation from flat spacetime (Minkowski space) to curved spacetime, a general Lorentzian manifold.
Mathematical formulation
Spacetime
In full generality the equation can be defined on any pseudo-Riemannian manifold, but for concreteness we restrict to a pseudo-Riemannian manifold with Lorentzian signature. The metric is referred to as g, or g_ab in abstract index notation.
Frame fields
We use a set of vierbein or frame fields {e_a}, a set of four vector fields e_a = e_a^μ ∂_μ (which are not necessarily defined globally on the manifold). Their defining equation is
g_μν e_a^μ e_b^ν = η_ab.
The vierbein defines a local rest frame, allowing the constant Gamma matrices to act at each spacetime point.
In differential-geometric language, the vierbein is equivalent to a section of the frame bundle, and so defines a local trivialization of the frame bundle.
Spin connection
To write down the equation we also need the spin connection, also known as the connection (1-)form. The dual frame fields e^a have the defining relation
e^a_μ e_b^μ = δ^a_b.
The connection 1-form is then
ω^a_bμ = e^a_ν ∇_μ e_b^ν,
where ∇_μ is a covariant derivative, or equivalently a choice of connection on the frame bundle, most often taken to be the Levi-Civita connection.
One should be careful not to treat the abstract Latin indices and Greek indices as the same, and further to note that neither of these are coordinate indices: it can be verified that ω^a_bμ doesn't transform as a tensor under a change of coordinates.
Mathematically, the frame fields define an isomorphism at each point p where they are defined from the tangent space T_pM to R^(1,3). Then abstract indices label the tangent space, while Greek indices label R^(1,3). If the frame fields are position dependent then Greek indices do not necessarily transform tensorially under a change of coordinates.
Raising and lowering indices is done with η_ab for Latin indices and g_μν for Greek indices.
The connection form can be viewed as a more abstract connection on a principal bundle, specifically on the frame bundle, which is defined on any smooth manifold, but which restricts to an orthonormal frame bundle on pseudo-Riemannian manifolds.
The connection form with respect to frame fields defined locally is, in differential-geometric language, the connection with respect to a local trivialization.
Clifford algebra
Just as with the Dirac equation on flat spacetime, we make use of the Clifford algebra, a set of four gamma matrices γ^a satisfying
{γ^a, γ^b} = 2η^ab,
where {·,·} is the anticommutator.
They can be used to construct a representation of the Lorentz algebra: defining
σ^ab = (1/4)[γ^a, γ^b],
where [·,·] is the commutator.
It can be shown they satisfy the commutation relations of the Lorentz algebra:
[σ^ab, σ^cd] = η^bc σ^ad − η^ac σ^bd − η^bd σ^ac + η^ad σ^bc.
They therefore are the generators of a representation of the Lorentz algebra so(1,3). But they do not generate a representation of the Lorentz group SO(1,3), just as the Pauli matrices generate a representation of the rotation algebra but not SO(3). They in fact form a representation of Spin(1,3). However, it is a standard abuse of terminology to refer to any representations of the Lorentz algebra as representations of the Lorentz group, even if they do not arise as representations of the Lorentz group.
The representation space is isomorphic to C^4 as a vector space. In the classification of Lorentz group representations, the representation is labelled (1/2, 0) ⊕ (0, 1/2).
The abuse of terminology extends to forming this representation at the group level. We can write a finite Lorentz transformation on R^(1,3) as
Λ = exp((1/2) λ_ab M^ab),
where M^ab is the standard basis for the Lorentz algebra. These generators have components
(M^ab)^c_d = η^ac δ^b_d − η^bc δ^a_d,
or, with both indices up or both indices down, simply matrices which have +1 in the (a,b) index and −1 in the (b,a) index, and 0 everywhere else.
If another representation has generators S^ab, then we write
S(Λ)^I_J = [exp((1/2) λ_ab S^ab)]^I_J,
where I, J are indices for the representation space.
In the case of the spin representation σ^ab, without being given the generator components λ_ab for Λ, this is not well defined: there are sets of generator components which give the same Λ but different S(Λ).
Covariant derivative for fields in a representation of the Lorentz group
Given a coordinate frame arising from, say, coordinates {x^μ}, the partial derivative with respect to a general orthonormal frame is defined
∂_a = e_a^μ ∂_μ,
and connection components with respect to a general orthonormal frame are
ω^a_bc = e_c^μ ω^a_bμ.
These components do not transform tensorially under a change of frame, but do when combined. Also, these are definitions rather than saying that these objects can arise as partial derivatives in some coordinate chart. In general there are non-coordinate orthonormal frames, for which the commutator of vector fields is non-vanishing.
It can be checked that under the transformation ψ ↦ S(Λ)ψ, if we define the covariant derivative
D_μ ψ = ∂_μ ψ + (1/2) ω_abμ σ^ab ψ,
then D_μ ψ transforms as
D_μ ψ ↦ S(Λ) D_μ ψ.
This generalises to any representation ρ for the Lorentz group: if X is a field transforming in the associated representation, its covariant derivative is
D_μ X = ∂_μ X + (1/2) ω_abμ ρ(M^ab) X.
When ρ is the fundamental (vector) representation, this recovers the familiar covariant derivative for (tangent-)vector fields, of which the Levi-Civita connection is an example.
There are some subtleties in what kind of mathematical object the different types of covariant derivative are. The covariant derivative in a coordinate basis is a vector-valued 1-form, which at each point p is an element of T_p*M ⊗ T_pM. The covariant derivative in an orthonormal basis uses the orthonormal frame to identify the vector-valued 1-form with a vector-valued dual vector, which at each point p is an element of (R^(1,3))* ⊗ T_pM, using the canonical identification of (R^(1,3))* with R^(1,3). We can then contract this with a gamma matrix 4-vector, which takes values at p in R^(1,3) ⊗ End(C^4).
Dirac equation on curved spacetime
Recalling the Dirac equation on flat spacetime,
(iγ^μ ∂_μ − m)ψ = 0,
the Dirac equation on curved spacetime can be written down by promoting the partial derivative to a covariant one.
In this way, Dirac's equation takes the following form in curved spacetime:
(iγ^a e_a^μ D_μ − m)ψ = 0,
where ψ is a spinor field on spacetime. Mathematically, this is a section of a vector bundle associated to the spin-frame bundle by the representation (1/2, 0) ⊕ (0, 1/2).
Recovering the Klein–Gordon equation from the Dirac equation
The modified Klein–Gordon equation obtained by squaring the operator in the Dirac equation, first found by Erwin Schrödinger as cited by Pollock,
is given by
(−∇^μ∇_μ + (1/4)R + m² − (i/2) F_μν γ^μ γ^ν) ψ = 0,
where R is the Ricci scalar and F_μν is the field strength of the gauge potential A_μ. An alternative version of the Dirac equation whose Dirac operator remains the square root of the Laplacian is given by the Dirac–Kähler equation; the price to pay is the loss of Lorentz invariance in curved spacetime.
Note that here Latin indices denote the "Lorentzian" vierbein labels while Greek indices denote manifold coordinate indices.
Action formulation
We can formulate this theory in terms of an action. If in addition the spacetime is orientable, there is a preferred orientation, and a corresponding volume form
$\epsilon = \sqrt{-g}\;dx^0\wedge dx^1\wedge dx^2\wedge dx^3.$
One can integrate functions against the volume form:
$\int_M f\,\epsilon = \int_M f(x)\,\sqrt{-g}\;d^4x.$
The function
$\bar\psi\,(i\gamma^a D_a - m)\,\psi$
is integrated against the volume form to obtain the Dirac action
$S = \int_M \bar\psi\,(i\gamma^a D_a - m)\,\psi\;\sqrt{-g}\;d^4x.$
See also
Dirac equation in the algebra of physical space
Dirac spinor
Maxwell's equations in curved spacetime
Two-body Dirac equations
References
Quantum field theory
Spinors
Partial differential equations
Fermions
Curved spacetime | Dirac equation in curved spacetime | Physics,Materials_science | 1,412 |
59,131,716 | https://en.wikipedia.org/wiki/Maurice%20Zucrow | Maurice Joseph Zucrow (December 15, 1899 – June 1975) was a Russian-born American scientist and aerospace engineer known for his contributions to the development of gas turbines and jet propulsion. Zucrow was born in Kiev in Tsarist Russia and immigrated with his family to the United Kingdom in 1900. Young Maurice attended Central Foundation Boys' School in London. The family moved to the US in 1914.
In 1922 Maurice Zucrow became the first person to graduate with a BS degree in engineering from Harvard University. He went on to receive an S.M. degree from the same school the following year. Zucrow was also the first recipient of a doctoral degree in engineering at Purdue University, completing his dissertation in Mechanical Engineering in 1928. His PhD advisor was Andrey Abraham Potter, Dean of the Purdue University College of Engineering.
After leaving Purdue in 1929, Zucrow spent 17 years in industry. During his work at the Elliott Company, he played an important part in the research and development of the nation's first gas turbine, built in 1942. He also helped develop Aerojet's JATO rocket, used by seaplanes to assist takeoff under adverse conditions. During World War II, Zucrow was asked to teach a course in jet propulsion theory to engineers in the aircraft industry as part of the Engineering, Science, and Management War Training (ESMWT) program at the University of California, Los Angeles.
In 1946 he joined the faculty of the Purdue School of Aeronautics and Astronautics, establishing Purdue's Jet Propulsion Center. The research facility was renamed the Maurice J. Zucrow Propulsion Laboratories in 1998 and is now the world's largest academic propulsion laboratory. In 1948 Zucrow published Principles of Jet Propulsion and Gas Turbines, the first textbook in this field.
In 1957 Zucrow was elected to the board of directors of the American Rocket Society together with Wernher von Braun. In 1962 Zucrow received the Sigma Xi national research award and delivered the award lecture entitled "Space Propulsion Engines - Their Characteristics and Problems". He received the Distinguished Civilian Service award from the Department of the Army in 1967.
Zucrow's residence in West Lafayette was 801 Carrolton Boulevard in the Hill and Dales neighborhood adjacent to the Purdue campus.
Books
Zucrow, M.J., Principles of Jet Propulsion and Gas Turbines, John Wiley & Sons, 1948.
Zucrow, M.J., Aircraft and Missile Propulsion Volume 1: Thermodynamics of Fluid Flow and Application to Propulsion Engines, John Wiley and Sons, 1958.
Zucrow, M.J., Aircraft & Missile Propulsion, Volume 2: The Gas Turbine Power Plant, the Turboprop, Turbojet, Ramjet, and Rocket Engines, John Wiley and Sons; First edition, 1958.
Zucrow, M.J., Gas Dynamics, Volumes 1 and 2, John Wiley & Sons, 1976.
References
External links
Maurice J. Zucrow Propulsion Laboratories at Purdue University
Online books by Maurice J. Zucrow
Maurice J. Zucrow papers at Purdue Archives
American scientists
1899 births
1975 deaths
Purdue University College of Engineering alumni
Purdue University faculty
Aerospace engineers
Harvard John A. Paulson School of Engineering and Applied Sciences alumni
Members of the American Rocket Society
Emigrants from the Russian Empire to the United States | Maurice Zucrow | Engineering | 648 |
2,542,927 | https://en.wikipedia.org/wiki/Content%20reference%20identifier |
Overview
A content reference identifier or CRID is a concept from the standardization work done by the TV-Anytime forum. It matches, or closely parallels, the concept of the Uniform Resource Locator, or URL, as used on the World Wide Web:
The concept of the CRID permits referencing content unambiguously, regardless of its location, i.e., without knowing specific broadcast information (time, date and channel) or how to obtain it through a network, for instance by means of a streaming service or by downloading a file from an Internet server.
The receiver must be capable of resolving these unambiguous references, i.e. of translating them into specific data that will allow it to obtain the location of that content in order to acquire it. This makes it possible for recording processes to take place without knowing that information, and even without knowing beforehand the duration of the content to be recorded: a complete series with a single click, a program that has not been scheduled yet, a set of programs grouped by a specific criterion, and so on.
This framework allows for the separation between the reference to a given content (the CRID) and the necessary information to acquire it, which is called a “locator”. Each CRID may lead to one or more locators, which represent different copies of the same content. They may be identical copies broadcast on different channels or dates, or offered at different prices. They may also be distinct copies with different technical parameters such as format or quality.
It may also be the case that the resolution process of a CRID provides another CRID as a result (for example, its reference in a different network, where it has an alternative identifier assigned by a different operator) or a set of CRIDs (for instance, if the original CRID represents a TV series, in which case the resolution process would result in the list of CRIDs representing each episode).
From the above it can be concluded that, since a given content can belong to many groups (each possibly defined by distinctive qualities), many CRIDs may carry the same content. That is, several CRIDs may be resolved into the same locator.
A CRID is not exactly a universal, unique and exclusive identifier for a given content. It is closely related to the authority that creates it, to the resolution service provider, and to the content provider, in such a way that the same content may have different CRIDs depending on the context in which they are used (for example, a different one for each television operator that has the rights to broadcast the content).
Format
A CRID is specified much like a URL; in fact, a CRID is a so-called URI. Typically, the content creator, the broadcaster or a third party will use their DNS name in combination with a product-specific name to create globally unique CRIDs. That is, the syntax of a CRID is:
crid://authority/data
The authority field represents the entity that created the CRID and its format is that of a DNS name. The data field represents a string of characters that will unambiguously identify the content within the authority scope (it is a string of characters assigned by the authority itself).
As an example, let's assume that the BBC wanted to make a CRID for (all the programs of) the 2008 Olympics in China. It might have looked something like this:
crid://bbc.co.uk/olympics/2008/
This would be a group CRID, that is, a CRID representing a group of contents. Then, to refer to a specific event – such as the women's shot-put final – they could have used the following inside their metadata:
crid://bbc.co.uk/olympics/2008/final/shotput/women
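As an illustration (a minimal sketch in Python, not part of the TV-Anytime specification; the helper name parse_crid is hypothetical), a receiver could split a CRID into its authority and data fields with standard URI parsing:

from urllib.parse import urlparse

def parse_crid(crid: str) -> tuple[str, str]:
    # Split a CRID of the form crid://authority/data into its two fields.
    parsed = urlparse(crid)
    if parsed.scheme.lower() != "crid":
        raise ValueError("not a CRID: " + crid)
    # netloc holds the authority (a DNS name); path holds the data part.
    return parsed.netloc, parsed.path.lstrip("/")

authority, data = parse_crid("crid://bbc.co.uk/olympics/2008/final/shotput/women")
# authority == "bbc.co.uk", data == "olympics/2008/final/shotput/women"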
Currently, four types of CRIDs are playing a major role in some unidirectional television networks: programme CRID, series CRID, group CRID, and recommendation CRID. One of the most important applications of CRIDs is the so-called series link recording function (SL) of modern digital video recorders (DVR, PVR).
In turn, a locator is a string of characters that contains all the necessary information for a receiver to find and acquire a given content, whether it is received through a transport stream, located in local storage, downloaded as a file from an Internet server, or through a streaming service. For example, a DVB locator will include all the necessary parameters to identify a specific content within a transport stream: network, transport stream, service, table and/or event identifiers.
The locators' format, as established in TV-Anytime, is quite generic and simple, and corresponds to:
[transport-mechanism]:[specific-data]
The first part of the locator's format (the transport mechanism) must be a string of characters that is unique for each mechanism (transport stream, local file, HTTP Internet access, and so on). The second part must be unambiguous only within the scope of a given transport mechanism and will be standardized by the body in charge of regulating the mechanism itself.
For instance, a DVB locator to identify a content within the transport stream of networks that follow this standard would be:
dvb://112.4a2.5ec;2d22~20121212T220000Z--PT01H30M
which would indicate a content (identified by the string “2d22”) that airs on a channel available on a DVB network identified by the address “112.4a2.5ec” (network “112”, transport stream “4a2” and service “5ec”), on 12 December 2012 at 10 p.m. and with a duration of 90 minutes.
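For illustration only (a sketch; the regular expression and field names below are inferred from the example above rather than taken from the normative DVB locator syntax), such a locator could be decomposed as follows:

import re

LOCATOR_RE = re.compile(
    r"dvb://(?P<network>[0-9a-f]+)\.(?P<transport_stream>[0-9a-f]+)\.(?P<service>[0-9a-f]+)"
    r";(?P<event>[0-9a-f]+)~(?P<start>[0-9TZ]+)--(?P<duration>PT[0-9HMS]+)"
)

match = LOCATOR_RE.match("dvb://112.4a2.5ec;2d22~20121212T220000Z--PT01H30M")
print(match.groupdict())
# {'network': '112', 'transport_stream': '4a2', 'service': '5ec',
#  'event': '2d22', 'start': '20121212T220000Z', 'duration': 'PT01H30M'}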
The location resolution process
The location resolution process is the procedure by which, starting from the CRID of a given content, one or several locators of that content are obtained. Resolving a CRID can be a direct process, which leads immediately to one or many locators, or it may also happen that in the first place one or many intermediate CRIDs are returned, which must undergo the same procedure to finally obtain one or several locators.
This procedure involves some information elements, among which we find two structures named resolving authority record (RAR) and ContentReferencingTable, respectively. Consulting them repeatedly will take the receiver from a CRID to one or many locators that will allow it to acquire the content.
The RAR table
The RAR table consists of one or more data structures that provide the receiver, for each authority that issues CRIDs, with information on the corresponding resolution service provider. Among other things, it indicates which mechanism is used to provide the information needed to resolve the CRIDs from each authority. That is, one or more RAR records must exist for each authority, telling the receiver where it has to go to resolve the CRIDs of that particular authority.
For example, in the record of the figure (expressed by means of an XML structure, according to the XML Schema defined in TV-Anytime), there is an authority called “tve.es”, whose resolution service provider is the entity “rtve.es”, available at the URL "http://tva.rtve.es/locres/tve", which means there is resolution information at that URL.
These RAR records will have reached the receiver in some unspecified way, immaterial to the TV-Anytime specification, which will depend on the specific transport mechanism of the network to which the receiver is connected. Each family of standards that regulates distribution networks (DVB, ATSC, ISDB, IPTV...) will have previously defined such a procedure, which will be used by devices certified according to those standards.
The ContentReferencingTable table
The second structure involved in the location resolution process is the resolution table proper which, given a content's CRID, returns one or several locators that enable the receiver to access an instance of that content, or one or many CRIDs that allow it to move forward in the resolution process.
The figure shows an example of this second structure, an XML document according to the specifications of the XML Schema defined in TV-Anytime. In it, several sections are included (<Result> elements) that structure the information that describes each resolution case.
The first one declares how a CRID (crid://tv.com/Friends/all), corresponding to a group content that encompasses several episodes (two) of the “Friends” series, is resolved. The result of the resolution process provides two new CRIDs, each of them corresponding to one of the two episodes.
The second <Result> element resolves the CRID of the first episode of the first season. The result of the resolution process is two DVB locators. The “acquire” attribute with the “any” value indicates that either of them is valid (the second one is a repetition broadcast a week later).
The third <Result> element gives information about the second episode. It indicates that it cannot be resolved yet (“status” attribute with the “cannot yet resolve” value) and gives a date on which the request for resolution information must be repeated.
The process
Once the user has selected a given content (identified by the corresponding CRID) to perform some action upon it, the receiver begins the location resolution process that shall lead to specific location information that allows access to a copy of the content.
This procedure depends mainly on the receiver’s connectivity. It is possible to make a basic distinction between unidirectional networks, where the receiver can only receive information through the broadcast channel, and bidirectional networks, where there is also a return channel through which the receiver can communicate with the outside (typically an Internet access).
For receivers connected only to a broadcast channel, it is clear that the resolution information must come directly from that channel, or be available somehow in an existing local storage system. After selecting a CRID, the first thing the receiver needs to do is check the information about where to find the resolution table. For this, it must find a RAR record associated with the authority of the selected CRID.
Once a RAR record corresponding to that authority is found, the receiver will know, by referring to the URL field, where to access (or, in this case, where to listen) to obtain the resolution information.
The information that the receiver obtains through that access point will consist of a message for each of the consulted CRIDs (for example, a <Result> element in the ContentReferencingTable).
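The overall loop can be sketched as follows (illustrative Python only; resolve stands in for whatever broadcast or network lookup delivers the <Result> data, the table contents are invented to mirror the “Friends” example above, and handling of “cannot yet resolve” results is omitted):

def resolve_to_locators(crid, resolve):
    pending, locators = [crid], []
    while pending:
        current = pending.pop()
        for result in resolve(current):
            if result.startswith("crid://"):
                pending.append(result)   # intermediate CRID: keep resolving
            else:
                locators.append(result)  # a final locator, e.g. dvb://...
    return locators

table = {
    "crid://tv.com/Friends/all": ["crid://tv.com/Friends/s1e1",
                                  "crid://tv.com/Friends/s1e2"],
    "crid://tv.com/Friends/s1e1": ["dvb://1.4ee2.3f4;4f5~20100302T213000Z--PT00H45M"],
    "crid://tv.com/Friends/s1e2": ["dvb://1.4ee2.3f4;4f6~20100309T213000Z--PT00H45M"],
}
print(resolve_to_locators("crid://tv.com/Friends/all", lambda c: table.get(c, [])))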
In web casting
To make the CRID even more globally available, the IETF agreed to publish a request for comments specifying the use of the CRID over the web. This allows consumer devices to hook up to content provider servers, much as current browsers look up webservers, requesting content by CRID.
In May 2005, an Informational RFC, No 4078, was published as the start of this work.
The long-term goal is that CRIDs should be available for use by cell phones, PDAs, digital TV receivers and other consumer devices for fetching content, either from a broadcast stream or over IP networks.
See also
BBC Programme Identifier
References
RFC 4078 (PDF) Accessed 27 October 2011
RFC 4078 (TXT) Accessed 27 October 2011
ETSI TS 102 822-2 V1.4.1 (2007–11), Page 19, Section 5: "TV-Anytime content referencing scenarios" Accessed 3 December 2012
ETSI TS 102 822-4 V1.7.1 (2012–12), Page 13, Section 8: "CRID" Accessed 9 January 2013
ETSI TS 102 323 V1.5.1 (2012-01), Page 27, Section 6: "CRIDs and other URIs in DVB networks" Accessed 1 March 2012
Television terminology
Interactive television
Digital video recorders
Digital television
Video storage
Digital media
Television time shifting technology
Digital Video Broadcasting
URI schemes
Broadcast engineering | Content reference identifier | Technology,Engineering | 2,505 |
40,816,101 | https://en.wikipedia.org/wiki/Today%20Trader | Today Trader, Inc is a trading and education service that started in 2008. As of 2013, trading with their service is done primarily through their relationship with the Ditto Trade online brokerage service. Since 2018, the venture serves as an educational hub.
History
Steve Gomez and Andy Lindloff met at a Level 2 trading seminar when they were both working as floor traders in 1998. Today Trader, Inc was started in 2008 when Gomez and Lindloff began sharing their desktops with other traders through GoToMeeting. This service was available as a subscription for active day traders.
The day trading transparency and education provided by Today Trader was unique, and the company was an early adopter of social media in 2010, utilizing resources such as YouTube, Twitter, and StockTwits to crowdsource and share trading ideas.
Following the article in The New York Times, the company was approached by Brian Lund to consider becoming a "Lead Trader" with the online brokerage service Ditto Trade. Ditto Trade allows people to follow the Today Trader Lead Trader account and be included in every trade, or to trade alongside these "master traders" via trade alerts. In 2011, the company was performing approximately thirty transactions per month. These trades are primarily day and swing trading transactions. The service was launched later in 2010 and has been outperforming the S&P 500 since that time.
Gomez was interviewed by CNBC reporter Jane Wells after he sold a portion of his silver holdings in 2011.
On September 20, 2012, the Today Trader team discontinued their live trading service in favor of their relationship with Ditto Trade. They offer trading classes online to teach their specialized trading technique called "swing scalping". Gomez and Lindloff have also published on the importance of money management for traders.
Gomez was listed as one of the "Top 5 Trading Guru’s of 2013" by Trader Robotics.
In March 2018 Michael Hebenstreit, a stock broker from Germany, acquired Today Trader and converted the venture into an educational hub offering content and services for professional traders and students who want to learn trading.
Controversy
The Today Trader live trading service brought attention to the field of day trading. The Motley Fool author Rich Greifner argued that the day trading lifestyle and results described in The New York Times article was "impressive" but questioned the accuracy of the claim as well as offering that these results were "hardly indicative of the typical day trader's experience".
In a follow-up post, StockTwits founder Howard Lindzon described his dislike of the term "day trader", calling it "tired", similar to the term "journalist". He also noted that traders like Gomez "keep moving forward and adapt" to current market conditions.
References
External links
TodayTrader website
TodayTrader on Twitter
TodayTrader on Youtube
Companies based in San Diego
Online financial services companies of the United States
Real-time web | Today Trader | Technology | 588 |
27,059 | https://en.wikipedia.org/wiki/Stainless%20steel | Stainless steel, also known as inox, corrosion-resistant steel (CRES), and rustless steel, is an iron-based alloy containing a minimum level of chromium that is resistant to rusting and corrosion. Stainless steel's resistance to corrosion results from the 10.5%, or more, chromium content which forms a passive film that can protect the material and self-heal in the presence of oxygen. It can also be alloyed with other elements such as molybdenum, carbon, nickel and nitrogen to develop a range of different properties depending on its specific use.
The alloy's properties, such as luster and resistance to corrosion, are useful in many applications. Stainless steel can be rolled into sheets, plates, bars, wire, and tubing. These can be used in cookware, cutlery, surgical instruments, major appliances, vehicles, construction material in large buildings, industrial equipment (e.g., in paper mills, chemical plants, water treatment), and storage tanks and tankers for chemicals and food products. Some grades are also suitable for forging and casting.
The biological cleanability of stainless steel is superior to both aluminium and copper, and comparable to glass. Its cleanability, strength, and corrosion resistance have prompted the use of stainless steel in pharmaceutical and food processing plants.
Different types of stainless steel are labeled with an AISI three-digit number. The ISO 15510 standard lists the chemical compositions of stainless steels of the specifications in existing ISO, ASTM, EN, JIS, and GB standards in a useful interchange table.
Properties
Corrosion resistance
Although stainless steel does rust, this only affects the outer few layers of atoms, its chromium content shielding deeper layers from oxidation.
The addition of nitrogen also improves resistance to pitting corrosion and increases mechanical strength. Thus, there are numerous grades of stainless steel with varying chromium and molybdenum contents to suit the environment the alloy must endure. Corrosion resistance can be increased further by the following means:
increasing chromium content to more than 11%
adding nickel to at least 8%
adding molybdenum (which also improves resistance to pitting corrosion)
Strength
The most common type of stainless steel, 304, has a tensile yield strength of around 210 MPa (30,000 psi) in the annealed condition. It can be strengthened by cold working to a strength of approximately 1,050 MPa (152,000 psi) in the full-hard condition.
The strongest commonly available stainless steels are precipitation hardening alloys such as 17-4 PH and Custom 465. These can be heat treated to have tensile yield strengths up to approximately 1,700 MPa (247,000 psi).
Melting point
The melting point of stainless steel is near that of ordinary steel, and much higher than the melting points of aluminium or copper.
As with most alloys, the melting point of stainless steel is expressed as a range of temperatures rather than a single temperature. This range goes from approximately 1,400 to 1,530 °C (2,550 to 2,790 °F), depending on the specific composition of the alloy in question.
Conductivity
Like steel, stainless steels are relatively poor conductors of electricity, with significantly lower electrical conductivities than copper. In particular, the electrical contact resistance (ECR) of stainless steel arises as a result of the dense protective oxide layer and limits its functionality in applications as electrical connectors. Copper alloys and nickel-coated connectors tend to exhibit lower ECR values and are preferred materials for such applications. Nevertheless, stainless steel connectors are employed in situations where ECR is a less critical design requirement and corrosion resistance is required, for example at high temperatures and in oxidizing environments.
Magnetism
Martensitic, duplex and ferritic stainless steels are magnetic, while austenitic stainless steel is usually non-magnetic. Ferritic steel owes its magnetism to its body-centered cubic crystal structure, in which iron atoms are arranged in cubes (with one iron atom at each corner) and an additional iron atom in the center. This central iron atom is responsible for ferritic steel's magnetic properties. This arrangement also limits the amount of carbon the steel can absorb to around 0.025%. Grades with low coercive field have been developed for electro-valves used in household appliances and for injection systems in internal combustion engines. Some applications require non-magnetic materials, such as magnetic resonance imaging. Austenitic stainless steels, which are usually non-magnetic, can be made slightly magnetic through work hardening. Sometimes, if austenitic steel is bent or cut, magnetism occurs along the edge of the stainless steel because the crystal structure rearranges itself.
Wear
Galling, sometimes called cold welding, is a form of severe adhesive wear, which can occur when two metal surfaces are in relative motion to each other and under heavy pressure. Austenitic stainless steel fasteners are particularly susceptible to thread galling, though other alloys that self-generate a protective oxide surface film, such as aluminum and titanium, are also susceptible. Under high contact-force sliding, this oxide can be deformed, broken, and removed from parts of the component, exposing the bare reactive metal. When the two surfaces are of the same material, these exposed surfaces can easily fuse. Separation of the two surfaces can result in surface tearing and even complete seizure of metal components or fasteners. Galling can be mitigated by the use of dissimilar materials (bronze against stainless steel) or using different stainless steels (martensitic against austenitic). Additionally, threaded joints may be lubricated to provide a film between the two parts and prevent galling. Nitronic 60, made by selective alloying with manganese, silicon, and nitrogen, has demonstrated a reduced tendency to gall.
Density
The density of stainless steel ranges from 7.5 to 8.0 g/cm3, depending on the alloy.
History
The invention of stainless steel followed a series of scientific developments, starting in 1798 when chromium was first shown to the French Academy by Louis Vauquelin. In the early 1800s, British scientists James Stoddart, Michael Faraday, and Robert Mallet observed the resistance of chromium-iron alloys ("chromium steels") to oxidizing agents. Robert Bunsen discovered chromium's resistance to strong acids. The corrosion resistance of iron-chromium alloys may have been first recognized in 1821 by Pierre Berthier, who noted their resistance against attack by some acids and suggested their use in cutlery.
In the 1840s, both Britain's Sheffield steelmakers and then Krupp of Germany were producing chromium steel with the latter employing it for cannons in the 1850s. In 1861, Robert Forester Mushet took out a patent on chromium steel in Britain.
These events led to the first American production of chromium-containing steel by J. Baur of the Chrome Steel Works of Brooklyn for the construction of bridges. A US patent for the product was issued in 1869. This was followed with recognition of the corrosion resistance of chromium alloys by Englishmen John T. Woods and John Clark, who noted ranges of chromium from 5–30%, with added tungsten and "medium carbon". They pursued the commercial value of the innovation via a British patent for "Weather-Resistant Alloys".
Scientists researching steel corrosion in the second half of the 19th century did not pay attention to the amount of carbon in the alloyed steels they were testing until, in 1898, Adolphe Carnot and E. Goutal noted that the less carbon chromium steels contain, the better they resist attack by acids.
Also in the late 1890s, German chemist Hans Goldschmidt developed an aluminothermic (thermite) process for producing carbon-free chromium. Between 1904 and 1911, several researchers, particularly Leon Guillet of France, prepared alloys that would be considered stainless steel today.
In 1908, the Essen firm Friedrich Krupp Germaniawerft built the 366-ton sailing yacht Germania featuring a chrome-nickel steel hull, in Germany. In 1911, Philip Monnartz reported on the relationship between chromium content and corrosion resistance. On 17 October 1912, Krupp engineers Benno Strauss and Eduard Maurer patented as Nirosta the austenitic stainless steel known today as 18/8 or AISI type 304.
Similar developments were taking place in the United States, where Christian Dantsizen of General Electric and Frederick Becket (1875–1942) at Union Carbide were industrializing ferritic stainless steel. In 1912, Elwood Haynes applied for a US patent on a martensitic stainless steel alloy, which was not granted until 1919.
Harry Brearley
While seeking a corrosion-resistant alloy for gun barrels in 1913, Harry Brearley of the Brown-Firth research laboratory in Sheffield, England, discovered and subsequently industrialized a martensitic stainless steel alloy, today known as AISI type 420. The discovery was announced two years later in a January 1915 newspaper article in The New York Times.
The metal was later marketed under the "Staybrite" brand by Firth Vickers in England and was used for the new entrance canopy for the Savoy Hotel in London in 1929. Brearley applied for a US patent during 1915 only to find that Haynes had already registered one. Brearley and Haynes pooled their funding and, with a group of investors, formed the American Stainless Steel Corporation, with headquarters in Pittsburgh, Pennsylvania.
Rustless steel
Brearley initially called his new alloy "rustless steel". The alloy was sold in the US under different brand names like "Allegheny metal" and "Nirosta steel". Even within the metallurgy industry, the name remained unsettled; in 1921, one trade journal called it "unstainable steel". Brearley worked with a local cutlery manufacturer, who gave it the name "stainless steel". As late as 1932, Ford Motor Company continued calling the alloy "rustless steel" in automobile promotional materials.
In 1929, before the Great Depression, over 25,000 tons of stainless steel were manufactured and sold in the US annually.
Major technological advances in the 1950s and 1960s allowed the production of large tonnages at an affordable cost:
AOD process (argon oxygen decarburization), for the removal of carbon and sulfur
Continuous casting and hot strip rolling
The Z-Mill, or Sendzimir cold rolling mill
The Creusot-Loire Uddeholm (CLU) and related processes which use steam instead of some or all of the argon
Families
Stainless steel is classified into five different "families" of alloys, each having a distinct set of attributes. Four of the families are defined by their predominant crystalline structure - the austenitic, ferritic, martensitic, and duplex alloys. The fifth family, precipitation hardening, is defined by the type of heat treatment used to develop its properties.
Austenitic
Austenitic stainless steel is the largest family of stainless steels, making up about two-thirds of all stainless steel production. They have a face-centered cubic crystal structure. This microstructure is achieved by alloying steel with sufficient nickel, manganese, or nitrogen to maintain an austenitic microstructure at all temperatures, ranging from the cryogenic region to the melting point. Thus, austenitic stainless steels are not hardenable by heat treatment since they possess the same microstructure at all temperatures.
Austenitic stainless steels consist of two subfamilies:
200 series are chromium-manganese-nickel alloys that maximize the use of manganese and nitrogen to minimize the use of nickel. Due to their nitrogen addition, they possess approximately 50% higher yield strength than 300-series stainless steels. Representative alloys include Type 201 and Type 202.
300 series are chromium-nickel alloys that achieve their austenitic microstructure almost exclusively by nickel alloying; some very highly alloyed grades include some nitrogen to reduce nickel requirements. 300 series is the largest group and the most widely used. Representative alloys include Type 304 and Type 316.
Ferritic
Ferritic stainless steels have a body-centered cubic crystal structure, are magnetic, and are hardenable by cold working, but not by heat treating. They contain between 10.5% and 27% chromium with very little or no nickel. Due to the near-absence of nickel, they are less expensive than austenitic stainless steels. Representative alloys include Type 409, Type 429, Type 430, and Type 446. Ferritic stainless steels are present in many products, which include:
Automobile exhaust pipes
Architectural and structural applications
Building components, such as slate hooks, roofing, and chimney ducts
Power plates in solid oxide fuel cells operating at temperatures around 700 °C (1,290 °F)
Martensitic
Martensitic stainless steels have a body-centered tetragonal crystal structure, are magnetic, and are hardenable by heat treating and by cold working. They offer a wide range of properties and are used as stainless engineering steels, stainless tool steels, and creep-resistant steels. They are not as corrosion-resistant as ferritic and austenitic stainless steels due to their low chromium content. They fall into four categories (with some overlap):
Fe-Cr-C grades. These were the first grades used and are still widely used in engineering and wear-resistant applications. Representative grades include Type 410, Type 420, and Type 440C.
Fe-Cr-Ni-C grades. Some carbon is replaced by nickel. They offer higher toughness and higher corrosion resistance. Representative grades include Type 431.
Martensitic precipitation hardening grades. 17-4 PH (UNS S17400), the best-known grade, combines martensitic hardening and precipitation hardening to increase strength and toughness.
Creep-resisting grades. Small additions of niobium, vanadium, boron, and cobalt increase the strength and creep resistance up to about 650 °C (1,200 °F).
Martensitic stainless steels can be heat treated to provide better mechanical properties. The heat treatment typically involves three steps:
Austenitizing, in which the steel is heated to a temperature in the range of 980 to 1,050 °C (1,800 to 1,920 °F), depending on grade. The resulting austenite has a face-centered cubic crystal structure.
Quenching. The austenite is transformed into martensite, a hard body-centered tetragonal crystal structure. The quenched martensite is very hard and too brittle for most applications. Some residual austenite may remain.
Tempering. Martensite is heated to around 500 °C (930 °F), held at temperature, then air-cooled. Higher tempering temperatures decrease yield strength and ultimate tensile strength but increase the elongation and impact resistance.
Duplex
Duplex stainless steels have a mixed microstructure of austenite and ferrite, the ideal ratio being a 50:50 mix, though commercial alloys may have ratios of 40:60. They are characterized by higher chromium (19–32%) and molybdenum (up to 5%) and lower nickel contents than austenitic stainless steels. Duplex stainless steels have roughly twice the yield strength of austenitic stainless steel. Their mixed microstructure provides improved resistance to chloride stress corrosion cracking in comparison to austenitic stainless steel types 304 and 316. Duplex grades are usually divided into three sub-groups based on their corrosion resistance: lean duplex, standard duplex, and super duplex. The properties of duplex stainless steels are achieved with an overall lower alloy content than similar-performing super-austenitic grades, making their use cost-effective for many applications. The pulp and paper industry was one of the first to extensively use duplex stainless steel. Today, the oil and gas industry is the largest user and has pushed for more corrosion resistant grades, leading to the development of super duplex and hyper duplex grades. More recently, the less expensive (and slightly less corrosion-resistant) lean duplex has been developed, chiefly for structural applications in building and construction (concrete reinforcing bars, plates for bridges, coastal works) and in the water industry.
Precipitation hardening
Precipitation hardening stainless steels are characterized by the ability to be precipitation hardened to higher strength. There are three types of precipitation hardening stainless steels, classified according to their crystalline structure:
Martensitic precipitation hardenable stainless steels are martensitic at room temperature in both the solution annealed and precipitation hardened conditions. Representative alloys include 17-4 PH (UNS S17400), 15-5 PH (UNS S15500), Custom 450 (UNS S45000) and Custom 465 (UNS S46500).
Semi-austenitic precipitation hardenable stainless steels are initially austenitic in the solution annealed condition for ease of fabrication, but are subsequently transformed to martensite to provide higher strength and to be precipitation hardened. Representative alloys include 17-7 PH (UNS S17700), 15-7 PH (UNS S15700), AM-350 (UNS S35000), and AM-355 (UNS S35500).
Austenitic precipitation hardenable stainless steels are austenitic at room temperature in both the solution annealed and precipitation hardened conditions. Representative alloys include A-286 (UNS S66286) and Discalloy (UNS S66220).
Classification systems
Several different classification systems have been developed for designating stainless steels. The main system used in the United States has been the SAE steel grades numbering system. The SAE numbering system designates stainless steels by "Type" followed by a three-digit number and sometimes a letter suffix. A newer system that was jointly developed by ASTM and SAE in 1974 is The Unified Numbering System for Metals and Alloys (UNS). The Unified Numbering System classifies stainless steels using an alpha-numeric identifier consisting of "S" followed by five digits, although some austenitic stainless steels with high nickel content may fall into the nickel-base designation, which uses "N" as the alpha identifier. The UNS designations incorporate previously used designations, whether from the SAE numbering system or proprietary alloy designations. Europe has adopted EN 10088 for classification of stainless steels.
Corrosion resistance
Unlike carbon steel, stainless steels do not suffer uniform corrosion when exposed to wet environments. Unprotected carbon steel rusts readily when exposed to a combination of air and moisture. The resulting iron oxide surface layer is porous and fragile. In addition, as iron oxide occupies a larger volume than the original steel, this layer expands and tends to flake and fall away, exposing the underlying steel to further attack. In comparison, stainless steels contain sufficient chromium to undergo passivation, spontaneously forming a microscopically thin inert surface film of chromium oxide by reaction with the oxygen in the air and even the small amount of dissolved oxygen in the water. This passive film prevents further corrosion by blocking oxygen diffusion to the steel surface and thus prevents corrosion from spreading into the bulk of the metal. This film is self-repairing, even when scratched or temporarily disturbed by conditions that exceed the inherent corrosion resistance of that grade.
The resistance of this film to corrosion depends upon the chemical composition of the stainless steel, chiefly the chromium content. It is customary to distinguish between four forms of corrosion: uniform, localized (pitting), galvanic, and SCC (stress corrosion cracking). Any of these forms of corrosion can occur when the grade of stainless steel is not suited for the working environment.
Uniform
Uniform corrosion takes place in very aggressive environments, typically where chemicals are produced or heavily used, such as in the pulp and paper industries. The entire surface of the steel is attacked, and the corrosion is expressed as corrosion rate in mm/year (usually less than 0.1 mm/year is acceptable for such cases). Corrosion tables provide guidelines.
This is typically the case when stainless steels are exposed to acidic or basic solutions. Whether stainless steel corrodes depends on the kind and concentration of acid or base and the solution temperature. Uniform corrosion is typically easy to avoid because of extensive published corrosion data or easily performed laboratory corrosion testing.
Acidic solutions can be put into two general categories: reducing acids, such as hydrochloric acid and dilute sulfuric acid, and oxidizing acids, such as nitric acid and concentrated sulfuric acid. Increasing chromium and molybdenum content provides increased resistance to reducing acids while increasing chromium and silicon content provides increased resistance to oxidizing acids. Sulfuric acid is one of the most-produced industrial chemicals. At room temperature, type 304 stainless steel is only resistant to 3% acid, while type 316 is resistant to 3% acid up to 50 °C (122 °F) and 20% acid at room temperature. Thus type 304 SS is rarely used in contact with sulfuric acid. Type 904L and Alloy 20 are resistant to sulfuric acid at even higher concentrations above room temperature. Concentrated sulfuric acid possesses oxidizing characteristics like nitric acid, and thus silicon-bearing stainless steels are also useful. Hydrochloric acid damages any kind of stainless steel and should be avoided. All types of stainless steel resist attack from phosphoric acid and nitric acid at room temperature. At high concentrations and elevated temperatures, attack will occur, and higher-alloy stainless steels are required. In general, organic acids are less corrosive than mineral acids such as hydrochloric and sulfuric acid.
Type 304 and type 316 stainless steels are unaffected by weak bases such as ammonium hydroxide, even in high concentrations and at high temperatures. The same grades exposed to stronger bases such as sodium hydroxide at high concentrations and high temperatures will likely experience some etching and cracking. Increasing chromium and nickel contents provide increased resistance.
All grades resist damage from aldehydes and amines, though in the latter case type 316 is preferable to type 304; cellulose acetate damages type 304 unless the temperature is kept low. Fats and fatty acids only affect type 304 at temperatures above 150 °C (300 °F) and type 316 SS above 260 °C (500 °F), while type 317 SS is unaffected at all temperatures. Type 316L is required for the processing of urea.
Localized
Localized corrosion can occur in several ways, e.g. pitting corrosion and crevice corrosion. These localized attacks are most common in the presence of chloride ions. Higher chloride levels require more highly alloyed stainless steels.
Localized corrosion can be difficult to predict because it is dependent on many factors, including:
Chloride ion concentration. Even when chloride solution concentration is known, it is still possible for localized corrosion to occur unexpectedly. Chloride ions can become unevenly concentrated in certain areas, such as in crevices (e.g. under gaskets) or on surfaces in vapor spaces due to evaporation and condensation.
Temperature: increasing temperature increases susceptibility.
Acidity: increasing acidity increases susceptibility.
Stagnation: stagnant conditions increase susceptibility.
Oxidizing species: the presence of oxidizing species, such as ferric and cupric ions, increases susceptibility.
Pitting corrosion is considered the most common form of localized corrosion. The corrosion resistance of stainless steels to pitting corrosion is often expressed by the PREN, obtained through the formula:
$\mathrm{PREN} = \%\mathrm{Cr} + 3.3\,\%\mathrm{Mo} + 16\,\%\mathrm{N},$
where the terms correspond to the proportion of the contents by mass of chromium, molybdenum, and nitrogen in the steel. For example, if the steel consisted of 15% chromium, %Cr would be equal to 15.
The higher the PREN, the higher the pitting corrosion resistance. Thus, increasing chromium, molybdenum, and nitrogen contents provide better resistance to pitting corrosion.
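As a quick illustration (a minimal sketch; the nominal compositions in the comments are rounded illustrative values, not specification minima), the formula can be evaluated directly:

def pren(cr: float, mo: float = 0.0, n: float = 0.0) -> float:
    # Pitting resistance equivalent number: %Cr + 3.3 * %Mo + 16 * %N.
    return cr + 3.3 * mo + 16 * n

print(pren(cr=18.0))                  # type 304 (no Mo, no N): 18.0
print(pren(cr=17.0, mo=2.1, n=0.05))  # type 316 (nominal): about 24.7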
Though the PREN of certain steel may be theoretically sufficient to resist pitting corrosion, crevice corrosion can still occur when the poor design has created confined areas (overlapping plates, washer-plate interfaces, etc.) or when deposits form on the material. In these select areas, the PREN may not be high enough for the service conditions. Good design, fabrication techniques, alloy selection, proper operating conditions based on the concentration of active compounds present in the solution causing corrosion, pH, etc. can prevent such corrosion.
Stress
Stress corrosion cracking (SCC) is caused by combination of tensile stress and a corrosive environment and can lead to unexpected and sudden failure of a stainless steel component. It may occur when three conditions are met:
The part contains either applied or residual tensile stresses.
The part is in a corrosive environment.
The stainless steel is susceptible to SCC.
SCC can be prevented by eliminating one of these three conditions.
The SCC mechanism results from the following sequence of events:
Pitting occurs.
Cracks start from a pit initiation site.
Cracks then propagate through the metal in a transgranular or intergranular mode.
Failure occurs.
Galvanic
Galvanic corrosion (also called "dissimilar-metal corrosion") refers to corrosion damage induced when two dissimilar materials are coupled in a corrosive electrolyte. The most common electrolyte is water, ranging from freshwater to seawater. When a galvanic couple forms, one of the metals in the couple becomes the anode and corrodes faster than it would alone, while the other becomes the cathode and corrodes slower than it would alone. Stainless steel, due to having a more positive electrode potential than for example carbon steel and aluminium, becomes the cathode, accelerating the corrosion of the anodic metal. An example is the corrosion of aluminium rivets fastening stainless steel sheets in contact with water. The relative surface areas of the anode and the cathode are important in determining the rate of corrosion. In the above example, the surface area of the rivets is small compared to that of the stainless steel sheet, resulting in rapid corrosion. However, if stainless steel fasteners are used to assemble aluminium sheets, galvanic corrosion will be much slower because the galvanic current density on the aluminium surface will be many orders of magnitude smaller. A frequent mistake is to assemble stainless steel plates with carbon steel fasteners; whereas using stainless steel to fasten carbon-steel plates is usually acceptable, the reverse is not. Providing electrical insulation between the dissimilar metals, where possible, is effective at preventing this type of corrosion.
High-temperature
At elevated temperatures, all metals react with hot gases. The most common high-temperature gaseous mixture is air, of which oxygen is the most reactive component. To avoid corrosion in air, carbon steel is limited to approximately 480 °C (900 °F). Oxidation resistance in stainless steels increases with additions of chromium, silicon, and aluminium. Small additions of cerium and yttrium increase the adhesion of the oxide layer on the surface. The addition of chromium remains the most common method to increase high-temperature corrosion resistance in stainless steels; chromium reacts with oxygen to form a chromium oxide scale, which reduces oxygen diffusion into the material. The minimum 10.5% chromium in stainless steels provides resistance to approximately 700 °C (1,290 °F), while 16% chromium provides resistance up to approximately 1,200 °C (2,190 °F). Type 304, the most common grade of stainless steel with 18% chromium, is resistant to approximately 870 °C (1,600 °F). Other gases, such as sulfur dioxide, hydrogen sulfide, carbon monoxide, and chlorine, also attack stainless steel. Resistance to other gases is dependent on the type of gas, the temperature, and the alloying content of the stainless steel. With the addition of up to 5% aluminium, ferritic grades Fe-Cr-Al are designed for electrical resistance and oxidation resistance at elevated temperatures. Such alloys include Kanthal, produced in the form of wire or ribbons.
Standard finishes
Standard mill finishes can be applied to flat rolled stainless steel directly by the rollers and by mechanical abrasives. Steel is first rolled to size and thickness and then annealed to change the properties of the final material. Any oxidation that forms on the surface (mill scale) is removed by pickling, and a passivation layer is created on the surface. A final finish can then be applied to achieve the desired aesthetic appearance.
The following designations are used in the U.S. to describe stainless steel finishes by ASTM A480/A480M-18 (DIN):
No. 0: Hot-rolled, annealed, thicker plates
No. 1 (1D): Hot-rolled, annealed and passivated
No. 2D (2D): Cold rolled, annealed, pickled and passivated
No. 2B (2B): Same as above with additional pass through highly polished rollers
No. 2BA (2R): Bright annealed (BA or 2R) same as above then bright annealed under oxygen-free atmospheric condition
No. 3 (1G-2G): Coarse abrasive finish applied mechanically
No. 4 (1J-2J): Brushed finish
No. 5: Satin finish
No. 6 (1K-2K): Matte finish (brushed but smoother than #4)
No. 7 (1P-2P): Reflective finish
No. 8: Mirror finish
No. 9: Bead blast finish
No. 10: Heat colored finish – offering a wide range of electropolished and heat colored surfaces
Joining
A wide range of joining processes are available for stainless steels, though welding is by far the most common.
The ease of welding largely depends on the type of stainless steel used. Austenitic stainless steels are the easiest to weld by electric arc, with weld properties similar to those of the base metal (not cold-worked). Martensitic stainless steels can also be welded by electric-arc but, as the heat-affected zone (HAZ) and the fusion zone (FZ) form martensite upon cooling, precautions must be taken to avoid cracking of the weld. Improper welding practices can additionally cause sugaring (oxide scaling) and heat tint on the backside of the weld. This can be prevented with the use of back-purging gases, backing plates, and fluxes. Post-weld heat treatment is almost always required while preheating before welding is also necessary in some cases. Electric arc welding of type 430 ferritic stainless steel results in grain growth in the HAZ, which leads to brittleness. This has largely been overcome with stabilized ferritic grades, where niobium, titanium, and zirconium form precipitates that prevent grain growth. Duplex stainless steel welding by electric arc is a common practice but requires careful control of the process parameters. Otherwise, the precipitation of unwanted intermetallic phases occurs, which reduces the toughness of the welds.
Electric arc welding processes include:
Gas metal arc welding, also known as MIG/MAG welding
Gas tungsten arc welding, also known as tungsten inert gas (TIG) welding
Plasma arc welding
Flux-cored arc welding
Shielded metal arc welding (covered electrode)
Submerged arc welding
MIG, MAG and TIG welding are the most common methods.
Other welding processes include:
Stud welding
Resistance spot welding
Resistance seam welding
Flash welding
Laser beam welding
Oxy-acetylene welding
Stainless steel may be bonded with adhesives such as silicone, silyl modified polymers, and epoxies. Acrylic and polyurethane adhesives are also used in some situations.
Production
Most of the world's stainless steel production is produced by the following processes:
Electric arc furnace (EAF): stainless steel scrap, other ferrous scrap, and ferrous alloys (Fe Cr, Fe Ni, Fe Mo, Fe Si) are melted together. The molten metal is then poured into a ladle and transferred into the AOD process (see below).
Argon oxygen decarburization (AOD): carbon in the molten steel is removed (by turning it into carbon monoxide gas) and other compositional adjustments are made to achieve the desired chemical composition.
Continuous casting (CC): the molten metal is solidified into slabs for flat products (a typical section is 20 cm (8 in) thick and 2 m (6.6 ft) wide) or blooms (sections vary widely, but 25 by 25 cm (9.8 by 9.8 in) is the average size).
Hot rolling (HR): slabs and blooms are reheated in a furnace and hot-rolled. Hot rolling reduces the thickness of the slabs to produce coils about 3 mm (0.12 in) thick. Blooms, on the other hand, are hot-rolled into bars, which are cut into lengths at the exit of the rolling mill, or wire rod, which is coiled.
Cold finishing (CF) depends on the type of product being finished:
Hot-rolled coils are pickled in acid solutions to remove the oxide scale on the surface, then subsequently cold rolled in Sendzimir rolling mills and annealed in a protective atmosphere until the desired thickness and surface finish is obtained. Further operations such as slitting and tube forming can be performed in downstream facilities.
Hot-rolled bars are straightened, then machined to the required tolerance and finish.
Wire rod coils are subsequently processed to produce cold-finished bars on drawing benches, fasteners on boltmaking machines, and wire on single or multipass drawing machines.
World stainless steel production figures are published yearly by the International Stainless Steel Forum. Of the EU production figures, Italy, Belgium and Spain were notable, while Canada and Mexico produced none. China, Japan, South Korea, Taiwan, India, the US and Indonesia were large producers, while Russia reported little production.
Breakdown of production by stainless steels families in 2017:
Austenitic stainless steels Cr-Ni (also called 300-series, see "Grades" section above): 54%
Austenitic stainless steels Cr-Mn (also called 200-series): 21%
Ferritic and martensitic stainless steels (also called 400-series): 23%
Applications
Stainless steel is used in a multitude of fields including architecture, art, chemical engineering, food and beverage manufacture, vehicles, medicine, energy and firearms.
Life cycle cost
Life cycle cost (LCC) calculations are used to select the design and the materials that will lead to the lowest cost over the whole life of a project, such as a building or a bridge.
The formula, in a simple form, is the following:
$\mathrm{LCC} = \mathrm{AC} + \mathrm{IC} + \sum_{n=1}^{N} \frac{\mathrm{OC} + \mathrm{LP} + \mathrm{RC}}{(1+i)^n},$
where LCC is the overall life cycle cost, AC is the acquisition cost, IC the installation cost, OC the operating and maintenance costs, LP the cost of lost production due to downtime, and RC the replacement materials cost.
In addition, N is the planned life of the project, i the interest rate, and n the year in which a particular OC or LP or RC is taking place. The interest rate (i) is used to convert expenses from different years to their present value (a method widely used by banks and insurance companies) so they can be added and compared fairly. The usage of the sum formula ($\sum$) captures the fact that expenses over the lifetime of a project must be cumulated after they are corrected for the interest rate.
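As a worked sketch (illustrative Python with made-up cost figures, not data from any real project), the discounted sum can be computed as follows, showing how a higher acquisition cost can still yield the lower life cycle cost:

def life_cycle_cost(ac, ic, oc, lp, rc, i, years):
    # Discount each year's OC + LP + RC to present value, then add AC and IC.
    return ac + ic + sum((oc + lp + rc) / (1 + i) ** n for n in range(1, years + 1))

print(life_cycle_cost(ac=120, ic=20, oc=1, lp=0, rc=0, i=0.04, years=30))  # about 157
print(life_cycle_cost(ac=80, ic=20, oc=4, lp=1, rc=2, i=0.04, years=30))   # about 221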
Application of LCC in materials selection
Stainless steel used in projects often results in lower LCC values compared to other materials. The higher acquisition cost (AC) of stainless steel components are often offset by improvements in operating and maintenance costs, reduced loss of production (LP) costs, and the higher resale value of stainless steel components.
LCC calculations are usually limited to the project itself. However, there may be other costs that a project stakeholder may wish to consider:
Utilities, such as power plants, water supply & wastewater treatment, and hospitals, cannot be shut down. Any maintenance will require extra costs associated with continuing service.
Indirect societal costs (with possible political fallout) may be incurred in some situations such as closing or reducing traffic on bridges, creating queues, delays, loss of working hours to the people, and increased pollution by idling vehicles.
Sustainability – recycling and reuse
The average carbon footprint of stainless steel (all grades, all countries) is estimated to be 2.90 kg of CO2 per kg of stainless steel produced, of which 1.92 kg are emissions from raw materials (Cr, Ni, Mo); 0.54 kg from electricity and steam, and 0.44 kg are direct emissions (i.e., by the stainless steel plant). Note that stainless steel produced in countries that use cleaner sources of electricity (such as France, which uses nuclear energy) will have a lower carbon footprint. Ferritics without Ni will have a lower CO2 footprint than austenitics with 8% Ni or more. Carbon footprint must not be the only sustainability-related factor for deciding the choice of materials:
Over any product life, maintenance, repairs or early end of life (planned obsolescence) can increase its overall footprint far beyond initial material differences. In addition, loss of service (typically for bridges) may induce large hidden costs, such as queues, wasted fuel, and loss of man-hours.
How much material is used to provide a given service varies with the performance, particularly the strength level, which allows lighter structures and components.
Stainless steel is 100% recyclable. An average stainless steel object is composed of about 60% recycled material, of which approximately 40% originates from end-of-life products, while the remaining 60% comes from manufacturing processes. What prevents a higher recycling content is the availability of stainless steel scrap, in spite of a very high recycling rate. According to the International Resource Panel's Metal Stocks in Society report, the per capita stock of stainless steel in use in society is 80 to 180 kg in more developed countries and 15 kg in less-developed countries. There is a secondary market that recycles usable scrap for many stainless steel markets. The product is mostly coil, sheet, and blanks. This material is purchased at a less-than-prime price and sold to commercial quality stampers and sheet metal houses. The material may have scratches, pits, and dents but is made to the current specifications.
The stainless steel cycle starts with carbon steel scrap, primary metals, and slag. The next step is the production of hot-rolled and cold-finished steel products in steel mills. Some scrap is produced, which is directly reused in the melting shop. The manufacturing of components is the third step. Some scrap is produced and enters the recycling loop. Assembly of final goods and their use does not generate any material loss. The fourth step is the collection of stainless steel for recycling at the end of life of the goods (such as kitchenware, pulp and paper plants, or automotive parts). This is where it is most difficult to get stainless steel to enter the recycling loop.
Nanoscale stainless steel
Stainless steel nanoparticles have been produced in the laboratory. They may have uses as additives in high-performance applications. For example, sulfurization, phosphorization, and nitridation treatments to produce nanoscale stainless-steel-based catalysts could enhance the electrocatalytic performance of stainless steel for water splitting.
Health effects
Extensive research indicates a probable increased risk of cancer (particularly lung cancer) from inhaling fumes while welding stainless steel. Stainless steel welding is suspected of producing carcinogenic fumes from cadmium oxides, nickel, and chromium. According to Cancer Council Australia, "In 2017, all types of welding fumes were classified as a Group 1 carcinogen."
Stainless steel is generally considered to be biologically inert. However, during cooking, small amounts of nickel and chromium leach out of new stainless steel cookware into highly acidic food. Nickel can contribute to cancer risks—particularly lung cancer and nasal cancer. However, no connection between stainless steel cookware and cancer has been established.
See also
Cobalt-chrome
Corrosion engineering
Corrugated stainless steel tubing
List of blade materials
List of steel producers
Metallic fiber
Pilling–Bedworth ratio
Rouging
Weathering steel
Notes
References
Further reading
International Standard ISO 15510:2014
External links
1916 introductions
Biomaterials
Building materials
Chromium alloys
English inventions
Roofing materials | Stainless steel | Physics,Chemistry,Engineering,Biology | 8,229 |
5,411,423 | https://en.wikipedia.org/wiki/Social%20organism | Social organism is a sociological concept, or model, wherein a society or social structure is regarded as a "living organism". Individuals interacting through the various entities comprising a society, such as law, family, crime, etc., are considered as they interact with other entities of the society to meet its needs. Every entity of a society, or social organism, has a function in helping maintain the organism's stability and cohesiveness.
History
The model, or concept, of society-as-organism is traced by Walter M. Simon from Plato ('the organic theory of society'), and by George R. MacLay from Aristotle (384–322 BCE) through 19th-century and later thinkers, including the French philosopher and founder of sociology, Auguste Comte, the Scottish essayist, historian and philosopher Thomas Carlyle, the English philosopher and polymath Herbert Spencer, and the French sociologist Émile Durkheim.
According to Durkheim, the more specialized the function of an organism or society, the greater its development, and vice versa. The three core activities of a society are culture, politics, and economics. Societal health depends on the harmonious interworking of these three activities.
This concept was further developed beginning in 1904, over the next two decades, by the Austrian philosopher and social reformer Rudolf Steiner in his lectures, essays, and books on the Threefold Social Order. The "health" of a social organism can be thought of as a function of the interaction of culture, politics and rights, and economics, which in theory can be studied, modeled, and analyzed.
During his work on social order, Steiner developed his "Fundamental Social Law" of economic systems.
David Sloan Wilson, in his 2002 book, Darwin's Cathedral, applies his multilevel selection theory to social groups and proposes thinking of society as an organism. Human groups thus function as single units rather than mere collections of individuals.
See also
Body politic
Global brain
Noosphere
The Organic Theory of Societies
Superorganism
References
Bibliography
George R. MacLay, The Social Organism: A Short History of the Idea that a Human Society May Be Regarded as a Gigantic Living Creature, North River Press, 1990.
Henry Rawie, The Social Organism and its Natural Laws, Williams & Wilkins Co., 1990.
Rudolf Steiner, The Renewal of the Social Organism, Steiner Books, 1985.
Oliver Luckett, Michel J Casey, The Social Organism: A Radical Understanding of Social Media to Transform Your Business and Life, Hachette Books, 2016.
External links
Conceivia.com – Creating a new system of society.
Social Psychology and the Social Organism
Superorganisms
Sociological theories | Social organism | Biology | 551 |
27,510,746 | https://en.wikipedia.org/wiki/Tera%20100 | Tera 100 is a supercomputer built by Bull SA for the French Commissariat à l'Énergie Atomique.
On May 26, 2010, Tera 100 was turned on. The computer, which is located in Essonne, is able to sustain around 1 petaFLOPS, with a peak performance of 1.25 petaFLOPS. It has 4,300 Bullx Series S ('Mesca') servers, 140,000 Intel Xeon 7500 processor cores, and 300 TB of memory. The interconnect is QDR InfiniBand. The file system has a throughput of 500 GB/s and a total storage capacity of 20 PB. It uses the SLURM resource manager for scheduling batch jobs.
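The quoted sustained and peak figures imply an efficiency of roughly 80%, as this back-of-the-envelope calculation (using only the numbers above) shows:

```python
# Sustained (Rmax) versus theoretical peak (Rpeak) performance of Tera 100,
# using the figures quoted above.
rmax_pflops = 1.00    # sustained performance
rpeak_pflops = 1.25   # theoretical peak
print(f"efficiency: {rmax_pflops / rpeak_pflops:.0%}")  # -> 80%
```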
Tera 100 runs Bull XBAS Linux, a distribution partly derived from Red Hat Enterprise Linux.
In June 2011, TOP500 ranked it the ninth-fastest supercomputer in the world; by 2020 it had dropped off the list.
See also
Computer science
Computing
Tera-10
References
External links
CEA HPC site
Petascale computers
Supercomputing in Europe
X86 supercomputers | Tera 100 | Technology | 238 |
59,457,604 | https://en.wikipedia.org/wiki/Hartmut%20B%C3%A4rnighausen | Hartmut Bärnighausen (born 16 February 1933 in Chemnitz) is a German chemist and crystallographer. He is known for establishing the Bärnighausen trees which describe group-subgroup relationships of crystal structures.
Life
Bärnighausen studied chemistry at Leipzig University and received his diploma after a diploma thesis with Leopold Wolf in 1955. In May 1958, he fled East Germany for the University of Freiburg, where he worked with Georg Brauer. He completed his doctorate in Brauer's group in 1959. In 1967, he received his habilitation. From 1967 to 1998, he was a professor of inorganic chemistry at the University of Karlsruhe.
Research
His research focused on the following topics:
crystallographic group theory in crystal chemistry (Bärnighausen trees)
synthesis and characterization of new compounds containing rare earth metals
structure refinements of twinned crystals
Awards
He was awarded the Carl Hermann Medal of the German Crystallographic Society in 1997.
References
Living people
20th-century German chemists
Crystallographers
1933 births
People from Chemnitz
Academic staff of the Karlsruhe Institute of Technology
Leipzig University alumni
University of Freiburg alumni
Inorganic chemists | Hartmut Bärnighausen | Chemistry | 231 |
227,682 | https://en.wikipedia.org/wiki/Meristem | In cell biology, the meristem is a type of tissue found in plants. It consists of undifferentiated cells (meristematic cells) capable of cell division. Cells in the meristem can develop into all the other tissues and organs that occur in plants. These cells continue to divide until they become differentiated and lose the ability to divide.
Differentiated plant cells generally cannot divide or produce cells of a different type. Meristematic cells are undifferentiated or incompletely differentiated. They are totipotent and capable of continued cell division. Division of meristematic cells provides new cells for expansion and differentiation of tissues and the initiation of new organs, providing the basic structure of the plant body. The cells are small, with small vacuoles or none, and protoplasm filling the cell completely. The plastids (chloroplasts or chromoplasts) are undifferentiated, but are present in rudimentary form (proplastids). Meristematic cells are packed closely together without intercellular spaces. The cell wall is a very thin primary cell wall.
The term meristem was first used in 1858 by Swiss botanist Carl Wilhelm von Nägeli (1817–1891) in his book Beiträge zur wissenschaftlichen Botanik ("Contributions to Scientific Botany"). It is derived from the Greek word merizein (μερίζειν), meaning "to divide", in recognition of its inherent function.
There are three types of meristematic tissues: apical (at the tips), intercalary or basal (in the middle), and lateral (at the sides also known as cambium). At the meristem summit, there is a small group of slowly dividing cells, which is commonly called the central zone. Cells of this zone have a stem cell function and are essential for meristem maintenance. The proliferation and growth rates at the meristem summit usually differ considerably from those at the periphery.
Primary meristems
Apical meristems give rise to the primary plant body and are responsible for primary growth, or an increase in length or height. Apical meristems may differentiate into three kinds of primary meristem:
Protoderm: lies around the outside of the stem and develops into the epidermis.
Procambium: lies just inside of the protoderm and develops into primary xylem and primary phloem. It also produces the vascular cambium, and cork cambium (secondary meristems). The cork cambium further differentiates into the phelloderm (to the inside) and the phellem, or cork (to the outside). All three of these layers (cork cambium, phellem, and phelloderm) constitute the periderm. In roots, the procambium can also give rise to the pericycle, which produces lateral roots in eudicots.
Ground meristem: Composed of parenchyma, collenchyma and sclerenchyma cells that develop into the cortex and the pith.
Secondary meristems
After the primary growth, lateral meristems develop as secondary plant growth. This growth adds to the plant in diameter from the established stem but not all plants exhibit secondary growth. There are two types of secondary meristems: the vascular cambium and the cork cambium.
Vascular cambium, which produces secondary xylem and secondary phloem. This is a process that may continue throughout the life of the plant. This is what gives rise to wood in plants. Such plants are called arboraceous. This does not occur in plants that do not go through secondary growth (known as herbaceous plants).
Cork cambium, which gives rise to the periderm, which replaces the epidermis.
Apical meristems
Apical meristems are the completely undifferentiated (indeterminate) meristems in a plant. These differentiate into three kinds of primary meristems. The primary meristems in turn produce the two secondary meristem types. These secondary meristems are also known as lateral meristems, as they are involved in lateral growth.
There are two types of apical meristem tissue: shoot apical meristem (SAM), which gives rise to organs like the leaves and flowers, and root apical meristem (RAM), which provides the meristematic cells for future root growth. SAM and RAM cells divide rapidly and are considered indeterminate, in that they do not possess any defined end status. In that sense, the meristematic cells are frequently compared to the stem cells in animals, which have an analogous behavior and function.
The apical meristems are layered, with the number of layers varying according to plant type. In general the outermost layer is called the tunica, while the innermost layers form the corpus. In monocots, the tunica determines the physical characteristics of the leaf edge and margin. In dicots, layer two of the corpus determines the characteristics of the edge of the leaf. The corpus and tunica play a critical part in the plant's physical appearance, as all plant cells are formed from the meristems. Apical meristems are found in two locations: the root and the stem. Some arctic plants have an apical meristem in the lower/middle parts of the plant. It is thought that this kind of meristem evolved because it is advantageous in arctic conditions.
Shoot Apical Meristems
Shoot apical meristems are the source of all above-ground organs, such as leaves and flowers. Cells at the shoot apical meristem summit serve as stem cells to the surrounding peripheral region, where they proliferate rapidly and are incorporated into differentiating leaf or flower primordia.
The shoot apical meristem is the site of most of the embryogenesis in flowering plants. Primordia of leaves, sepals, petals, stamens, and ovaries are initiated here at a rate of one per time interval, called a plastochron. It is where the first indications that flower development has been evoked are manifested. One of these indications might be the loss of apical dominance and the release of otherwise dormant cells to develop as axillary shoot meristems, in some species in axils of primordia as close as two or three away from the apical dome.
The shoot apical meristem consists of four distinct cell groups:
Stem cells
The immediate daughter cells of the stem cells
A subjacent organizing center
Founder cells for organ initiation in surrounding regions
These four distinct zones are maintained by a complex signalling pathway. In Arabidopsis thaliana, 3 interacting CLAVATA genes are required to regulate the size of the stem cell reservoir in the shoot apical meristem by controlling the rate of cell division. CLV1 and CLV2 are predicted to form a receptor complex (of the LRR receptor-like kinase family) to which CLV3 is a ligand. CLV3 shares some homology with the ESR proteins of maize, with a short 14 amino acid region being conserved between the proteins. Proteins that contain these conserved regions have been grouped into the CLE family of proteins.
CLV1 has been shown to interact with several cytoplasmic proteins that are most likely involved in downstream signalling. For example, the CLV complex has been found to be associated with Rho/Rac small GTPase-related proteins. These proteins may act as an intermediate between the CLV complex and a mitogen-activated protein kinase (MAPK), which is often involved in signalling cascades. KAPP is a kinase-associated protein phosphatase that has been shown to interact with CLV1. KAPP is thought to act as a negative regulator of CLV1 by dephosphorylating it.
Another important gene in plant meristem maintenance is WUSCHEL (shortened to WUS), which is a target of CLV signaling in addition to positively regulating CLV, thus forming a feedback loop. WUS is expressed in the cells below the stem cells of the meristem and its presence prevents the differentiation of the stem cells. CLV1 acts to promote cellular differentiation by repressing WUS activity outside of the central zone containing the stem cells.
The function of WUS in the shoot apical meristem is linked to the phytohormone cytokinin. Cytokinin activates histidine kinases which then phosphorylate histidine phosphotransfer proteins. Subsequently, the phosphate groups are transferred onto two types of Arabidopsis response regulators (ARRs): Type-B ARRS and Type-A ARRs. Type-B ARRs work as transcription factors to activate genes downstream of cytokinin, including A-ARRs. A-ARRs are similar to B-ARRs in structure; however, A-ARRs do not contain the DNA binding domains that B-ARRs have, and which are required to function as transcription factors. Therefore, A-ARRs do not contribute to the activation of transcription, and by competing for phosphates from phosphotransfer proteins, inhibit B-ARRs function. In the SAM, B-ARRs induce the expression of WUS which induces stem cell identity. WUS then suppresses A-ARRs. As a result, B-ARRs are no longer inhibited, causing sustained cytokinin signaling in the center of the shoot apical meristem. Altogether with CLAVATA signaling, this system works as a negative feedback loop. Cytokinin signaling is positively reinforced by WUS to prevent the inhibition of cytokinin signaling, while WUS promotes its own inhibitor in the form of CLV3, which ultimately keeps WUS and cytokinin signaling in check.
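The feedback logic described above can be illustrated with a toy dynamical model. The sketch below is not a published model of the Arabidopsis meristem; the saturating-repression form and all rate constants are illustrative assumptions, meant only to show that this wiring (cytokinin-driven WUS production, WUS-induced CLV3, CLV3 repression of WUS) settles to a stable balance.

```python
# Toy model of the WUS-CLV3 negative feedback loop; functional forms and
# rate constants are illustrative assumptions, not measured quantities.
def step(wus, clv3, cytokinin=1.0, dt=0.01):
    # WUS production is driven by cytokinin signalling, repressed by CLV3.
    dwus = cytokinin / (1.0 + clv3) - 0.5 * wus
    # WUS induces its own inhibitor, CLV3 (via CLAVATA signalling).
    dclv3 = 0.8 * wus - 0.5 * clv3
    return wus + dwus * dt, clv3 + dclv3 * dt

wus, clv3 = 0.1, 0.0
for _ in range(5000):                  # simple Euler integration to t = 50
    wus, clv3 = step(wus, clv3)
print(f"steady state: WUS = {wus:.2f}, CLV3 = {clv3:.2f}")
```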
Root apical meristem
Unlike the shoot apical meristem, the root apical meristem produces cells in two dimensions. It harbors two pools of stem cells around an organizing center called the quiescent center (QC); together these produce most of the cells in an adult root. At its apex, the root meristem is covered by the root cap, which protects and guides its growth trajectory. Cells are continuously sloughed off the outer surface of the root cap. The QC cells are characterized by their low mitotic activity. Evidence suggests that the QC maintains the surrounding stem cells by preventing their differentiation, via signal(s) that are yet to be discovered. This allows a constant supply of new cells in the meristem required for continuous root growth. Recent findings indicate that the QC can also act as a reservoir of stem cells to replenish whatever is lost or damaged. The root apical meristem and tissue patterns become established in the embryo in the case of the primary root, and in the new lateral root primordium in the case of secondary roots.
Intercalary meristem
In angiosperms, intercalary (sometimes called basal) meristems occur in monocot (in particular, grass) stems at the base of nodes and leaf blades. Horsetails and Welwitschia also exhibit intercalary growth. Intercalary meristems are capable of cell division, and they allow for rapid growth and regrowth of many monocots. Intercalary meristems at the nodes of bamboo allow for rapid stem elongation, while those at the base of most grass leaf blades allow damaged leaves to rapidly regrow. This leaf regrowth in grasses evolved in response to damage by grazing herbivores and/or wildfires.
Floral meristem
When plants begin flowering, the shoot apical meristem is transformed into an inflorescence meristem, which goes on to produce the floral meristem, which produces the sepals, petals, stamens, and carpels of the flower.
In contrast to vegetative apical meristems and some inflorescence meristems, floral meristems cannot continue to grow indefinitely. Their growth is limited to the flower with a particular size and form. The transition from shoot meristem to floral meristem requires floral meristem identity genes, which both specify the floral organs and cause the termination of the production of stem cells. AGAMOUS (AG) is a floral homeotic gene required for floral meristem termination and necessary for proper development of the stamens and carpels. AG is necessary to prevent the conversion of floral meristems to inflorescence shoot meristems. It is activated by the floral meristem identity gene LEAFY (LFY) and by WUS, and its expression is restricted to the centre of the floral meristem or the inner two whorls. This way floral identity and region specificity are achieved. WUS activates AG by binding to a consensus sequence in AG's second intron, and LFY binds to adjacent recognition sites. Once AG is activated, it represses expression of WUS, leading to the termination of the meristem.
Through the years, scientists have manipulated floral meristems for economic reasons. An example is the mutant tobacco plant "Maryland Mammoth". In 1936, the department of agriculture of Switzerland performed several scientific tests with this plant. "Maryland Mammoth" is peculiar in that it grows much faster than other tobacco plants.
Apical dominance
Apical dominance is where one meristem prevents or inhibits the growth of other meristems. As a result, the plant will have one clearly defined main trunk. For example, in trees, the tip of the main trunk bears the dominant shoot meristem. Therefore, the tip of the trunk grows rapidly and is not shadowed by branches. If the dominant meristem is cut off, one or more branch tips will assume dominance. The branch will start growing faster and the new growth will be vertical. Over the years, the branch may begin to look more and more like an extension of the main trunk. Often several branches will exhibit this behavior after the removal of apical meristem, leading to a bushy growth.
The mechanism of apical dominance is based on auxins, types of plant growth regulators. These are produced in the apical meristem and transported towards the roots in the cambium. If apical dominance is complete, they prevent any branches from forming as long as the apical meristem is active. If the dominance is incomplete, side branches will develop.
Recent investigations into apical dominance and the control of branching have revealed a new plant hormone family termed strigolactones. These compounds were previously known to be involved in seed germination and communication with mycorrhizal fungi and are now shown to be involved in inhibition of branching.
Diversity in meristem architectures
The SAM contains a population of stem cells that also produce the lateral meristems while the stem elongates. It turns out that the mechanism of regulation of the stem cell number might be evolutionarily conserved. The CLAVATA gene CLV2, responsible for maintaining the stem cell population in Arabidopsis thaliana, is very closely related to the maize gene FASCIATED EAR 2 (FEA2), which is involved in the same function. Similarly, in rice, the FON1-FON2 system seems to bear a close relationship with the CLV signaling system in Arabidopsis thaliana. These studies suggest that the regulation of stem cell number, identity and differentiation might be an evolutionarily conserved mechanism in monocots, if not in angiosperms as a whole. Rice also contains another genetic system, distinct from FON1-FON2, that is involved in regulating stem cell number. This example underlines the constant innovation that occurs in the living world.
Role of the KNOX-family genes
Genetic screens have identified genes belonging to the KNOX family in this function. These genes essentially maintain the stem cells in an undifferentiated state. The KNOX family has undergone quite a bit of evolutionary diversification while keeping the overall mechanism more or less similar. Members of the KNOX family have been found in plants as diverse as Arabidopsis thaliana, rice, barley and tomato. KNOX-like genes are also present in some algae, mosses, ferns and gymnosperms. Misexpression of these genes leads to the formation of interesting morphological features. For example, among members of Antirrhineae, only the species of the genus Antirrhinum lack a structure called spur in the floral region. A spur is considered an evolutionary innovation because it defines pollinator specificity and attraction. Researchers carried out transposon mutagenesis in Antirrhinum majus, and saw that some insertions led to formation of spurs that were very similar to the other members of Antirrhineae, indicating that the loss of spur in wild Antirrhinum majus populations could probably be an evolutionary innovation.
The KNOX family has also been implicated in leaf shape evolution (See below for a more detailed discussion). One study looked at the pattern of KNOX gene expression in A. thaliana, that has simple leaves and Cardamine hirsuta, a plant having complex leaves. In A. thaliana, the KNOX genes are completely turned off in leaves, but in C.hirsuta, the expression continued, generating complex leaves. Also, it has been proposed that the mechanism of KNOX gene action is conserved across all vascular plants, because there is a tight correlation between KNOX expression and a complex leaf morphology.
Indeterminate growth of meristems
Though each plant grows according to a certain set of rules, each new root and shoot meristem can go on growing for as long as it is alive. In many plants, meristematic growth is potentially indeterminate, making the overall shape of the plant not determinate in advance. This is the primary growth. Primary growth leads to lengthening of the plant body and organ formation. All plant organs arise ultimately from cell divisions in the apical meristems, followed by cell expansion and differentiation. Primary growth gives rise to the apical part of many plants.
The growth of nitrogen-fixing root nodules on legume plants such as soybean and pea is either determinate or indeterminate. Thus, soybean (as well as bean and Lotus japonicus) produces determinate nodules (spherical), with a branched vascular system surrounding the central infected zone. Often, Rhizobium-infected cells have only small vacuoles. In contrast, nodules on pea, clovers, and Medicago truncatula are indeterminate, maintaining (at least for some time) an active meristem that yields new cells for Rhizobium infection. Thus zones of maturity exist in the nodule. Infected cells usually possess a large vacuole. The plant vascular system is branched and peripheral.
Cloning
Under appropriate conditions, each shoot meristem can develop into a complete new plant, or clone. Such new plants can be grown from shoot cuttings that contain an apical meristem. Root apical meristems are not readily cloned, however. This cloning is called asexual reproduction or vegetative reproduction and is widely practiced in horticulture to mass-produce plants of a desirable genotype. This process, known as mericloning, has been shown to reduce or eliminate viruses present in the parent plant in multiple species of plants.
Propagating through cuttings is another form of vegetative propagation that initiates root or shoot production from secondary meristematic cambial cells. This explains why basal 'wounding' of shoot-borne cuttings often aids root formation.
Induced meristems
Meristems may also be induced in the roots of legumes such as soybean, Lotus japonicus, pea, and Medicago truncatula after infection with soil bacteria commonly called Rhizobia. Cells of the inner or outer cortex in the so-called "window of nodulation" just behind the developing root tip are induced to divide. The critical signal substance is the lipo-oligosaccharide Nod factor, decorated with side groups to allow specificity of interaction. The Nod factor receptor proteins NFR1 and NFR5 were cloned from several legumes including Lotus japonicus, Medicago truncatula and soybean (Glycine max). Regulation of nodule meristems utilizes long-distance regulation known as the autoregulation of nodulation (AON). This process involves a leaf-vascular tissue located LRR receptor kinases (LjHAR1, GmNARK and MtSUNN), CLE peptide signalling, and KAPP interaction, similar to that seen in the CLV1,2,3 system. LjKLAVIER also exhibits a nodule regulation phenotype though it is not yet known how this relates to the other AON receptor kinases.
Lateral Meristems
Lateral meristems, the form of secondary plant growth, add growth to the plants in their diameter. This is primarily observed in perennial dicots that survive from year to year. There are two types of lateral meristems: vascular cambium and cork cambium.
In vascular cambium, the primary phloem and xylem are produced by the apical meristem. After this initial development, secondary phloem and xylem are produced by the lateral meristem. The two are connected through a thin layer of parenchymal cells which differentiate into the fascicular cambium. The fascicular cambium divides to create the new secondary phloem and xylem. Following this, the cortical parenchyma between vascular cylinders differentiates into the interfascicular cambium. This process repeats for indeterminate growth.
Cork cambium creates a protective covering around the outside of a plant. This occurs after the secondary xylem and phloem has expanded already. Cortical parenchymal cells differentiate into cork cambium near the epidermis which lays down new cells called phelloderm and cork cells. These cork cells are impermeable to water and gases because of a substance called suberin that coats them.
See also
Primary growth
Secondary growth
Stem cell
Thallus
Tissues
References
Sources
Plant Anatomy Laboratory from University of Texas; the lab of JD Mauseth. Micrographs of plant cells and tissues, with explanatory text.
Scofield and Murray (2006). The evolving concept of the meristem. Plant Molecular Biology 60:v–vii.
External links
Meristemania.org – Research on meristems
Plant anatomy
Plant physiology | Meristem | Biology | 4,640 |
23,943,342 | https://en.wikipedia.org/wiki/Image-based%20meshing | Image-based meshing is the automated process of creating computer models for computational fluid dynamics (CFD) and finite element analysis (FEA) from 3D image data (such as magnetic resonance imaging (MRI), computed tomography (CT) or microtomography). Although a wide range of mesh generation techniques are currently available, these were usually developed to generate models from computer-aided design (CAD), and therefore have difficulties meshing from 3D imaging data.
Mesh generation from 3D imaging data
Meshing from 3D imaging data presents a number of challenges but also unique opportunities for presenting a more realistic and accurate geometrical description of the computational domain. There are generally two ways of meshing from 3D imaging data:
CAD-based approach
The majority of approaches used to date still follow the traditional CAD route by using an intermediary step of surface reconstruction which is then followed by a traditional CAD-based meshing algorithm. CAD-based approaches use the scan data to define the surface of the domain and then create elements within this defined boundary. Although reasonably robust algorithms are now available, these techniques are often time consuming, and virtually intractable for the complex topologies typical of image data. They also do not easily allow for more than one domain to be meshed, as multiple surfaces are often non-conforming with gaps or overlaps at interfaces where one or more structures meet.
Image-based approach
This approach is more direct, as it combines the geometric detection and mesh creation stages in one process, which offers a more robust and accurate result than meshing from surface data. Voxel conversion techniques providing meshes with brick elements or with tetrahedral elements have been proposed.
Another approach generates 3D hexahedral or tetrahedral elements throughout the volume of the domain, thus creating the mesh directly with conforming multipart surfaces.
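A minimal sketch of this direct approach, assuming a binary segmented volume: every foreground voxel becomes one hexahedral ("brick") element, with corner nodes shared between neighbouring voxels. The 3×3×3 mask is a placeholder for real segmented image data.

```python
# Direct voxel-to-mesh conversion: one hexahedral element per segmented
# voxel, with deduplicated corner nodes. Placeholder data, not a real scan.
import numpy as np

mask = np.zeros((3, 3, 3), dtype=bool)
mask[1, :, :] = True                       # a 1-voxel-thick segmented plate

node_id = {}                               # (i, j, k) corner -> node index
elements = []                              # 8 node indices per hexahedron
for i, j, k in zip(*np.nonzero(mask)):
    corners = []
    for di, dj, dk in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
                       (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]:
        key = (i + di, j + dj, k + dk)
        if key not in node_id:             # share corners between voxels
            node_id[key] = len(node_id)
        corners.append(node_id[key])
    elements.append(corners)

print(len(node_id), "nodes,", len(elements), "brick elements")  # 32, 9
```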
Generating a model
The steps involved in the generation of models based on 3D imaging data are:
Scan and image processing
An extensive range of image processing tools can be used to generate highly accurate models based on data from 3D imaging modalities, e.g. MRI, CT, MicroCT (XMT), and Ultrasound. Features of particular interest include:
Segmentation tools (e.g. thresholding, floodfill, level set methods, etc.; a minimal sketch follows this list)
Filters and smoothing tools (e.g. volume- and topology-preserving smoothing, noise reduction, and artefact removal).
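A minimal segmentation sketch under stated assumptions: a global grey-value threshold followed by connected-component labelling (a crude stand-in for floodfill), keeping the largest region. The random array is a placeholder for real CT/MRI data, and the threshold is arbitrary.

```python
# Threshold segmentation plus connected-component labelling; the random
# volume and the threshold value are placeholders, not real image data.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
volume = rng.normal(100.0, 20.0, size=(32, 32, 32))  # fake grey values
mask = volume > 130.0                                # threshold segmentation

labels, n = ndimage.label(mask)                      # connected components
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
largest = labels == (int(np.argmax(sizes)) + 1)      # keep largest region
print(f"{n} regions; largest has {int(largest.sum())} voxels")
```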
Volume and surface mesh generation
The image-based meshing technique allows the straightforward generation of meshes out of segmented 3D data. Features of particular interest include:
Multi-part meshing (mesh any number of structures simultaneously)
Mapping functions to apply material properties based on signal strength (e.g. Young's modulus to Hounsfield scale)
Smoothing of meshes (e.g. topological preservation of data to ensure preservation of connectivity, and volume neutral smoothing to prevent shrinkage of convex hulls)
Export to FEA and CFD codes for analysis (e.g. node sets, shell elements, material properties, contact surfaces, boundary layers, inlets/outlets)
Typical use
Biomechanics and design of medical and dental implants
Food science
Forensic science
Materials science (composites and foams)
Nondestructive testing (NDT)
Paleontology and functional morphology
Reverse engineering
Soil science
Petrophysics
See also
Image segmentation
References
External links
Computing-Objects commercial C++ libraries for mesh generation & FEM computation
ScanIP commercial image-based meshing software: www.simpleware.com
Mimics 3D image-based engineering software for FEA and CFD on anatomical data: Mimics website
Google group on image-based modelling:
Avizo Software's 3D image-based meshing tools for CFD and FEA
iso2mesh: a free 3D surface and volumetric mesh generator for matlab/octave
OOF3D, object oriented finite element analysis from the NIST
VGSTUDIO MAX, Commercial CT analysis software for industry. They offer an add-on module for FEM meshing.
Mesh generation
Computer graphics algorithms
3D computer graphics | Image-based meshing | Physics | 830 |
45,320,628 | https://en.wikipedia.org/wiki/BioFabric | BioFabric is an open-source software application for graph drawing. It presents graphs as a node-link diagram, but unlike other graph drawing tools that depict the nodes using discrete symbols, it represents nodes using horizontal lines.
Rationale
Traditional node-link methods for visualizing networks deteriorate in terms of legibility when dealing with large networks, due to the proliferation of edge crossings amassing as what are disparagingly termed 'hairballs'. BioFabric is one of a number of alternative approaches designed explicitly to tackle this scalability issue, choosing to do so by depicting nodes as lines on the horizontal axis, one per row; edges as lines on the vertical axis, one per column, terminating at the two rows associated with the endpoint nodes. As such, nodes and edges are each provided their own dimension (as opposed to solely the edges with nodes being non-dimensional points). BioFabric exploits the additional degree of freedom thus produced to place ends of incident edges in groups. This placement can potentially carry semantic information, whereas in node-link graphics the placement is often arbitrarily generated within constraints for aesthetics, such as during force-directed graph drawing, and may result in apparently informative artifacts.
Edges are drawn (vertically) in a darker shade than (horizontal) nodes, creating visual distinction. Additional edges increase the width of the graph.
Both ends of a link are represented as a square to reinforce the above effect even at small scales. Directed graphs also incorporate arrowheads.
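The layout described above can be sketched in a few lines with matplotlib. This is illustrative only, not BioFabric's own algorithm: node rows and edge columns are simply taken in input order, whereas BioFabric applies its own ordering heuristics (for example, by node degree).

```python
# BioFabric-style drawing: nodes as horizontal lines (one row each),
# edges as vertical lines (one column each) joining their endpoint rows.
# The small example graph and its ordering are illustrative assumptions.
import matplotlib.pyplot as plt

nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
row = {n: i for i, n in enumerate(nodes)}

fig, ax = plt.subplots()
cols = {n: [] for n in nodes}              # edge columns touching each node
for col, (u, v) in enumerate(edges):
    lo, hi = sorted((row[u], row[v]))
    ax.vlines(col, lo, hi, color="0.2")                 # darker edge line
    ax.scatter([col, col], [lo, hi], marker="s", s=25, color="0.2")
    cols[u].append(col)
    cols[v].append(col)
for n in nodes:                            # node line spans its edge columns
    ax.hlines(row[n], min(cols[n]), max(cols[n]), color="0.7", lw=3)
    ax.text(min(cols[n]) - 0.3, row[n], n, ha="right", va="center")
ax.invert_yaxis()
ax.set_axis_off()
plt.show()
```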
Development
The first version, 1.0.0, was released in July 2012. Development work on BioFabric is ongoing. An open-source R implementation, RBioFabric, was released in 2013 for use with the igraph package, and was subsequently described on the project weblog.
Features
Input
Networks can be imported using SIF files as input.
Related work
Blakley et al. have described how the technique used by BioFabric, which they refer to as a cartographic representation, can be used to compare networks A and B by juxtaposing the edges in (A \ B), (A ∩ B), and (B \ A), a technique that is evocative of a Venn diagram. Rossi and Magnani have developed ranked sociograms, a BioFabric-like presentation where the node ordering is based upon a ranking metric. This approach attaches semantic meaning to the length of the edge lines, and can be used to visualize the assortativity or dissortativity of a network.
See also
Graph drawing
Systems biology
References
External links
BioFabric site
Systems biology
Graph drawing software
Cross-platform software
Java platform software | BioFabric | Biology | 555 |
21,161,506 | https://en.wikipedia.org/wiki/Bephenium%20hydroxynaphthoate | Bephenium hydroxynaphthoate (INN, trade names Alcopara, Alcopar, Befenium, Debefenium, Francin, Nemex) is an anthelmintic agent formerly used in the treatment of hookworm infections and ascariasis. It is formulated as a salt between the active pharmaceutical ingredient, bephenium, and 3-hydroxy-2-naphthoic acid.
Bephenium is not FDA-approved and is not available in the United States.
References
Antiparasitic agents
Quaternary ammonium compounds
Benzyl compounds
Nicotinic agonists
Salicylates | Bephenium hydroxynaphthoate | Biology | 138 |
16,513,502 | https://en.wikipedia.org/wiki/Thermal%20conductivity%20measurement | There are a number of possible ways to measure thermal conductivity, each of them suitable for a limited range of materials, depending on the thermal properties and the medium temperature. Three classes of methods exist to measure the thermal conductivity of a sample: steady-state, time-domain, and frequency-domain methods.
Steady-state methods
In general, steady-state techniques perform a measurement when the temperature of the material measured does not change with time. This makes the signal analysis straightforward (steady state implies constant signals). The disadvantage is that a well-engineered experimental setup is usually needed.
Steady-state methods, in general, work by applying a known heat flux, $Q$, to a sample with a surface area, $A$, and thickness, $L$; once the sample's steady-state temperature is reached, the difference in temperature, $\Delta T$, across the thickness of the sample is measured. After assuming one-dimensional heat flow and an isotropic medium, Fourier's law is then used to calculate the measured thermal conductivity, $k$:

$$k = \frac{Q\,L}{A\,\Delta T}$$
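As a worked example of this formula (all sample values are illustrative):

```python
# Steady-state evaluation of Fourier's law, k = Q * L / (A * dT).
# All numbers are illustrative.
Q = 5.0       # heat flow through the sample, W
A = 0.01      # cross-sectional area, m^2
L = 0.005     # sample thickness, m
dT = 10.0     # steady-state temperature difference, K

k = Q * L / (A * dT)
print(f"k = {k:.3f} W/(m*K)")  # -> 0.250
```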
Major sources of error in steady-state measurements include radiative and convective heat losses in the setup, as well as errors in the thickness of the sample propagating to the thermal conductivity.
In geology and geophysics, the most common method for consolidated rock samples is the divided bar. There are various modifications to these devices depending on the temperatures and pressures needed as well as sample sizes. A sample of unknown conductivity is placed between two samples of known conductivity (usually brass plates). The setup is usually vertical, with the hot brass plate at the top, the sample in between, and the cold brass plate at the bottom. Heat is supplied at the top and made to move downwards to stop any convection within the sample. Measurements are taken after the sample has reached the steady state (with zero heat gradient or constant heat over the entire sample); this usually takes 30 minutes or more.
Other steady-state methods
For good conductors of heat, Searle's bar method can be used. For poor conductors of heat, Lee's disc method can be used.
Time-domain methods
The transient techniques perform a measurement during the process of heating up. The advantage is that measurements can be made relatively quickly. Transient methods are usually carried out by needle probes.
Non-steady-state methods to measure the thermal conductivity do not require the signal to obtain a constant value. Instead, the signal is studied as a function of time. The advantage of these methods is that they can in general be performed more quickly, since there is no need to wait for a steady-state situation. The disadvantage is that the mathematical analysis of the data is generally more difficult.
Transient hot wire method
The transient hot wire method (THW) is a very popular, accurate and precise technique for measuring the thermal conductivity of gases, liquids, solids, nanofluids and refrigerants over a wide temperature and pressure range. The technique is based on recording the transient temperature rise of a thin vertical metal wire, treated as infinitely long, when a step voltage is applied to it. The wire is immersed in a fluid and can act both as an electrical heating element and as a resistance thermometer. The transient hot wire method has an advantage over other thermal conductivity methods, since there is a fully developed theory and no calibration (or at most a single-point calibration) is needed. Furthermore, because of the very short measuring time (1 s), there is no convection present in the measurements, and only the thermal conductivity of the fluid is measured with very high accuracy.
Most of the THW sensors used in academia consist of two identical very thin wires which differ only in length. Sensors using a single wire are used both in academia and industry; their advantage over two-wire sensors is the ease of handling the sensor and of changing the wire.
An ASTM standard has been published for measurements on engine coolants using a single-wire transient hot wire method.
Transient plane source method
The transient plane source (TPS) method utilizes a plane sensor and a mathematical model of heat conduction which, combined with suitable electronics, enables the measurement of thermal transport properties. It covers a thermal conductivity range of at least 0.01–500 W/(m·K) (in accordance with ISO 22007-2) and can be used for measuring various kinds of materials, such as solids, liquids, pastes and thin films. In November 2008 it was approved as an ISO standard for measuring thermal transport properties of polymers. This TPS standard also covers the use of this method to test both isotropic and anisotropic materials.
The Transient Plane Source technique typically employs two samples halves, in-between which the sensor is sandwiched. Normally the samples should be homogeneous, but extended use of transient plane source testing of heterogeneous material is possible, with proper selection of sensor size to maximize sample penetration. This method can also be used in a single-sided configuration, with the introduction of a known insulation material used as sensor support.
The flat sensor consists of a continuous double spiral of electrically conducting nickel (Ni) metal, etched out of a thin foil. The nickel spiral is situated between two layers of thin polyimide (Kapton) film. The thin Kapton films provide electrical insulation and mechanical stability to the sensor. The sensor is placed between two halves of the sample to be measured. During the measurement a constant electrical power passes through the conducting spiral, increasing the sensor temperature. The heat generated dissipates into the sample on both sides of the sensor, at a rate depending on the thermal transport properties of the material. By recording the temperature vs. time response in the sensor, the thermal conductivity, thermal diffusivity and specific heat capacity of the material can be calculated. For highly conducting materials, very large samples are needed (some litres of volume).
Modified transient plane source (MTPS) method
A variation of the above method is the Modified Transient Plane Source Method (MTPS) developed by Dr. Nancy Mathis. The device uses a one-sided, interfacial, heat reflectance sensor that applies a momentary, constant heat source to the sample. The difference between this method and traditional transient plane source technique described above is that the heating element is supported on a backing, which provides mechanical support, electrical insulation and thermal insulation. This modification provides a one-sided interfacial measurement in offering maximum flexibility in testing liquids, powders, pastes and solids.
Transient line source method
The physical model behind this method is the infinite line source with constant power per unit length. The temperature profile at a distance $r$ at time $t$ is as follows

$$\Delta T(r,t) = \frac{q}{4\pi\lambda}\,E_1\!\left(\frac{r^2}{4\alpha t}\right)$$

where
$q$ is the power per unit length, in [W·m⁻¹]
$\lambda$ is the thermal conductivity of the sample, in [W·m⁻¹·K⁻¹]
$E_1$ is the exponential integral, a transcendent mathematical function
$r$ is the radial distance to the line source
$\alpha$ is the thermal diffusivity, in [m²·s⁻¹]
$t$ is the amount of time that has passed since heating has started, in [s]
When performing an experiment, one measures the temperature at a point at a fixed distance, and follows that temperature in time. For large times, the exponential integral can be approximated by making use of the following relation

$$E_1(x) \approx -\gamma - \ln x \qquad (x \ll 1)$$

where
$\gamma$ is the Euler–Mascheroni constant
This leads to the following expression

$$\Delta T(r,t) \approx \frac{q}{4\pi\lambda}\left(\ln\frac{4\alpha}{r^2} - \gamma + \ln t\right)$$

Note that the first two terms in the brackets on the RHS are constants. Thus if the probe temperature is plotted versus the natural logarithm of time, the thermal conductivity can be determined from the slope given knowledge of $q$. Typically this means ignoring the first 60 to 120 seconds of data and measuring for 600 to 1200 seconds. Typically, this method is used for gases and liquids whose thermal conductivities are between 0.1 and 50 W/(m·K). If the thermal conductivity is too high, the plot often deviates from linearity, and no evaluation is possible.
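A minimal sketch of this evaluation on synthetic data: the late-time response is generated from the expression above with a known conductivity, and the slope of temperature versus ln(t) is fitted to recover it. Heater power and conductivity values are illustrative.

```python
# Recover thermal conductivity from the late-time slope of T vs ln(t):
# slope = q / (4*pi*k), hence k = q / (4*pi*slope). Synthetic data.
import numpy as np

q = 2.0                          # heater power per unit length, W/m
k_true = 0.60                    # W/(m*K), value to be recovered
t = np.linspace(60.0, 600.0, 200)                 # skip early transient, s
T = q / (4 * np.pi * k_true) * np.log(t) + 20.0   # idealized response

slope, _intercept = np.polyfit(np.log(t), T, 1)
print(f"recovered k = {q / (4 * np.pi * slope):.3f} W/(m*K)")  # -> 0.600
```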
Modified transient line source method
A variation on the transient line source method is used for measuring the thermal conductivity of a large mass of the earth for geothermal heat pump (GHP/GSHP) system design. This is generally called ground thermal response testing (TRT) by the GHP industry. Understanding ground conductivity and thermal capacity is essential to proper GHP design, and using TRT to measure these properties was first presented in 1983 (Mogensen). The now commonly used procedure, introduced by Eklöf and Gehlin in 1996 and now approved by ASHRAE, involves inserting a pipe loop deep into the ground (in a borehole, with the annulus of the bore filled with a grout of known thermal properties), heating the fluid in the pipe loop, and measuring the temperature drop in the loop between the inlet and return pipes in the bore. The ground thermal conductivity is estimated using the line source approximation method, fitting a straight line to the thermal response plotted against the logarithm of time. A very stable thermal source and pumping circuit are required for this procedure.
More advanced ground TRT methods are currently under development. The DOE is now validating a new advanced thermal conductivity test said to require half the time of the existing approach, while also eliminating the requirement for a stable thermal source. This new technique is based on multi-dimensional model-based TRT data analysis.
Laser flash method
The laser flash method is used to measure the thermal diffusivity of a thin disc in the thickness direction. The method is based upon the measurement of the temperature rise at the rear face of the thin-disc specimen produced by a short energy pulse on the front face. With a reference sample, the specific heat can be obtained, and with known density the thermal conductivity follows as

$$\lambda(T) = \alpha(T)\,c_p(T)\,\rho(T)$$

where
$\lambda$ is the thermal conductivity of the sample, in [W·m⁻¹·K⁻¹]
$\alpha$ is the thermal diffusivity of the sample, in [m²·s⁻¹]
$c_p$ is the specific heat capacity of the sample, in [J·kg⁻¹·K⁻¹]
$\rho$ is the density of the sample, in [kg·m⁻³]
It is suitable for a multiplicity of different materials over a broad temperature range (−120 °C to 2800 °C).
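A worked sketch of the evaluation chain, combining the standard Parker relation for the diffusivity, α = 0.1388 L²/t½ (an addition here, not stated in the text above), with the formula above; the sample values are illustrative.

```python
# Laser flash evaluation: diffusivity from the rear-face half-rise time
# (standard Parker relation), then conductivity from lambda = alpha*cp*rho.
# Sample values are illustrative.
L = 2.0e-3      # sample thickness, m
t_half = 0.050  # time to half of the maximum rear-face rise, s
cp = 750.0      # specific heat capacity, J/(kg*K)
rho = 3900.0    # density, kg/m^3

alpha = 0.1388 * L**2 / t_half   # thermal diffusivity, m^2/s
lam = alpha * cp * rho           # thermal conductivity, W/(m*K)
print(f"alpha = {alpha:.2e} m^2/s, lambda = {lam:.1f} W/(m*K)")
```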
Time-domain thermoreflectance method
Time-domain thermoreflectance is a method by which the thermal properties of a material can be measured, most importantly thermal conductivity. This method can be applied most notably to thin film materials, which have properties that vary greatly when compared to the same materials in bulk. The idea behind this technique is that once a material is heated up, the change in the reflectance of the surface can be utilized to derive the thermal properties. The change in reflectivity is measured with respect to time, and the data received can be matched to a model which contain coefficients that correspond to thermal properties.
Frequency-domain methods
3ω-method
One popular technique for electro-thermal characterization of materials is the 3ω-method, in which a thin metal structure (generally a wire or a film) is deposited on the sample to function as a resistive heater and a resistance temperature detector (RTD). The heater is driven with AC current at frequency ω, which induces periodic joule heating at frequency 2ω due to the oscillation of the AC signal during a single period. There will be some delay between the heating of the sample and the temperature response which is dependent upon the thermal properties of the sensor/sample. This temperature response is measured by logging the amplitude and phase delay of the AC voltage signal from the heater across a range of frequencies (generally accomplished using a lock-in-amplifier). Note, the phase delay of the signal is the lag between the heating signal and the temperature response. The measured voltage will contain both the fundamental and third harmonic components (ω and 3ω respectively), because the Joule heating of the metal structure induces oscillations in its resistance with frequency 2ω due to the temperature coefficient of resistance (TCR) of the metal heater/sensor as stated in the following equation:
$$V(t) = I_0 R_0 \cos\omega t\,\bigl[1 + C_0\,\Delta T \cos(2\omega t + \varphi)\bigr] = I_0 R_0 \cos\omega t + \frac{I_0 R_0 C_0\,\Delta T}{2}\bigl[\cos(\omega t + \varphi) + \cos(3\omega t + \varphi)\bigr],$$
where $C_0$ is a constant (proportional to the heater's TCR), $I_0$ is the drive current amplitude, $R_0$ is the mean heater resistance, and $\Delta T$ is the amplitude of the 2ω temperature oscillation. Thermal conductivity is determined by the linear slope of the ΔT vs. log(ω) curve. The main advantages of the 3ω-method are minimization of radiation effects and easier acquisition of the temperature dependence of the thermal conductivity than in the steady-state techniques. Although some expertise in thin film patterning and microlithography is required, this technique is considered the best pseudo-contact method available.
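A sketch of the slope evaluation on synthetic data. It assumes the standard result for a narrow line heater on a bulk sample, dΔT/d ln ω = −P/(2πlκ), which is an assumption introduced here rather than stated above; heater power, length, and conductivity are illustrative values.

```python
# Recover thermal conductivity from the slope of dT vs ln(omega):
# slope = -P / (2*pi*l*k), hence k = -P / (2*pi*l*slope). Synthetic data.
import numpy as np

P = 0.030        # heater power amplitude, W
l = 1.0e-3       # heater length, m
k_true = 1.4     # W/(m*K), e.g. a glass substrate

omega = np.logspace(2, 4, 50)                             # rad/s
dT = -P / (2 * np.pi * l * k_true) * np.log(omega) + 40.0 # idealized, K

slope, _ = np.polyfit(np.log(omega), dT, 1)
print(f"recovered k = {-P / (2 * np.pi * l * slope):.2f} W/(m*K)")  # 1.40
```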
Frequency-domain hot-wire method
The transient hot wire method can be combined with the 3ω-method to accurately measure the thermal conductivity of solid and molten compounds from room temperature up to 800 °C. In high temperature liquids, errors from convection and radiation make steady-state and time-domain thermal conductivity measurements vary widely; this is evident in the previous measurements for molten nitrates. By operating in the frequency-domain, the thermal conductivity of the liquid can be measured using a 25 μm diameter hot-wire while rejecting the influence of ambient temperature fluctuations, minimizing error from radiation, and minimizing errors from convection by keeping the probed volume below 1 μL.
Freestanding sensor-based 3ω-method
The freestanding sensor-based 3ω technique has been proposed and developed as an alternative to the conventional 3ω method for thermophysical property measurement. The method covers the determination of properties of solids, powders and fluids from cryogenic temperatures to around 400 K. For solid samples, the method is applicable to both bulk material and wafers/membranes tens of micrometers thick, with dense or porous surfaces. The thermal conductivity and thermal effusivity can be measured using appropriately selected sensors. Two basic forms are available: the linear source freestanding sensor and the planar source freestanding sensor. The range of thermophysical properties that can be covered depends on the form of the technique; the recommended range where the highest precision can be attained is 0.01 to 150 W/(m·K) for thermal conductivity with the linear source freestanding sensor, and 500 to 8000 J/(m²·K·s⁰·⁵) for thermal effusivity with the planar source freestanding sensor.
Measuring devices
A thermal conductance tester, one of the instruments of gemology, determines if gems are genuine diamonds using diamond's uniquely high thermal conductivity.
For an example, see the ITP-MG4 "Zond" heat conductivity measuring instrument (Russia).
Standards
EN 12667, "Thermal performance of building materials and products. Determination of thermal resistance by means of guarded hot plate and heat flow meter methods. Products of high and medium thermal resistance".
ISO 8301, "Thermal insulation – Determination of steady-state thermal resistance and related properties – Heat flow meter apparatus"
ISO 8497, "Thermal insulation – Determination of steady-state thermal transmission properties of thermal insulation for circular pipes".
ISO 22007-2:2008 "Plastics – Determination of thermal conductivity and thermal diffusivity – Part 2: Transient plane heat source (hot disc) method"
ISO 22007-4:2008 "Plastics – Determination of thermal conductivity and thermal diffusivity – Part 4: Laser flash method"
IEEE Standard 442–1981, "IEEE guide for soil thermal resistivity measurements". See also soil thermal properties.
IEEE Standard 98-2002, "Standard for the Preparation of Test Procedures for the Thermal Evaluation of Solid Electrical Insulating Materials".
ASTM Standard C518 – 10, "Standard Test Method for Steady-State Thermal Transmission Properties by Means of the Heat Flow Meter Apparatus"
ASTM Standard D5334-14, "Standard Test Method for Determination of Thermal Conductivity of Soil and Soft Rock by Thermal Needle Probe Procedure"
ASTM Standard D5470-06, "Standard Test Method for Thermal Transmission Properties of Thermally Conductive Electrical Insulation Materials"
ASTM Standard E1225-04, "Standard Test Method for Thermal Conductivity of Solids by Means of the Guarded-Comparative-Longitudinal Heat Flow Technique"
ASTM Standard D5930-01, "Standard Test Method for Thermal Conductivity of Plastics by Means of a Transient Line-Source Technique"
ASTM Standard D2717-95, "Standard Test Method for Thermal Conductivity of Liquids"
ASTM Standard E1461-13(2022), "Standard Test Method for Thermal Diffusivity by the Flash Method."
References
External links
An alternative traditional method using real thermometers is described at .
A brief review of new methods measuring thermal conductivity, thermal diffusivity and specific heat within a single measurement is available at .
A brief description of Modified Transient Plane Source (MTPS) at http://patents.ic.gc.ca/opic-cipo/cpd/eng/patent/2397102/page/2397102_20120528_description.pdf
Materials testing | Thermal conductivity measurement | Materials_science,Engineering | 3,467 |
1,184,578 | https://en.wikipedia.org/wiki/Yokogawa%20Electric | is a Japanese multinational electrical engineering and software company, with businesses based on its measurement, control, and information technologies.
It has a global workforce of over 19,000 employees, 84 subsidiary and 3 affiliated companies operating in 55 countries. The company is listed on the Tokyo Stock Exchange and is a constituent of the Nikkei 225 stock index.
Yokogawa pioneered the development of distributed control systems and introduced its Centum series DCS in 1975.
Some of Yokogawa's most recognizable products are production control systems, test and measurement instruments, pressure transmitters, flow meters, oxygen analyzers, fieldbus instruments, manufacturing execution systems and advanced process control.
History
Yokogawa traces its roots back to 1915, when Dr. Tamisuke Yokogawa, a renowned architect, established an electric meter research institute in Shibuya, Tokyo. After pioneering the development and production of electric meters in Japan, this enterprise was incorporated in 1920 as Yokogawa Electric Works Ltd.
In 1933 Yokogawa began the research and manufacture of aircraft instruments and flow, temperature, and pressure controllers. In the years following the war, Yokogawa went public, developed its first electronic recorders, signed a technical assistance agreement for industrial instruments with the U.S. firm Foxboro, and opened its first overseas sales office (New York).
In the 1960s the company made a full-scale entry into the industrial analyzer market and launched the development, manufacturing, and sales of vortex flowmeters, and in the decade following established its first manufacturing plant outside Japan (Singapore), opened a sales office in Europe, and became one of the first companies to bring a distributed process control system to market. In 1983 Yokogawa merged with Hokushin Electric Works and, towards the end of the decade, entered the high-frequency measuring instrument business. In the 1990s, Yokogawa established an office in Bahrain to oversee its business in the Middle East and entered the confocal scanner and biotechnology businesses.
In 2002 the firm continued its growth with the acquisition of Ando Electric, and in 2005 set the stage for a new level of globalization in its industrial automation business with the establishment of Yokogawa Electric International in Singapore. In 2008 the company entered the drug discovery support market with a new bio test system.
In April 2020, Yokogawa acquired Scarborough-based Fluid Imaging Technologies. Terms of the deal were not disclosed.
In 2021, Yokogawa focused on cloud-based solutions and industrial IoT applications. The company launched the OpreX Control Care cloud service and acquired Industrial Control Systems, Inc. (ICSI) to strengthen its industrial cybersecurity offerings.
Businesses and main products
Yokogawa's main businesses are industrial automation and test and measurement hardware and software.
Some of Yokogawa's main hardware products are Pressure Transmitters, Flow meters, analysers, controllers, recorders and data acquisition equipment.
Yokogawa products are used in different industries requiring process control systems. Depending on the size of the project and the requirements, Yokogawa offers various control systems: DCS, PLC, SCADA and ESD (emergency shutdown). In collaboration with Shell Global Solutions, Yokogawa also offers advanced process control (APC) solutions for refineries, petrochemical plants, and chemical plants.
Centum, Yokogawa's flagship DCS, has the largest capacity among DCSs, supporting up to 1 million device tags.
Yokogawa manufactures field instruments, test and measurement instruments, and semi-conductor related products.
Yokogawa designs and manufactures confocal spinning-disk scanner units used in confocal microscopy.
Major office locations
Musashino (near Mitaka Station) (world headquarters and East Asia regional office)
Amersfoort, The Netherlands (Europe regional office)
Bahrain (Middle East and Africa regional office)
Bangalore, India (South Asia regional office)
Moscow, Russia (CIS countries headquarters)
Sugar Land, Texas, USA (North & Central America regional office)
Singapore (Asean, Oceania, South Asia and Taiwan regional office)
São Paulo, Brazil (South America regional office)
Khobar, Saudi Arabia
Jubail, Saudi Arabia
Catania, Italy
Trademark products of Yokogawa
DPharp EJA – Pressure Transmitter with Silicon Resonant Technology
DPharp EJX – Pressure Transmitter with Silicon Resonant Technology and SIL2 Certification
Rotamass – Coriolis Mass Flow and Density Meters
Indicator FVX – Fieldbus indicator
Valve Positioner YVP – Fieldbus positioner
ADMag AXF – Magnetic Flowmeter for high-end technology use
ADMag CA – Magnetic Flowmeter for substances without apparent electrode
ADMag SE – magnetic flowmeter for general use
Rotameter – Rotameter
DY – Digital Vortex Flowmeter
YTA – SMART Temperature Transmitter
US – Ultrasonic Flowmeter
Centum CS3000 and Centum VP – Distributed Control Systems
ProSafe-RS – Safety Instrumented System
ProSafe-SLS – Solid State Logic Solver - Safety Instrumented System
Fast/Tools – Web-based SCADA system
Stardom – Network based control systems
DXAdvanced – Data Acquisition Station (DAQ)
DAQMaster – Data logger
SMARTDAC+ – SMART Data Acquisition (DAQ)
ISA100 – Wireless Transmitter
GC8000 – Process Gas Chromatograph
Petro-SIM – Kinetic Process Simulator
Sponsored sports teams
Yokogawa Musashino Atlastars – rugby
Yokogawa Musashino F.C. – Football (soccer)
Yokogawa Tiger F.C. – Football (soccer) [Malaysia]
References
External links
Companies listed on the Tokyo Stock Exchange
Companies in the Nikkei 225
Engineering companies based in Tokyo
Manufacturing companies based in Tokyo
Electronics companies of Japan
Instrument-making corporations
Electronic test equipment manufacturers
Equipment semiconductor companies
Manufacturers of industrial automation
Technology companies of Japan
Defense companies of Japan
Electronics companies established in 1915
Japanese brands
Musashino, Tokyo
Japanese companies established in 1915
Multinational companies headquartered in Japan | Yokogawa Electric | Engineering | 1,196 |
58,965,261 | https://en.wikipedia.org/wiki/Ilse%20Hirsch | Ilse Hirsch (1922–2000) was a German Bund Deutscher Mädel (BDM) Hauptgruppenführerin (captain), known as a member of the six-person team that carried out Operation Carnival in 1945.
Early life
Hirsch was born in Hamm. She joined the BDM at age sixteen and became one of its principal organizers in the town of Monschau. In the late stages of World War II, she was part of Werwolf (German for "werewolf"), a German partisan group that operated behind enemy lines.
Unternehmen Karneval
Unternehmen Karneval was a Werwolf mission authorized by Heinrich Himmler to assassinate Franz Oppenhoff, who, in October 1944, was appointed mayor of Aachen by the Americans after they took control of the city. Hitler took a personal interest in this appointment and ordered Oppenhoff's elimination.
Team
The team assembled by Generalinspekteur für Spezialabwehr Hans-Adolf Prützmann, who was given the task by Himmler, was:
Untersturmführer-SS (Lt.) Herbert Wenzel
Austrian Unterscharführer-SS (Sergeant) Josef "Sepp" Leitgeb
Former border Patrolman Karl-Heinz Hennemann
Former border Patrolman Georg Heidorn
Werwolf trainee 16-year-old Erich Morgenschweiss
Werwolf Hauptgruppenführerin (Captain) Ilse Hirsch.
Plan
The team's plan was to move to their first base camp in dense woodlands along the German-Belgian frontier. Morgenschweiss and Hirsch, who knew the city well and acted as guide, would enter town and locate their target. After identifying his daily schedule, they would pass the information to Wenzel and Leitgeb. Following the assassination, the team would head east toward friendly lines. They were to stick to the plan even if separated. Traveling strictly at night, they would hide in forester and game warden cabins during daylight. All carried forged papers identifying them as members of the Reich's Organisation Todt labour force. If captured, they were to convince their interrogators that they were working on nearby border fortifications.
Assassination
On 20 March 1945 the team were flown in a captured, Luftwaffe-operated B-17 Flying Fortress from Hildesheim airfield near Hanover and parachuted around the village of Gemmenich. Upon landing they were discovered by a 20-year-old Dutch border guard, Jozef Saive, whom they shot.
The team then made for Eupener Strasse 251, where Oppenhoff lived with his wife Irmgard and their three children. He was away at a party so they asked the housekeeper to send for him. When Oppenhoff arrived, Wenzel—who had assured his accomplices he would do the shooting—lost his nerve. Leitgeb cried "Heil Hitler!", grabbed the pistol and shot Oppenhoff dead.
While escaping with Leitgeb, Hirsch tripped a buried landmine. She injured her knee, and Leitgeb was killed.
Later life
After the war the surviving members of the Werwolf group were located and tried in 1949. Those found guilty of killing Oppenhoff were sentenced to one to four years in prison; Hirsch was acquitted, and another of the team members was never charged. Hirsch married and had two daughters and one son. In subsequent proceedings the convicted members had their sentences reduced and finally quashed entirely under the Straffreiheitsgesetz 1954 (Impunity Law 1954) on the grounds of "command emergency".
References
1922 births
2000 deaths
Hitler Youth
Women in Nazi Germany
Youth in Germany
Explosion survivors
Nazi assassins | Ilse Hirsch | Chemistry | 776 |
5,021,791 | https://en.wikipedia.org/wiki/Aluminium%20iodide | Aluminium iodide is a chemical compound containing aluminium and iodine. Invariably, the name refers to a compound of the composition AlI3, formed by the reaction of aluminium and iodine or the action of HI on Al metal. The hexahydrate is obtained from a reaction between metallic aluminium or aluminium hydroxide with hydrogen iodide or hydroiodic acid. Like the related chloride and bromide, AlI3 is a strong Lewis acid and will absorb water from the atmosphere. It is employed as a reagent for the scission of certain kinds of C-O and N-O bonds. It cleaves aryl ethers and deoxygenates epoxides.
Structure
Solid AlI3 is dimeric, consisting of Al2I6, similar to that of AlBr3. The structures of the monomeric and dimeric forms have been characterized in the gas phase. The monomer, AlI3, is trigonal planar with a bond length of 2.448(6) Å, and the bridged dimer, Al2I6, at 430 K is similar to Al2Cl6 and Al2Br6, with Al–I bond lengths of 2.456(6) Å (terminal) and 2.670(8) Å (bridging). The dimer is described as floppy with an equilibrium geometry of D2h.
Aluminium(I) iodide
The name "aluminium iodide" is widely assumed to describe the triiodide or its dimer. In fact, a monoiodide also enjoys a role in the Al–I system, although the compound AlI is unstable at room temperature relative to the triiodide:
3 AlI → AlI3 + 2 Al
An illustrative derivative of aluminium monoiodide is the cyclic adduct formed with triethylamine, Al4I4(NEt3)4.
References
External links
Iodides
iodide
Metal halides | Aluminium iodide | Chemistry | 365 |
87,372 | https://en.wikipedia.org/wiki/Additive%20function | In number theory, an additive function is an arithmetic function f(n) of the positive integer variable n such that whenever a and b are coprime, the function applied to the product ab is the sum of the values of the function applied to a and b: f(ab) = f(a) + f(b).
Completely additive
An additive function f(n) is said to be completely additive if f(ab) = f(a) + f(b) holds for all positive integers a and b, even when they are not coprime. Totally additive is also used in this sense by analogy with totally multiplicative functions. If f is a completely additive function then f(1) = 0.
Every completely additive function is additive, but not vice versa.
Examples
Examples of arithmetic functions which are completely additive are:
The restriction of the logarithmic function to the positive integers.
The multiplicity of a prime factor p in n, that is the largest exponent m for which p^m divides n.
a0(n) – the sum of primes dividing n counting multiplicity, sometimes called sopfr(n), the potency of n or the integer logarithm of n. For example:
a0(4) = 2 + 2 = 4
a0(20) = a0(2^2 · 5) = 2 + 2 + 5 = 9
a0(27) = 3 + 3 + 3 = 9
a0(144) = a0(2^4 · 3^2) = a0(2^4) + a0(3^2) = 8 + 6 = 14
a0(2000) = a0(2^4 · 5^3) = a0(2^4) + a0(5^3) = 8 + 15 = 23
a0(2003) = 2003
a0(54,032,858,972,279) = 1240658
a0(54,032,858,972,302) = 1780417
a0(20,802,650,704,327,415) = 1240681
The function Ω(n), defined as the total number of prime factors of n, counting multiple factors multiple times, sometimes called the "Big Omega function". For example:
Ω(1) = 0, since 1 has no prime factors
Ω(4) = 2
Ω(16) = Ω(2·2·2·2) = 4
Ω(20) = Ω(2·2·5) = 3
Ω(27) = Ω(3·3·3) = 3
Ω(144) = Ω(2^4 · 3^2) = Ω(2^4) + Ω(3^2) = 4 + 2 = 6
Ω(2000) = Ω(2^4 · 5^3) = Ω(2^4) + Ω(5^3) = 4 + 3 = 7
Ω(2001) = 3
Ω(2002) = 4
Ω(2003) = 1
Ω(54,032,858,972,279) = Ω(11 ⋅ 1993^2 ⋅ 1236661) = 4;
Ω(54,032,858,972,302) = Ω(2 ⋅ 7^2 ⋅ 149 ⋅ 2081 ⋅ 1778171) = 6
Ω(20,802,650,704,327,415) = Ω(5 ⋅ 7 ⋅ 11^2 ⋅ 1993^2 ⋅ 1236661) = 7.
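These values can be checked numerically. Below is a minimal sketch (trial-division factorization, so only suitable for small n; all helper names are illustrative) that computes a0 and Ω and verifies that complete additivity holds even for non-coprime arguments:

```python
def prime_factors(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def a0(n):
    """Sum of the primes dividing n, counted with multiplicity (sopfr)."""
    return sum(p * m for p, m in prime_factors(n).items())

def big_omega(n):
    """Total number of prime factors of n, counted with multiplicity."""
    return sum(prime_factors(n).values())

# Values from the article:
assert a0(144) == 14 and a0(2000) == 23
assert big_omega(144) == 6 and big_omega(2000) == 7

# Complete additivity holds even for non-coprime arguments:
a, b = 4, 16                                            # gcd(4, 16) = 4
assert big_omega(a * b) == big_omega(a) + big_omega(b)  # 6 == 2 + 4
assert a0(a * b) == a0(a) + a0(b)                       # 12 == 4 + 8
```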
Examples of arithmetic functions which are additive but not completely additive are:
ω(n), defined as the total number of distinct prime factors of n. For example:
ω(4) = 1
ω(16) = ω(2^4) = 1
ω(20) = ω(2^2 · 5) = 2
ω(27) = ω(3^3) = 1
ω(144) = ω(2^4 · 3^2) = ω(2^4) + ω(3^2) = 1 + 1 = 2
ω(2000) = ω(2^4 · 5^3) = ω(2^4) + ω(5^3) = 1 + 1 = 2
ω(2001) = 3
ω(2002) = 4
ω(2003) = 1
ω(54,032,858,972,279) = 3
ω(54,032,858,972,302) = 5
ω(20,802,650,704,327,415) = 5
a1(n) – the sum of the distinct primes dividing n, sometimes called sopf(n). For example:
a1(1) = 0
a1(4) = 2
a1(20) = 2 + 5 = 7
a1(27) = 3
a1(144) = a1(2^4 · 3^2) = a1(2^4) + a1(3^2) = 2 + 3 = 5
a1(2000) = a1(2^4 · 5^3) = a1(2^4) + a1(5^3) = 2 + 5 = 7
a1(2001) = 55
a1(2002) = 33
a1(2003) = 2003
a1(54,032,858,972,279) = 1238665
a1(54,032,858,972,302) = 1780410
a1(20,802,650,704,327,415) = 1238677
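A companion sketch (reusing the prime_factors helper from the example above; names are illustrative) showing why ω and a1 are additive but not completely additive: the identity fails when the arguments share a prime factor.

```python
def omega(n):
    """Number of distinct prime factors of n."""
    return len(prime_factors(n))

def a1(n):
    """Sum of the distinct primes dividing n (sopf)."""
    return sum(prime_factors(n))  # summing the dict sums its keys

# Additive: the identity holds for coprime arguments, e.g. 16 and 9
assert omega(16 * 9) == omega(16) + omega(9)   # 2 == 1 + 1
assert a1(16 * 9) == a1(16) + a1(9)            # 5 == 2 + 3

# But not completely additive: fails for 4 and 16 (both powers of 2)
assert omega(4 * 16) != omega(4) + omega(16)   # 1 != 2
assert a1(4 * 16) != a1(4) + a1(16)            # 2 != 4
```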
Multiplicative functions
From any additive function f(n) it is possible to create a related multiplicative function g(n), i.e. a function with the property that whenever a and b are coprime: g(ab) = g(a) · g(b).
One such example is g(n) = 2^f(n). Likewise, if f(n) is completely additive, then g(n) = 2^f(n) is completely multiplicative. More generally, one could consider the function g(n) = c^f(n), where c is a nonzero real constant.
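For instance, taking f = ω gives g(n) = 2^ω(n), which counts the squarefree divisors of n. A quick sanity check of the multiplicative property, assuming the omega helper from the sketch above:

```python
from math import gcd

def g(n):
    return 2 ** omega(n)   # multiplicative companion of the additive omega

# Verify g(ab) = g(a) * g(b) on all coprime pairs in a small range
for a in range(2, 50):
    for b in range(2, 50):
        if gcd(a, b) == 1:
            assert g(a * b) == g(a) * g(b)
```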
Summatory functions
Given an additive function f, let its summatory function be defined by S_f(x) := Σ_{n ≤ x} f(n). The average of f is given exactly as S_f(x) = Σ_{p^α ≤ x} f(p^α) (⌊x/p^α⌋ − ⌊x/p^(α+1)⌋).
The summatory function of f can be expanded as S_f(x) = x E(x) + O(√x · D(x)), where E(x) = Σ_{p^α ≤ x} f(p^α) p^(−α) (1 − p^(−1)) and D²(x) = Σ_{p^α ≤ x} |f(p^α)|² p^(−α).
The average of the function f² is also expressed by these functions as S_{f²}(x) = x E²(x) + O(x D²(x)).
There is always an absolute constant C_f > 0 such that for all natural numbers x ≥ 1, Σ_{n ≤ x} |f(n) − E(x)|² ≤ C_f · x · D²(x).
Let ν(x; z) := x^(−1) · #{n ≤ x : (f(n) − A(x))/B(x) ≤ z}, where A(x) := Σ_{p ≤ x} f(p) p^(−1) and B(x)² := Σ_{p ≤ x} |f(p)|² p^(−1).
Suppose that f is an additive function with −1 ≤ f(p^α) = f(p) ≤ 1
such that, as x → ∞, B(x) → ∞.
Then ν(x; z) ≈ G(z), where G(z) is the Gaussian distribution function G(z) = (2π)^(−1/2) ∫_{−∞}^{z} e^(−t²/2) dt (a form of the Erdős–Kac theorem).
Examples of this result, related to the prime omega function and the numbers of prime divisors of shifted primes, include the following for fixed z ∈ ℝ, where the relations hold for x ≫ 1: #{n ≤ x : ω(n) − log log x ≤ z (log log x)^(1/2)} ≈ x G(z).
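The Gaussian behaviour can be illustrated empirically for f = ω with a sieve; note that convergence is in log log x, so the sample moments drift only slowly toward 0 and 1. A rough sketch:

```python
import math

N = 10**6
omega_counts = [0] * (N + 1)
for p in range(2, N + 1):
    if omega_counts[p] == 0:        # p untouched so far, hence prime
        for m in range(p, N + 1, p):
            omega_counts[m] += 1    # add one distinct prime divisor

loglog = math.log(math.log(N))
vals = [(omega_counts[n] - loglog) / math.sqrt(loglog) for n in range(2, N + 1)]
mean = sum(vals) / len(vals)
var = sum((v - mean) ** 2 for v in vals) / len(vals)
# Both drift toward 0 and 1 as N grows, but only at log log speed:
print(f"mean ~ {mean:.3f}, variance ~ {var:.3f}")
```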
See also
Sigma additivity
Prime omega function
Multiplicative function
Arithmetic function
References
Further reading
Janko Bračič, Kolobar aritmetičnih funkcij (Ring of arithmetical functions), (Obzornik mat, fiz. 49 (2002) 4, pp. 97–108) (MSC (2000) 11A25)
Iwaniec and Kowalski, Analytic number theory, AMS (2004).
Arithmetic functions | Additive function | Mathematics | 1,326 |
2,189,901 | https://en.wikipedia.org/wiki/Microstructure | Microstructure is the very small scale structure of a material, defined as the structure of a prepared surface of material as revealed by an optical microscope above 25× magnification. The microstructure of a material (such as metals, polymers, ceramics or composites) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behaviour or wear resistance. These properties in turn govern the application of these materials in industrial practice.
Microstructure at scales smaller than can be viewed with optical microscopes is often called nanostructure, while the structure in which individual atoms are arranged is known as crystal structure. The nanostructure of biological specimens is referred to as ultrastructure. A microstructure's influence on the mechanical and physical properties of a material is primarily governed by the different defects present or absent in the structure. These defects can take many forms, but the primary ones are pores. Pores play a very important role in defining the characteristics of a material, but so does its composition. In fact, in many materials different phases can exist at the same time. These phases have different properties and, if managed correctly, can prevent fracture of the material.
Methods
The concept of microstructure is observable in macrostructural features in commonplace objects. Galvanized steel, such as the casing of a lamp post or road divider, exhibits a non-uniformly colored patchwork of interlocking polygons of different shades of grey or silver. Each polygon is a single crystal of zinc adhering to the surface of the steel beneath. Zinc and lead are two common metals which form large crystals (grains) visible to the naked eye. The atoms in each grain are organized into one of seven 3D stacking arrangements or crystal lattices (cubic, tetragonal, hexagonal, monoclinic, triclinic, rhombohedral and orthorhombic). The direction of alignment of the lattices differs between adjacent crystals, leading to variance in the reflectivity of each presented face of the interlocked grains on the galvanized surface. The average grain size can be controlled by processing conditions and composition, and most alloys consist of much smaller grains not visible to the naked eye; smaller grains increase the strength of the material (see Hall–Petch strengthening).
Microstructure characterizations
To quantify microstructural features, both morphological features and material properties must be characterized. Image processing is a robust technique for determining morphological features such as volume fraction, inclusion morphology, and void and crystal orientations. Micrographs are commonly acquired with optical or electron microscopy.
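As an illustration of the image-processing approach, the sketch below (hypothetical file name; assumes a two-phase grayscale micrograph with one darker phase) estimates a phase's area fraction by thresholding; by stereology, the area fraction of a random section estimates the volume fraction:

```python
import numpy as np
from PIL import Image

# Load a grayscale micrograph (hypothetical file) as a 2-D float array
img = np.asarray(Image.open("micrograph.png").convert("L"), dtype=float)

threshold = img.mean()               # crude global threshold; Otsu's method is common
phase_mask = img < threshold         # pixels belonging to the darker phase
volume_fraction = phase_mask.mean()  # area fraction ~ volume fraction (stereology)
print(f"Estimated volume fraction of dark phase: {volume_fraction:.2%}")
```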
To determine material properties, nanoindentation is a robust technique for measurements at the micron and submicron level, where conventional testing is not feasible. Conventional mechanical testing, such as tensile testing or dynamic mechanical analysis (DMA), can only return macroscopic properties without any indication of microstructural properties. However, nanoindentation can be used to determine local microstructural properties of homogeneous as well as heterogeneous materials. Microstructures can also be characterized using high-order statistical models, through which a set of complicated statistical properties is extracted from the images. These properties can then be used to produce various other stochastic models.
Microstructure generation
Microstructure generation is also known as stochastic microstructure reconstruction.
Computer-simulated microstructures are generated to replicate the microstructural features of actual microstructures. Such microstructures are referred to as synthetic microstructures. Synthetic microstructures are used to investigate which microstructural feature is important for a given property. To ensure statistical equivalence between generated and actual microstructures, microstructures are modified after generation to match the statistics of an actual microstructure. Such a procedure enables the generation of a theoretically infinite number of computer-simulated microstructures that are statistically the same (have the same statistics) but stochastically different (have different configurations).
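One common generation recipe, sketched below under simple assumptions (thresholding a smoothed Gaussian random field; not tied to any particular published method), produces a two-phase synthetic microstructure with a prescribed volume fraction:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
field = gaussian_filter(rng.standard_normal((256, 256)), sigma=4)  # correlated noise

target_fraction = 0.30                           # desired volume fraction of phase 1
cut = np.quantile(field, 1 - target_fraction)    # threshold matching the target
microstructure = (field > cut).astype(np.uint8)  # 1 = inclusion phase, 0 = matrix
print(microstructure.mean())                     # ~0.30 by construction
```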
Influence of pores and composition
A pore in a microstructure, unless desired, is a disadvantage for the properties. In fact, in nearly all materials a pore is the starting point for rupture: it is the initiation point for cracks. Furthermore, a pore is usually quite hard to get rid of. The techniques described later involve high-temperature processing; however, even these processes can sometimes make a pore bigger. Pores with a large coordination number (surrounded by many particles) tend to grow during thermal processing, because the thermal energy is converted into a driving force for particle growth, which induces growth of the pore, while the high coordination number prevents growth towards the pore.
For many materials, it can be seen from their phase diagram that multiple phases can exist at the same time. Those different phases might exhibit different crystal structure, thus exhibiting different mechanical properties. Furthermore, these different phases also exhibit a different microstructure (grain size, orientation). This can also improve some mechanical properties as crack deflection can occur, thus pushing the ultimate breakdown further as it creates a more tortuous crack path in the coarser microstructure.
Improvement techniques
In some cases, simply changing the way the material is processed can influence the microstructure. An example is the titanium alloy TiAl6V4. Its microstructure and mechanical properties are enhanced using selective laser melting (SLM), a 3D printing technique that melts metal powder particles together with a high-powered laser. Other conventional techniques for improving the microstructure are thermal processes, which rely on the principle that an increase in temperature will induce the reduction or annihilation of pores. Hot isostatic pressing (HIP) is a manufacturing process used to reduce the porosity of metals and increase the density of many ceramic materials. This improves the material's mechanical properties and workability.
The HIP process exposes the desired material to an isostatic gas pressure as well as high temperature in a sealed, high-pressure vessel. The gas used during this process is usually argon, which must be chemically inert so that no reaction occurs between it and the sample. Pressure is achieved by applying heat to the hermetically sealed vessel, although some systems also pump in gas to reach the required pressure level. The pressure applied to the material is equal from all directions (hence the term "isostatic"). When castings are treated with HIP, the simultaneous application of heat and pressure eliminates internal voids and microporosity through a combination of plastic deformation, creep, and diffusion bonding; this process improves the fatigue resistance of the component.
See also
References
External links
Materials science
Metallurgy | Microstructure | Physics,Chemistry,Materials_science,Engineering | 1,447 |
7,214,369 | https://en.wikipedia.org/wiki/Global%20distance%20test | The global distance test (GDT), also written as GDT_TS to represent "total score", is a measure of similarity between two protein structures with known amino acid correspondences (e.g. identical amino acid sequences) but different tertiary structures. It is most commonly used to compare the results of protein structure prediction to the experimentally determined structure as measured by X-ray crystallography, protein NMR, or, increasingly, cryoelectron microscopy.
The GDT metric was developed by Adam Zemla at Lawrence Livermore National Laboratory and originally implemented in the Local-Global Alignment (LGA) program. It is intended as a more accurate measurement than the common root-mean-square deviation (RMSD) metric - which is sensitive to outlier regions created, for example, by poor modeling of individual loop regions in a structure that is otherwise reasonably accurate. The conventional GDT_TS score is computed over the alpha carbon atoms and is reported as a percentage, ranging from 0 to 100. In general, the higher the GDT_TS score, the more closely a model approximates a given reference structure.
GDT_TS measurements are used as major assessment criteria in the production of results from the Critical Assessment of Structure Prediction (CASP), a large-scale experiment in the structure prediction community dedicated to assessing current modeling techniques. The metric was first introduced as an evaluation standard in the third iteration of the biannual experiment (CASP3) in 1998. Various extensions to the original method have been developed; variations that accounts for the positions of the side chains are known as global distance calculations (GDC).
Calculation
The GDT score is calculated as the largest set of the model structure's amino acid residues (alpha carbon atoms) falling within a defined distance cutoff of their positions in the experimental structure, after iteratively superimposing the two structures. By the original design, the GDT algorithm calculates 20 GDT scores, one for each of 20 consecutive distance cutoffs (0.5 Å, 1.0 Å, 1.5 Å, ..., 10.0 Å). Structure similarity is assessed using the GDT scores from several cutoff distances, and scores generally increase with increasing cutoff. A plateau in this increase may indicate an extreme divergence between the experimental and predicted structures, such that no additional atoms fall within any cutoff of a reasonable distance. The conventional GDT_TS total score in CASP is the average result for cutoffs at 1, 2, 4, and 8 Å.
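A simplified sketch of the GDT_TS computation for alpha-carbon coordinates that are assumed to be already superimposed (the real LGA procedure searches over many superpositions and keeps, for each cutoff, the largest fitting residue set):

```python
import numpy as np

def gdt_ts(model_ca, ref_ca, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """GDT_TS (percent) for two aligned (N, 3) alpha-carbon coordinate arrays.

    Assumes a single fixed superposition; the full algorithm maximizes
    each cutoff's fraction over many iterative superpositions.
    """
    dists = np.linalg.norm(model_ca - ref_ca, axis=1)   # per-residue CA distance, Å
    fractions = [(dists <= c).mean() for c in cutoffs]
    return 100.0 * sum(fractions) / len(cutoffs)

# Toy example: a 4-residue "model" deviating by 0.5, 1.5, 3.0 and 9.0 Å
ref = np.zeros((4, 3))
model = np.array([[0.5, 0, 0], [1.5, 0, 0], [3.0, 0, 0], [9.0, 0, 0]])
print(gdt_ts(model, ref))   # (1/4 + 2/4 + 3/4 + 3/4) / 4 * 100 = 56.25
```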
Variations and extensions
The original GDT_TS is calculated based on the superimpositions and GDT scores produced by the Local-Global Alignment (LGA) program. A "high accuracy" version called GDT_HA is computed by selecting smaller cutoff distances (half the size of GDT_TS) and thus more heavily penalizes larger deviations from the reference structure. It was used in the high accuracy category of CASP7. CASP8 defined a new "TR score", which is GDT_TS minus a penalty for residues clustered too closely together; it is meant to penalize steric clashes in the predicted structure, which are sometimes introduced to game the cutoff-based GDT measure.
The primary GDT assessment uses only the alpha carbon atoms. To apply superposition‐based scoring to the amino acid residue side chains, a GDT‐like score called "global distance calculation for sidechains" (GDC_sc) was designed and implemented within the LGA program in 2008. Instead of comparing residue positions on the basis of alpha carbons, GDC_sc uses a predefined "characteristic atom" near the end of each residue for the evaluation of inter-residue distance deviations. An "all atoms" variant of the GDC score (GDC_all) is calculated using full-model information, and is one of the standard measures used by CASP's organizers and assessors to evaluate accuracy of predicted structural models.
GDT scores are generally computed with respect to a single reference structure. In some cases, structural models with lower GDT scores to a reference structure determined by protein NMR are nevertheless better fits to the underlying experimental data. Methods have been developed to estimate the uncertainty of GDT scores due to protein flexibility and uncertainty in the reference structure.
See also
Root mean square deviation (bioinformatics) — A different structure comparison measure.
TM-score — A different structure comparison measure.
References
External links
CASP14 results - summary tables of the latest CASP experiment run in 2020, including example plots of GDT score as a function of cutoff distance
GDT, GDC, LCS and LGA description services and documentation on structure comparison and similarity measures.
Bioinformatics
Computational chemistry | Global distance test | Chemistry,Engineering,Biology | 984 |
4,012,894 | https://en.wikipedia.org/wiki/HttpUnit | HttpUnit is an open-source software testing framework used to perform testing of web sites without the need for a web browser. HttpUnit supports HTML form submission, JavaScript, HTTP basic access authentication, automatic page redirection, and cookies. Written in Java, HttpUnit allows Java test code to process returned pages as text, XML DOM, or containers of forms, tables and links. HttpUnit is well suited to be used in combination with JUnit, in order to easily write tests that verify the proper behaviour of a web site.
The use of HttpUnit allows for automated testing of web applications and as a result, assists in regression testing.
See also
Software performance testing
Performance Engineering
Software
HtmlUnit
References
Further reading
External links
HttpUnit
Java platform
Unit testing frameworks | HttpUnit | Technology | 162 |
3,375,111 | https://en.wikipedia.org/wiki/Great%20Northern%20Concrete%20Toboggan%20Race | The Great Northern Concrete Toboggan Race (GNCTR) is an annual event that challenges the creativity of engineering students. The competition originated in 1974 and was created by Dr. S. H. Simmonds, president of the Alberta chapter of the American Concrete Institute. The first race was held in 1975 with participants from the University of Alberta, University of Calgary, Northern Alberta Institute of Technology, and Southern Alberta Institute of Technology. Since its beginning, GNCTR has grown to include universities and technical schools from across Canada with occasional entries from the United States and Europe.
Rules
The project involves designing and constructing a toboggan with a metal frame and a running surface made completely out of concrete and racing it down a steep snow-covered hill. The sled must weigh less than 350 pounds (158.8 kg), have a working braking system, and be fitted with a roll cage to protect its five passengers. Each competing team must complete a technical report summarizing the design, which is presented at a public technical exhibition.
Spirit
It is traditional for teams to choose a theme for their sled; they often wear appropriate costumes and incorporate elements of the design into their technical exhibit and sled aesthetics. Themes have become a major part of the competition, making up a large part of the spirit award, as well as the best uniforms award. Theme ideas are most often drawn from popular culture, retro references, or are based on the team's home university/college and its location.
Awards
Teams are judged for top speed, best run, most improved, braking, steering, and aesthetics. Each year, an award is also given for the best overall entry.
The current record holder for top speed in a successfully completed run at GNCTR is the University of Toronto, who set a speed of 73 km/h on Feb 1, 2020.
Competition host
In the early years of the competition, the winning team was asked to host the subsequent competition. By the mid-1990s, this practice had changed to an alternating scheme between Western and Eastern Canadian schools; the dividing line is the Manitoba-Ontario border. The competition usually runs from Wednesday to Sunday, at the end of January or over the first weekend in February.
See also
Concrete canoe
References
External links
Official GNCTR 2024 Website
Official GNCTR 2023 Website
Official GNCTR 2019 Website
Official GNCTR 2018 Website
Sledding
Concrete
Sports competitions in Canada
1974 establishments in Alberta
Recurring sporting events established in 1974 | Great Northern Concrete Toboggan Race | Engineering | 501 |
2,266,155 | https://en.wikipedia.org/wiki/Telescopic%20handler | A telescopic handler, also called a lull, telehandler, teleporter, reach forklift, or zoom boom, is a machine widely used in agriculture and industry. It is somewhat like a forklift but has a boom (telescopic cylinder), making it more a crane than a forklift, with the increased versatility of a single telescopic boom that can extend forwards and upwards from the vehicle. The boom can be fitted with different attachments, such as a bucket, pallet forks, muck grab, or winch.
History
The first telescopic handler was believed to have been manufactured by French company Sambron in 1957.
In 1971, Liner Construction Equipment of Hull launched the Giraffe 4WD, 4WS telehandler, based on a design by Matbro, who had created a similar machine derived from their articulated forestry machines.
JCB launched their 2WD, rear-steer Loadall in October 1977. The JCB 520 was originally aimed at construction sites; the potential for agricultural uses soon followed. JCB sold 100,000 units by
Uses
In industry, the most common attachment for a telehandler is pallet forks and the most common application is to move loads to and from places unreachable for a conventional forklift. For example, telehandlers have the ability to remove palletised cargo from within a trailer and to place loads on rooftops and other high places. The latter application would otherwise require a crane, which is not always practical or time-efficient.
In agriculture the most common attachment for a telehandler are buckets or bucket grabs; again the most common application is to move loads to and from places unreachable for a 'conventional machine' which in this case is a wheeled loader or backhoe loader. For example, telehandlers have the ability to reach directly into a high-sided trailer or hopper. The latter application would otherwise require a loading ramp, conveyor, or something similar.
The telehandler can also work with a crane jib for lifting loads. Attachments on the market include dirt buckets, grain buckets, rotators, and power booms. Agricultural models can also be fitted with three-point linkage and power take-off.
The advantage of the telehandler is also its biggest limitation: as the boom extends or raises while bearing a load, it acts as a lever and causes the vehicle to become increasingly unstable, despite counterweights in the rear. This means that the lifting capacity quickly decreases as the working radius (the distance between the front of the wheels and the centre of the load) increases. When used as a loader, the single boom (rather than twin arms) is very highly loaded and, even with careful design, is a weakness. A vehicle that can safely lift its rated capacity with the boom retracted may be able to lift only a small fraction of that with the boom fully extended at a low boom angle, yet may support close to its rated capacity with the boom raised to 70°. The operator is equipped with a load chart which helps determine whether a given task is possible, taking into account weight, boom angle and height. Failing this, most telehandlers now use a computer that monitors the vehicle through sensors and warns the operator and/or cuts off further control input if the limits of the vehicle are exceeded, the latter being a legal requirement in Europe under EN15000. Machines can also be equipped with front stabilizers which extend the lifting capability of the equipment while stationary. Machines that are fully stabilised, with a rotary joint between upper and lower frames, can be called mobile cranes; they can typically still use a bucket, are often referred to as 'Roto' machines, and may be considered a hybrid between a telehandler and a small crane.
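The lever effect can be illustrated with a first-order moment balance; the numbers below are purely illustrative, and real machines rely on manufacturer load charts and EN15000-style monitoring rather than a formula like this:

```python
def max_load_kg(radius_m, rated_moment_kg_m=4800.0, rated_capacity_kg=4000.0):
    """Allowable load from a simple moment limit: load * radius <= rated moment."""
    return min(rated_capacity_kg, rated_moment_kg_m / radius_m)

for r in (1.2, 2.0, 4.0, 8.0, 12.0):
    print(f"working radius {r:4.1f} m -> max load {max_load_kg(r):6.0f} kg")
# 4000 kg at 1.2 m shrinks to 400 kg at 12 m: a tenfold drop in capacity
```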
Operator licensing
In some jurisdictions, a license is required in order to operate a telehandler under law or regulations of a national or other jurisdictional authority.
For example, in Australia, a Gold Card can be obtained for telehandlers with a capacity of three tonnes or less with standard attachments, where the machine is operated from below. The Gold Card is issued by the Telescopic Handler Association of Australia (TSHA). It is not a legally required qualification; however, verbal instruction alone is not considered an appropriate training method, as it lacks evidence of competency training. Competency training with evidence of learning and written assessment is legally required in Australia.
In Victoria, Australia, a WorkSafe CN licence is a legally required licence for machines with a capacity of over three tonnes with standard attachments where the machine is operated from below.
Telehandlers that are fitted with elevated work platform attachments and operated from the basket are classified as elevated work platforms and require elevated work platform licences, such as the EWPA Yellow Card or WorkSafe WP Licence.
A WorkSafe C2 licence or higher may apply when using slewing-type telehandlers.
See also
Reach stacker
References
External links
Agricultural machinery
Engineering vehicles
Mobile cranes | Telescopic handler | Engineering | 1,034 |
1,612,994 | https://en.wikipedia.org/wiki/Committee%20on%20Space%20Research | The Committee on Space Research (COSPAR) was established on October 3, 1958 by the International Council for Scientific Unions (ICSU) and its first chair was Hildegard Korf Kallmann-Bijl. Among COSPAR's objectives are the promotion of scientific research in space on an international level, with emphasis on the free exchange of results, information, and opinions, and providing a forum, open to all scientists, for the discussion of problems that may affect space research. These objectives are achieved through the organization of symposia, publication, and other means.
COSPAR has created a number of research programmes on different topics, a few in cooperation with other scientific unions. The long-term project COSPAR International Reference Atmosphere started in 1960; since then it has produced several editions of the high-atmosphere code CIRA. The code "IRI" of the URSI-COSPAR working group on the International Reference Ionosphere was first published in 1978 and is updated yearly.
General Assembly
Every second year, COSPAR calls a General Assembly (also called a Scientific Assembly); these conferences currently gather almost three thousand participating space researchers. The most recent assemblies are listed in the table below. Two General Assemblies have been cancelled: the 41st General Assembly in Istanbul, due to the 2016 Turkish coup d'état attempt, and the 43rd General Assembly in Sydney, due to the COVID-19 pandemic.
Scientific Structure
Scientific Commissions
Scientific Commission A Space Studies of the Earth's Surface, Meteorology and Climate
Task Group on GEO
Subcommission A1 on Atmosphere, Meteorology and Climate
Subcommission A2 on Ocean Dynamics, Productivity and the Cryosphere
Subcommission A3 on Land Processes and Morphology
Scientific Commission B Space Studies of the Earth-Moon System, Planets, and Small Bodies of the Solar System
Sub-Commission B1 on Small Bodies
Sub-Commission B2 on International Coordination of Space Techniques for Geodesy (a joint Sub-Commission with IUGG/IAG Commission I on Reference Frames)
Sub-Commission B3 on The Moon
Sub-Commission B4 on Terrestrial Planets
Sub-Commission B5 on Outer Planets and Satellites
Sub-Commission B6/E4 on Exoplanets Detection, Characterization and Modelling
Scientific Commission C Space Studies of the Upper Atmospheres of the Earth and Planets Including Reference Atmospheres
Sub-Commission C1 on The Earth's Upper Atmosphere and Ionosphere
Sub-Commission C2 on The Earth's Middle Atmosphere and Lower Ionosphere
Sub-Commission C3 on Planetary Atmospheres and Aeronomy
Task Group on Reference Atmospheres of Planets and Satellites (RAPS)
URSI/COSPAR Task Group on the International Reference Ionosphere (IRI)
COSPAR/URSI Task Group on Reference Atmospheres, including ISO WG4 (CIRA)
Sub-Commission C5/D4 on Theory and Observations of Active Experiments
Scientific Commission D Space Plasmas in the Solar System, Including Planetary Magnetospheres
Sub-Commission D1 on The Heliosphere
Sub-Commission D2/E3 on The Transition from the Sun to the Heliosphere
Sub-Commission D3 on Magnetospheres
Sub-Commission C5/D4 on Theory and Observations of Active Experiments
Scientific Commission E Research in Astrophysics from Space
Sub-Commission E1 on Galactic and Extragalactic Astrophysics
Sub-Commission E2 on The Sun as a Star
Sub-Commission D2/E3 on The Transition from the Sun to the Heliosphere
Sub-Commission B6/E4 on Exoplanets Detection, Characterization and Modelling
Scientific Commission F Life Sciences as Related to Space
Sub-Commission F1 on Gravitational and Space Biology
Sub-Commission F2 on Radiation Environment, Biology and Health
Sub-Commission F3 on Astrobiology
Sub-Commission F4 on Natural and Artificial Ecosystems
Sub-Commission F5 on Gravitational Physiology in Space
Scientific Commission G Materials Sciences in Space
Scientific Commission H Fundamental Physics in Space
Panels
Technical Panel on Satellite Dynamics (PSD)
Panel on Technical Problems Related to Scientific Ballooning (PSB)
Panel on Potentially Environmentally Detrimental Activities in Space (PEDAS)
Panel on Radiation Belt Environment Modelling (PRBEM)
Panel on Space Weather (PSW)
Panel on Planetary Protection (PPP)
Panel on Capacity Building (PCB)
Panel on Capacity Building Fellowship Program and Alumni (PCB FP)
Panel on Education (PE)
Panel on Exploration (PEX)
Panel on Interstellar Research (PIR)
Task Group on Establishing an international Constellation of Small Satellites (TGCSS)
Sub-Group on Radiation Belts (TGCSS-SGRB)
Panel on Social Sciences and Humanities (PSSH)
Panel on Innovative Solutions (PoIS)
Task Group on Establishing an International Geospace Systems Program (TGIGSP)
Planetary Protection Policy
Responding to concerns raised in the scientific community that spaceflight missions to the Moon and other celestial bodies might compromise their future scientific exploration, in 1958 the International Council of Scientific Unions (ICSU) established an ad-hoc Committee on Contamination by Extraterrestrial Exploration (CETEX) to provide advice on these issues. In the next year, this mandate was transferred to the newly founded Committee on Space Research (COSPAR), which as an interdisciplinary scientific committee of the ICSU (now the International Science Council - ISC) was considered to be the appropriate place to continue the work of CETEX. Since that time, COSPAR has provided an international forum to discuss such matters under the terms "planetary quarantine" and later "planetary protection", and has formulated a COSPAR planetary protection policy with associated implementation requirements as an international standard to protect against interplanetary biological and organic contamination, and after 1967 as a guide to compliance with Article IX of the United Nations Outer Space Treaty in that area.
The COSPAR Planetary Protection Policy, and its associated requirements, is not legally binding under international law, but it is an internationally agreed standard with implementation guidelines for compliance with Article IX of the Outer Space Treaty. States Parties to the Outer Space Treaty are responsible for national space activities under Article VI of this Treaty, including the activities of governmental and non-governmental entities. It is the State that ultimately will be held responsible for wrongful acts committed by its jurisdictional subjects.
Updating the COSPAR Planetary Protection Policy, either as a response to new discoveries or based on specific requests, is a process that involves appointed members of the COSPAR Panel on Planetary Protection who represent, on the one hand, their national or international authority responsible for compliance with the United Nations Outer Space Treaty of 1967, and, on the other hand, COSPAR Scientific Commissions B – Space Studies of the Earth-Moon System, Planets and Small Bodies of the Solar Systems, and F - Life Sciences as Related to Space. After reaching a consensus among the involved parties, the proposed recommendation for updating the Policy is formulated by the COSPAR Panel on Planetary Protection and submitted to the COSPAR Bureau for review and approval.
The new structure of the Panel and its work has been described in recent publications.
The recently updated COSPAR Policy on Planetary Protection was published in the August 2020 issue of COSPAR's journal Space Research Today. It contains some updates with respect to the previously approved version, based on recommendations formulated by the Panel and approved by the COSPAR Bureau.
Participating member countries
The table contains the list of countries participating in the Committee on Space Research:
See also
Space research
Planetary protection, for other bodies and Earth
International Planetary Data Alliance
List of government space agencies
References
External links
Scientific organizations based in France
Astronomy organizations
Space research
International organizations based in France
International scientific organizations | Committee on Space Research | Astronomy | 1,546 |
7,643,455 | https://en.wikipedia.org/wiki/Enteroendocrine%20cell | Enteroendocrine cells are specialized cells of the gastrointestinal tract and pancreas with endocrine function. They produce gastrointestinal hormones or peptides in response to various stimuli and release them into the bloodstream for systemic effect, diffuse them as local messengers, or transmit them to the enteric nervous system to activate nervous responses. Enteroendocrine cells of the intestine are the most numerous endocrine cells of the body. They constitute an enteric endocrine system as a subset of the endocrine system, just as the enteric nervous system is a subset of the nervous system. In a sense, they act as chemoreceptors, initiating digestive actions, detecting harmful substances, and initiating protective responses. Enteroendocrine cells are located in the stomach, in the intestine and in the pancreas. Microbiota play key roles in the intestinal immune and metabolic responses in these enteroendocrine cells via their fermentation product, the short-chain fatty acid acetate.
Intestinal enteroendocrine cells
Intestinal enteroendocrine cells are not clustered together but spread as single cells throughout the intestinal tract.
Hormones secreted include somatostatin, motilin, cholecystokinin, neurotensin, vasoactive intestinal peptide, and enteroglucagon. The enteroendocrine cells sense the metabolites from intestinal commensal microbiota and, in turn, coordinate antibacterial, mechanical, and metabolic branches of the host intestinal innate immune response to the commensal microbiota.
K cell
K cells secrete gastric inhibitory peptide, an incretin, which also promotes triglyceride storage. K cells are mostly found in the duodenum.
L cell
L cells secrete glucagon-like peptide-1, an incretin, peptide YY3-36, oxyntomodulin and glucagon-like peptide-2. L cells are primarily found in the ileum and large intestine (colon), but some are also found in the duodenum and jejunum.
I cell
I cells secrete cholecystokinin (CCK), and have the highest mucosal density in the duodenum with a decreasing amount throughout the small intestine. They modulate bile secretion, exocrine pancreas secretion, and satiety.
G cell
Stomach enteroendocrine cells, which release gastrin, and stimulate gastric acid secretion.
Enterochromaffin cell
Enterochromaffin cells are enteroendocrine and neuroendocrine cells with a close similarity to adrenomedullary chromaffin cells secreting serotonin.
Enterochromaffin-like cell
Enterochromaffin-like cells or ECL cells are a type of neuroendocrine cell secreting histamine.
N cell
Located in increasing numbers throughout the small intestine, with the highest levels found in the ileum, N cells release neurotensin and control smooth muscle contraction.
S cell
S cells secrete secretin mostly from the duodenum, but also in decreasing amounts throughout the rest of the small intestine, and stimulate exocrine pancreatic secretion.
D cell
Also called Delta cells, D cells secrete somatostatin.
Mo cell (or M cell)
Mo cells are found in the crypts of the small intestine, especially in the duodenum and jejunum, and secrete motilin.
They are different from the microfold cells (M cells) found in Peyer's patches.
Gastric enteroendocrine cells
Gastric enteroendocrine cells are found in the gastric glands, mostly at their base. The G cells secrete gastrin; during parasympathetic stimulation, post-ganglionic fibers of the vagus nerve can release gastrin-releasing peptide, which stimulates gastrin secretion. Enterochromaffin-like cells, enteroendocrine and neuroendocrine cells known for their similarity to chromaffin cells, secrete histamine in response to gastrin, thereby stimulating acid secretion.
Other hormones produced include cholecystokinin, somatostatin, vasoactive intestinal peptide, substance P, alpha and gamma-endorphin.
Pancreatic enteroendocrine cells
Pancreatic enteroendocrine cells are located in the islets of Langerhans and produce, most importantly, the hormones insulin and glucagon. The autonomic nervous system strongly regulates their secretion, with parasympathetic stimulation stimulating insulin secretion and inhibiting glucagon secretion, and sympathetic stimulation having the opposite effect.
Other hormones produced include somatostatin, pancreatic polypeptide, amylin and ghrelin.
Clinical significance
Rare and slow-growing carcinoid and non-carcinoid tumors develop from these cells. When a tumor arises, it has the capacity to secrete large amounts of hormones.
History
The very discovery of hormones occurred during studies of how the digestive system regulates its activities, as explained at Secretin § Discovery.
Other organisms
In rats (Rattus rattus) the short-chain fatty acid receptor is expressed both by this cell type and by mast cells of the mucosa.
See also
APUD cells
Neuroendocrine tumors
List of human cell types derived from the germ layers
References
External links
- "Endocrine System: duodenum, enteroendocrine cells"
Endocrine system
Animal cells
Stomach
Secretory cells | Enteroendocrine cell | Biology | 1,182 |
33,518,991 | https://en.wikipedia.org/wiki/Setrobuvir | Setrobuvir (also known as ANA-598) was an experimental drug candidate for the treatment of hepatitis C that was discovered at Anadys Pharmaceuticals, which was acquired by Roche in 2011; Roche terminated development in July 2015. It was in Phase IIb clinical trials, used in combination with interferon and ribavirin, targeting hepatitis C patients with genotype 1.
Setrobuvir works by inhibiting the hepatitis C enzyme NS5B, an RNA polymerase.
References
Abandoned drugs
NS5B (polymerase) inhibitors
Sulfonamides | Setrobuvir | Chemistry | 116 |
13,981,054 | https://en.wikipedia.org/wiki/Mark%20Inghram | Mark Gordon Inghram (November 13, 1919 – September 29, 2003) was an American physicist. Inghram was a member of the National Academy of Sciences.
Life
Inghram was born in Livingston, Montana. He did undergraduate work at Olivet College, receiving a B.A. in 1939. He worked on the Manhattan Project during World War II and at Argonne National Laboratory from 1945 to 1947. He received his Ph.D. from the University of Chicago in 1947. He began teaching at the University of Chicago as an instructor in 1947 and remained there until his retirement in 1985. He died at his home in Holland, Michigan in 2003.
Age of the Earth
Together with Clair Patterson and George Tilton, Inghram was one of the first scientists to combine measurements on meteorites and the Earth to find the age of the Earth. Over time, the decay of uranium to lead will change the isotopic makeup of terrestrial lead. Patterson, Tilton and Inghram assumed that iron meteorites, which contain lead but virtually no uranium, formed at the same time as the Earth. Assuming this to be true, the isotopic makeup of lead in iron meteorites should still be the same as that of the newly formed Earth, so the Earth can be dated by comparing the composition of lead in iron meteorites with that in new volcanic material on the Earth. Patterson, Tilton and Inghram did this and found that the Earth was approximately 4.5 billion years old.
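The arithmetic behind such a Pb–Pb age can be sketched as follows; the decay constants and uranium isotope ratio are standard values, while the measured slope used here is illustrative:

```python
import math

LAMBDA_238 = 1.55125e-10   # decay constant of U-238, per year
LAMBDA_235 = 9.8485e-10    # decay constant of U-235, per year
U_RATIO = 137.88           # present-day 238U/235U abundance ratio

def isochron_slope(t):
    """Slope of the Pb-Pb isochron (207Pb/206Pb growth) after t years."""
    return (math.exp(LAMBDA_235 * t) - 1) / (U_RATIO * (math.exp(LAMBDA_238 * t) - 1))

def solve_age(slope, lo=1e8, hi=1e10):
    """Bisection: the isochron slope increases monotonically with age."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if isochron_slope(mid) < slope:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"{solve_age(0.59) / 1e9:.2f} Gyr")   # ~4.5 Gyr for slopes near 0.6
```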
References
1919 births
2003 deaths
20th-century American physicists
University of Chicago alumni
Members of the United States National Academy of Sciences
Mass spectrometrists
Olivet College alumni
Manhattan Project people
Physicists from Montana | Mark Inghram | Physics,Chemistry | 347 |
40,141,710 | https://en.wikipedia.org/wiki/Open%20Windows%20%28film%29 | Open Windows is a 2014 found footage techno-thriller film directed and written by Nacho Vigalondo. The film stars Elijah Wood, Sasha Grey and Neil Maskell, and had its world premiere at South by Southwest on 10 March 2014. It is Vigalondo's first English-language film.
Plot
Nick Chambers wins a contest to meet his favorite actress, Jill Goddard. Nick, the webmaster of a fansite dedicated to Jill, is crushed when Chord, Jill's manager, informs him that she has not only failed to invite him to the film's publicity event but also canceled the contest. Chord remotely sends Nick a link to his laptop that opens a live stream. Chord explains that he has hacked into Jill's cell phone and activated the microphone and camera without her knowledge. Although uneasy about invading her privacy, Nick goes along with Chord's plans to spy on her. By eavesdropping on her phone conversations, they learn that she will secretly meet her agent, Tony, with whom she is having an affair, at the same hotel in which Nick is staying.
Chord directs Nick to use preexisting high-end surveillance equipment to spy on Jill and Tony. As he watches them, Nick is briefly contacted by a trio of hackers who address him as Nevada. Jill leaves Tony's room. When Nick's lights spontaneously turn on and Tony can see the camera pointed at his room, Nick panics as Tony leaves his room to investigate. Chord orders Nick to use a Taser to incapacitate Tony. Feeling that he has no choice, Nick agrees. Nick initially refuses to tie up Tony but does so once Chord threatens to stop helping him. Suspicious of why all this equipment is available in his hotel room, Nick questions who Chord really is; Chord ignores him and guides him out of the hotel by hacking into its security system.
Chord blackmails Nick into further compliance by revealing that the entire contest was a hoax, and Chord now has video proof of Nick's crimes. Chord forces Nick to follow Jill to her house, and he is contacted once again by the trio of hackers who believe Nick to be a famous hacker. They offer to help him in his latest hack and Nick recruits them to counteract Chord. Meanwhile, Chord hacks into Jill's PC when she goes home. When Nick refuses to send her PC a file, Chord demonstrates that he is capable of sneaking into Jill's house and killing her.
The file turns out to be a live feed of Tony's torture by electric current. Horrified, Nick attempts to bargain with Chord for Tony's release, but Chord only tortures Tony further. Chord forces Nick to give commands to Jill through her PC, and Nick demands that she reveal her breasts. Satisfied with the resulting video, Chord breaks the connection. Nick frantically attempts to warn Jill but she is kidnapped by Chord. With the help of the hackers, Nick pursues Chord. However, once they realize that Chord is actually the master hacker, Nevada, their loyalties are torn. Although they continue to help him, they warn Nick that Nevada is the best in the world and a veteran of numerous anarchist operations, though none of them have resulted in physical harm to anyone.
The hackers later discover that Chord has killed Nevada and taken his place. After both Nick and Chord throw off the police, Nick crashes his car and Chord shoots him. Chord hacks into the entire Internet and virtually every website is replaced with a teaser of Jill's revealing video. When the site goes live, Chord explains that instead of a sex tape, she will be killed live on the Internet unless her fans immediately close the browser window. The site's traffic increases dramatically and Chord fakes her death at an abandoned factory. Jill plays along with Chord and says that she understands the point about society that he is making. However, when his guard is down, she flees.
Nevada reveals to Chord that he is still alive and has been impersonating Nick the whole time. The real Nick was safely hidden in Nevada's car trunk, and the whole scenario was an operation designed to flush Chord out. Nevada and Jill escape to safety in a bunker before explosives blow up the factory, killing Chord in his own trap. Nevada and Jill discuss what to do next, and she asks to accompany him as he retreats back into the underground hacker movement.
Cast
Production
Vigalondo was inspired to create Open Windows after he was asked to create a thriller film that heavily featured the Internet, akin to Mike Nichols's Closer. He found writing the script a challenge, as he had to create the film's plot as well as give specific reasons for each window that opened and why the point of view would shift between the characters. Vigalondo approached Wood specifically to star in the film and actress Sasha Grey was brought on board the project after she asked her manager to get her a copy of the script and set up a meeting with the director. The film appealed to Grey, as she was a fan of Vigalondo's work but was also intrigued by the character of Jill as a public figure and as someone who has to deal with "criticism and scrutiny and online haters and cyber stalkers". On 1 April 2014, Cinedigm acquired the US distribution rights to the film.
Filming took place in Madrid, Spain, during the last week of October 2012, and in Austin, Texas.
Release
The film had its world premiere at South by Southwest on 10 March 2014, and was screened in Los Angeles as part of SpectreFest on 4 October that year.
Reception
Rotten Tomatoes, a review aggregator, reports an approval rating of 46% based on 40 reviews, with an average rating of 5.35/10. The site's consensus states: "Open Windows is undeniably ambitious; unfortunately, director Nacho Vigalondo's reach far exceeds his grasp." The film has a score of 51 out of 100 based on 10 reviews at Metacritic.
We Got This Covered praised the acting of Wood and Grey while stating overall that "Open Windows spams audiences with an overload of development without much explanation, much like those information-less ads claiming to solve your impotency problem with a magic formula." Shock Till You Drop panned the film and gave it a rating of 4 out of 10, criticizing it as the "biggest disappointment of the fest." Justin Chang of Variety wrote, "A fiendishly inventive thriller built around an audacious if unsustainable gimmick, Open Windows elevates Hitchcockian suspense to jittery new levels of mayhem and paranoia." John DeFore of The Hollywood Reporter wrote that only genre diehards are likely to accept the level of suspension of disbelief necessary to enjoy the film. Jeannette Catsoulis of The New York Times described it as "cleverly designed but hellish to watch" due to its overdone plot twists.
References
External links
2014 films
2014 thriller films
American thriller films
English-language Spanish films
Films about kidnapping
Films about computing
Films about security and surveillance
Films directed by Nacho Vigalondo
Films shot in Austin, Texas
Films shot in Madrid
Spanish thriller films
Techno-thriller films
Works about computer hacking
Screenlife films
2010s English-language films
2010s American films
2010s Spanish films
English-language thriller films | Open Windows (film) | Technology | 1,489 |
49,120,365 | https://en.wikipedia.org/wiki/C10H9NO3S |
The molecular formula C10H9NO3S (molar mass: 223.25 g/mol, exact mass: 223.0303 u) may refer to:
Aminonaphthalenesulfonic acids
Naphthionic acid
Tobias acid (2-amino-1-naphthalenesulfonic acid) | C10H9NO3S | Chemistry | 84 |
21,550,149 | https://en.wikipedia.org/wiki/Delta%202000 | The Delta 2000 series was an American expendable launch system which was used to conduct forty-four orbital launches between 1974 and 1981. It was a member of the Delta family of rockets, sometimes called Thorad Delta. The Delta 1000, 2000 and 3000 series used surplus NASA Apollo program rocket engines for their first and second stages.
The first stage was an Extended Long Tank Thor, re-engined with the Rocketdyne RS-27 replacing the earlier MB-3-III engine. The RS-27 was the H-1 engine used on the Saturn IB, rebranded with minor changes. Three or nine Castor-2 solid rocket boosters were attached to increase thrust at lift-off. The Delta-P second stage used the TRW TR-201 engine, a Lunar Module Descent Engine reconfigured for fixed thrust output. Launches which required a three-stage configuration in order to reach higher orbits used the Thiokol Star-37D or Star-37E upper stage as an apogee kick motor.
Delta 2000 launches occurred from Space Launch Complex 2W at Vandenberg AFB and both pads of Launch Complex 17 at Cape Canaveral. Forty-three of the forty-four launches were successful; the single failure was the maiden flight, on 19 January 1974, which placed Skynet 2A into a useless orbit. A short circuit in a circuit board of the second stage's electronics package left the upper stages and satellite in an unstable low orbit (96 × 3,406 km × 37.6°) that rapidly decayed. An investigation revealed that a substandard coating had been used on the circuit board.
The cost of each launch was estimated at $28.52 million on average, depending on the configuration of the carrier rocket.
References
Delta (rocket family) | Delta 2000 | Astronomy | 382 |
63,326,424 | https://en.wikipedia.org/wiki/Discrete%20fixed-point%20theorem | In discrete mathematics, a discrete fixed-point is a fixed-point for functions defined on finite sets, typically subsets of the integer grid $\mathbb{Z}^n$.
Discrete fixed-point theorems were developed by Iimura, Murota and Tamura, Chen and Deng and others. Yang provides a survey.
Basic concepts
Continuous fixed-point theorems often require a continuous function. Since continuity is not meaningful for functions on discrete sets, it is replaced by conditions such as a direction-preserving function. Such conditions imply that the function does not change too drastically when moving between neighboring points of the integer grid. There are various direction-preservation conditions, depending on whether neighboring points are considered points of a hypercube (HGDP), of a simplex (SGDP) etc. See the page on direction-preserving function for definitions.
Continuous fixed-point theorems often require a convex set. The analogue of this property for discrete sets is an integrally-convex set.
A fixed point of a discrete function f is defined exactly as for continuous functions: it is a point x for which f(x)=x.
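The definition lends itself to a direct, if inefficient, computational check: on a finite grid one can simply test every point. A minimal sketch (illustrative only; the helper find_fixed_point and the example map below are invented for demonstration, not taken from the literature):

```python
from itertools import product

def find_fixed_point(f, lo, hi, n):
    """Scan the finite integer box {lo,...,hi}^n for a point x with f(x) == x."""
    for x in product(range(lo, hi + 1), repeat=n):
        if f(x) == x:
            return x
    return None  # no fixed point on this box

# Toy example: f moves each coordinate one step toward the target (2, 3),
# so its unique fixed point is the target itself.
target = (2, 3)
f = lambda x: tuple(xi + (ti > xi) - (ti < xi) for xi, ti in zip(x, target))
print(find_fixed_point(f, 0, 5, n=2))  # -> (2, 3)
```

The theorems below give conditions (direction-preservation on a suitable domain) under which such a search is guaranteed to succeed.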
For functions on discrete sets
We focus on functions $f: X \to \mathbb{R}^n$, where the domain X is a nonempty subset of the Euclidean space $\mathbb{R}^n$. ch(X) denotes the convex hull of X.
Iimura-Murota-Tamura theorem: If X is a finite integrally-convex subset of $\mathbb{Z}^n$, and $f: X \to \mathrm{ch}(X)$ is a hypercubic direction-preserving (HDP) function, then f has a fixed-point.
Chen-Deng theorem: If X is a finite subset of $\mathbb{Z}^n$, and $f: X \to \mathrm{ch}(X)$ is simplicially direction-preserving (SDP), then f has a fixed-point.
Yang's theorems:
[3.6] If X is a finite integrally-convex subset of $\mathbb{Z}^n$, $f: X \to \mathbb{R}^n$ is simplicially gross direction preserving (SGDP), and for all x in X there exists some g(x)>0 such that , then f has a zero point.
[3.7] If X is a finite hypercubic subset of $\mathbb{Z}^n$ with minimum point a and maximum point b, $f: X \to \mathbb{R}^n$ is SGDP, and for any x in X: $f_i(x) \geq 0$ whenever $x_i = a_i$, and $f_i(x) \leq 0$ whenever $x_i = b_i$, then f has a zero point. This is a discrete analogue of the Poincaré–Miranda theorem. It is a consequence of the previous theorem.
[3.8] If X is a finite integrally-convex subset of $\mathbb{Z}^n$, and $f: X \to \mathrm{ch}(X)$ is such that $f(x) - x$ is SGDP, then f has a fixed-point. This is a discrete analogue of the Brouwer fixed-point theorem.
[3.9] If X = $\mathbb{Z}^n$, f is bounded and $f(x) - x$ is SGDP, then f has a fixed-point (this follows easily from the previous theorem by taking X to be a subset of $\mathbb{Z}^n$ that bounds f).
[3.10] If X is a finite integrally-convex subset of $\mathbb{Z}^n$, $F: X \to 2^X$ a point-to-set mapping, and for all x in X: $F(x)$ is nonempty, and there is a function f such that $f(x) \in \mathrm{ch}(F(x))$ and $f(x) - x$ is SGDP, then there is a point y in X such that $y \in \mathrm{ch}(F(y))$. This is a discrete analogue of the Kakutani fixed-point theorem, and the function f is an analogue of a continuous selection function.
[3.12] Suppose X is a finite integrally-convex subset of $\mathbb{Z}^n$, and it is also symmetric in the sense that x is in X iff -x is in X. If $f: X \to \mathbb{R}^n$ is SGDP w.r.t. a weakly-symmetric triangulation of ch(X) (in the sense that s is a simplex on the boundary of the triangulation iff -s is), and for every pair of simplicially-connected points x, y in the boundary of ch(X), then f has a zero point.
See the survey for more theorems.
For discontinuous functions on continuous sets
Discrete fixed-point theorems are closely related to fixed-point theorems on discontinuous functions. These, too, use the direction-preservation condition instead of continuity.
Herings-Laan-Talman-Yang fixed-point theorem:
Let X be a non-empty convex compact subset of $\mathbb{R}^n$. Let f: X → X be a locally gross direction preserving (LGDP) function: at any point x that is not a fixed point of f, the direction of $f(x) - x$ is grossly preserved in some neighborhood of x, in the sense that for any two points y, z in this neighborhood, the inner product of their displacements is non-negative, i.e.: $(f(y) - y) \cdot (f(z) - z) \geq 0$. Then f has a fixed point in X.
The theorem was originally stated for polytopes, but Philippe Bich extended it to convex compact sets. Note that every continuous function is LGDP, but an LGDP function may be discontinuous; it may even be neither upper nor lower semi-continuous. Moreover, there is a constructive algorithm for approximating this fixed point.
Applications
Discrete fixed-point theorems have been used to prove the existence of a Nash equilibrium in a discrete game, and the existence of a Walrasian equilibrium in a discrete market.
References
Discrete mathematics
Fixed-point theorems | Discrete fixed-point theorem | Mathematics | 1,048 |
58,214,031 | https://en.wikipedia.org/wiki/Bouvardin | Bouvardin is a bicyclic hexapeptide isolated from Bouvardia ternifolia. Its chemical formula is C40H48N6O10. It is derived from the amino acid sequence Ala-Ala-Tyr-Ala-Tyr-Tyr. It has demonstrated anticancer activity by inhibiting protein synthesis at the 80S (eukaryotic) ribosome.
References
Hexapeptides
Cyclic peptides | Bouvardin | Chemistry | 103 |
87,599 | https://en.wikipedia.org/wiki/G.%20H.%20Hardy | Godfrey Harold Hardy (7 February 1877 – 1 December 1947) was an English mathematician, known for his achievements in number theory and mathematical analysis. In biology, he is known for the Hardy–Weinberg principle, a basic principle of population genetics.
G. H. Hardy is usually known by those outside the field of mathematics for his 1940 essay A Mathematician's Apology, often considered one of the best insights into the mind of a working mathematician written for the layperson.
Starting in 1914, Hardy was the mentor of the Indian mathematician Srinivasa Ramanujan, a relationship that has become celebrated. Hardy almost immediately recognised Ramanujan's extraordinary albeit untutored brilliance, and Hardy and Ramanujan became close collaborators. In an interview by Paul Erdős, when Hardy was asked what his greatest contribution to mathematics was, Hardy unhesitatingly replied that it was the discovery of Ramanujan. In a lecture on Ramanujan, Hardy said that "my association with him is the one romantic incident in my life".
Biography
G. H. Hardy was born on 7 February 1877, in Cranleigh, Surrey, England, into a teaching family. His father was Bursar and Art Master at Cranleigh School; his mother had been a senior mistress at Lincoln Training College for teachers. Both of his parents were mathematically inclined, though neither had a university education. He and his sister Gertrude "Gertie" Emily Hardy (1878–1963) were brought up by their educationally enlightened parents in a typical Victorian nursery attended by a nurse. At an early age, he argued with his nurse about the existence of Santa Claus and the efficacy of prayer. He read aloud to his sister books such as Don Quixote, Gulliver's Travels, and Robinson Crusoe.
Hardy's own natural affinity for mathematics was perceptible at an early age. When just two years old, he wrote numbers up to millions, and when taken to church he amused himself by factorising the numbers of the hymns.
After schooling at Cranleigh, Hardy was awarded a scholarship to Winchester College for his mathematical work. In 1896, he entered Trinity College, Cambridge. He was first tutored under Robert Rumsey Webb, but found it unsatisfying, and briefly considered switching to history. He then was tutored by Augustus Love, who recommended him to read Camille Jordan's Cours d'analyse, which taught him for the first time "what mathematics really meant". After only two years of preparation under his coach, Robert Alfred Herman, Hardy was fourth in the Mathematics Tripos examination. Years later, he sought to abolish the Tripos system, as he felt that it was becoming more an end in itself than a means to an end. While at university, Hardy joined the Cambridge Apostles, an elite, intellectual secret society.
Hardy cited as his most important influence his independent study of Cours d'analyse de l'École Polytechnique by the French mathematician Camille Jordan, through which he became acquainted with the more precise mathematics tradition in continental Europe. In 1900 he passed part II of the Tripos, and in the same year he was elected to a Prize Fellowship at Trinity College. In 1903 he earned his M.A., which was the highest academic degree at English universities at that time. When his Prize Fellowship expired in 1906 he was appointed to the Trinity staff as a lecturer in mathematics, where teaching six hours per week left him time for research.
On 16 January 1913, Ramanujan wrote to Hardy, whom Ramanujan knew from studying Orders of Infinity (1910). Hardy read the letter in the morning and suspected it was from a crank or a prank, but thought it over and realized in the evening that it was likely genuine because "great mathematicians are commoner than thieves or humbugs of such incredible skill". He then invited Ramanujan to Cambridge and began "the one romantic incident in my life".
In the aftermath of the Bertrand Russell affair during World War I, in 1919 he left Cambridge to take the Savilian Chair of Geometry (and thus become a Fellow of New College) at Oxford. Hardy spent the academic year 1928–1929 at Princeton University in an academic exchange with Oswald Veblen, who spent the year at Oxford. Hardy gave the Josiah Willard Gibbs lecture for 1928. Hardy left Oxford and returned to Cambridge in 1931, becoming again a fellow of Trinity College and holding the Sadleirian Professorship until 1942. It is believed that he left Oxford for Cambridge to avoid the compulsory retirement at 65.
He was on the governing body of Abingdon School from 1922 to 1935.
In 1939, he suffered a coronary thrombosis, which prevented him from playing tennis, squash, etc. He also lost his creative powers in mathematics. He was constantly bored and distracted himself by writing a privately circulated memoir about the Bertrand Russell affair. In the early summer of 1947, he attempted suicide by barbiturate overdose. After that, he resolved to simply wait for death. He died suddenly one early morning while listening to his sister read out from a book of the history of Cambridge University cricket.
Work
Hardy is credited with reforming British mathematics by bringing rigour into it, which was previously a characteristic of French, Swiss and German mathematics. British mathematicians had remained largely in the tradition of applied mathematics, in thrall to the reputation of Isaac Newton (see Cambridge Mathematical Tripos). Hardy was more in tune with the cours d'analyse methods dominant in France, and aggressively promoted his conception of pure mathematics, in particular against the hydrodynamics that was an important part of Cambridge mathematics.
Hardy preferred to work only 4 hours every day on mathematics, spending the rest of the day talking, playing cricket, and other gentlemanly activities.
From 1911, he collaborated with John Edensor Littlewood, in extensive work in mathematical analysis and analytic number theory. This (along with much else) led to quantitative progress on Waring's problem, as part of the Hardy–Littlewood circle method, as it became known. In prime number theory, they proved results and some notable conditional results. This was a major factor in the development of number theory as a system of conjectures; examples are the first and second Hardy–Littlewood conjectures. Hardy's collaboration with Littlewood is among the most successful and famous collaborations in mathematical history. In a 1947 lecture, the Danish mathematician Harald Bohr reported a colleague as saying, "Nowadays, there are only three really great English mathematicians: Hardy, Littlewood, and Hardy–Littlewood."
Hardy is also known for formulating the Hardy–Weinberg principle, a basic principle of population genetics, independently from Wilhelm Weinberg in 1908. He played cricket with the geneticist Reginald Punnett, who introduced the problem to him in purely mathematical terms. Hardy, who had no interest in genetics and described the mathematical argument as "very simple", may never have realised how important the result became.
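The principle itself is compact: for a single locus with two alleles at frequencies $p$ and $q = 1 - p$, random mating yields genotype frequencies that remain constant from generation to generation,

$$ p^2 + 2pq + q^2 = 1, $$

where $p^2$, $2pq$ and $q^2$ are the frequencies of the two homozygotes and the heterozygote, respectively.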
Hardy was elected an international honorary member of the American Academy of Arts and Sciences in 1921, an international member of the United States National Academy of Sciences in 1927, and an international member of the American Philosophical Society in 1939.
Hardy's collected papers have been published in seven volumes by Oxford University Press.
Pure mathematics
Hardy preferred his work to be considered pure mathematics, perhaps because of his detestation of war and the military uses to which mathematics had been applied. He made several statements similar to that in his Apology:
However, aside from formulating the Hardy–Weinberg principle in population genetics, his famous work on integer partitions with his collaborator Ramanujan, known as the Hardy–Ramanujan asymptotic formula, has been widely applied in physics to find quantum partition functions of atomic nuclei (first used by Niels Bohr) and to derive thermodynamic functions of non-interacting Bose–Einstein systems. Though Hardy wanted his maths to be "pure" and devoid of any application, much of his work has found applications in other branches of science.
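For reference, the Hardy–Ramanujan asymptotic formula gives the growth of the partition function $p(n)$, the number of ways of writing $n$ as a sum of positive integers:

$$ p(n) \sim \frac{1}{4n\sqrt{3}} \exp\left(\pi \sqrt{\frac{2n}{3}}\right) \quad \text{as } n \to \infty. $$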
Moreover, Hardy deliberately pointed out in his Apology that mathematicians generally do not "glory in the uselessness of their work", but rather – because science can be used for evil ends as well as good – "mathematicians may be justified in rejoicing that there is one science at any rate, and that their own, whose very remoteness from ordinary human activities should keep it gentle and clean." Hardy also rejected as a "delusion" the belief that the difference between pure and applied mathematics had anything to do with their utility. Hardy regards as "pure" the kinds of mathematics that are independent of the physical world, but also considers some "applied" mathematicians, such as the physicists Maxwell and Einstein, to be among the "real" mathematicians, whose work "has permanent aesthetic value" and "is eternal because the best of it may, like the best literature, continue to cause intense emotional satisfaction to thousands of people after thousands of years." Although he admitted that what he called "real" mathematics may someday become useful, he asserted that, at the time in which the Apology was written, only the "dull and elementary parts" of either pure or applied mathematics could "work for good or ill".
Personality
Hardy was extremely shy as a child and was socially awkward, cold and eccentric throughout his life. During his school years, he was top of his class in most subjects, and won many prizes and awards but hated having to receive them in front of the entire school. He was uncomfortable being introduced to new people, and could not bear to look at his own reflection in a mirror. It is said that, when staying in hotels, he would cover all the mirrors with towels.
Socially, Hardy was associated with the Bloomsbury Group and the Cambridge Apostles; G. E. Moore, Bertrand Russell and J. M. Keynes were friends. Apart from close friendships, he had a few platonic relationships with young men who shared his sensibilities, and often his love of cricket. A mutual interest in cricket led him to befriend the young C. P. Snow. Hardy was a lifelong bachelor and in his final years he was cared for by his sister.
He was an avid cricket fan. Maynard Keynes observed that if Hardy had read the stock exchange for half an hour every day with as much interest and attention as he did the day's cricket scores, he would have become a rich man. He liked to speak of the best class of mathematical research as "the Hobbs class", and later, after Bradman appeared as an even greater batsman, "the Bradman class".
Around the age of 20, he decided that he did not believe in God, which proved a minor issue as attending the chapel was compulsory at Cambridge University. He wrote a letter to his parents explaining that, and from then on he refused to go into any college chapel, even for purely ritualistic duties.
He was at times politically involved, if not an activist. He took part in the Union of Democratic Control during World War I, and For Intellectual Liberty in the late 1930s. He admired America and the Soviet Union roughly equally. He found both sides of the Second World War objectionable.
Paul Hoffman writes that "His concerns were wide-ranging, as evidenced by six New Year's resolutions he set in a postcard to a friend: (1) prove the Riemann hypothesis; (2) make 211 not out in the fourth innings of the last Test Match at the Oval; (3) find an argument for the nonexistence of God which shall convince the general public; (4) be the first man at the top of Mount Everest; (5) be proclaimed the first president of the U. S. S. R. of Great Britain and Germany; and (6) murder Mussolini."
Cultural references
Hardy is a key character, played by Jeremy Irons, in the 2015 film The Man Who Knew Infinity, based on the biography of Ramanujan with the same title. Hardy is a major character in David Leavitt's historical fiction novel The Indian Clerk (2007), which depicts his Cambridge years and his relationship with John Edensor Littlewood and Ramanujan. Hardy is a secondary character in Uncle Petros and Goldbach's Conjecture (1992), a mathematics novel by Apostolos Doxiadis. Hardy is also a character in the 2014 Indian film, Ramanujan, played by Kevin McGowan.
Bibliography
The reprinted Mathematician's Apology, with an introduction by C. P. Snow, was recommended by Marcus du Sautoy in the BBC Radio program A Good Read in 2007.
See also
Critical line theorem
Campbell–Hardy theorem
Hardy hierarchy
Hardy notation
Hardy space
Hardy–Hille formula
Hardy–Littlewood definition
Hardy–Littlewood inequality
Hardy–Littlewood maximal function
Hardy–Littlewood tauberian theorem
Hardy–Littlewood zeta function conjectures
Hardy–Ramanujan Journal
Hardy–Ramanujan number
Hardy–Ramanujan theorem
Hardy's inequality
Hardy's theorem
Hardy field
Hardy Z function
Pisot–Vijayaraghavan number
Ulam spiral
Notes
References
Further reading
External links
Quotations of G. H. Hardy
Hardy's work on Number Theory
1877 births
1947 deaths
Mathematical analysts
British number theorists
British population geneticists
19th-century English mathematicians
20th-century English mathematicians
Savilian Professors of Geometry
Fellows of the Royal Society
Members of the French Academy of Sciences
Foreign associates of the National Academy of Sciences
Fellows of Trinity College, Cambridge
Alumni of Trinity College, Cambridge
Cambridge University Moral Sciences Club
English atheists
People educated at Cranleigh School
People educated at Winchester College
Royal Medal winners
Recipients of the Copley Medal
People from Cranleigh
Fellows of New College, Oxford
De Morgan Medallists
Mathematics writers
Governors of Abingdon School
British textbook writers
Sadleirian Professors of Pure Mathematics
Members of the American Philosophical Society | G. H. Hardy | Mathematics | 2,829 |
22,059,637 | https://en.wikipedia.org/wiki/August%202035%20lunar%20eclipse | A partial lunar eclipse will occur at the Moon's descending node of orbit on Sunday, August 19, 2035, with an umbral magnitude of 0.1049. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A partial lunar eclipse occurs when one part of the Moon is in the Earth's umbra, while the other part is in the Earth's penumbra. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. Occurring about 4.9 days after apogee (on August 14, 2035, at 2:10 UTC), the Moon's apparent diameter will be smaller than average.
Visibility
The eclipse will be completely visible over South America, Africa, and Europe, seen rising over North America and setting over west, central, and south Asia.
Eclipse details
Shown below is a table displaying details about this particular lunar eclipse. It describes various parameters pertaining to the eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2035
A penumbral lunar eclipse on February 22.
An annular solar eclipse on March 9.
A partial lunar eclipse on August 19.
A total solar eclipse on September 2.
Metonic
Preceded by: Lunar eclipse of October 30, 2031
Followed by: Lunar eclipse of June 6, 2039
Tzolkinex
Preceded by: Lunar eclipse of July 6, 2028
Followed by: Lunar eclipse of September 29, 2042
Half-Saros
Preceded by: Solar eclipse of August 12, 2026
Followed by: Solar eclipse of August 23, 2044
Tritos
Preceded by: Lunar eclipse of September 18, 2024
Followed by: Lunar eclipse of July 18, 2046
Lunar Saros 119
Preceded by: Lunar eclipse of August 7, 2017
Followed by: Lunar eclipse of August 29, 2053
Inex
Preceded by: Lunar eclipse of September 7, 2006
Followed by: Lunar eclipse of July 28, 2064
Triad
Preceded by: Lunar eclipse of October 18, 1948
Followed by: Lunar eclipse of June 20, 2122
Lunar eclipses of 2035–2038
Saros 119
Tritos series
Half-Saros cycle
A lunar eclipse will be preceded and followed by solar eclipses by 9 years and 5.5 days (a half saros). This lunar eclipse is related to two total solar eclipses of Solar Saros 126.
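As a rough sanity check of these cycle relationships (a sketch, not from the article; it ignores the eclipses' times of day, so the computed dates land within about a day of those listed above):

```python
from datetime import datetime, timedelta

SAROS_DAYS = 6585.32              # one saros cycle, ~18 years 11 days 8 hours
HALF_SAROS_DAYS = SAROS_DAYS / 2  # ~9 years 5.5 days

eclipse = datetime(2035, 8, 19)

# Lunar Saros 119 neighbours (listed above: August 7, 2017 and August 29, 2053)
print(eclipse - timedelta(days=SAROS_DAYS))       # 2017-08-07 16:19
print(eclipse + timedelta(days=SAROS_DAYS))       # 2053-08-29 07:40

# Half-saros partners are solar eclipses (August 12, 2026 and August 23, 2044)
print(eclipse - timedelta(days=HALF_SAROS_DAYS))  # 2026-08-13 08:09
print(eclipse + timedelta(days=HALF_SAROS_DAYS))  # 2044-08-23 15:50
```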
See also
List of lunar eclipses and List of 21st-century lunar eclipses
Notes
External links
2035-08
2035-08
2035 in science | August 2035 lunar eclipse | Astronomy | 620 |
61,860,767 | https://en.wikipedia.org/wiki/Hafnium%E2%80%93tungsten%20dating | Hafnium–tungsten dating is a geochronological radiometric dating method utilizing the radioactive decay system of hafnium-182 to tungsten-182. The half-life of the system is 8.9 million years. Today hafnium-182 is an extinct radionuclide, but the hafnium–tungsten radioactive system is useful in studies of the early Solar system since hafnium is lithophilic while tungsten is moderately siderophilic, which allows the system to be used to date the differentiation of a planet's core. It is also useful in determining the formation times of the parent bodies of iron meteorites.
The use of the hafnium-tungsten system as a chronometer for the early Solar system was suggested in the 1980s, but did not come into widespread use until the mid-1990s when the development of multi-collector inductively coupled plasma mass spectrometry enabled the use of samples with low concentrations of tungsten.
Basic principle
The radioactive system behind hafnium–tungsten dating is a two-stage decay as follows:
$$ {}^{182}\mathrm{Hf} \rightarrow {}^{182}\mathrm{Ta} + e^{-} + \bar{\nu}_{e} $$
$$ {}^{182}\mathrm{Ta} \rightarrow {}^{182}\mathrm{W} + e^{-} + \bar{\nu}_{e} $$
The first decay has a half-life of 8.9 million years, while the second has a half-life of only 114 days, such that the intermediate nuclide tantalum-182 (182Ta) can effectively be ignored.
Since hafnium-182 is an extinct radionuclide, hafnium–tungsten chronometry is performed by examining the abundance of tungsten-182 relative to other stable isotopes of tungsten, of which there are effectively five in total, including the extremely long-lived isotope tungsten-180, which has a half-life much longer than the current age of the universe.
The abundance of tungsten-182 can be influenced by processes other than the decay of hafnium-182, but the existence of a large number of stable isotopes is very helpful for disentangling variations in tungsten-182 due to a different cause. For example, while 182W, 183W, 184W and 186W are all produced by the r- and s-processes, the rare isotope tungsten-180 is only produced by the p-process. Variations in tungsten isotopes caused by r- and s-process nucleosynthetic contributions also result in correlated changes in the ratios 182W/184W and 183W/184W, which means that the 183W/184W ratio can be used to quantify how much of the tungsten-182 variation is due to nucleosynthetic contributions.
The influence of cosmic rays is more difficult to correct for since cosmic ray interactions affect the abundance of tungsten-182 much more than any of the other tungsten isotopes. Nonetheless, cosmic ray effects can be corrected for by examining other isotope systems such as platinum, osmium or the stable isotopes of hafnium, or simply by taking samples from the interior that have not been exposed to cosmic rays, though the latter requires large samples.
Tungsten isotopic data is usually plotted in terms of ε182W and ε183W, which represent deviations in the ratios 182W/184W and 183W/184W in parts per 10,000 relative to terrestrial standards. Since Earth is differentiated, the crust and mantle of Earth are enriched in tungsten-182 relative to the initial composition of the Solar system. Undifferentiated chondritic meteorites have a lower (negative) ε182W relative to Earth, from which the initial ε182W of the Solar system is extrapolated.
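Written out, the ε notation is the standard one (stated here for completeness):

$$ \varepsilon^{182}\mathrm{W} = \left[\frac{\left({}^{182}\mathrm{W}/{}^{184}\mathrm{W}\right)_{\mathrm{sample}}}{\left({}^{182}\mathrm{W}/{}^{184}\mathrm{W}\right)_{\mathrm{standard}}} - 1\right] \times 10^{4}, $$

and ε183W is defined analogously for the 183W/184W ratio.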
Dating planetary core formation
A primordial planet is undifferentiated, meaning that it is not layered according to density (with the densest material being towards the interior of the planet). When a planet undergoes differentiation the dense materials, particularly iron, separate from lighter components and sink to the interior forming the core of the planet. If this process took place relatively early in a planet's history, hafnium-182 would not have sufficient time to decay to tungsten-182. Since hafnium is a lithophile element the (undecayed) hafnium-182 would remain in the mantle (i.e. the outer layers of the planet). Then, after some time, the hafnium-182 would decay to tungsten-182 leaving an excess of tungsten-182 in the mantle. On the other hand, if differentiation occurred later in a planet's history, then most of the hafnium-182 would have decayed to tungsten-182 before differentiation began. Being moderately siderophilic, much of the tungsten-182 would sink towards the interior of the planet along with iron. In this scenario, not much tungsten-182 would subsequently be present in the outer layers of the planet. As such, by looking at how much tungsten-182 is present in the outer layers of a planet, relative to other isotopes of tungsten, the time of differentiation can be quantified.
Model ages
If we have a sample from the mantle (or core) of a body and want to calculate a core formation age from the tungsten-182 abundance we need to also know the composition of the bulk planet. Since we do not have samples from the core of Earth (or any other intact planet) the composition of chondritic meteorites is generally substituted for that of the bulk planet. Hafnium and tungsten are both refractory elements so there is not expected to be any fractionation between hafnium and tungsten due to heating of the planet during or after formation. A model age for the time of core formation can then be calculated using the equation
$$ t = -\frac{1}{\lambda}\,\ln\!\left[\frac{\varepsilon^{182}\mathrm{W}_{\mathrm{sample}} - \varepsilon^{182}\mathrm{W}_{\mathrm{chondrite}}}{(f - 1)\left(\varepsilon^{182}\mathrm{W}_{\mathrm{chondrite}} - \varepsilon^{182}\mathrm{W}_{\mathrm{SSI}}\right)}\right], $$
where $\lambda$ is the decay constant for hafnium-182 (0.078±0.002 Ma−1), the ε182W values are those of the sample, chondritic meteorites (taken to represent the bulk planet) and the Solar System Initial (SSI) value, and $f$ accounts for any differences in the general abundance of hafnium between the sample and chondritic meteorites,
$$ f = \frac{\left({}^{180}\mathrm{Hf}/{}^{184}\mathrm{W}\right)_{\mathrm{sample}}}{\left({}^{180}\mathrm{Hf}/{}^{184}\mathrm{W}\right)_{\mathrm{chondrite}}}. $$
It is important to note that this equation assumes that core formation is instantaneous. This can be a reasonable assumption for small bodies, like iron meteorites, but is not true for large bodies like Earth whose accretion likely took many millions of years. Instead more complex models that model core formation as a continuous process are more reasonable, and should be used.
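A minimal numerical sketch of the instantaneous two-stage model age above (the ε values below are illustrative placeholders, not measured data):

```python
import math

LAMBDA_HF182 = 0.078  # decay constant of 182Hf, per Myr (= ln 2 / 8.9 Myr)

def model_age(eps_sample, eps_chondrite, eps_ssi, f):
    """Two-stage Hf-W model age in Myr after CAI formation, assuming
    instantaneous core formation.
    f = (180Hf/184W)_sample / (180Hf/184W)_chondrite."""
    ratio = (eps_sample - eps_chondrite) / ((f - 1.0) * (eps_chondrite - eps_ssi))
    return -math.log(ratio) / LAMBDA_HF182

# Illustrative only: a metal (core) sample with f = 0, i.e. essentially no
# hafnium, whose epsilon-182W froze in at the time of metal segregation.
print(model_age(eps_sample=-3.2, eps_chondrite=-1.9, eps_ssi=-3.45, f=0.0))
# ~2.3 (Myr after CAIs) for these placeholder values
```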
Core formation times for Solar system bodies
The method of hafnium-tungsten dating has been applied to many samples from Solar system bodies and used to provide estimates for the date of core formation. For iron meteorites hafnium-tungsten dating yields ages ranging from less than a million years after the formation of the first solids (calcium-aluminium-rich inclusions, usually called CAIs) to around 3 million years for different meteorite groups. While chondritic meteorites are not differentiated as a whole, hafnium-tungsten dating can still be useful for constraining formation ages by applying it to smaller melt regions in which metals and silicates have separated. For the very well studied carbonaceous chondrite Allende this gives a formation age of around 2.2 million years after the formation of CAIs. Martian meteorites have been examined and indicate that Mars may have been fully formed within 10 million years of the formation of CAIs, which has been used to suggest that Mars is a primordial planetary embryo. For Earth, models of accretion and core formation are strongly dependent on how much giant impacts, like that presumed to have formed the Moon, re-mixed the core and mantle, yielding dates of between 30 and 100 million years after CAIs depending on assumptions.
See also
Radiometric dating
Isotope geochemistry
Planetary differentiation
References
Radiometric dating
Hafnium
Tungsten
Planetary geology | Hafnium–tungsten dating | Chemistry | 1,624 |
22,614,305 | https://en.wikipedia.org/wiki/Space%20science%20in%20Estonia | The cornerstone of Estonian cosmological research is the Tartu Observatory, which was founded in 1812. The observatory has a long tradition of studying galaxies and of theoretically modeling the structure of the universe and its formation. To this day the facility is Estonia's main research centre for astronomy and atmospheric physics, with fundamental research focusing on the physics of galaxies, stellar physics, and remote sensing of the Earth's atmosphere and ground surface. The observatory also played a vital role in launching the career of Jaan Einasto, one of the most eminent Estonian astrophysicists and one of the discoverers of "Dark Matter" and of the cellular structure of the Universe.
Mir involvement
During the Cold War Estonia was associated with, and active in, the extensive space program of the USSR. In the early 1970s the first Soviet Saljut-type space station was equipped with the Estonian-built Mikron, a device for observing noctilucent ("shining night") clouds. Several upgrades of the device remained in service until the mid-1980s, when a more advanced technology was introduced. In the mid-1980s a telespectrometer, FAZA (also known as Phasa), was constructed in Estonia for the Soviet orbital space station Mir. The FAZA had a 10 arc-sec field of view, operated at 340-2850 nm, and was fitted outside the Kvant-2 module. The device was used to study the atmosphere and pollutants.
The first FAZA, which was launched from the Baikonur cosmodrome to enter service on the station Saljut 7, crashed down along with the station a year later over South America, causing an international scandal for the Soviet Union in the region. Several years later, in 1991, a joint space flight conducted by the Soviet Union and Austria ended the service of FAZA, as the device was retired from service.
ESA involvement
Estonia was the first Baltic State to sign a cooperation treaty with the European Space Agency in 2007. The Estonian Space Office coordinates with ESA within the country.
In 2015, Estonia joined the European Space Agency.
Current programs
After re-gaining independence in 1991, Estonia's space research has mainly focused on cosmology. Since 2000, Estonian industry has again been involved in the space sector, where various specialization has taken place. Many Estonian companies are involved in the production of antennas for ground stations for satellite communication, and some have contributed to the Mars Express mission. Furthermore, one Estonian company built a large antenna reflector back structure for a 35-metre ESA radio telescope in Australia, which tracked Mars Express on its way to the red planet.
Science programs
For several years Estonian scientists have collaborated with the ESA program Gaia, which launched a space probe in 2013 to measure the brightness and exact coordinates of millions of space objects, both in the Milky Way galaxy and in more distant galaxies. Estonian scientists have offered their advice on how to measure these objects using spectrophotometry.
Satellite programs
In 2013, a joint research satellite with Finland - a mission to gather data about the possible use of the solar wind.
On 7 May 2013 ESTCube-1 was launched by a Vega rocket. Its mission was officially ended on 17 February 2015.
On 9 October 2023 ESTCube-2 was launched by a Vega rocket.
See also
Soviet space program
Mir orbital space station
References
External links
Strategy for Estonian Space Affairs 2011–2013
Astronomy in Estonia
Space science | Space science in Estonia | Astronomy | 682 |
76,097,725 | https://en.wikipedia.org/wiki/Fallacinol | Fallacinol (teloschistin) is an organic compound in the structural class of chemicals known as anthraquinones. It is found in some lichens, particularly in the family Teloschistaceae, as well as a few plants and non-lichen-forming fungi. In 1936, Japanese chemists isolated a pigment they named fallacin from the lichen Oxneria fallax, which was later refined and assigned a tentative structural formula; by 1949, Indian chemists had isolated a substance from Teloschistes flavicans with an identical structural formula to fallacin. Later research further separated fallacin into two distinct pigments, fallacin-A (later called fallacinal) and fallacin-B (fallacinol). The latter compound is also known as teloschistin due to its structural match with the substance isolated earlier.
History
In 1936, Japanese chemists Mitizo Asano and Sinobu Fuziwara reported on their chemical investigations into the colour pigments of the lichen Xanthoria fallax (now known as Oxneria fallax), found growing on the bark of mulberry trees. They isolated a pigment they named fallacin. A few years later Asano and Yosio Arata further purified the crude material from this lichen, ultimately obtaining an orange-yellow compound with a molecular formula of C16H12O6. Using information from additional chemical tests, they proposed a tentative structural formula for fallacin. In 1949, T. R. Seshadri and S. Subramanian published their investigations into the chemistry of Teloschistes flavicans, a lichen from which they isolated an orange substance they named teloschistin, and which had a structural formula identical to that of fallacin proposed by Asano and Arata years earlier.
In 1956, Takao Murakami reported reexamining the crude pigment obtainable from Xanthoria fallax using Asano's original 1936 procedure. He separated out fallacin from parietin, a co-occurring substance, using several rounds of column chromatography, and showed that Asano's original pigment was actually a combination of two pigments with different melting points, which he designated as fallacin-A and fallacin-B. After chemically determining the structure of fallacin-A, Murakami designated this substance as fallacinal. He named the biogenically related compound fallacin-B as fallacinol. Because of Seshadri and Subramanian's work, this substance is also known as "teloschistin" in the literature.
Extraction and isolation
In an early chemical examination of the lichen Teloschistes flavicans, Friedrich Wilhelm Zopf identified two substances: physcion (now known more commonly as parietin) with a melting point (m.p.) of and an unidentified colourless compound with a m.p. of . Subsequent studies by Seshadri and Sankara Subramanian refined the extraction process, utilising a series of solvents—ether, acetone, and water—to isolate the constituents. The ether phase was found to contain all the crystalline compounds, while subsequent solvents did not yield additional extracts.
Within the ether extract, a colourless compound, referred to as substance A, was separated based on its insolubility in alkali. The alkali-soluble fraction exhibited characteristics of parietin, though impurities complicated its purification. It was eventually purified to a parietin fraction with a melting point of after multiple stages of fractional crystallisation using an alcohol-chloroform mixture.
The presence of another compound with a higher melting point posed a purification challenge, which was resolved by employing petroleum ether and chloroform for sequential extraction. The petroleum ether extract contained the colourless substance A and a majority of parietin, allowing for easier purification of the latter. The chloroform extract revealed the higher-melting compound, which the authors thought was a novel substance, and which they named "teloschistin".
In 1951, Neelakantan and colleagues expanded on the initial identification of fallacinol, focusing on its chemical structure. They confirmed its molecular formula as C16H12O6 and identified it as a hydroxyl derivative of parietin, lacking specific hydroxy groupings that would typically cause fluorescence or colour changes in acidic conditions. To conclusively determine the position of its methoxyl group, fallacinol was chemically altered into a compound with a known methoxyl position, establishing it firmly in the 7-position. This process involved a series of reactions, including demethylation, reduction, and oxidation. Additionally, comparisons with similar anthraquinone derivatives through hydrolysis and other reactions further substantiated the structural findings.
The research also noted the slower-than-expected reaction rates during oxidation, suggesting a distinctive reactivity pattern for fallacinol, possibly due to its additional hydroxyl group. Finally, the study described the anthranol form of fallacinol, providing a reference for its properties and transformative behaviour.
Properties
Fallacinol is a member of the class of chemical compounds called anthraquinones. Its IUPAC name is 1,8-dihydroxy-3-hydroxymethyl-6-methoxyanthraquinone. The absorbance maxima (λmax) of fallacinol in the ultraviolet spectrum occur at 223, 251, 266, and 287 nanometres; the visible spectrum has peaks at 434 and 455 nm. In the infrared spectrum, it has peaks at 1624, 1631, 1670, 3450, and 3520 cm−1. Fallacinol's molecular formula is C16H12O6; it has a molecular mass of 300.26 grams per mole. In its purified crystalline form, it exists as orange needles, with a melting point of .
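As a quick arithmetic check on the quoted molecular mass (a sketch using standard atomic masses):

```python
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol, standard values

def molar_mass(composition):
    """Sum atomic masses over an element -> count mapping."""
    return sum(ATOMIC_MASS[el] * n for el, n in composition.items())

# Fallacinol, C16H12O6
print(round(molar_mass({"C": 16, "H": 12, "O": 6}), 2))  # 300.27, vs 300.26 quoted
```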
It is soluble in cold dilute potassium hydroxide, forming red-violet crystals, and is insoluble in sodium bicarbonate and carbonate solutions. Similar to parietin, it produces a reddish-brown colour with alcoholic ferric chloride and yields a deep orange-red solution with concentrated sulfuric acid, which appears eosin-like in thin layers. To early researchers, these properties suggested that fallacinol was structurally similar to parietin but with an additional oxygen atom, inferred to be a hydroxyl group, based on its higher melting point and reduced solubility. The sparing solubility of its potassium salt and its insolubility in aqueous sodium carbonate suggested a methoxyl group placement consistent with other known compounds like parietin and erythroglaucin.
Like many anthraquinones, fallacinol can be reduced to its anthranol form, where one of the quinone groups is converted to an alcohol group. This reduced form can be prepared by treating fallacinol with zinc dust in boiling acetic acid, yielding lemon-yellow prismatic crystals with a melting point of 249–250 °C. The anthranol form shows distinct colour reactions, dissolving in cold concentrated sulfuric acid to give a golden yellow colour that changes to dark green after about an hour, and slowly dissolving in sodium hydroxide to form a pink solution that eventually deposits violet-red crystals. The compound can be readily oxidised back to fallacinol using chromic acid, demonstrating the reversible nature of this reduction.
Fallacinol was shown to have antifungal activity and antibacterial activity in laboratory tests; it was particularly active against the fungus species Trichoderma harzianum, Aspergillus niger, and Penicillium verrucosum. In a study exploring lichen compounds for COVID-19 therapeutics, fallacinol demonstrated the highest binding energy against SARS-CoV-2's spike protein, suggesting its potential as an inhibitor of virus growth.
Chemical synthesis
A synthetic route to fallacinol has been developed using parietin as an intermediate, highlighting a biogenetic link between the two compounds found in the lichen. The process involves the conversion of parietin diacetate to an ω-bromo derivative via N-bromosuccinimide in the presence of benzoyl peroxide, a technique also applied to various anthraquinones and related compounds. The brominated intermediate is then converted to fallacinol triacetate using silver acetate and acetic anhydride, yielding the target compound. Final steps include hydrolysis with methanolic sulfuric acid to produce fallacinol and a methylation stage for complete conversion. The synthesis not only mirrors the natural biogenesis but also achieves a melting point of , consistent with the purified natural product. An alternative synthesis was proposed in 1984, using a methodology employing Diels–Alder additions of naphthoquinones to mixed trimethylsilyl vinylketene acetals as a route to synthetic hydroxyanthraquinones.
Occurrence
Fallacinol occurs in many species of the Teloschistaceae, a large family of mostly lichen-forming fungi. Historically, the substance was most associated with Caloplaca, Teloschistes, and Xanthoria, but these genera have since been subdivided into many smaller, monophyletic genera. The cultivated mycobiont of Xanthoria fallax, grown in isolation from its green algal photobiont, does not produce fallacinol.
Fallacinol is a common secondary metabolite in the lichen genus Teloschistes, typically occurring in smaller amounts alongside parietin and other related compounds like fallacinal and emodin. In 1970, Johan Santesson proposed a possible biogenetic relationship between the anthraquinone compounds commonly found in Caloplaca. According to this scheme, emodin is methylated to give parietin, which then undergoes three successive oxidations, sequentially forming fallacinol, fallacinal, and then parietinic acid. A chemosyndrome is a set of lichen products produced by a species, which typically includes one or more major compounds and a set of biosynthetically related minor compounds. In 2002, Ulrik Søchting and Patrik Frödén identified chemosyndrome A, the most common chemosyndrome in the genus Teloschistes and in the entire family Teloschistaceae, which features parietin as the main substance with smaller proportions of fallacinol, fallacinal, parietinic acid, and emodin.
Fallacinol has been reported from the bushy shrub plant Senna didymobotrya, widespread in eastern and central Africa, as well as from Reynoutria japonica, a plant in the knotweed family. The substance has been isolated from a culture of the marine sponge-associated fungus Talaromyces stipitatus, and from Dermocybe mushrooms, and has been detected chromatographically in extracts from several Cortinarius mushroom species.
References
Dihydroxyanthraquinones
Lichen products
Polyketides
Methoxy compounds | Fallacinol | Chemistry | 2,389 |
18,195,193 | https://en.wikipedia.org/wiki/Thermo-magnetic%20motor | Thermomagnetic motors (also known as Curie wheels, Curie-motors and pyromagnetic motors) convert heat into kinetic energy using the thermomagnetic effect, i.e., the influence of temperature on the magnetization of a magnetic material.
Historical background
This technology dates back to the 19th century, when a number of scientists submitted patents on so-called "pyro-magnetic generators". These systems operate in a magnetic Brayton cycle, running in reverse relative to magnetocaloric refrigerators. Experiments have produced only extremely inefficient working prototypes; however, thermodynamic analysis indicates that thermomagnetic motors can reach a high efficiency relative to the Carnot efficiency for small temperature differences around the magnetic material's Curie temperature. The thermomagnetic motor principle has been studied as a possible actuator in smart materials, and has been successful in the generation of electric energy from ultra-low temperature gradients.
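For context, the Carnot bound itself is small for the temperature spans involved; for an illustrative working span of 290 K to 300 K around a Curie point,

$$ \eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h} = 1 - \frac{290}{300} \approx 3.3\%, $$

so "high efficiency relative to Carnot" still means converting only a few percent of the heat flow.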
See also
Thermomagnetic convection
References
Electric motors
Magnetic devices | Thermo-magnetic motor | Technology,Engineering | 211 |
58,529,226 | https://en.wikipedia.org/wiki/Jensenism | Jensenism is a term coined by New York Times writer Lee Edson. Named after educational psychologist Arthur Jensen, it was originally defined as "the theory that IQ is largely determined by the genes". The term was coined after Jensen published the article "How Much Can We Boost IQ and Scholastic Achievement?" in the Harvard Educational Review in 1969. It has since been included in several dictionaries.
Background
The IQ gap between white and black students was a subject of debate in the United States, particularly around the 1970s. One view, referred to among behavioral geneticists as the genetic position, holds that IQ is largely determined by hereditary factors, said to account for about 80 percent of the variability in intelligence, with the remaining 20 percent attributed to environmental factors. The gap, therefore, was associated with race. Jensenism was one of the most notable theories to have emerged from this sector. It was based on Arthur Jensen's 1969 article, which discussed the failure of compensatory education. He cited several lines of evidence that he argued demonstrated how IQ is inherited. For instance, he said that if one looks at studies of adopted children, "you find that their intelligence relates more closely to their natural parents." He also proposed that the measured 15-point difference between American blacks and whites could never be eliminated by education.
Reception
Many reactions to Jensen's article and the arguments it contained quickly ensued, some highly favorable and others relentlessly negative, with some directly equating it with racism. Among the latter was a paper by behavioral geneticist Jerry Hirsch, who claimed that Jensenism was an "intellectual disgrace", while also criticizing some of Jensen's earlier critics as resorting to "inarticulate and self-defeating hooliganism". In a 1970 article responding to Jensen, biologist Richard Lewontin argued that Jensenism was a more recent manifestation of 17th-century Jansenism, referring to the former as a "doctrine" that is "as erroneous in the twentieth century as it was in the seventeenth." Evolutionary biologist Stephen Jay Gould also criticized Jensenism, arguing that it rested "on a rotten edifice." Jensen's ideas reportedly received a more favorable reception in the Nixon administration; Lewontin quoted then-United States ambassador to India Daniel Patrick Moynihan in 1974 as saying, "The winds of Jensenism are blowing through Washington with gale force."
More recently, several favorable articles defending Jensen and his ideas have criticized the frequent negative use of the term "Jensenism". These include the journal Intelligence, which devoted an entire issue honoring Jensen and his work. Linda Gottfredson also claimed:
Despite such defenses, however, the current scientific consensus is that genetics do not explain IQ differences between racial groups.
References
Cognitive psychology
20th-century neologisms
Race and intelligence controversy
Intelligence quotient | Jensenism | Biology | 572 |
2,146,043 | https://en.wikipedia.org/wiki/Thermochromic%20ink | Thermochromic ink (also called thermochromatic ink) is a type of dye that changes color in response to a change in temperature. It was first used in the 1970s in novelty toys like mood rings, but has found some practical uses in things such as thermometers, product packaging, and pens. The ink has also found applications within the medical field for specific medical simulations in medical training. Thermochromic ink can also turn transparent when heat is applied; an example of this type of ink can be found on the corners of an examination mark sheet to prove that the sheet has not been edited or photocopied.
Composition
There are two main variants of thermochromic ink, one composed of leuco dyes and one composed of liquid crystals. For both types of ink, the chemicals need to be contained within capsules around 3 to 5 microns across. This protects the dyes and crystals from mixing with other chemicals that might affect the functionality of the ink.
Leuco dyes
The leuco dye variant is typically composed of leuco dyes with additional chemicals to add different desired effects. It is the most commonly used type because it is easier to manufacture. They can be designed to react to changes in temperature that range from -15 °C to 60 °C. Most common applications of the ink have activation temperatures at -10 °C (cold), 31 °C (body temperature), or 43 °C (warm). At lower temperatures, the ink appears to be a certain color, and once the temperature increases, the ink becomes either translucent or lightly colored, allowing hidden patterns to be seen. This gives the effect of a change in color, and the process can also be reversed by lowering the temperature again.
Liquid crystals
Liquid crystals can change from liquid to solid in response to a change in temperature. At lower temperatures, the crystals are mostly solid and hardly reflect any light, causing it to appear black. As it gradually increases in temperature, the crystals become more spaced out, causing light to reflect differently and changing the color of the crystals. The temperatures at which these crystals change their properties can range from -30 °C to 90 °C.
Applications
On June 20, 2017, the United States Postal Service released the first application of thermochromic ink to postage stamps in its Total Eclipse of the Sun Forever stamp to commemorate the solar eclipse of August 21, 2017. When pressed with a finger, body heat turns the black circle in the center of the stamp into an image of the full moon. The stamp image is a photo of a total solar eclipse seen in Jalu, Libya, on March 29, 2006. The photo was taken by retired NASA astrophysicist Fred Espenak, aka "Mr. Eclipse".
Medical uses
In medical training, thermochromic ink can be used to imitate human blood because it shares its color-changing property. It is currently being tested in medical simulations involving extracorporeal membrane oxygenation (ECMO). In these procedures, a change in the color of blood between a dark and light red indicates blood oxygenation and deoxygenation, which reflects the oxygen concentration in a person's blood. It is important to identify this change accurately in order to safely and correctly operate the ECMO machines. This has led to simulation-based training (SBT), which allows medical students to run simulations that mimic real ECMO machines before using them in serious situations. By using thermochromic ink in these simulations, the color-changing effect can be realistically copied and observed without using real human blood or other costly methods.
Artificial blood or animal blood is typically used in these simulations; however, there are some advantages in using thermochromic ink as an alternative. It can be reused for multiple simulations with minimal variance in the outcomes and it is more cost effective. There are limitations to using this as the ink does not share any other properties with blood, so its only practical use is to observe the change in color of blood.
Product packaging
Product packaging is an important aspect of maintaining the quality of consumer goods. Modern day packaging is split into 2 categories; active packaging and smart packaging. Thermochromic ink has found use in smart packaging, which is the aspect of packaging that deals with monitoring the condition of the products. Since most consumer goods are affected by changes in temperature, using thermochromic ink as an indicator of those temperature changes allows consumers to recognize when the quality of a product has changed. It can also be used to tell consumers the right temperatures to consume the product.
Erasable ink pens
In 2006, the Pilot Corporation of Japan developed a pen with erasable ink that utilized thermochromic ink. The ink was composed of a solvent, a colorant, and a resin film-forming agent. At temperatures below 65 °C, the ink stayed in a colored state. Once temperatures went above 65 °C, the ink began to melt and became colorless, creating the effect of erasable ink. The ink was able to return to its colored state by cooling it to below -10 °C.
See also
Thermochromism
Security printing
Active packaging
References
Thermochromism
Dyes
Spectroscopy
Materials science | Thermochromic ink | Physics,Chemistry,Materials_science,Engineering | 1,066 |
27,135,340 | https://en.wikipedia.org/wiki/Seaweed%20farming | Seaweed farming or kelp farming is the practice of cultivating and harvesting seaweed. In its simplest form, farmers gather seaweed from natural beds, while at the other extreme farmers fully control the crop's life cycle.
The seven most cultivated taxa are Eucheuma spp., Kappaphycus alvarezii, Gracilaria spp., Saccharina japonica, Undaria pinnatifida, Pyropia spp., and Sargassum fusiforme. Eucheuma and K. alvarezii are attractive for carrageenan (a gelling agent); Gracilaria is farmed for agar; the rest are eaten after limited processing. Seaweeds are different from mangroves and seagrasses, as they are photosynthetic algal organisms and are non-flowering.
The largest seaweed-producing countries as of 2022 are China (58.62%) and Indonesia (28.6%); followed by South Korea (5.09%) and the Philippines (4.19%). Other notable producers include North Korea (1.6%), Japan (1.15%), Malaysia (0.53%), Zanzibar (Tanzania, 0.5%), and Chile (0.3%). Seaweed farming has frequently been developed to improve economic conditions and to reduce fishing pressure.
The Food and Agriculture Organization (FAO) reported that world production in 2019 was over 35 million tonnes. North America produced some 23,000 tonnes of wet seaweed. Alaska, Maine, France, and Norway each more than doubled their seaweed production since 2018. As of 2019, seaweed represented 30% of marine aquaculture.
Seaweed is a carbon-negative crop, with a high potential for climate change mitigation. The IPCC Special Report on the Ocean and Cryosphere in a Changing Climate recommends "further research attention" as a mitigation tactic. World Wildlife Fund, Oceans 2050, and The Nature Conservancy publicly support expanded seaweed cultivation.
Methods
The earliest seaweed farming guides in the Philippines recommended the cultivation of Laminaria seaweed on reef flats at approximately one meter's depth at low tide. They also recommended cutting off seagrasses and removing sea urchins before farm construction. Seedlings are tied to monofilament lines and strung between mangrove stakes in the substrate. This off-bottom method remains a primary method.
Long-line cultivation methods can be used in water approximately in depth. Floating cultivation lines are anchored to the bottom and are widely used in North Sulawesi, Indonesia. Species cultured by long-line include those of the genera Saccharina, Undaria, Eucheuma, Kappaphycus, and Gracilaria.
Cultivation in Asia is relatively low-technology with a high labor requirement. Attempts to introduce technology to cultivate detached plant growth in tanks on land to reduce labor have yet to attain commercial viability.
Diseases
A bacterial infection called ice-ice stunts seaweed crops. In the Philippines, a 15 percent reduction in one species appeared from 2011 to 2013, representing 268,000 tonnes of seaweed.
Ecological impacts
Seaweed is an extractive crop that has little need for fertilisers or water, meaning that seaweed farms typically have a smaller environmental footprint than other agriculture or fed aquaculture. Many of the impacts of seaweed farms, both positive and negative, remain understudied and uncertain.
Nonetheless, many environmental problems can result from seaweed farming. For instance, seaweed farmers sometimes cut down mangroves to use as stakes. Removing mangroves negatively affects farming by reducing water quality and mangrove biodiversity. Farmers may remove eelgrass from their farming areas, damaging water quality.
Seaweed farming can pose a biosecurity risk, as farming activities have the potential to introduce or facilitate invasive species. For this reason, regions such as the UK, Maine and British Columbia only allow native varieties.
Farms may also have positive environmental effects. They may support welcome ecosystem services such as nutrient cycling, carbon uptake, and habitat provision.
Evidence suggests that seaweed farming can have positive impacts, including supplementing human diets, feeding livestock, creating biofuels, slowing climate change and providing crucial habitat for marine life, but it must scale sustainably in order to have these effects. One way for seaweed farming to scale to terrestrial farming levels is with the use of ROVs, which can install low-cost helical anchors that extend seaweed farming into unprotected waters.
Seaweed can be used to capture, absorb, and incorporate excess nutrients into living tissue. This practice, known as nutrient bioextraction (or bioharvesting), is the farming and harvesting of shellfish and seaweed to remove nitrogen and other nutrients from natural water bodies.
Similarly, seaweed farms may offer habitat that enhances biodiversity. Seaweed farms have been proposed as a way to protect coral reefs by increasing diversity and providing habitat for local marine species. Farming may increase the production of herbivorous fish and shellfish. Pollnac reported an increase in siganid populations after the start of Eucheuma farming in villages in North Sulawesi.
Economic impacts
In Japan the annual production of nori amounts to US$2 billion and is one of the world's most valuable aquaculture crops. The demand for seaweed production provides plentiful work opportunities.
A study conducted in the Philippines reported that net income from Eucheuma farming on plots of approximately one hectare was 5 to 6 times the average wage of an agricultural worker. The study also reported an increase in seaweed exports from 675 metric tons (MT) in 1967 to 13,191 MT in 1980, and 28,000 MT by 1988.
About 0.7 million tonnes of carbon are removed from the sea each year by commercially harvested seaweeds. In Indonesia, seaweed farms account for 40 percent of the national fisheries output and employ about one million people.
The Safe Seaweed Coalition is a research and industry group that promotes seaweed cultivation.
Tanzania
Seaweed farming has had widespread socio-economic impacts in Tanzania: it has become an important source of income for women and is the third biggest contributor of foreign currency to the country. 90% of the farmers are women, and much of the crop is used by the skincare and cosmetics industry.
In 1982 Adelaida K. Semesi began a programme of research into seaweed cultivation in Zanzibar and its application resulted in greater investment in the industry.
Uses
Farmed seaweed is used in industrial products, as food, as an ingredient in animal feed, and as source material for biofuels.
Chemicals
Seaweeds are used to produce chemicals that can be used for various industrial, pharmaceutical, or food products. Two major derivative products are carrageenan and agar. Bioactive ingredients can be used for industries such as pharmaceuticals, industrial food, and cosmetics.
Carrageenan
Agar
Food
Fuel
Climate change mitigation
Seaweed cultivation in the open ocean can act as a form of carbon sequestration to mitigate climate change. Studies have reported that nearshore seaweed forests constitute a source of blue carbon, as seaweed detritus is carried into the middle and deep ocean, thereby sequestering carbon. Macrocystis pyrifera (also known as giant kelp) sequesters carbon faster than any other species and is among the longest and fastest-growing of all seaweeds. According to one study, covering 9% of the world's oceans with kelp forests could produce "sufficient biomethane to replace all of today's needs in fossil fuel energy, while removing 53 billion tons of CO2 per year from the atmosphere, restoring pre-industrial levels".
Seaweed farming may be an initial step towards adapting to and mitigating climate change, offering several co-benefits. These include shoreline protection through the dissipation of wave energy, which is especially important to mangrove shorelines. Carbon dioxide uptake would raise pH locally, benefiting calcifiers (e.g. crustaceans) and potentially reducing coral bleaching. Finally, seaweed farming could provide oxygen input to coastal waters, thus countering ocean deoxygenation driven by rising ocean temperature.
Tim Flannery claimed that growing seaweeds in the open ocean, facilitated by artificial upwelling and substrate, can enable carbon sequestration if seaweeds are sunk to depths greater than one kilometer.
Seaweed contributes approximately 16–18.7% of the total marine-vegetation carbon sink. In 2010 there were 19.2 million tons of aquatic plants harvested worldwide: 6.8 million tons of brown seaweeds, 9.0 million tons of red seaweeds, 0.2 million tons of green seaweeds, and 3.2 million tons of miscellaneous aquatic plants. Seaweed is largely transported from coastal areas to the open and deep ocean, acting as a permanent store of carbon biomass within marine sediments.
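A quick arithmetic check (a minimal sketch in Python) confirms that the component figures above are internally consistent with the quoted world total:

```python
# 2010 harvests by seaweed group, in millions of tons, as quoted above.
components = {"brown": 6.8, "red": 9.0, "green": 0.2, "miscellaneous": 3.2}

total = sum(components.values())
print(total)  # 19.2, matching the stated world total

# Percentage share of each group in the total harvest.
shares = {name: round(100 * mass / total, 1) for name, mass in components.items()}
print(shares)  # red seaweeds account for roughly 46.9%
```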
Ocean afforestation is a proposal to farm seaweed for carbon removal. After harvesting, the seaweed is decomposed into biogas (60% methane and 40% carbon dioxide) in an anaerobic digester. The methane can be used as a biofuel, while the carbon dioxide can be stored to keep it out of the atmosphere.
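As a rough illustration of this pathway, the sketch below applies the 60/40 methane/carbon-dioxide split described above to a digester's biogas output; the harvest tonnage and per-tonne biogas yield are illustrative assumptions, not figures from the proposal.

```python
# Back-of-envelope sketch of the ocean-afforestation biogas pathway.
# Only the 60/40 CH4/CO2 split comes from the text; the harvest mass and
# the biogas yield per wet tonne are assumed, illustrative values.

harvest_tonnes = 1_000.0      # wet seaweed fed to the digester (assumed)
biogas_m3_per_tonne = 22.0    # biogas yield per wet tonne (assumed)

biogas_m3 = harvest_tonnes * biogas_m3_per_tonne
ch4_m3 = 0.60 * biogas_m3     # methane fraction, usable as a biofuel
co2_m3 = 0.40 * biogas_m3     # carbon dioxide fraction, sent to storage

print(f"{ch4_m3:,.0f} m3 of CH4 for fuel, {co2_m3:,.0f} m3 of CO2 for storage")
```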
Marine permaculture
Similarly, the NGO Climate Foundation and permaculture experts claimed that offshore seaweed ecosystems can be cultivated according to permaculture principles, constituting marine permaculture. The concept envisions using artificial upwelling and floating, submerged platforms as substrate to replicate natural seaweed ecosystems that provide habitat and the basis of a trophic pyramid for marine life. Seaweeds and fish can be sustainably harvested. As of 2020, successful trials had taken place in Hawaii, the Philippines, Puerto Rico and Tasmania. The idea featured as a solution covered by the documentary 2040 and in the book Drawdown: The Most Comprehensive Plan Ever Proposed to Reverse Global Warming.
History
Human use of seaweed is known from the Neolithic period. Cultivation of gim (laver) in Korea is reported in books from the 15th century. Seaweed farming began in Japan as early as 1670 in Tokyo Bay. In autumn of each year, farmers would throw bamboo branches into shallow, muddy water, where the spores of the seaweed would collect. A few weeks later these branches would be moved to a river estuary. Nutrients from the river helped the seaweed to grow.
In the 1940s, the Japanese improved this method by placing nets of synthetic material tied to bamboo poles. This effectively doubled production. A cheaper variant of this method is called the hibi method—ropes stretched between bamboo poles. In the early 1970s, demand for seaweed and seaweed products outstripped supply, and cultivation was viewed as the best means to increase production.
In the tropics, commercial cultivation of Caulerpa lentillifera (sea grapes) was pioneered in the 1950s in Cebu, Philippines, after accidental introduction of C. lentillifera to fish ponds on the island of Mactan. This was further developed by local research, particularly through the efforts of Gavino Trono, since recognized as a National Scientist of the Philippines. Local research and experimental cultures led to the development of the first commercial farming methods for other warm-water algae (since cold-water red and brown edible algae favored in East Asia do not grow in the tropics), including the first successful commercial cultivation of carrageenan-producing algae. These include Eucheuma spp., Kappaphycus alvarezii, Gracilaria spp., and Halymenia durvillei. In 1997, it was estimated that 40,000 people in the Philippines made their living through seaweed farming. The Philippines was the world's largest producer of carrageenan for several decades until it was overtaken by Indonesia in 2008.
Seaweed farming spread beyond Japan and the Philippines to southeast Asia, Canada, Great Britain, Spain, and the United States.
In the 2000s, seaweed farming has been getting increasing attention due to its potential for mitigating both climate change and other environmental issues, such as agricultural runoff. Seaweed farming can be mixed with other aquaculture, such as shellfish, to improve water bodies, as in the practices developed by the American non-profit GreenWave.
In 2024 a commercial-scale seaweed farm began construction within the Hollandse Kust Zuid (HKZ) 139 turbine wind farm. The project uses 13-metre long "Eco-anchors" that cover the surface with a marine life habitat using materials such as oyster shells, wood, and cork.
See also
Seaweed fertilizer
Algaculture
Aquaculture of giant kelp
Natural resources of island countries
Seaweed cultivator
References
Sources
External links
Algaculture
Blue carbon
Seaweeds
Sustainable food system | Seaweed farming | Biology | 2,648 |
28,579,539 | https://en.wikipedia.org/wiki/Flora%20Lapponica | Flora Lapponica (Amsterdam, 1737) is an account of the plants of Lapland written by botanist, zoologist and naturalist Carl Linnaeus (1707–1778) following his expedition to Lapland.
Background
Over the period from 12 May 1732 to 10 September 1732, and with a grant from the Royal Society of Sciences in Uppsala for his journey, Linnaeus was able to combine his interest in medicine with that of natural history to travel for five months in Lapland collecting animals, plants, and minerals.
Classification used
In Flora Lapponica Linnaeus's ideas about nomenclature and classification were first used in a practical way, making this the first proto-modern Flora. The account covered 534 species, used the Linnaean classification system and included, for the described species, geographical distribution and taxonomic notes. Augustin Pyramus de Candolle credited Linnaeus's Flora Lapponica as the first example of the botanical genre of Flora writing. Botanical historian E.L. Greene described Flora Lapponica as “the most classic and delightful of Linnaeus’s writings”.
Commemorative personal names
A Lapland plant, Linnaea borealis, was named by the eminent botanist Jan Frederik Gronovius in commemoration of Linnaeus's achievements. In the Critica Botanica, Linnaeus uses this name to advocate the use of commemorative personal names as botanical names.
Updated edition
An updated edition of this work was published in 1792 by James Edward Smith, citing Linnaeus as the main author and using Linnaeus's binomial nomenclature. These books are not to be confused with Georg (Göran) Wahlenberg's 1812 "Flora Lapponica", which organized species according to their vegetation types and geographic areas.
References
External links
1737 non-fiction books
1737 in science
1792 non-fiction books
1792 in science
18th-century books in Latin
Florae (publication)
Carl Linnaeus
Flora of Finland
Flora of Sweden
Sápmi
Botany in Europe
History of Lapland (Finland) | Flora Lapponica | Biology | 394 |
4,401,864 | https://en.wikipedia.org/wiki/Tirgan | Tirgan (Tirgān) is an ancient Iranian festival of early summer, celebrated annually on Tir 13 (July 2, 3, or 4).
It is celebrated by splashing water, dancing, reciting poetry, and serving traditional foods such as spinach soup and sholezard. The custom of tying rainbow-colored bands on wrists, which are worn for ten days and then thrown into a stream, is also a way to rejoice for children.
Overview
Tirgan is an ancient Iranian tradition which is still celebrated in various regions of Iran, including Mazandaran, Khorasan, and Arak. It is widely attested by historians such as Gardezi, Biruni, and Masudi, as well as by European travelers during the Safavid era.
The celebration is dedicated to Tishtrya, a Yazata who appeared in the sky to generate thunder and lightning for much needed rain.
Legend says that Arash the Archer was a man chosen to settle a land dispute between the leaders of the lands of Iran and Turan. Arash was to loose his arrow on the 13th day of Tir, and where the arrow landed would lie the border between the two kingdoms. Turan had suffered from a lack of rain, and Iran rejoiced at the settlement of the borders; rain then poured onto the two countries and there was peace between them.
It is stated in Biruni's chronology that "by the order of God, the wind bore the arrow away from the mountains of Amol and brought the utmost frontier of Khorasan between Fergana and Tapuria." Gardizi has given a similar description, although he notes that "the arrow of Arash fell in the area between Fargana and Bactria."
Ceremony
Tirgan is celebrated every year in Mazandaran Province and Amol in northern Iran, the capital Tehran, Karaj, and the central and southern cities of Yazd, Meybod, Ardakan, Kerman, Bam, Shiraz, Isfahan, Ahvaz, and Farahan. Iranians of the Zoroastrian faith also celebrate this outside Iran, in Europe and the US.
Zoroastrianism
In the Avesta, a religious text associated with Zoroastrianism, it is said: We praise Tishter, the star of the mighty and glorious, who swiftly moves towards that direction. The star Chisht flies towards it, speeding toward the vast ocean; just as that arrow in the air shot by Arash the archer, the greatest archer of the Aryans, was thrown from the mountain of Airwakhshoth towards the mountain of Khwanvant.
See also
Tirgan Festival
Summer solstice
Mehregan
Nowruz
Yalda
Notes
Ancient Iranian religion
Festivals in Iran
July observances
Observances set by the Solar Hijri calendar
Persian words and phrases
Religious festivals in Iran
Summer events in Iran
Zoroastrian festivals
Summer solstice | Tirgan | Astronomy | 608 |
65,503,180 | https://en.wikipedia.org/wiki/Xenocs | Xenocs is a scientific instrumentation company based in Grenoble, France, providing instruments, software and related services for x-ray characterization of materials, in particular Small Angle X-ray Scattering (SAXS) and Wide Angle X-ray Scattering (WAXS).
Xenocs products are typically used by universities, research institutes and corporate labs in projects focused on research, development and process optimization of a wide range of new materials. Application segments range from nanomaterials, polymers, food, consumer care and energy to biomaterials and pharmaceuticals.
As of September 2020, the Xenocs group reported 75 employees.
History
Xenocs was founded in 2000 as a spin-off from Institut Laue Langevin in Grenoble, France, by Ian Anderson, Frédéric Bossan and Peter Høghøj, with the latter two forming the management team.
In 2001 the company moved to nearby Sassenage and set up facilities for production of X-ray, EUV and neutron optics. In 2002 it launched the FOX2D line of single reflection multilayer coated x-ray optics, followed in 2006 by the GeniX micro-focus x-ray source and the FOX3D aspheric multilayer coated x-ray optics building on a range of patents. In 2008 it launched products for (virtually) scatterless x-ray collimation, allowing for increased performance of SAXS equipment, leading to the launch of the Xeuss SAXS instrument product-line in 2010.
In 2014, Xenocs launched the Nano-inXider compact SAXS equipment at the IUCr conference in collaboration with CEA and Arkema. The same year was also the International Year of Crystallography and Xenocs co-organized the IUCr-UNESCO Open Factory held in December.
At the end of 2016, Xenocs acquired SAXSLAB with offices near Copenhagen, Denmark and Amherst, MA, USA. Xenocs combined their own Xeuss product line with the newly acquired SAXSLAB product lines and developed the Xeuss 3.0 SAXS/WAXS beamline and Xenocs XSACT software for data reduction and analysis.
References
External links
Official website
X-ray crystallography
X-ray equipment manufacturers
Privately held companies of France
French companies established in 2000 | Xenocs | Chemistry,Materials_science | 472 |
4,827 | https://en.wikipedia.org/wiki/Biomedical%20engineering | Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes). BME seeks to close the gap between engineering and medicine, combining the design and problem-solving skills of engineering with medical and biological sciences to advance health care treatment, including diagnosis, monitoring, and therapy. Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a Biomedical Equipment Technician (BMET) or as a clinical engineer.
Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields. Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, imaging technologies such as MRI and EKG/ECG, regenerative tissue growth, and the development of pharmaceutical drugs including biopharmaceuticals.
Subfields and related fields
Bioinformatics
Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data.
Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences.
Biomechanics
Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics.
Biomaterials
A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science.
Biomedical optics
Biomedical optics combines the principles of physics, engineering, and biology to study the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment. It has a wide range of applications, including optical imaging, microscopy, ophthalmoscopy, spectroscopy, and therapy. Examples of biomedical optics techniques and technologies include optical coherence tomography (OCT), fluorescence microscopy, confocal microscopy, and photodynamic therapy (PDT). OCT, for example, uses light to create high-resolution, three-dimensional images of internal structures, such as the retina in the eye or the coronary arteries in the heart. Fluorescence microscopy involves labeling specific molecules with fluorescent dyes and visualizing them using light, providing insights into biological processes and disease mechanisms. More recently, adaptive optics is helping imaging by correcting aberrations in biological tissue, enabling higher resolution imaging and improved accuracy in procedures such as laser surgery and retinal imaging.
Tissue engineering
Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology – which overlaps significantly with BME.
One of the goals of tissue engineering is to create artificial organs (via biological material) for patients that need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solid jawbones and tracheas from human stem cells towards this end. Several artificial urinary bladders have been grown in laboratories and transplanted successfully into human patients. Bioartificial organs, which use both synthetic and biological components, are also a focus area in research, such as with hepatic assist devices that use liver cells within an artificial bioreactor construct.
Genetic engineering
Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but see biological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research.
Neural engineering
Neural engineering (also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Neural engineering can assist with numerous things, including the future development of prosthetics. For example, cognitive neural prosthetics (CNP) are being heavily researched and would allow for a chip implant to assist people who have prosthetics by providing signals to operate assistive devices.
Pharmaceutical engineering
Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of chemical engineering, and pharmaceutical analysis. It may be deemed as a part of pharmacy due to its focus on the use of technology on chemical agents in providing better medicinal treatment.
Hospital and medical devices
This is an extremely broad category—essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means, and do not involve metabolism.
A medical device is intended for use in:
the diagnosis of disease or other conditions
in the cure, mitigation, treatment, or prevention of disease.
Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants.
Stereolithography is a practical example of medical modeling being used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies, treatments, patient monitoring, and diagnosis of complex diseases.
Medical devices are regulated and classified (in the US) as follows (see also Regulation):
Class I devices present minimal potential for harm to the user and are often simpler in design than Class II or Class III devices. Devices in this category include tongue depressors, bedpans, elastic bandages, examination gloves, and hand-held surgical instruments, and other similar types of common equipment.
Class II devices are subject to special controls in addition to the general controls of Class I devices. Special controls may include special labeling requirements, mandatory performance standards, and postmarket surveillance. Devices in this class are typically non-invasive and include X-ray machines, PACS, powered wheelchairs, infusion pumps, and surgical drapes.
Class III devices generally require premarket approval (PMA) or premarket notification (510k), a scientific review to ensure the device's safety and effectiveness, in addition to the general controls of Class I. Examples include replacement heart valves, hip and knee joint implants, silicone gel-filled breast implants, implanted cerebellar stimulators, implantable pacemaker pulse generators and endosseous (intra-bone) implants.
Medical imaging
Medical/biomedical imaging is a major segment of medical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (such as due to their size, and/or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means.
Alternatively, navigation-guided equipment utilizes electromagnetic tracking technology, such as catheter placement into the brain or feeding tube placement systems. For example, ENvizion Medical's ENvue, an electromagnetic navigation system for enteral feeding tube placement. The system uses an external field generator and several EM passive sensors enabling scaling of the display to the patient's body contour, and a real-time view of the feeding tube tip location and direction, which helps the medical staff ensure the correct placement in the GI tract.
Imaging technologies are often essential to medical diagnosis, and are typically the most complex equipment found in a hospital including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy.
Medical implants
An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone or apatite depending on what is the most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents.
Bionics
Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools.
Biomedical sensors
In recent years biomedical sensors based on microwave technology have gained more attention. Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions; for example, microwave sensors can be used as a complementary technique to X-ray to monitor lower-extremity trauma. The sensors monitor the dielectric properties and can thus detect changes in tissue (bone, muscle, fat, etc.) under the skin, so when measurements are made at different times during the healing process, the response from the sensor changes as the trauma heals.
Clinical engineering
Clinical engineering is the branch of biomedical engineering dealing with the actual implementation of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervising biomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experiences, as well as monitor the progression of the state of the art so as to redirect procurement patterns accordingly.
Their inherent focus on practical implementation of technology has tended to keep them oriented more towards incremental-level redesigns and reconfigurations, as opposed to revolutionary research & development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time-horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end-users, by combining the perspectives of being both close to the point-of-use, while also trained in product and process engineering. Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. Also, see safety engineering for a discussion of the procedures used to design safe systems. A clinical engineering department is typically staffed with a manager, a supervisor, engineers, and technicians, with a common ratio of one engineer per eighty hospital beds. Clinical engineers are also authorized to audit pharmaceutical and associated stores to monitor FDA recalls of invasive items.
Rehabilitation engineering
Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community.
While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of biomedical engineering, most rehabilitation engineers have undergraduate or graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. Qualification to become a rehabilitation engineer in the UK is possible via a university BSc Honours degree course such as that of the Health Design & Technology Institute, Coventry University.
The rehabilitation process for people with disabilities often entails the design of assistive devices such as Walking aids intended to promote the inclusion of their users into the mainstream of society, commerce, and recreation.
Regulatory issues
Regulatory issues have been constantly increased in the last decades to respond to the many incidents caused by devices to patients. For example, from 2008 to 2011, in US, there were 119 FDA recalls of medical devices classified as class I. According to U.S. Food and Drug Administration (FDA), Class I recall is associated to "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death"
Regardless of the country-specific legislation, the main regulatory objectives coincide worldwide. For example, in the medical device regulations, a product must be: 1) safe, 2) effective, and 3) both of these for all manufactured devices.
A product is safe if patients, users, and third parties do not run unacceptable risks of physical hazards (death, injuries, ...) in its intended use. Protective measures have to be introduced on the devices to reduce residual risks at an acceptable level if compared with the benefit derived from the use of it.
A product is effective if it performs as specified by the manufacturer in the intended use. Effectiveness is achieved through clinical evaluation, compliance to performance standards or demonstrations of substantial equivalence with an already marketed device.
The previous features have to be ensured for all the manufactured items of the medical device. This requires that a quality system shall be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle.
The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medical devices, drugs, biologics, and combination products. The paramount objectives driving policy decisions by the FDA are safety and effectiveness of healthcare products that have to be assured through a quality system in place as specified under the 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510K "clearance" (typically for Class 2 devices) or pre-market "approval" (typically for drugs and class 3 devices).
In the European context, safety, effectiveness and quality are ensured through the "Conformity Assessment", which is defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the European Medical Device Directive". The directive specifies different procedures according to the class of the device, ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), Production quality assurance (Annex V), Product quality assurance (Annex VI) and Full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliverables such as the risk management file, the technical file, and the quality system deliverables. The risk management file is the first deliverable that conditions the following design and manufacturing steps. The risk management stage shall drive the product so that product risks are reduced at an acceptable level with respect to the benefits expected for the patients for the use of the device. The technical file contains all the documentation data and records supporting medical device certification. The FDA technical file has similar content although organized in a different structure. The Quality System deliverables usually include procedures that ensure quality throughout all product life cycles. The same standard (ISO EN 13485) is usually applied for quality management systems in the US and worldwide.
In the European Union, there are certifying entities named "Notified Bodies", accredited by the European Member States. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from the class I devices where a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear a CE marking, indicating that the device is believed to be safe and effective when used as intended, and, therefore, it can be marketed within the European Union area.
The different regulatory arrangements sometimes result in particular technologies being developed first for either the U.S. or in Europe depending on the more favorable form of regulation. While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about the optimal extent of regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments.
RoHS II
Directive 2011/65/EU, better known as RoHS 2 is a recast of legislation originally introduced in 2002. The original EU legislation "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" (RoHS Directive 2002/95/EC) was replaced and superseded by 2011/65/EU published in July 2011 and commonly known as RoHS 2.
RoHS seeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled.
The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports – or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure Electrical and Electronic Equipment within the scope of RoHS complies with the hazardous substances limits and have a CE mark on their products.
IEC 60601
The international standard IEC 60601 for home healthcare electro-medical devices defines the requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must now be incorporated into the design and verification of a wide range of home use and point-of-care medical devices, along with other applicable standards in the IEC 60601 3rd edition series.
The mandatory date for implementation of the EN European version of the standard is June 1, 2013. The US FDA requires the use of the standard on June 30, 2013, while Health Canada recently extended the required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the more severe approach of requiring all applicable devices being placed on the market to consider the home healthcare standard.
AS/NZS 3551:2012
AS/NZS 3551:2012 is the Australian and New Zealand standard for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g. a hospital). The standard is based on the IEC 60601 standards.
The standard covers a wide range of medical equipment management elements including, procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing) and decommissioning.
Training and certification
Education
Biomedical engineers require considerable knowledge of both engineering and biology, and typically have a Bachelor's (B.Sc., B.S., B.Eng. or B.S.E.) or Master's (M.S., M.Sc., M.S.E., or M.Eng.) or a doctoral (Ph.D., or MD-PhD) degree in BME (Biomedical Engineering) or another branch of engineering with considerable potential for BME overlap. As interest in BME increases, many engineering colleges now have a Biomedical Engineering Department or Program, with offerings ranging from the undergraduate (B.Sc., B.S., B.Eng. or B.S.E.) to doctoral levels. Biomedical engineering has only recently been emerging as its own discipline rather than a cross-disciplinary hybrid specialization of other disciplines; and BME programs at all levels are becoming more widespread, including the Bachelor of Science in Biomedical Engineering which includes enough biological science content that many students use it as a "pre-med" major in preparation for medical school. The number of biomedical engineers is expected to rise as both a cause and effect of improvements in medical technology.
In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. As of 2023, 155 programs are currently accredited by ABET.
In Canada and Australia, accredited graduate programs in biomedical engineering are common. For example, McMaster University offers an M.A.Sc., an MD/PhD, and a PhD in biomedical engineering. The first Canadian undergraduate BME program was offered at the University of Guelph as a four-year B.Eng. program. Polytechnique Montréal and Flinders University also offer bachelor's degrees in biomedical engineering.
As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations. With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program.
Graduate education is a particularly important aspect in BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in their field, the majority of BME positions do prefer or even require them. Since most BME-related professions involve scientific research, such as in pharmaceutical and medical device development, graduate education is almost a requirement (as undergraduate degrees typically do not involve sufficient research training and experience). This can be either a Masters or Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards.
Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the university's Medical School or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, or another engineering discipline (plus certain life science coursework), or life science (plus certain engineering coursework).
Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards. Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education. Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME.
Licensure/certification
As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registered Professional Engineer (PE), but in US industry such a license is not required for employment as an engineer in the majority of situations (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been to require licensure only of practicing engineers offering engineering services that impact the public welfare, safety, safeguarding of life, health, or property, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is for law or medicine.
Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required.
In the UK, mechanical engineers working in the areas of Medical Engineering, Bioengineering or Biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division. The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in Biomedical Engineering and Chartered Engineering status can also be sought through IPEM.
The Fundamentals of Engineering exam – the first (and more general) of two licensure examinations for most U.S. jurisdictions—does now cover biology (although technically not BME). For the second exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates may select a particular engineering discipline's content to be tested on; there is currently not an option for BME with this, meaning that any biomedical engineers seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) is, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure.
Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for Clinical engineers.
Career prospects
In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 5% (faster than average) from 2012 to 2022. Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions. As of 2023, there were about 19,700 jobs in the field, with median pay of around $100,730 per year (about $48.43 an hour), and employment is projected to grow by 7% from 2023 to 2033 (again faster than average).
Notable figures
Julia Tutelman Apter (deceased) – One of the first specialists in neurophysiological research and a founding member of the Biomedical Engineering Society
Earl Bakken (deceased) – Invented the first transistorised pacemaker, co-founder of Medtronic.
Forrest Bird (deceased) – aviator and pioneer in the invention of mechanical ventilators
Y.C. Fung (deceased) – professor emeritus at the University of California, San Diego, considered by many to be the founder of modern biomechanics
Leslie Geddes (deceased) – professor emeritus at Purdue University, electrical engineer, inventor, and educator of over 2000 biomedical engineers, received a National Medal of Technology in 2006 from President George Bush for his more than 50 years of contributions that have spawned innovations ranging from burn treatments to miniature defibrillators, ligament repair to tiny blood pressure monitors for premature infants, as well as a new method for performing cardiopulmonary resuscitation (CPR).
Willem Johan Kolff (deceased) – pioneer of hemodialysis as well as in the field of artificial organs
Robert Langer – Institute Professor at MIT, runs the largest BME laboratory in the world, pioneer in drug delivery and tissue engineering
John Macleod (deceased) – one of the co-discoverers of insulin (at the University of Toronto); previously a professor at Case Western Reserve University.
Alfred E. Mann – Physicist, entrepreneur and philanthropist. A pioneer in the field of Biomedical Engineering.
J. Thomas Mortimer – Emeritus professor of biomedical engineering at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Robert M. Nerem – professor emeritus at Georgia Institute of Technology. Pioneer in regenerative tissue, biomechanics, and author of over 300 published works. His works have been cited more than 20,000 times cumulatively.
P. Hunter Peckham – Donnell Professor of Biomedical Engineering and Orthopaedics at Case Western Reserve University. Pioneer in Functional Electrical Stimulation (FES)
Nicholas A. Peppas – Chaired Professor in Engineering, University of Texas at Austin, pioneer in drug delivery, biomaterials, hydrogels and nanobiotechnology.
Robert Plonsey – professor emeritus at Duke University, pioneer of electrophysiology
Otto Schmitt (deceased) – biophysicist with significant contributions to BME, working with biomimetics
Ascher Shapiro (deceased) – Institute Professor at MIT, contributed to the development of the BME field, medical devices (e.g. intra-aortic balloons)
Gordana Vunjak-Novakovic – University Professor at Columbia University, pioneer in tissue engineering and bioreactor design
John G. Webster – professor emeritus at the University of Wisconsin–Madison, a pioneer in the field of instrumentation amplifiers for the recording of electrophysiological signals
Fred Weibell, coauthor of Biomedical Instrumentation and Measurements
U.A. Whitaker (deceased) – provider of the Whitaker Foundation, which supported research and education in BME by providing over $700 million to various universities, helping to create 30 BME programs and helping finance the construction of 13 buildings
See also
Biomedical Engineering and Instrumentation Program (BEIP)
References
Bureau of Labor Statistics, U.S. Department of Labor, Occupational Outlook Handbook, "Bioengineers and Biomedical Engineers", retrieved October 27, 2024.
Further reading
External links | Biomedical engineering | Engineering,Biology | 6,530 |
815,776 | https://en.wikipedia.org/wiki/Sixteen-segment%20display | A sixteen-segment display (SISD) is a type of display based on sixteen segments that can be turned on or off to produce a graphic pattern. It is an extension of the more common seven-segment display, adding four diagonal and two vertical segments and splitting the three horizontal segments in half. Other variants include the fourteen-segment display which does not split the top or bottom horizontal segments, and the twenty-two-segment display that allows lower-case characters with descenders.
Often a character generator is used to translate 7-bit ASCII character codes to the 16 bits that indicate which of the 16 segments to turn on or off.
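As a concrete illustration, the sketch below implements such a character generator as a lookup table from ASCII codes to 16-bit segment words. The segment labels follow one common convention, but the bit order (bit 0 = A1 through bit 15 = M) and the three font entries are hypothetical choices for illustration; real display drivers each define their own mapping.

```python
# Minimal sketch of a sixteen-segment character generator.
# Segments: A1 A2 (split top), B C (right side), D1 D2 (split bottom),
# E F (left side), G1 G2 (split middle), H J K M (diagonals), I L (verticals).
# The bit order below (bit 0 = A1 ... bit 15 = M) is an assumed convention.

SEGMENTS = "A1 A2 B C D1 D2 E F G1 G2 H I J K L M".split()

def pattern(*lit):
    """Pack a set of lit segment names into a 16-bit word."""
    word = 0
    for name in lit:
        word |= 1 << SEGMENTS.index(name)
    return word

# Tiny excerpt of a font table keyed by 7-bit ASCII code (illustrative).
FONT = {
    ord("A"): pattern("A1", "A2", "B", "C", "E", "F", "G1", "G2"),
    ord("H"): pattern("B", "C", "E", "F", "G1", "G2"),
    ord("1"): pattern("B", "C"),
}

def encode(text):
    """Translate ASCII text into 16-bit segment words (blank if unknown)."""
    return [FONT.get(ord(ch), 0) for ch in text]

for word in encode("AH1"):
    print(f"{word:016b}")
```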
Applications
Sixteen-segment displays were originally designed to display alphanumeric characters (Latin letters and Arabic digits). Later they were used to display Thai numerals and Persian characters. Non-electronic displays using this pattern existed as early as 1902.
Before the advent of inexpensive dot-matrix displays, sixteen and fourteen-segment displays were used to produce alphanumeric characters on calculators and other embedded systems. Later they were used on videocassette recorders (VCR), DVD players, microwave ovens, car stereos, telephone Caller ID displays, and slot machines.
Sixteen-segment displays may be based on one of several technologies, the three most common optoelectronics types being LED, LCD and VFD. The LED variant is typically manufactured in single or dual character packages, to be combined as needed into text line displays of a suitable length for the application in question; they can also be stacked to build multiline displays.
As with seven and fourteen-segment displays, a decimal point and/or comma may be present as an additional segment, or pair of segments; the comma (used for triple-digit groupings or as a decimal separator in many regions) is commonly formed by combining the decimal point with a closely 'attached' leftwards-descending arc-shaped segment. This way, a point or comma may be displayed between character positions instead of occupying a whole position by itself, which would be the case if employing the bottom middle vertical segment as a point and the bottom left diagonal segment as a comma. Such displays were very common on pinball machines for displaying the score and other information, before the widespread use of dot-matrix display panels.
Examples
See also
Seven-segment display
Eight-segment display
Nine-segment display
Fourteen-segment display
Dot-matrix display
Nixie tube display
Vacuum fluorescent display
References
External links
View and create sixteen-segment display characters - Editable SVG-Font, Open Font License
Sixteen Segment Display with the HTML5 Canvas
Web App to design segment-display
Spinning segment display
TwentyfourSixteen — CC0 sixteen segment TTF font based on the HP/Siemens/Litronix DL-2416 character set
Display technology | Sixteen-segment display | Engineering | 581 |
1,627,125 | https://en.wikipedia.org/wiki/Glucose%20meter | A glucose meter, also referred to as a "glucometer", is a medical device for determining the approximate concentration of glucose in the blood. Glucose can also be estimated with a strip of glucose paper that is dipped into the sample and compared against a glucose chart. It is a key element of glucose testing, including home blood glucose monitoring (HBGM) performed by people with diabetes mellitus or hypoglycemia. A small drop of blood, obtained from slightly piercing a fingertip with a lancet, is placed on a disposable test strip that the meter reads and uses to calculate the blood glucose level. The meter then displays the level in units of mg/dL or mmol/L.
Since approximately 1980, a primary goal of the management of type 1 diabetes and type 2 diabetes mellitus has been achieving closer-to-normal levels of glucose in the blood for as much of the time as possible, guided by HBGM several times a day. The benefits include a reduction in the occurrence rate and severity of long-term complications from hyperglycemia as well as a reduction in the short-term, potentially life-threatening complications of hypoglycemia.
History
Leland Clark presented his first paper about the oxygen electrode, later named the Clark electrode, on 15 April 1956, at a meeting of the American Society for Artificial Organs during the annual meetings of the Federated Societies for Experimental Biology.
In 1962, Clark and Ann Lyons from the Cincinnati Children's Hospital developed the first glucose enzyme electrode. This biosensor was based on a thin layer of glucose oxidase (GOx) on an oxygen electrode. Thus, the readout was the amount of oxygen consumed by GOx during the enzymatic reaction with the substrate glucose. This publication became one of the most often cited papers in life sciences. Due to this work he is considered the “father of biosensors,” especially with respect to the glucose sensing for diabetes patients.
Another early glucose meter was the Ames Reflectance Meter by Anton H. Clemens. It was used in American hospitals in the 1970s. A moving needle indicated the blood glucose after about a minute.
Home glucose monitoring was demonstrated to improve glycemic control of type 1 diabetes in the late 1970s, and the first meters were marketed for home use around 1981. The two models initially dominant in North America in the 1980s were the Glucometer, introduced in November 1981, whose trademark is owned by Bayer, and the Accu-Chek meter (by Roche). Consequently, these brand names have become synonymous with the generic product to many health care professionals. In Britain, a health care professional or a patient may refer to "taking a BM": "Mrs X's BM is 5", etc. BM stands for Boehringer Mannheim, now part of Roche, who produce test strips called 'BM-test' for use in a meter.
In North America, hospitals resisted adoption of meter glucose measurements for inpatient diabetes care for over a decade. Managers of laboratories argued that the superior accuracy of a laboratory glucose measurement outweighed the advantage of immediate availability and made meter glucose measurements unacceptable for inpatient diabetes management. Patients with diabetes and their endocrinologists eventually persuaded acceptance. Prior to its discontinuation in July 2021, the YSI 2300 STAT PLUS Glucose and Lactate Analyzer was widely accepted as the de facto standard for reference measurements and system calibration by most manufacturers of glucometers for the past 30 years, despite there being no such regulatory requirement.
Home glucose testing was adopted for type 2 diabetes more slowly than for type 1, and a large proportion of people with type 2 diabetes have never been instructed in home glucose testing. This has mainly come about because health authorities are reluctant to bear the cost of the test strips and lancets.
Non-meter test strips
Test strips that changed color and could be read visually, without a meter, have been widely used since the 1980s. They had the added advantage that they could be cut longitudinally to save money. Critics argued that test strips read by eye are not as accurate or convenient as meter testing. The manufacturer cited studies that show the product is just as effective despite not giving an answer to one decimal place, something they argue is unnecessary for control of blood sugar. This debate also happened in Germany where "Glucoflex-R" was an established strip for type 2 diabetes. As meter accuracy and insurance coverage improved, they lost popularity.
"Glucoflex-R" is Australia manufacturer National Diagnostic Products alternative to the BM test strip. It has versions that can be used either in a meter or read visually. It is also marketed under the brand name Betachek. On May 1, 2009, the UK distributor Ambe Medical Group reduced the price of their "Glucoflex-R" test strip to the NHS, by approximately 50%.
Types of meters
Hospital glucose meters
Special glucose meters for multi-patient hospital use are now used. These provide more elaborate quality control records. Their data handling capabilities are designed to transfer glucose results into electronic medical records and the laboratory computer systems for billing purposes.
Test strip meters
There are several key characteristics of glucose meters which may differ from model to model:
Size: The typical size is smaller than the palm of the hand. They are battery-powered.
Test strips: A consumable element, different for each meter, containing spots impregnated with glucose oxidase, which reacts with glucose, and other components. A drop of blood is absorbed by a spot for each measurement. Some models use single-use plastic test strips with a spot; other models use discs, drums, or cartridges with multiple spots to make several readings.
Coding: Since test strips may vary from batch to batch, some models require a code to be provided, either by the user or on a plug-in chip supplied with each batch of test strips, to calibrate the meter to the strips of the batch. An incorrect code can cause errors of up to 4 mmol/L (72 mg/dL), with possibly serious consequences, including risk of hypoglycemia. Some test media contain the code information in the strip.
Volume of blood sample: The size of the drop of blood needed by different models varies from 0.3 to 1 μl. Older models required larger blood samples, usually defined as a "hanging drop" from the fingertip. Smaller volume requirements reduce the frequency of pricks that do not produce enough blood.
Alternate site testing: Smaller drop volumes have enabled "alternate site testing" – pricking the forearms or other less sensitive areas instead of the fingertips. Manufacturers recommend that this type of testing should only be used when blood glucose levels are stable, such as before meals, when fasting, or just before going to sleep.
Duration of test: The time it takes for a reading to be displayed may range from 3 to 60 seconds from application of blood for different models.
Display: The glucose value in mg/dL or mmol/L (1 mmol/L = 18.0 mg/dL) is displayed on a digital display. Different countries use different measurement units: for example mg/dL are used in the US, France, Japan, Iran, Israel, and India; mmol/L are used in Australia, Canada, China, and the UK. In Germany both units are used. Many meters can display either unit of measure; the conversion is a fixed linear rescaling (see the sketch after this list). Instances have been published in which a patient has interpreted a reading in mmol/L as a very low reading in mg/dL or vice versa. Usually mmol/L readings have a decimal point and mg/dL readings do not.
Countries that use mmol/L include Australia, Canada, China, Croatia, Czech Republic, Denmark, Finland, Hong Kong, Hungary, Iceland, Ireland, Jamaica, Kazakhstan, Latvia, Lithuania, Malaysia, Malta, Netherlands, New Zealand, Norway, Russia, Slovakia, Slovenia, South Africa, Sweden, Switzerland, and United Kingdom.
Countries that use mg/dL include Algeria, Argentina, Austria, Bangladesh, Belgium, Brazil, Chile, Colombia, Cyprus, Ecuador, Egypt, France, Georgia, Germany, Greece, India, Indonesia, Iran, Israel, Italy, Japan, Jordan, Lebanon, Mexico, Peru, Poland, Portugal, South Korea, Spain, Syria, Taiwan, Thailand, Tunisia, Turkey, United Arab Emirates, United States, Uruguay, Venezuela, and Yemen.
Glucose vs. plasma glucose: Glucose levels in plasma (one of the components of blood) are higher than glucose measurements in whole blood; the difference is about 11% when the hematocrit is normal. This is important because home blood glucose meters measure the glucose in whole blood while most lab tests measure the glucose in plasma. Currently, there are many meters on the market that give results as "plasma equivalent," even though they are measuring whole blood glucose. The plasma equivalent is calculated from the whole blood glucose reading using an equation built into the glucose meter (see the sketch after this list). This allows patients to easily compare their glucose measurements in a lab test and at home. It is important for patients and their health care providers to know whether the meter gives its results as "whole blood equivalent" or "plasma equivalent." One model also measures beta-hydroxybutyrate in the blood to detect ketosis, covering both unhealthy ketoacidosis and healthy nutritional ketosis.
Memory and timestamping: Most meters include a memory to store test results, timestamped by a clock set by the user, and many can display an average of recent readings. Stored data will only reflect trends accurately if the clock is set to approximately the right time.
Data transfer: Some meters can transfer stored data, typically to a computer or mobile phone running diabetes management software. Meters have been combined with devices such as insulin injection devices, PDAs, and cellular transmitters.
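Both the display-unit conversion and the plasma-equivalent correction described in this list are simple linear rescalings. A minimal sketch follows; the plasma factor is an assumption based on the ~11% figure quoted above for a normal hematocrit, not any particular meter's built-in equation.

```python
# Sketch of two conversions described in the list above. PLASMA_FACTOR is an
# assumed illustrative value; real meters apply a manufacturer-specific equation.
MGDL_PER_MMOLL = 18.0   # 1 mmol/L = 18.0 mg/dL
PLASMA_FACTOR = 1.11    # plasma glucose is roughly 11% above whole-blood glucose

def mgdl_to_mmoll(mgdl: float) -> float:
    return mgdl / MGDL_PER_MMOLL

def plasma_equivalent(whole_blood_mgdl: float) -> float:
    return whole_blood_mgdl * PLASMA_FACTOR

print(mgdl_to_mmoll(90.0))                  # 5.0 mmol/L
print(round(plasma_equivalent(100.0), 1))   # 111.0 mg/dL
```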
Cost
The cost of home blood glucose monitoring can be substantial due to the cost of the test strips. In 2006, the US cost to consumers of each glucose strip ranged from about US$0.35 to $1.00. Manufacturers often provide meters at no cost to encourage use of the profitable test strips. People with type 1 diabetes may test as often as 4 to 10 times a day due to the dynamics of insulin adjustment, whereas people with type 2 diabetes typically test less frequently, especially when insulin is not part of their treatment. In the UK, where the National Health Service (NHS) rather than patients pays for medications including test strips, a 2015 study on the comparative cost-effectiveness of all options for the self-monitoring of blood glucose funded by the NHS uncovered considerable variation in the price charged, which could not be explained by the availability of advanced meter features. It estimated that a total of £12m was invested in providing 42 million self-monitoring blood glucose tests with systems that failed to meet acceptable accuracy standards, and efficiency savings of £23.2m per annum were achievable if the NHS were to disinvest from technologies providing less functionality than available alternatives, but at a much higher price.
Batches of counterfeit test strips for some meters were found in the United States, producing erratic test results that do not meet the legitimate manufacturer's performance specifications.
Noninvasive meters
The search for a successful technique began about 1975 and has continued to the present without a clinically or commercially viable product. To date, only one such product has ever been approved for sale by the FDA, based on a technique for electrically pulling glucose through intact skin, and it was withdrawn after a short time owing to poor performance and occasional damage to the skin of users.
Continuous glucose monitors
Continuous glucose monitor systems can consist of a disposable sensor placed under the skin, a transmitter connected to the sensor and a reader that receives and displays the measurements. The sensor can be used for several days before it needs to be replaced. The devices provide real-time measurements, and reduce the need for fingerprick testing of glucose levels. A drawback is that the meters are not as accurate because they read the glucose levels in the interstitial fluid which lags behind the levels in the blood. Continuous blood glucose monitoring systems are also relatively expensive.
Accuracy
Accuracy of glucose meters is a common topic of clinical concern. Blood glucose meters must meet accuracy standards set by the International Organization for Standardization (ISO). According to ISO 15197, blood glucose meters must provide results that are within ±15% of a laboratory standard for concentrations at or above 100 mg/dL, or within ±15 mg/dL for concentrations below 100 mg/dL, at least 95% of the time. However, a variety of factors can affect the accuracy of a test. Factors affecting the accuracy of various meters include calibration of the meter, ambient temperature, pressure used to wipe off the strip (if applicable), size and quality of the blood sample, high levels of certain substances (such as ascorbic acid) in the blood, hematocrit, dirt on the meter, humidity, and aging of the test strips. Models vary in their susceptibility to these factors and in their ability to prevent or warn of inaccurate results with error messages. The Clarke Error Grid has been a common way of analyzing and displaying the accuracy of readings relative to their management consequences. More recently an improved version of the Clarke Error Grid has come into use, known as the Consensus Error Grid. Older blood glucose meters often need to be "coded" with the lot of test strips used; otherwise, the accuracy of the blood glucose meter may be compromised due to lack of calibration.
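As a sketch of the per-reading criterion just quoted (a real conformity assessment follows the full ISO 15197 protocol, which this does not reproduce):

```python
# Sketch of the ISO 15197 per-reading criterion described above: +/-15 mg/dL of
# the reference below 100 mg/dL, +/-15% at or above 100 mg/dL.
def within_iso15197(meter_mgdl: float, reference_mgdl: float) -> bool:
    if reference_mgdl < 100.0:
        return abs(meter_mgdl - reference_mgdl) <= 15.0
    return abs(meter_mgdl - reference_mgdl) <= 0.15 * reference_mgdl

for meter, ref in [(92, 85), (180, 165), (60, 80)]:
    print(meter, ref, within_iso15197(meter, ref))   # True, True, False
# A meter must satisfy the criterion for at least 95% of its readings.
```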
Future
One noninvasive glucose meter has been approved by the U.S. FDA: the GlucoWatch G2 Biographer made by Cygnus Inc. The device was designed to be worn on the wrist and used electric fields to draw out body fluid for testing. The device did not replace conventional blood glucose monitoring. One limitation was that the GlucoWatch was not able to cope with perspiration at the measurement site: sweat had to be allowed to dry before measurement could resume. Due to this limitation and others, the product is no longer on the market.
The market introduction of noninvasive blood glucose measurement by spectroscopic methods in the near-infrared (NIR) range, using extracorporeal measuring devices, has not been successful because such devices measure tissue sugar in body tissues rather than the sugar in the blood itself. To determine blood glucose, the measuring beam of infrared light, for example, has to penetrate the tissue far enough to reach the blood.
There are currently three CGMS (continuous glucose monitoring systems) available. The first is Medtronic's Minimed Paradigm RTS, with a subcutaneous probe attached to a small transmitter (roughly the size of a quarter) that sends interstitial glucose levels to a small pager-sized receiver every five minutes. The Dexcom System is another, available in the US in two different generations, the G4 and the G5 (as of the first quarter of 2016). It is a hypodermic probe with a small transmitter. The receiver is about the size of a cell phone and can operate up to twenty feet from the transmitter. The Dexcom G4 transmits via radio frequency and requires a dedicated receiver. The G5 version utilizes Bluetooth Low Energy for data transmission and can transmit data directly to a compatible cellular telephone; currently, Apple's iPhone and Android devices can be used as a receiver. Aside from a two-hour calibration period, monitoring is logged at five-minute intervals for up to one week. The user can set high and low glucose alarms. The third CGMS available is the FreeStyle Navigator from Abbott Laboratories.
There is currently an effort to develop an integrated treatment system with a glucose meter, insulin pump, and wrist-top controller, as well as an effort to integrate the glucose meter and a cell phone. Testing strips are proprietary and available only through the manufacturer (with no insurance availability). These "Glugophones" are currently offered in three forms: as a dongle for the iPhone, an add-on pack for LG model UX5000, VX5200, and LX350 cell phones, and an add-on pack for the Motorola Razr cell phone. In the US, this limits providers to AT&T and Verizon. Similar systems have been tested for a longer time in Finland.
Recent advances in cellular data communications technology have enabled the development of glucose meters that directly integrate cellular data transmission capability, enabling the user to both transmit glucose data to the medical caregiver and receive direct guidance from the caregiver on the screen of the glucose meter. The first such device, from Telcare, Inc., was exhibited at the 2010 CTIA International Wireless Expo, where it won an E-Tech award. This device then underwent clinical testing in the US and internationally.
In early 2014 Google reported testing prototypes of contact lenses that monitor glucose levels and alert users when glucose levels cross certain thresholds. Apple has patented methods for determining blood sugar levels by absorption spectroscopy, as well as by analyzing exhaled air in its electronic devices.
Technology
Many glucose meters employ the oxidation of glucose to gluconolactone catalyzed by glucose oxidase (sometimes known as GOx). Others use a similar reaction catalysed instead by another enzyme, glucose dehydrogenase (GDH). This has the advantage of sensitivity over glucose oxidase but is more susceptible to interfering reactions with other substances.
The first-generation devices relied on the same colorimetric reaction that is still used nowadays in glucose test strips for urine. Besides glucose oxidase, the test kit contains a benzidine derivative, which is oxidized to a blue polymer by the hydrogen peroxide formed in the oxidation reaction. The disadvantage of this method was that the test strip had to be developed after a precise interval (the blood had to be washed away), and the meter needed to be calibrated frequently.
Most glucometers today use an electrochemical method. Test strips contain a capillary that sucks up a reproducible amount of blood. The glucose in the blood reacts with an enzyme electrode containing glucose oxidase (or dehydrogenase). The enzyme is reoxidized with an excess of a mediator reagent, such as a ferricyanide ion, a ferrocene derivative or osmium bipyridyl complex. The mediator in turn is reoxidized by reaction at the electrode, which generates an electric current. In order for the mediator to operate over long timeframes, it needs to be stable in both oxidised and reduced states. This is to allow for continuous regeneration of the oxidised form of the mediator for shuttling of electrons from enzyme to active site. Osmium-based polypyridyl redox complexes and polymers are attractive candidates as mediators due to their stability in oxidised and reduced forms, tunable redox potential, ease of co-immobilisation and ability to operate at low potentials.
The total charge passing through the electrode is proportional to the amount of glucose in the blood that has reacted with the enzyme. The coulometric method is a technique where the total amount of charge generated by the glucose oxidation reaction is measured over a period of time. The amperometric method is used by some meters and measures the electric current generated at a specific point in time by the glucose reaction. This is analogous to throwing a ball and using the speed at which it is travelling at a point in time to estimate how hard it was thrown. The coulometric method can allow for variable test times, whereas the test time on a meter using the amperometric method is always fixed. Both methods give an estimation of the concentration of glucose in the initial blood sample.
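For illustration, a coulometric estimate reduces to integrating the current trace over the test time and applying a calibration constant; every number below is invented for the sketch and does not correspond to any particular meter.

```python
# Illustrative coulometric sketch: total charge = integral of current over time,
# then a (made-up) calibration constant maps charge to glucose concentration.
import numpy as np

t = np.linspace(0.0, 5.0, 501)          # seconds
current = 2e-6 * np.exp(-t / 1.5)       # amperes, an example decaying trace
# trapezoidal integration of the current trace -> total charge in coulombs
charge = np.sum(0.5 * (current[1:] + current[:-1]) * np.diff(t))
K_CAL = 4.0e7                           # mg/dL per coulomb, hypothetical calibration
print(round(charge * K_CAL, 1))         # estimated glucose concentration in mg/dL
```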
The same principle is used in test strips that have been commercialized for the detection of diabetic ketoacidosis (DKA). These test strips use a beta-hydroxybutyrate-dehydrogenase enzyme instead of a glucose oxidizing enzyme and have been used to detect and help treat some of the complications that can result from prolonged hyperglycemia.
Blood alcohol sensors using the same approach, but with alcohol dehydrogenase enzymes, have been tried and patented but have not yet been successfully commercially developed.
Meter use for hypoglycemia
Although the apparent value of immediate measurement of blood glucose might seem to be higher for hypoglycemia (low sugar) than hyperglycemia (high sugar), meters have been less useful. The primary problems are precision and ratio of false positive and negative results. An imprecision of ±15% is less of a problem for high glucose levels than low. There is little difference in the management of a glucose of 200 mg/dL compared with 260 (i.e., a "true" glucose of 230±15%), but a ±15% error margin at a low glucose concentration brings greater ambiguity with regards to glucose management.
The imprecision is compounded by the relative likelihoods of false positives and negatives in populations with diabetes and those without. People with type 1 diabetes usually have a wider range of glucose levels, and glucose peaks above normal, often ranging from 40 to 500 mg/dL (2.2 to 28 mmol/L), and when a meter reading of 50 or 70 (2.8 or 3.9 mmol/L) is accompanied by their usual hypoglycemic symptoms, there is little uncertainty about the reading representing a "true positive" and little harm done if it is a "false positive." However, the incidence of hypoglycemia unawareness, hypoglycemia-associated autonomic failure (HAAF) and faulty counterregulatory response to hypoglycemia make the need for greater reliability at low levels particularly urgent in patients with type 1 diabetes mellitus, while this is seldom an issue in the more common form of the disease, type 2 diabetes mellitus.
In contrast, people who do not have diabetes may periodically have hypoglycemic symptoms but may also have a much higher rate of false positives to true, and a meter is not accurate enough to base a diagnosis of hypoglycemia upon. A meter can occasionally be useful in the monitoring of severe types of hypoglycemia (e.g., congenital hyperinsulinism) to ensure that the average glucose when fasting remains above 70 mg/dL (3.9 mmol/L).
See also
ISO/IEEE 11073
References
Diabetes-related supplies and medical equipment
Physiological instruments
Medical testing equipment
Drugs developed by Hoffmann-La Roche
American inventions | Glucose meter | Technology,Engineering | 4,667 |
34,057,814 | https://en.wikipedia.org/wiki/De-identification | De-identification is the process used to prevent someone's personal identity from being revealed. For example, data produced during human subject research might be de-identified to preserve the privacy of research participants. Biological data may be de-identified in order to comply with HIPAA regulations that define and stipulate patient privacy laws.
When applied to metadata or general data about identification, the process is also known as data anonymization. Common strategies include deleting or masking personal identifiers, such as personal name, and suppressing or generalizing quasi-identifiers, such as date of birth. The reverse process of using de-identified data to identify individuals is known as data re-identification. Successful re-identifications cast doubt on de-identification's effectiveness. A systematic review of fourteen distinct re-identification attacks found "a high re-identification rate […] dominated by small-scale studies on data that was not de-identified according to existing standards".
De-identification is adopted as one of the main approaches toward data privacy protection. It is commonly used in fields of communications, multimedia, biometrics, big data, cloud computing, data mining, internet, social networks, and audio–video surveillance.
Examples
In designing surveys
When surveys are conducted, such as a census, they collect information about a specific group of people. To encourage participation and to protect the privacy of survey respondents, the researchers attempt to design the survey in a way that when people participate in a survey, it will not be possible to match any participant's individual response(s) with any data published.
Before using information
When an online shopping website wants to know its users' preferences and shopping habits, it may retrieve customer data from its database and run analyses on it. The personal data include personal identifiers which were collected directly when customers created their accounts. The website needs to pre-process the data through de-identification techniques before analyzing the records in order to avoid violating its customers' privacy.
Anonymization
Anonymization refers to irreversibly severing a data set from the identity of the data contributor in a study to prevent any future re-identification, even by the study organizers under any condition. De-identification may also include preserving identifying information which can only be re-linked by a trusted party in certain situations. There is a debate in the technology community on whether data that can be re-linked, even by a trusted party, should ever be considered de-identified.
Techniques
Common strategies of de-identification are masking personal identifiers and generalizing quasi-identifiers. Pseudonymization is the main technique used to mask personal identifiers from data records, and k-anonymization is usually adopted for generalizing quasi-identifiers.
Pseudonymization
Pseudonymization is performed by replacing real names with a temporary ID. It deletes or masks personal identifiers to render individuals unidentifiable. This method makes it possible to track the individual's record over time even as the record is updated. However, it cannot prevent the individual from being identified if specific combinations of attributes in the data record indirectly identify the individual.
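A minimal sketch of this idea follows (field names are hypothetical; a real system would also have to secure the name-to-ID mapping table):

```python
# Pseudonymization sketch: each real name maps to a stable temporary ID, so a
# person's records can still be linked over time without exposing the name.
import itertools

_counter = itertools.count(1)
_pseudonyms = {}  # real name -> temporary ID (must be stored securely)

def pseudonymize(name: str) -> str:
    if name not in _pseudonyms:
        _pseudonyms[name] = f"ID-{next(_counter):05d}"
    return _pseudonyms[name]

records = [{"name": "Alice Smith", "visit": 1}, {"name": "Alice Smith", "visit": 2}]
for r in records:
    r["name"] = pseudonymize(r["name"])
print(records)  # both visits carry ID-00001, so the record remains trackable
```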
k-anonymization
k-anonymization defines attributes that indirectly point to the individual's identity as quasi-identifiers (QIs), and deals with data by ensuring that at least k individuals share each occurring combination of QI values. QI values are handled following specific standards. For example, k-anonymization replaces some original data in the records with new range values while keeping some values unchanged. The new combinations of QI values prevent individuals from being identified while avoiding the destruction of the data records.
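The sketch below (hypothetical fields and generalization rules) shows one way to generalize QIs and then verify the k-anonymity property:

```python
# k-anonymity sketch: generalize quasi-identifiers (age -> decade bucket,
# ZIP -> 3-digit prefix), then check every QI combination occurs >= k times.
from collections import Counter

def generalize(record):
    return (record["age"] // 10 * 10,        # 34 -> 30, a range value
            record["zip"][:3] + "**")        # "90210" -> "902**"

def is_k_anonymous(records, k):
    counts = Counter(generalize(r) for r in records)
    return all(n >= k for n in counts.values())

data = [{"age": 34, "zip": "90210"}, {"age": 36, "zip": "90233"},
        {"age": 52, "zip": "10001"}, {"age": 57, "zip": "10002"}]
print(is_k_anonymous(data, k=2))  # True: each equivalence class has two members
```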
Applications
Research into de-identification is driven mostly for protecting health information. Some libraries have adopted methods used in the healthcare industry to preserve their readers' privacy.
In big data, de-identification is widely adopted by individuals and organizations. With the development of social media, e-commerce, and big data, de-identification is sometimes required, and often used, for data privacy when users' personal data are collected by companies or third-party organizations that analyze the data for their own purposes.
In smart cities, de-identification may be required to protect the privacy of residents, workers and visitors. Without strict regulation, de-identification may be difficult because sensors can still collect information without consent.
Data De-identification
PHI (protected health information) can be present in various data formats, and each format needs specific techniques and tools to de-identify it:
Text de-identification uses rule-based and NLP (natural language processing) approaches.
PDF de-identification builds on text de-identification; in most cases it also requires OCR and specific techniques to hide PHI within the PDF.
DICOM de-identification requires cleaning metadata, pixel data, and encapsulated documents.
Limits
Whenever a person participates in genetics research, the donation of a biological specimen often results in the creation of a large amount of personalized data. Such data is uniquely difficult to de-identify.
Anonymization of genetic data is particularly difficult because of the huge amount of genotypic information in biospecimens, the ties that specimens often have to medical history, and the advent of modern bioinformatics tools for data mining. There have been demonstrations that data for individuals in aggregate collections of genotypic data sets can be tied to the identities of the specimen donors.
Some researchers have suggested that it is not reasonable to ever promise participants in genetics research that they can retain their anonymity, but instead, such participants should be taught the limits of using coded identifiers in a de-identification process.
De-identification laws in the United States of America
In May 2014, the United States President's Council of Advisors on Science and Technology found de-identification "somewhat useful as an added safeguard" but not "a useful basis for policy" as "it is not robust against near‐term future re‐identification methods".
The HIPAA Privacy Rule provides mechanisms for using and disclosing health data responsibly without the need for patient consent. These mechanisms center on two HIPAA de-identification standards – Safe Harbor and the Expert Determination Method. Safe harbor relies on the removal of specific patient identifiers (e.g. name, phone number, email address, etc.), while the Expert Determination Method requires knowledge and experience with generally accepted statistical and scientific principles and methods to render information not individually identifiable.
Safe harbor
The safe harbor method uses a list approach to de-identification and has two requirements:
The removal or generalization of 18 elements from the data.
That the Covered Entity or Business Associate does not have actual knowledge that the residual information in the data could be used alone, or in combination with other information, to identify an individual. Safe Harbor is a highly prescriptive approach to de-identification. Under this method, all dates must be generalized to year and zip codes reduced to three digits. The same approach is used on the data regardless of the context. Even if the information is to be shared with a trusted researcher who wishes to analyze the data for seasonal variations in acute respiratory cases and, thus, requires the month of hospital admission, this information cannot be provided; only the year of admission would be retained.
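A sketch of the two generalizations named above, dates reduced to the year and ZIP codes reduced to three digits (this simplifies the full rule, which among other things handles certain sparsely populated ZIP prefixes differently):

```python
# Safe Harbor sketch (simplified): drop a direct identifier, keep only the year
# of a date, and truncate the ZIP code to its 3-digit prefix. Field names are
# hypothetical; the full rule covers 18 categories of identifiers.
import datetime

def safe_harbor_generalize(record: dict) -> dict:
    out = dict(record)
    out.pop("name", None)                        # direct identifier removed
    out["admission"] = record["admission"].year  # date generalized to year
    out["zip"] = record["zip"][:3]               # ZIP reduced to three digits
    return out

rec = {"name": "Jane Doe", "admission": datetime.date(2021, 3, 14), "zip": "90210"}
print(safe_harbor_generalize(rec))               # {'admission': 2021, 'zip': '902'}
```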
Expert Determination
Expert Determination takes a risk-based approach to de-identification that applies current standards and best practices from the research to determine the likelihood that a person could be identified from their protected health information. This method requires that a person with appropriate knowledge of and experience with generally accepted statistical and scientific principles and methods render the information not individually identifiable. It requires:
That the risk is very small that the information could be used alone, or in combination with other reasonably available information, by an anticipated recipient to identify an individual who is a subject of the information;
Documentation of the methods and results of the analysis that justify such a determination.
Research on decedents
The key law governing research in electronic health record data is the HIPAA Privacy Rule. This law allows the use of the electronic health records of deceased subjects for research (HIPAA Privacy Rule, section 164.512(i)(1)(iii)).
See also
Adversarial stylometry
Genetic privacy
Statistical disclosure control
References
External links
A training series on United States government de-identification standards
Guidance Regarding Methods for De-identification of Protected Health Information
Information privacy
Data protection
Research ethics
Electronic health records | De-identification | Technology,Engineering | 1,711 |
8,327,127 | https://en.wikipedia.org/wiki/Dense-in-itself | In general topology, a subset A of a topological space X is said to be dense-in-itself or crowded if A has no isolated point.
Equivalently, A is dense-in-itself if every point of A is a limit point of A.
Thus A is dense-in-itself if and only if A ⊆ A′, where A′ is the derived set of A.
A dense-in-itself closed set is called a perfect set. (In other words, a perfect set is a closed set without isolated point.)
The notion of dense set is distinct from dense-in-itself. This can sometimes be confusing, as "X is dense in X" (always true) is not the same as "X is dense-in-itself" (no isolated point).
Examples
A simple example of a set that is dense-in-itself but not closed (and hence not a perfect set) is the set of irrational numbers (considered as a subset of the real numbers). This set is dense-in-itself because every neighborhood of an irrational number x contains at least one other irrational number y ≠ x. On the other hand, the set of irrationals is not closed because every rational number lies in its closure. Similarly, the set of rational numbers is also dense-in-itself but not closed in the space of real numbers.
The above examples, the irrationals and the rationals, are also dense sets in their topological space, namely ℝ. As an example that is dense-in-itself but not dense in its topological space, consider ℚ ∩ [0,1]. This set is not dense in ℝ but is dense-in-itself.
Properties
A singleton subset of a space can never be dense-in-itself, because its unique point is isolated in it.
The dense-in-itself subsets of any space are closed under unions. In a dense-in-itself space, they include all open sets. In a dense-in-itself T1 space they include all dense sets. However, spaces that are not T1 may have dense subsets that are not dense-in-itself: for example in the dense-in-itself space X = {1, 2} with the indiscrete topology, the set A = {1} is dense, but is not dense-in-itself.
The closure of any dense-in-itself set is a perfect set.
In general, the intersection of two dense-in-itself sets is not dense-in-itself. But the intersection of a dense-in-itself set and an open set is dense-in-itself.
See also
Nowhere dense set
Glossary of topology
Dense order
Notes
References
Topology | Dense-in-itself | Physics,Mathematics | 516 |
22,479,749 | https://en.wikipedia.org/wiki/Sepiapterin | Sepiapterin, also known as 2-amino-6-[(2S)-2-hydroxypropanoyl]-7,8-dihydro-1H-pteridin-4-one, is a member of the pteridine class of organic chemicals.
Sepiapterin can be metabolized into tetrahydrobiopterin via a salvage pathway. Tetrahydrobiopterin is an essential cofactor in humans for breakdown of phenylalanine and a catalyst of the metabolism of phenylalanine, tyrosine, and tryptophan to precursors of the neurotransmitters dopamine and serotonin.
Deficiency of tetrahydrobiopterin can cause toxic buildup of phenylalanine (phenylketonuria) as well as deficiencies of dopamine, norepinephrine, and epinephrine, leading to dystonia and other neurological illnesses. This has led to clinical study of sepiapterin in humans to treat tetrahydrobiopterin deficiency.
Since atherosclerosis and other circulatory diseases associated with diabetes are also associated with tetrahydrobiopterin deficiency, animal studies of the value of sepiapterin in these vascular diseases have been done. These studies show that relaxation of the blood vessels studied was impaired after animals were given sepiapterin, even though their levels of tetrahydrobiopterin were replenished.
References
Pteridines
Secondary alcohols
Ketones
Lactams | Sepiapterin | Chemistry | 331 |
38,662,839 | https://en.wikipedia.org/wiki/Transmembrane%20Protein%20205 | Transmembrane Protein 205 (TMEM205) is a protein encoded on chromosome 19 by the TMEM205 gene.
Gene
TMEM205 is located on the minus strand of chromosome 19 from base pair 11,453,452 to 11,456,981. In close proximity to TMEM205, CCDC159 is located slightly upstream and RAB3D slightly downstream in the genomic sequence.
Homology
TMEM205 has no known paralogs in the human genome. Using the UCSC Genome Browser BLAT tool against the human protein sequence, it was found that the closest relative of humans to contain a paralog of the TMEM205 gene in its genome is the bushbaby. TMEM205 does, however, have a large range of ortholog sequences.
Protein
The human homologue of TMEM205 is 189 amino acids long and has a molecular weight of 21.2 kDa. It contains 4 hydrophobic helical domains that are predicted to be transmembrane domains.
Expression
TMEM205 has been shown to be expressed in greater amounts in tissues that have a secretory function. These tissues include the thyroid, adrenal gland, pancreas, and mammary tissues. The protein has also been shown to have increased expression in tumor tissues that have become resistant to platinum-based chemotherapy drugs.
Function
TMEM205 is thought to be a multi-pass transmembrane protein. It has been shown to be located at the plasma membrane in human tissues and translocates to the nuclear envelope when cells become resistant to cisplatin. It contains four domains predicted to be transmembrane domains by TMHMM analysis.
Interacting proteins
TMEM205 has been shown to be co-located with RAB8, a known GTPase involved in vesicular traffic.
Clinical significance
TMEM205 has been shown to be involved in cisplatin resistance. Cisplatin is a chemotherapeutic drug that is commonly used to treat solid malignancies such as carcinomas, sarcomas, and lymphomas. In addition to its role in cisplatin resistance, there is growing evidence that the protein is also involved in the diseases thyroiditis and prostatitis.
Notes
Genes on human chromosome 19
Proteins | Transmembrane Protein 205 | Chemistry | 485 |
4,983,922 | https://en.wikipedia.org/wiki/J.%20Hyam%20Rubinstein | Joachim Hyam Rubinstein FAA (born 7 March 1948, in Melbourne) is an Australian mathematician specialising in low-dimensional topology; he is currently serving as an honorary professor in the Department of Mathematics and Statistics at the University of Melbourne, having retired in 2019.
He has spoken and written widely on the state of the mathematical sciences in Australia, with particular focus on the impacts of reduced Government spending for university mathematics departments.
Education
In 1965, Rubinstein matriculated (i.e. graduated) from Melbourne High School in Melbourne, Australia winning the maximum of four exhibitions. In 1969, he graduated from Monash University in Melbourne, with a B.Sc.(Honours) degree in mathematics.
In 1974, Rubinstein received his Ph.D. from the University of California, Berkeley under the supervision of John Stallings. His dissertation was on the topic of Isotopies of Incompressible Surfaces in Three Dimensional Manifolds.
Research interests
His major contributions include results involving almost normal Heegaard splittings and the closely related joint work with Jon T. Pitts relating strongly irreducible Heegaard splittings to minimal surfaces, joint work with William Jaco on special triangulations of 3-manifolds (namely 0-efficient and 1-efficient triangulations), and joint work with Martin Scharlemann on the Rubinstein–Scharlemann graphic. He is a key figure in the algorithmic theory of 3-manifolds, and one of the initial developers of the Regina program, which implements his 3-sphere recognition algorithm.
His research interests also include: shortest networks applied to underground mine design, machine learning, learning theory, financial mathematics, and stock market trading systems.
Honours
Past President of the Australian Mathematical Society.
Chair of the Australian Committee for the Mathematical Sciences.
Elected Fellow of the Australian Academy of Science in 2003.
Recipient of the Hannan Medal in 2004.
Recipient of the George Szekeres Medal in 2008.
Fellow of the American Mathematical Society, 2012.
From July 11 to 22, 2011, a workshop and conference in his honour, jointly titled “Hyamfest: Geometry & Topology Down Under”, were held at the University of Melbourne.
References
External links
Interview
LinkedIn page
1948 births
20th-century Australian mathematicians
21st-century Australian mathematicians
Topologists
University of California, Berkeley alumni
Academic staff of the University of Melbourne
Mathematicians from Melbourne
Living people
People educated at Melbourne High School
Fellows of the Australian Academy of Science
Fellows of the American Mathematical Society | J. Hyam Rubinstein | Mathematics | 501 |
39,813,016 | https://en.wikipedia.org/wiki/Dis-unification | Dis-unification, in computer science and logic, is an algorithmic process of solving inequations between symbolic expressions.
Publications on dis-unification
"Anti-Unification" here refers to inequation-solving, a naming which nowadays has become quite unusual, cf. Anti-unification (computer science).
Comon shows that the first-order logic theory of equality and sort membership is decidable, that is, each first-order logic formula built from arbitrary function symbols, "=" and "∈", but no other predicates, can effectively be proven or disproven. Using the logical negation (¬), non-equality (≠) can be expressed in formulas, but order relations (<) cannot. As an application, he proves sufficient completeness of term rewriting systems.
See also
Unification (computer science): solving equations between symbolic expressions
Constraint logic programming: incorporating solving algorithms for particular classes of inequalities (and other relations) into Prolog
Constraint programming: solving algorithms for particular classes of inequalities
Simplex algorithm: solving algorithm for linear inequations
Inequation: kinds of inequations in mathematics in general, including a brief section on solving
Equation solving: how to solve equations in mathematics
Logic programming
Theoretical computer science
Unification (computer science) | Dis-unification | Mathematics | 274 |
540,176 | https://en.wikipedia.org/wiki/Melena | Melena is a form of blood in stool which refers to the dark black, tarry feces that are commonly associated with upper gastrointestinal bleeding. The black color and characteristic strong odor are caused by hemoglobin in the blood being altered by digestive enzymes and intestinal bacteria.
Iron supplements may cause a grayish-black stool that should be distinguished from melena, as should black coloration caused by a number of medications, such as bismuth subsalicylate (the active ingredient in Pepto-Bismol), or by foods such as beetroot, black liquorice, or blueberries.
Causes
The most common cause of melena is peptic ulcer disease. However, any bleeding within the upper gastrointestinal tract or the ascending colon can lead to melena. Melena may also be a complication of anticoagulant medications, such as warfarin.
Causes of upper gastrointestinal bleeding that may result in melena include malignant tumors affecting the esophagus, stomach or small intestine, hemorrhagic blood diseases, such as thrombocytopenia and hemophilia, gastritis, stomach cancer, esophageal varices, Meckel's diverticulum and Mallory-Weiss syndrome.
Causes of "false" melena include iron supplements, Pepto-Bismol, Maalox, and lead, blood swallowed as a result of a nose bleed (epistaxis), and blood ingested as part of the diet, as with consumption of black pudding (blood sausage), or with the traditional African Maasai diet, which includes much blood drained from cattle.
Melena is considered a medical emergency as it arises from a significant amount of bleeding. Urgent care is required to rule out serious causes and prevent potentially life-threatening emergencies.
A less serious, self-limiting case of melena can occur in newborns two to three days after delivery, due to swallowed maternal blood.
Diagnosis
In acute cases, with a large amount of blood loss, patients may present with anemia or low blood pressure. However, aside from the melena itself, many patients may present with few symptoms. Often, the first approach is to use endoscopy to look for obvious signs of a bleed. In cases where the source of the bleed is unclear, but melena is present, an upper endoscopy is recommended, to try to ascertain the source of the bleed.
Lower gastrointestinal bleeding sources usually present with hematochezia or frank blood. A test with poor sensitivity/specificity that may detect the source of bleeding is the tagged red blood cell scan. This is especially used for slow bleeding (<0.5 ml/min). However, for rapid bleeding (>0.5 ml/min), mesenteric angiogram ± embolization is the gold standard. Colonoscopy is often first line, however.
Melena versus hematochezia
Bleeds that originate from the lower gastrointestinal tract (such as the sigmoid colon and rectum) are generally associated with the passage of bright red blood, or hematochezia, particularly when brisk. Only blood that originates from a more proximal source (such as the small intestine), or bleeding from a lower source that occurs slowly enough to allow for enzymatic breakdown, is associated with melena. For this reason, melena is often associated with blood in the stomach or duodenum (upper gastrointestinal bleeding), for example by a peptic ulcer. A rough estimate is that it takes about 14 hours for blood to be broken down within the intestinal lumen; therefore if transit time is less than 14 hours the patient will have hematochezia, and if greater than 14 hours the patient will exhibit melena. One often-stated rule of thumb is that melena only occurs if the source of bleeding is above the ligament of Treitz, although exceptions occur with enough frequency to render it unreliable.
Etymology
The origin of melena is dated to the early 19th century via modern Latin, via Greek melaina (feminine of melas, black).
See also
Blood in stool
Dieulafoy's lesion
Hematemesis
References
External links
Bleeding
Digestive disease symptoms
Feces | Melena | Biology | 901 |
62,025,667 | https://en.wikipedia.org/wiki/AAindex | AAindex is a database of amino acid indices, amino acid mutation matrices, and pair-wise contact potentials. The data represent various physicochemical and biochemical properties of amino acids and pairs of amino acids.
See also
Proteinogenic amino acid
References
External links
Official AAindex website
Biological databases | AAindex | Biology | 63 |
66,858,127 | https://en.wikipedia.org/wiki/Vigia%20%28nautical%29 | A vigia is a warning on a nautical chart indicating a possible rock, shoal, or other hazard which has been reported but not yet verified or surveyed.
Some non-existent vigias have remained on successive charts for centuries as a precaution by hesitant hydrographers. One such example was 'Las Casses Bank', a vigia between Menorca and Sardinia in the Mediterranean Sea which first appeared on charts in 1373 and remained on some charts as late as 1852.
Another notable false vigia was 'Aitkins' Rock' off the northwest coast of Ireland. First reported in 1740, with six further reports over the following eighty years, the supposed rock was blamed for numerous lost ships. Surveys by the Royal Navy in 1824, 1827, and 1829 failed to locate the rock, until a final extensive six-week survey in 1840 using two brigs led to the conclusion that the rock had never existed. Captain Alexander Thomas Emeric Vidal, who led the final survey, noted that such false sightings were likely due to floating debris or whales.
The term vigia is derived from the Spanish vigía or Portuguese vigia, from the Latin vigilia.
See also
References
Cartography
Hydrography
Phantom islands | Vigia (nautical) | Environmental_science | 246 |
1,914,171 | https://en.wikipedia.org/wiki/Havriliak%E2%80%93Negami%20relaxation | The Havriliak–Negami relaxation is an empirical modification of the Debye relaxation model in electromagnetism. Unlike the Debye model, the Havriliak–Negami relaxation accounts for the asymmetry and broadness of the dielectric dispersion curve. The model was first used to describe the dielectric relaxation of some polymers, by adding two exponential parameters to the Debye equation:

$$\hat{\varepsilon}(\omega) = \varepsilon_{\infty} + \frac{\Delta\varepsilon}{\left(1 + (i\omega\tau)^{\alpha}\right)^{\beta}}$$
where ε∞ is the permittivity at the high frequency limit, Δε = εs − ε∞ where εs is the static, low frequency permittivity, and τ is the characteristic relaxation time of the medium. The exponents α and β describe the asymmetry and broadness of the corresponding spectra.
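As a quick numerical illustration (all parameter values below are arbitrary assumptions, not measurements), the complex permittivity can be evaluated directly from the formula above:

```python
# Sketch: evaluating the Havriliak-Negami permittivity on a log-spaced grid.
import numpy as np

def hn_permittivity(omega, eps_inf, eps_s, tau, alpha, beta):
    # eps_inf + (eps_s - eps_inf) / (1 + (i*omega*tau)**alpha)**beta
    return eps_inf + (eps_s - eps_inf) / (1.0 + (1j * omega * tau) ** alpha) ** beta

omega = np.logspace(-3, 3, 7)               # angular frequency, rad/s
eps = hn_permittivity(omega, eps_inf=2.0, eps_s=10.0, tau=1.0, alpha=0.8, beta=0.6)
storage, loss = eps.real, -eps.imag         # convention: eps = eps' - i*eps''
print(storage)
print(loss)
```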
Depending on application, the Fourier transform of the stretched exponential function can be a viable alternative that has one parameter less.
For β = 1 the Havriliak–Negami equation reduces to the Cole–Cole equation, for α = 1 to the Cole–Davidson equation.
Mathematical properties
Real and imaginary parts
The storage part ε′ and the loss part ε″ of the permittivity (here: ε̂(ω) = ε′(ω) − iε″(ω) with Δε = εs − ε∞) can be calculated as

$$\varepsilon'(\omega) = \varepsilon_{\infty} + \frac{\Delta\varepsilon\,\cos(\beta\phi)}{\left(1 + 2(\omega\tau)^{\alpha}\cos(\pi\alpha/2) + (\omega\tau)^{2\alpha}\right)^{\beta/2}}$$

and

$$\varepsilon''(\omega) = \frac{\Delta\varepsilon\,\sin(\beta\phi)}{\left(1 + 2(\omega\tau)^{\alpha}\cos(\pi\alpha/2) + (\omega\tau)^{2\alpha}\right)^{\beta/2}}$$

with

$$\phi = \arctan\!\left(\frac{(\omega\tau)^{\alpha}\sin(\pi\alpha/2)}{1 + (\omega\tau)^{\alpha}\cos(\pi\alpha/2)}\right)$$
Loss peak
The maximum of the loss part lies at

$$\omega_{\max} = \tau^{-1}\left[\frac{\sin\!\left(\frac{\pi\alpha}{2(\beta+1)}\right)}{\sin\!\left(\frac{\pi\alpha\beta}{2(\beta+1)}\right)}\right]^{1/\alpha}$$
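A small consistency check of this expression (with assumed, illustrative parameters) is to locate the loss maximum numerically on a dense frequency grid:

```python
# Sketch: compare the numerically located loss peak with the closed form above.
import numpy as np

alpha, beta, tau = 0.8, 0.6, 1.0
w = np.logspace(-4, 4, 200001)
loss = -np.imag(1.0 / (1.0 + (1j * w * tau) ** alpha) ** beta)
w_numeric = w[np.argmax(loss)]
w_formula = (np.sin(np.pi * alpha / (2 * (beta + 1))) /
             np.sin(np.pi * alpha * beta / (2 * (beta + 1)))) ** (1 / alpha) / tau
print(w_numeric, w_formula)   # the two values should agree closely
```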
Superposition of Lorentzians
The Havriliak–Negami relaxation can be expressed as a superposition of individual Debye relaxations

$$\hat{\varepsilon}(\omega) = \varepsilon_{\infty} + \Delta\varepsilon \int_{-\infty}^{\infty} \frac{G(\ln\tau')}{1 + i\omega\tau'}\, \mathrm{d}\ln\tau'$$

with the real valued distribution function

$$G(\ln\tau') = \frac{1}{\pi}\, \frac{(\tau'/\tau)^{\alpha\beta}\,\sin(\beta\theta)}{\left((\tau'/\tau)^{2\alpha} + 2(\tau'/\tau)^{\alpha}\cos(\pi\alpha) + 1\right)^{\beta/2}}$$

where

$$\theta = \arctan\!\left(\frac{\sin(\pi\alpha)}{(\tau'/\tau)^{\alpha} + \cos(\pi\alpha)}\right)$$

if the argument of the arctangent is positive, else θ is given by the same expression plus π.
Noteworthy, G remains real and non-negative only in the usual parameter range 0 < α ≤ 1, 0 < αβ ≤ 1; outside this range it becomes imaginary valued for some combinations of the exponents and complex valued for others.
Logarithmic moments
The first logarithmic moment of this distribution, the average logarithmic relaxation time, is

$$\langle \ln\tau' \rangle = \ln\tau + \frac{\Psi(\beta) + \gamma}{\alpha}$$

where Ψ is the digamma function and γ the Euler constant.
Inverse Fourier transform
The inverse Fourier transform of the Havriliak–Negami function (the corresponding time-domain relaxation function) can be numerically calculated. It can be shown that the series expansions involved are special cases of the Fox–Wright function. In particular, in the time domain the counterpart of ε̂(ω) can be represented as

$$\varepsilon(t) = \varepsilon_{\infty}\,\delta(t) + (\varepsilon_{s} - \varepsilon_{\infty})\,\frac{t^{\alpha\beta-1}}{\tau^{\alpha\beta}}\, E_{\alpha,\alpha\beta}^{\beta}\!\left(-(t/\tau)^{\alpha}\right)$$

where δ(t) is the Dirac delta function and

$$E_{\alpha,\beta}^{\gamma}(z) = \sum_{k=0}^{\infty} \frac{(\gamma)_k}{k!\,\Gamma(\alpha k + \beta)}\, z^k$$

is a special instance of the Fox–Wright function and, precisely, it is the three-parameter Mittag-Leffler function, also known as the Prabhakar function (here (γ)_k = Γ(γ+k)/Γ(γ) denotes the Pochhammer symbol). The function can be numerically evaluated, for instance, by means of a freely available Matlab code.
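A naive way to evaluate the Prabhakar function for small arguments is direct summation of its defining series; the sketch below is illustrative only (dedicated algorithms, such as the Matlab routine mentioned above, are needed for demanding cases).

```python
# Sketch: truncated series for the three-parameter Mittag-Leffler (Prabhakar)
# function E^g_{a,b}(z); adequate for small |z| only.
from math import gamma

def prabhakar(a: float, b: float, g: float, z: float, terms: int = 80) -> float:
    s, poch, fact = 0.0, 1.0, 1.0    # poch = Pochhammer symbol (g)_k, fact = k!
    for k in range(terms):
        s += poch / (fact * gamma(a * k + b)) * z ** k
        poch *= g + k
        fact *= k + 1
    return s

print(prabhakar(1.0, 1.0, 1.0, -0.5))  # reduces to exp(z): ~0.6065 for a=b=g=1
```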
See also
Debye relaxation
Cole–Cole equation
Cole–Davidson equation
Curie–von Schweidler law
Dielectric spectroscopy
Dipole
References
Electric and magnetic fields in matter | Havriliak–Negami relaxation | Physics,Chemistry,Materials_science,Engineering | 492 |
47,768,197 | https://en.wikipedia.org/wiki/Phellodon%20indicus | Phellodon indicus is a species of tooth fungus in the family Bankeraceae. Found in Himachal Pradesh, India, it was described as new to science in 1978.
References
External links
Fungi described in 1971
Fungi of India
Inedible fungi
indicus
Fungus species | Phellodon indicus | Biology | 55 |
1,820,960 | https://en.wikipedia.org/wiki/Transpersonal | The transpersonal is a term used by different schools of philosophy and psychology in order to describe experiences and worldviews that extend beyond the personal level of the psyche, and beyond mundane worldly events.
Definition and context
The transpersonal has been defined as experiences in which the sense of identity or self extends beyond (trans) the individual or personal to encompass wider aspects of humankind, life, psyche or cosmos. On the other hand, transpersonal practices are those structured activities that focus on inducing transpersonal experiences.
In the Textbook of Transpersonal Psychiatry and Psychology, Scotton defined the term as "development beyond conventional, personal or individual levels." It is associated with a developmental model of psychology that includes three successive stages: the prepersonal (before ego-formation), the personal (the functioning ego), and the transpersonal (ego remains available but is superseded by higher development).
One of the founders of the field of transpersonal psychology, Stanislav Grof, has defined transpersonal states of awareness as such: "The common denominator of this otherwise rich and ramified group of phenomena is the feeling of the individual that his consciousness expanded beyond the usual ego boundaries and the limitations of time and space."
The term is related to the terminology of peak experience, altered states of consciousness, and spiritual experiences. The term is also associated with psychedelic work, and psychotechnologies, that includes research with psychedelic plants and chemicals such as LSD, ibogaine, ketamine, peyote, ayahuasca and the vast variety of substances available to all human cultures throughout history.
Etymology
The term has an early precedent in the writing of philosopher William James, who used the term "Trans-personal" in one of his lectures from 1905. However, this early terminology, introduced by James, had a different meaning than the current one and its context was philosophy, not psychology, which is where the term is mostly used these days.
There has also been some speculation of an early precedent of the term in the writings of Carl Jung, as a result of the work of Jung's translators. It regards the Jungian term ueberpersonlich, used by Jung in a paper from 1917, which in later English translations appeared as superpersonal, and later, transpersonal. In a later, revised, version of the Psychology of the Unconscious (1942) there was even a chapter heading called "The Personal and the Collective (or Transpersonal) Unconscious".
However, the etymology, as it is currently used in academic writing, is mostly associated with the human potential movement of the 1960s and the founders of the field of transpersonal psychology; Anthony Sutich, Abraham Maslow and Stanislav Grof. According to Vich all three had used the term as early as 1967, in order to describe new ideas in the field of Psychology. In 1968 the term was selected by the founding editors of the Journal of Transpersonal Psychology, Abraham Maslow and Anthony Sutich, in order to represent a new area of psychological inquiry. Porter locates the start of the so-called transpersonal psychology movement to the American west-coast in the late 1960s. In addition to Maslow, Vich and Grof the movement was associated with the names of Ken Wilber, Frances Vaughan, Roger Walsh and Seymoor Boorstein.
According to Powers the term "transpersonal" starts to show up in academic journals from 1970 and onwards. The use of the term in academic literature is documented in Psychological Abstracts and Dissertations Abstracts. The use of the term grew during the 1970s and 1980s and stabilized in the 1990s.
Movement
The collective of people and organizations with an interest in the transpersonal is called the transpersonal movement. Walsh and Vaughan defines the transpersonal movement as the interdisciplinary movement that includes various individual transpersonal disciplines.
The philosophy of William James, the school of psychosynthesis (founded by Roberto Assagioli), and the analytical school of Carl Jung are often considered to be forerunners to the establishment of transpersonal theory. However, the start of the movement is associated with the emergence and growth of the related field of humanistic psychology. Several of the academic profiles of the early transpersonal movement, such as Abraham Maslow and Anthony Sutich, had their background in humanistic psychology.
The formative years of the transpersonal movement can be characterized by the founding of a few key organizations and institutions, such as: Transpersonal Institute in 1969, the Institute of Noetic Sciences in 1973, The International Transpersonal Psychology Association in 1973, Naropa Institute in 1974, and the California Institute of Transpersonal Psychology in 1975. The California Institute of Transpersonal Psychology later emerged as the Institute of Transpersonal Psychology (ITP) and is today known as Sofia University.
Contemporary transpersonal disciplines include transpersonal psychology, transpersonal psychiatry, transpersonal anthropology, transpersonal sociology and transpersonal ecology. Other academic orientations, whose main focus lies elsewhere, but that are associated with a transpersonal perspective, include humanistic psychology and near-death studies. Contemporary institutions include: the Association for Transpersonal Psychology (ATP), the European Transpersonal Psychology Association (EPTA), the International Transpersonal Association (ITA), the Ibero-American Transpersonal Association (ATI) and the European Transpersonal Association (Eurotas). Leading publications within the movement include: the Journal of Transpersonal Psychology, the International Journal of Transpersonal Studies, and the Journal of Transpersonal Research.
Transpersonal studies
Several commentators note how the transpersonal field, and its vision, moved beyond the perspective of psychology and into other transpersonal domains during the 1980s and 1990s. This expansion of the transpersonal concept resulted in an interdisciplinary situation, and a dialogue with such fields as social work, ecology, art, literature, acting, law, business, entrepreneurship, ecopsychology, feminism and education.
In this respect, commentators have suggested that there is a difference between the founding field of transpersonal psychology and a broader field of transpersonal inquiry, transpersonal studies. This differentiation of the transpersonal field has to do with the scope of the subjects under study, and the interest of researchers and theorists.
In their review of transpersonal definitions, published in 1993, Walsh and Vaughan noted that transpersonal studies had grown beyond the founding field of transpersonal psychology. Commenting on the criticisms of transpersonal psychology in the 1980s, Chinen noted how the criticism did not differentiate between transpersonal psychology, on the one hand, and a broad range of popularized transpersonal orientations, on the other. The same line of reasoning was picked up by Friedman, who differentiated between a broad domain of inquiry known as transpersonal studies, and a more narrow field of transpersonal psychology. Both authors argued that the confounding of the two domains resulted in confusion. In a summary of contemporary viewpoints on transpersonal psychology Jorge Ferrer placed transpersonal psychology within the wider "umbrella" known as transpersonal studies.
Among institutions of higher learning that promote transpersonal studies we find Sofia University and California Institute of Integral Studies. In 2012 Sofia University announced that they were expanding their graduate program in order to include transpersonal studies. The new program was named the Graduate School of Transpersonal Studies.
The International Journal of Transpersonal Studies was established in 1981. It is sponsored by the California Institute of Integral Studies and serves as the official publication of the International Transpersonal Association.
See also
Analytical psychology
Humanistic psychology
Near-death studies
Notes
a. Grabovac & Ganesan, 2003: Table 3.
b. See Winkelman & Roberts, 2007: "Part III. Transpersonal Dimensions of Healing with Psychedelic States"
c. John Beebe, San Francisco Jung Institute Library Journal
d. The term was considered to be an improvement upon an earlier term, "transhumanistic".
References
Human development
Transpersonal psychology
Transpersonal studies | Transpersonal | Biology | 1,593 |
349,223 | https://en.wikipedia.org/wiki/Group%20ring | In algebra, a group ring is a free module and at the same time a ring, constructed in a natural way from any given ring and any given group. As a free module, its ring of scalars is the given ring, and its basis is the set of elements of the given group. As a ring, its addition law is that of the free module and its multiplication extends "by linearity" the given group law on the basis. Less formally, a group ring is a generalization of a given group, by attaching to each element of the group a "weighting factor" from a given ring.
If the ring is commutative then the group ring is also referred to as a group algebra, for it is indeed an algebra over the given ring. A group algebra over a field has a further structure of a Hopf algebra; in this case, it is thus called a group Hopf algebra.
The apparatus of group rings is especially useful in the theory of group representations.
Definition
Let G be a group, written multiplicatively, and let R be a ring. The group ring of G over R, which we will denote by R[G], or simply RG, is the set of mappings f : G → R of finite support (f(g) is nonzero for only finitely many elements g), where the module scalar product αf of a scalar α in R and a mapping f is defined as the mapping x ↦ α·f(x), and the module group sum of two mappings f and g is defined as the mapping x ↦ f(x) + g(x). To turn the additive group R[G] into a ring, we define the product of f and g to be the mapping

$$(f * g)(x) = \sum_{uv = x} f(u)\,g(v).$$

The summation is legitimate because f and g are of finite support, and the ring axioms are readily verified.
Some variations in the notation and terminology are in use. In particular, the mappings such as f are sometimes written as what are called "formal linear combinations of elements of G with coefficients in R":

\sum_{g \in G} f(g)\, g,

or simply

\sum_{g \in G} f_g\, g.
Note that if the ring R is in fact a field K, then the module structure of the group ring K[G] is in fact a vector space over K.
Examples
1. Let G = C_3, the cyclic group of order 3, with generator a and identity element 1_G. An element r of C[G] can be written as

r = z_0 1_G + z_1 a + z_2 a^2,

where z_0, z_1 and z_2 are in C, the complex numbers. This is the same thing as a polynomial ring in the variable a such that a^3 = 1, i.e. C[G] is isomorphic to the ring C[a]/(a^3 − 1).
Writing a different element s as s = w_0 1_G + w_1 a + w_2 a^2, their sum is

r + s = (z_0 + w_0) 1_G + (z_1 + w_1) a + (z_2 + w_2) a^2,

and their product is

rs = (z_0 w_0 + z_1 w_2 + z_2 w_1) 1_G + (z_0 w_1 + z_1 w_0 + z_2 w_2) a + (z_0 w_2 + z_1 w_1 + z_2 w_0) a^2.
Notice that the identity element 1G of G induces a canonical embedding of the coefficient ring (in this case C) into C[G]; however strictly speaking the multiplicative identity element of C[G] is 1⋅1G where the first 1 comes from C and the second from G. The additive identity element is zero.
When G is a non-commutative group, one must be careful to preserve the order of the group elements (and not accidentally commute them) when multiplying the terms.
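The convolution product above is easy to compute mechanically. The following sketch (my own illustration, not from the article; the dict-based encoding and function name are hypothetical) represents group ring elements as mappings of finite support and reproduces the C[C3] arithmetic of Example 1, writing the cyclic group additively as Z/3Z:

```python
from collections import defaultdict

def group_ring_mul(f, g, op):
    """Convolution product: (f*g)(x) = sum of f(u)*g(v) over all u, v with uv = x,
    where op(u, v) implements the group operation."""
    h = defaultdict(int)
    for u, fu in f.items():
        for v, gv in g.items():
            h[op(u, v)] += fu * gv
    return {x: c for x, c in h.items() if c != 0}

# Example 1: C3 written additively as Z/3Z; keys 0, 1, 2 stand for 1_G, a, a^2.
op = lambda u, v: (u + v) % 3
r = {0: 1, 1: 2, 2: 3}           # r = 1*1_G + 2*a + 3*a^2
s = {0: 4, 1: 5, 2: 6}           # s = 4*1_G + 5*a + 6*a^2
print(group_ring_mul(r, s, op))  # {0: 31, 1: 31, 2: 28}
```

Because the multiplication only goes through `op`, the same function works verbatim for non-commutative groups, where the order of the factors matters.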
2. The ring of Laurent polynomials over a ring R is the group ring of the infinite cyclic group Z over R.
3. Let Q be the quaternion group with elements {1, −1, i, −i, j, −j, k, −k}. Consider the group ring RQ, where R is the set of real numbers. An arbitrary element of this group ring is of the form

x_1 \cdot 1 + x_{-1} \cdot (-1) + x_i \cdot i + x_{-i} \cdot (-i) + x_j \cdot j + x_{-j} \cdot (-j) + x_k \cdot k + x_{-k} \cdot (-k),

where each coefficient x_q is a real number.

Multiplication, as in any other group ring, is defined based on the group operation. For example, (3·i)(2·j) = 6·(ij) = 6·k, since ij = k in Q.
Note that RQ is not the same as the skew field of quaternions over R. This is because the skew field of quaternions satisfies additional relations in the ring, such as −1·i = −i, whereas in the group ring RQ, the element −1·i (the ring scalar −1 times the group element i) is not equal to 1·(−i) (the group element −i). To be more specific, the group ring RQ has dimension 8 as a real vector space, while the skew field of quaternions has dimension 4 as a real vector space.
4. Another example of a non-abelian group ring is R[S_3], where S_3 is the symmetric group on 3 letters. This is not an integral domain since we have (1 − τ)(1 + τ) = 1 − τ^2 = 1 − 1 = 0, where the element τ = (12) is the transposition that swaps 1 and 2. Therefore the group ring need not be an integral domain even when the underlying ring is an integral domain.
Some basic properties
Using 1 to denote the multiplicative identity of the ring R, and denoting the group unit by 1_G, the ring R[G] contains a subring isomorphic to R, and its group of invertible elements contains a subgroup isomorphic to G. For considering the indicator function of {1_G}, which is the vector f defined by

f(g) = 1 if g = 1_G, and f(g) = 0 otherwise,

the set of all scalar multiples of f is a subring of R[G] isomorphic to R. And if we map each element s of G to the indicator function of {s}, which is the vector f_s defined by

f_s(g) = 1 if g = s, and f_s(g) = 0 otherwise,
the resulting mapping is an injective group homomorphism (with respect to multiplication, not addition, in R[G]).
If R and G are both commutative (i.e., R is commutative and G is an abelian group), R[G] is commutative.
If H is a subgroup of G, then R[H] is a subring of R[G]. Similarly, if S is a subring of R, S[G] is a subring of R[G].
If G is a finite group of order greater than 1, then R[G] always has zero divisors. For example, consider an element g of G of order m > 1. Then 1 − g is a zero divisor:

(1 − g)(1 + g + ⋯ + g^{m−1}) = 1 − g^m = 1 − 1 = 0.

For example, consider the group ring Z[S_3] and the element g = (123) of order 3. In this case,

(1 − g)(1 + g + g^2) = 1 − g^3 = 1 − 1 = 0.
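A quick numerical check of this zero-divisor identity in the group ring Z[S3] mentioned above (an illustrative sketch; the tuple encoding of permutations is my own choice):

```python
from collections import defaultdict

def compose(p, q):
    """Compose permutations given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def mul(f, g):
    """Convolution product in Z[S3], dropping zero coefficients."""
    h = defaultdict(int)
    for u, fu in f.items():
        for v, gv in g.items():
            h[compose(u, v)] += fu * gv
    return {k: c for k, c in h.items() if c != 0}

e  = (0, 1, 2)                 # identity of S3
g  = (1, 2, 0)                 # the 3-cycle (123), of order 3
g2 = compose(g, g)

x = {e: 1, g: -1}              # 1 - g
y = {e: 1, g: 1, g2: 1}        # 1 + g + g^2
print(mul(x, y))               # {} -- the zero element, so 1 - g is a zero divisor
```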
A related result: If the group ring is prime, then G has no nonidentity finite normal subgroup (in particular, G must be infinite).
Proof: Considering the contrapositive, suppose H is a nonidentity finite normal subgroup of G. Take α = \sum_{h \in H} h. Since hH = H for any h ∈ H, we know hα = α, therefore α^2 = |H|·α. Taking β = |H|·1 − α, we have αβ = 0. By normality of H, α commutes with a basis of R[G], and therefore

α R[G] β = R[G] α β = 0.

And we see that α and β are not zero, which shows R[G] is not prime. This shows the original statement.
Group algebra over a finite group
Group algebras occur naturally in the theory of group representations of finite groups. The group algebra K[G] over a field K is essentially the group ring, with the field K taking the place of the ring. As a set and vector space, it is the free vector space on G over the field K. That is, for x in K[G],

x = \sum_{g \in G} a_g\, g, \qquad a_g \in K.
The algebra structure on the vector space is defined using the multiplication in the group:

g \cdot h = gh,
where on the left, g and h indicate elements of the group algebra, while the multiplication on the right is the group operation (denoted by juxtaposition).
Because the above multiplication can be confusing, one can also write the basis vectors of K[G] as e_g (instead of g), in which case the multiplication is written as:

e_g e_h = e_{gh}.
Interpretation as functions
Thinking of the free vector space as K-valued functions on G, the algebra multiplication is convolution of functions.
While the group algebra of a finite group can be identified with the space of functions on the group, for an infinite group these are different. The group algebra, consisting of finite sums, corresponds to functions on the group that vanish for cofinitely many points; topologically (using the discrete topology), these correspond to functions with compact support.
However, the group algebra K[G] and the space of functions on the group are dual: given an element of the group algebra

x = \sum_{g \in G} a_g\, g

and a function on the group f : G → K, these pair to give an element of K via

(x, f) = \sum_{g \in G} a_g\, f(g),

which is a well-defined sum because it is finite.
Representations of a group algebra
Taking K[G] to be an abstract algebra, one may ask for representations of the algebra acting on a K-vector space V of dimension d. Such a representation

\tilde{\rho} : K[G] \to \mathrm{End}(V)

is an algebra homomorphism from the group algebra to the algebra of endomorphisms of V, which is isomorphic to the ring of d × d matrices: End(V) ≅ M_d(K). Equivalently, this is a left K[G]-module structure on the abelian group V.
Correspondingly, a group representation

\rho : G \to \mathrm{Aut}(V)

is a group homomorphism from G to the group of linear automorphisms of V, which is isomorphic to the general linear group of invertible matrices: Aut(V) ≅ GL_d(K). Any such representation induces an algebra representation

\tilde{\rho} : K[G] \to \mathrm{End}(V),

simply by letting \tilde{\rho}(e_g) = \rho(g) and extending linearly. Thus, representations of the group correspond exactly to representations of the algebra, and the two theories are essentially equivalent.
Regular representation
The group algebra is an algebra over itself; under the correspondence of representations over R and R[G] modules, it is the regular representation of the group.
Written as a representation, it is the representation g ↦ ρ_g with the action given by ρ_g · e_h = e_{gh}, or

\rho_g \cdot \Big( \sum_h a_h e_h \Big) = \sum_h a_h e_{gh}.
Semisimple decomposition
The dimension of the vector space K[G] is just equal to the number of elements in the group. The field K is commonly taken to be the complex numbers C or the reals R, so that one discusses the group algebras C[G] or R[G].
The group algebra C[G] of a finite group over the complex numbers is a semisimple ring. This result, Maschke's theorem, allows us to understand C[G] as a finite product of matrix rings with entries in C. Indeed, if we list the complex irreducible representations of G as V_k for k = 1, . . . , m, these correspond to group homomorphisms ρ_k : G → Aut(V_k) and hence to algebra homomorphisms \tilde{\rho}_k : C[G] → End(V_k). Assembling these mappings gives an algebra isomorphism

C[G] \cong \bigoplus_{k=1}^{m} \mathrm{End}(V_k) \cong \bigoplus_{k=1}^{m} M_{d_k}(\mathbf{C}),

where d_k is the dimension of V_k. The subalgebra of C[G] corresponding to End(V_k) is the two-sided ideal generated by the idempotent

e_k = \frac{d_k}{|G|} \sum_{g \in G} \overline{\chi_k(g)}\, g,

where χ_k is the character of V_k. These form a complete system of orthogonal idempotents, so that e_j e_k = 0 for j ≠ k, e_k^2 = e_k, and 1 = \sum_k e_k. The isomorphism is closely related to Fourier transform on finite groups.
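For a cyclic group these idempotents can be written down explicitly and the orthogonality relations checked numerically. The sketch below (illustrative only; helper names and tolerances are my own) does this for G = Z/3 over C, where the irreducible characters are χ_k(g) = ω^{kg} with ω a primitive cube root of unity, so e_k = (1/3) Σ_g ω^{−kg} g:

```python
import cmath
from collections import defaultdict

n = 3
w = cmath.exp(2j * cmath.pi / n)          # primitive n-th root of unity

def mul(f, g):
    """Convolution product in C[Z/n]."""
    h = defaultdict(complex)
    for u, fu in f.items():
        for v, gv in g.items():
            h[(u + v) % n] += fu * gv
    return dict(h)

# Central idempotents e_k = (1/n) sum_g chi_k(g^{-1}) g, with chi_k(g) = w**(k*g).
idem = [{g: (w ** (-k * g)) / n for g in range(n)} for k in range(n)]

def close(f, g, tol=1e-12):
    return all(abs(f.get(x, 0) - g.get(x, 0)) < tol for x in range(n))

# e_j e_k = 0 for j != k, e_k^2 = e_k, and e_0 + ... + e_{n-1} = 1.
assert all(close(mul(idem[j], idem[k]), idem[k] if j == k else {})
           for j in range(n) for k in range(n))
assert close({g: sum(e[g] for e in idem) for g in range(n)}, {0: 1})
print("orthogonal idempotents verified")
```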
For a more general field K, whenever the characteristic of K does not divide the order of the group G, then K[G] is semisimple. When G is a finite abelian group, the group ring K[G] is commutative, and its structure is easy to express in terms of roots of unity.
When K is a field of characteristic p which divides the order of G, the group ring is not semisimple: it has a non-zero Jacobson radical, and this gives the corresponding subject of modular representation theory its own, deeper character.
Center of a group algebra
The center of the group algebra is the set of elements that commute with all elements of the group algebra:

Z(K[G]) := \{ z \in K[G] : z r = r z \text{ for all } r \in K[G] \}.

The center is equal to the set of class functions, that is the set of elements \sum_g a_g g whose coefficients are constant on each conjugacy class: a_{hgh^{-1}} = a_g for all g, h in G.

If K = C, the set of irreducible characters of G forms an orthonormal basis of Z(K[G]) with respect to the inner product

\langle f_1, f_2 \rangle = \frac{1}{|G|} \sum_{g \in G} \overline{f_1(g)}\, f_2(g).
Group rings over an infinite group
Much less is known in the case where G is countably infinite, or uncountable, and this is an area of active research. The case where R is the field of complex numbers is probably the one best studied. In this case, Irving Kaplansky proved that if a and b are elements of C[G] with ab = 1, then ba = 1. Whether this is true if R is a field of positive characteristic remains unknown.
A long-standing conjecture of Kaplansky (~1940) says that if G is a torsion-free group, and K is a field, then the group ring K[G] has no non-trivial zero divisors. This conjecture is equivalent to K[G] having no non-trivial nilpotents under the same hypotheses for K and G.
In fact, the condition that K is a field can be relaxed to any ring that can be embedded into an integral domain.
The conjecture remains open in full generality, however some special cases of torsion-free groups have been shown to satisfy the zero divisor conjecture. These include:
Unique product groups (e.g. orderable groups, in particular free groups)
Elementary amenable groups (e.g. virtually abelian groups)
Diffuse groups – in particular, groups that act freely isometrically on R-trees, and the fundamental groups of surface groups except for the fundamental groups of direct sums of one, two or three copies of the projective plane.
The case where G is a topological group is discussed in greater detail in the article Group algebra of a locally compact group.
Category theory
Adjoint
Categorically, the group ring construction is left adjoint to "group of units"; the following functors are an adjoint pair:

R[-] : \mathbf{Grp} \to R\text{-}\mathbf{Alg}, \qquad (-)^{\times} : R\text{-}\mathbf{Alg} \to \mathbf{Grp},

where R[-] takes a group to its group ring over R, and (-)^{\times} takes an R-algebra to its group of units.
When R = Z, this gives an adjunction between the category of groups and the category of rings, and the unit of the adjunction takes a group G to a group that contains only the trivial units ±g. In general, group rings contain nontrivial units. If G contains elements a and b such that a^n = 1 and b does not normalize ⟨a⟩, then the square of

x = (a − 1)\, b\, (1 + a + ⋯ + a^{n−1})

is zero, hence (1 + x)(1 − x) = 1. The element 1 + x is a unit of infinite order.
Universal property
The above adjunction expresses a universal property of group rings. Let R be a (commutative) ring, let G be a group, and let S be an R-algebra. For any group homomorphism f : G → S^{\times}, there exists a unique R-algebra homomorphism \bar{f} : R[G] → S such that \bar{f} \circ i = f, where i is the inclusion

i : G \to R[G], \quad g \mapsto g.

In other words, \bar{f} is the unique homomorphism making the following diagram commute:
Any other ring satisfying this property is canonically isomorphic to the group ring.
Hopf algebra
The group algebra K[G] has a natural structure of a Hopf algebra. The comultiplication is defined by Δ(g) = g ⊗ g, extended linearly, and the antipode is S(g) = g^{−1}, again extended linearly.
Generalizations
The group algebra generalizes to the monoid ring and thence to the category algebra, of which another example is the incidence algebra.
Filtration
If a group has a length function – for example, if there is a choice of generators and one takes the word metric, as in Coxeter groups – then the group ring becomes a filtered algebra.
See also
Group algebra of a locally compact group
Monoid ring
Kaplansky's conjectures
Representation theory
Group representation
Regular representation
Category theory
Categorical algebra
Group of units
Incidence algebra
Quiver algebra
Notes
References
Milies, César Polcino; Sehgal, Sudarshan K. An introduction to group rings. Algebras and applications, Volume 1. Springer, 2002.
Charles W. Curtis, Irving Reiner. Representation theory of finite groups and associative algebras, Interscience (1962)
D.S. Passman, The algebraic structure of group rings, Wiley (1977)
Ring theory
Representation theory of groups
Harmonic analysis
| Group ring | Mathematics | 3,072 |
3,143,591 | https://en.wikipedia.org/wiki/Euclid%27s%20theorem | Euclid's theorem is a fundamental statement in number theory that asserts that there are infinitely many prime numbers. It was first proven by Euclid in his work Elements. There are several proofs of the theorem.
Euclid's proof
Euclid offered a proof published in his work Elements (Book IX, Proposition 20), which is paraphrased here.
Consider any finite list of prime numbers p1, p2, ..., pn. It will be shown that there exists at least one additional prime number not included in this list. Let P be the product of all the prime numbers in the list: P = p1p2...pn. Let q = P + 1. Then q is either prime or not:
If q is prime, then there is at least one more prime that is not in the list, namely, q itself.
If q is not prime, then some prime factor p divides q. If this factor p were in our list, then it would also divide P (since P is the product of every number in the list). If p divides P and q, then p must also divide the difference of the two numbers, which is (P + 1) − P or just 1. Since no prime number divides 1, p cannot be in the list. This means that at least one more prime number exists that is not in the list.
This proves that for every finite list of prime numbers there is a prime number not in the list. In the original work, Euclid denoted the arbitrary finite set of prime numbers as A, B, Γ.
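Euclid's argument is constructive: given any finite list of primes, it produces a prime outside the list. A direct transcription as an algorithm (my own sketch; trial division is just the simplest way to extract a prime factor of P + 1):

```python
def prime_outside(primes):
    """Given a finite list of primes, return a prime not in the list."""
    P = 1
    for p in primes:
        P *= p
    q = P + 1
    d = 2
    while d * d <= q:       # find the smallest prime factor of q
        if q % d == 0:
            return d        # d divides q = P + 1, so no listed prime equals d
        d += 1
    return q                # q itself is prime

print(prime_outside([2, 3, 5, 7, 11, 13]))  # 30030 + 1 = 30031 = 59 * 509 -> 59
```

Note that P + 1 need not itself be prime (30031 is not); the argument only guarantees that its prime factors are new.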
Euclid is often erroneously reported to have proved this result by contradiction beginning with the assumption that the finite set initially considered contains all prime numbers, though it is actually a proof by cases, a direct proof method. The philosopher Torkel Franzén, in a book on logic, states, "Euclid's proof that there are infinitely many primes is not an indirect proof [...] The argument is sometimes formulated as an indirect proof by replacing it with the assumption 'Suppose q_1, …, q_n are all the primes'. However, since this assumption isn't even used in the proof, the reformulation is pointless."
Variations
Several variations on Euclid's proof exist, including the following:
The factorial n! of a positive integer n is divisible by every integer from 2 to n, as it is the product of all of them. Hence, n! + 1 is not divisible by any of the integers from 2 to n, inclusive (it gives a remainder of 1 when divided by each). Hence n! + 1 is either prime or divisible by a prime larger than n. In either case, for every positive integer n, there is at least one prime bigger than n. The conclusion is that the number of primes is infinite.
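This variation can be checked directly: the smallest prime factor of n! + 1 is always larger than n (a small illustrative script, assuming nothing beyond the standard library; the sample outputs reflect the known factorizations of n! + 1 for these n):

```python
from math import factorial

def prime_above(n):
    """Return a prime greater than n, via the n! + 1 argument."""
    q = factorial(n) + 1
    d = 2
    while d * d <= q:
        if q % d == 0:
            return d        # smallest prime factor of q; necessarily > n
        d += 1
    return q                # q itself is prime

for n in (5, 10, 15):
    p = prime_above(n)
    assert p > n
    print(n, "->", p)       # 5 -> 11, 10 -> 11, 15 -> 59
```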
Euler's proof
Another proof, by the Swiss mathematician Leonhard Euler, relies on the fundamental theorem of arithmetic: that every integer has a unique prime factorization. What Euler wrote (not with this modern notation and, unlike modern standards, not restricting the arguments in sums and products to any finite sets of integers) is equivalent to the statement that we have

\prod_{p \in P_k} \frac{1}{1 - 1/p} = \sum_{n \in N_k} \frac{1}{n},

where P_k denotes the set of the first k prime numbers, and N_k is the set of the positive integers whose prime factors are all in P_k.
To show this, one expands each factor in the product as a geometric series, and distributes the product over the sum (this is a special case of the Euler product formula for the Riemann zeta function).
In the penultimate sum, every product of primes appears exactly once, so the last equality is true by the fundamental theorem of arithmetic. In his first corollary to this result Euler denotes by a symbol similar to ∞ the "absolute infinity" and writes that the infinite sum in the statement equals the "value" log ∞, to which the infinite product is thus also equal (in modern terminology this is equivalent to saying that the partial sum up to x of the harmonic series diverges asymptotically like log x). Then in his second corollary, Euler notes that the product

\prod_{n \ge 2} \frac{n^2}{n^2 - 1}

converges to the finite value 2, and there are consequently more primes than squares. This proves Euclid's Theorem.
In the same paper (Theorem 19) Euler in fact used the above equality to prove a much stronger theorem that was unknown before him, namely that the series

\sum_{p \in P} \frac{1}{p}

is divergent, where P denotes the set of all prime numbers (Euler writes that the infinite sum equals log log ∞, which in modern terminology is equivalent to saying that the partial sum up to x of this series behaves asymptotically like log log x).
Erdős's proof
Paul Erdős gave a proof that also relies on the fundamental theorem of arithmetic. Every positive integer has a unique factorization into a square-free number r and a square number s^2. For example, 75,600 = 2^4 3^3 5^2 7 = 21 ⋅ 60^2.

Let N be a positive integer, and let k be the number of primes less than or equal to N. Call those primes p_1, …, p_k. Any positive integer n which is less than or equal to N can then be written in the form

n = (p_1^{e_1} p_2^{e_2} ⋯ p_k^{e_k})\, s^2,

where each e_i is either 0 or 1. There are 2^k ways of forming the square-free part of n. And s^2 can be at most N, so s ≤ √N. Thus, at most 2^k √N numbers can be written in this form. In other words,

N ≤ 2^k √N.

Or, rearranging, k, the number of primes less than or equal to N, is greater than or equal to (1/2) log_2 N. Since N was arbitrary, k can be as large as desired by choosing N appropriately.
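The bound π(N) ≥ (1/2) log_2 N is weak but completely explicit, and easy to verify with a sieve (an illustrative check, not part of the proof):

```python
from math import isqrt, log2

def prime_count(N):
    """pi(N) via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (N + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, isqrt(N) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, N + 1, i)))
    return sum(sieve)

for N in (10, 100, 10_000, 1_000_000):
    assert prime_count(N) >= log2(N) / 2
    print(N, prime_count(N), round(log2(N) / 2, 2))
# 10 4 1.66 | 100 25 3.32 | 10000 1229 6.64 | 1000000 78498 9.97
```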
Furstenberg's proof
In the 1950s, Hillel Furstenberg introduced a proof by contradiction using point-set topology.
Define a topology on the integers Z, called the evenly spaced integer topology, by declaring a subset U ⊆ Z to be an open set if and only if it is either the empty set, ∅, or it is a union of arithmetic sequences S(a, b) (for a ≠ 0), where

S(a, b) = \{ a n + b : n \in \mathbf{Z} \} = a\mathbf{Z} + b.

Then a contradiction follows from the property that a finite set of integers cannot be open and the property that the basis sets S(a, b) are both open and closed, since

\mathbf{Z} \setminus \{-1, +1\} = \bigcup_{p \text{ prime}} S(p, 0)

cannot be closed because its complement {−1, +1} is finite (and nonempty, hence not open), but it is closed if there are only finitely many primes, since it is then a finite union of closed sets.
Recent proofs
Proof using the inclusion-exclusion principle
Juan Pablo Pinasco has written the following proof.
Let p_1, ..., p_N be the smallest N primes. Then by the inclusion–exclusion principle, the number of positive integers less than or equal to x that are divisible by one of those primes is

(1) \quad \sum_i \left\lfloor \frac{x}{p_i} \right\rfloor - \sum_{i < j} \left\lfloor \frac{x}{p_i p_j} \right\rfloor + \sum_{i < j < k} \left\lfloor \frac{x}{p_i p_j p_k} \right\rfloor - \cdots \pm \left\lfloor \frac{x}{p_1 \cdots p_N} \right\rfloor.

Dividing by x and letting x → ∞ gives

(2) \quad \sum_i \frac{1}{p_i} - \sum_{i < j} \frac{1}{p_i p_j} + \cdots \pm \frac{1}{p_1 \cdots p_N}.

This can be written as

(3) \quad 1 - \prod_{i=1}^{N} \left( 1 - \frac{1}{p_i} \right).

If no other primes than p_1, ..., p_N exist, then the expression in (1) is equal to ⌊x⌋ − 1 and the expression in (2) is equal to 1, but clearly the expression in (3) is not equal to 1. Therefore, there must be more primes than p_1, ..., p_N.
Proof using Legendre's formula
In 2010, Junho Peter Whang published the following proof by contradiction. Let k be any positive integer. Then according to Legendre's formula (sometimes attributed to de Polignac)

k! = \prod_{p \text{ prime}} p^{f(p, k)},

where

f(p, k) = \left\lfloor \frac{k}{p} \right\rfloor + \left\lfloor \frac{k}{p^2} \right\rfloor + \cdots < \frac{k}{p - 1}.

But if only finitely many primes exist, then

\lim_{k \to \infty} \frac{\left( \prod_p p \right)^k}{k!} = 0

(the numerator of the fraction would grow singly exponentially in k, while by Stirling's approximation the denominator grows more quickly than singly exponentially),

contradicting the fact that for each k the numerator is greater than or equal to the denominator, since f(p, k) < k/(p − 1) ≤ k for every prime p.
Proof by construction
Filip Saidak gave the following proof by construction, which does not use reductio ad absurdum or Euclid's lemma (that if a prime p divides ab then it must divide a or b).
Since each natural number greater than 1 has at least one prime factor, and two successive numbers n and (n + 1) have no factor in common, the product n(n + 1) has more different prime factors than the number n itself. So the chain of pronic numbers: 1×2 = 2 {2}, 2×3 = 6 {2, 3}, 6×7 = 42 {2, 3, 7}, 42×43 = 1806 {2, 3, 7, 43}, 1806×1807 = 3263442 {2, 3, 7, 43, 13, 139}, · · · provides a sequence of unlimited growing sets of primes.
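The chain is straightforward to generate; each step multiplies the current number by its successor, and since consecutive integers are coprime the set of prime factors strictly grows (a short illustrative script, function name my own):

```python
def prime_factors(n):
    """Set of distinct prime factors of n, by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

n = 2
for _ in range(5):
    print(n, sorted(prime_factors(n)))
    n = n * (n + 1)        # n and n + 1 are coprime, so the factor set grows
# 2 [2] | 6 [2, 3] | 42 [2, 3, 7] | 1806 [2, 3, 7, 43] | 3263442 [2, 3, 7, 13, 43, 139]
```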
Proof using the incompressibility method
Suppose there were only k primes (p_1, ..., p_k). By the fundamental theorem of arithmetic, any positive integer n could then be represented as

n = p_1^{e_1} p_2^{e_2} \cdots p_k^{e_k},

where the non-negative integer exponents e_i together with the finite-sized list of primes are enough to reconstruct the number. Since p_i ≥ 2 for all i, it follows that e_i ≤ lg n for all i (where lg denotes the base-2 logarithm). This yields an encoding for n of the following size (using big O notation):

O(k \lg \lg n) bits.

This is a much more efficient encoding than representing n directly in binary, which takes N = O(\lg n) bits. An established result in lossless data compression states that one cannot generally compress N bits of information into fewer than N bits. The representation above violates this by far when n is large enough, since \lg \lg n = o(\lg n). Therefore, the number of primes must not be finite.
Proof using an even-odd argument
Romeo Meštrović used an even-odd argument to show that if the number of primes is not infinite then 3 is the largest prime, a contradiction.
Suppose that p_1 = 2 < p_2 = 3 < ⋯ < p_k are all the prime numbers. Consider N = p_2 p_3 ⋯ p_k (the product of all the odd primes) and note that by assumption all positive integers relatively prime to it are in the set {1, 2, 2^2, 2^3, …} of powers of 2. In particular, 2 is relatively prime to N and so is N − 2. However, this means that N − 2 is an odd number in the set {1, 2, 2^2, …}, so N − 2 = 1, or N = 3. This means that 3 must be the largest prime number, which is a contradiction.

The above proof continues to work if 2 is replaced by any prime p_j with 3 ≤ p_j ≤ p_k; the product N becomes the product of all primes other than p_j, and the even vs. odd argument is replaced with a divisible vs. not divisible by p_j argument. The resulting contradiction is that N − p_j must, simultaneously, equal 1 and be greater than 1, which is impossible.
Stronger results
The theorems in this section simultaneously imply Euclid's theorem and other results.
Dirichlet's theorem on arithmetic progressions
Dirichlet's theorem states that for any two positive coprime integers a and d, there are infinitely many primes of the form a + nd, where n is also a positive integer. In other words, there are infinitely many primes that are congruent to a modulo d.
Prime number theorem
Let π(x) be the prime-counting function that gives the number of primes less than or equal to x, for any real number x. The prime number theorem then states that x / log x is a good approximation to π(x), in the sense that the limit of the quotient of the two functions π(x) and x / log x as x increases without bound is 1:

\lim_{x \to \infty} \frac{\pi(x)}{x / \log x} = 1.

Using asymptotic notation this result can be restated as

\pi(x) \sim \frac{x}{\log x}.

This yields Euclid's theorem, since

\lim_{x \to \infty} \frac{x}{\log x} = \infty.
Bertrand–Chebyshev theorem
In number theory, Bertrand's postulate is a theorem stating that for any integer n > 1, there always exists at least one prime number p such that

n < p < 2n.

Equivalently, writing π(x) for the prime-counting function (the number of primes less than or equal to x), the theorem asserts that π(2n) − π(n) ≥ 1 for all n ≥ 1.

This statement was first conjectured in 1845 by Joseph Bertrand (1822–1900). Bertrand himself verified his statement for all numbers in the interval [2, 3 × 10^6].
His conjecture was completely proved by Chebyshev (1821–1894) in 1852 and so the postulate is also called the Bertrand–Chebyshev theorem or Chebyshev's theorem.
Notes
References
External links
Euclid's Elements, Book IX, Prop. 20 (Euclid's proof, on David Joyce's website at Clark University)
Articles containing proofs
Theorems about prime numbers | Euclid's theorem | Mathematics | 2,420 |
9,335,972 | https://en.wikipedia.org/wiki/Lipman%20Bers | Lipman Bers (Latvian: Lipmans Berss; May 22, 1914 – October 29, 1993) was a Latvian-American mathematician, born in Riga, who created the theory of pseudoanalytic functions and worked on Riemann surfaces and Kleinian groups. He was also known for his work in human rights activism.
Biography
Bers was born in Riga, then under the rule of the Russian Czars, and spent several years as a child in Saint Petersburg; his family returned to Riga in approximately 1919, by which time it was part of independent Latvia. In Riga, his mother was the principal of a Jewish elementary school, and his father became the principal of a Jewish high school, both of which Bers attended, with an interlude in Berlin while his mother, by then separated from his father, attended the Berlin Psychoanalytic Institute. After high school, Bers studied at the University of Zurich for a year, but had to return to Riga again because of the difficulty of transferring money from Latvia in the international financial crisis of the time. He continued his studies at the University of Riga, where he became active in socialist politics, including giving political speeches and working for an underground newspaper. In the aftermath of the Latvian coup in 1934 by right-wing leader Kārlis Ulmanis, Bers was targeted for arrest but fled the country, first to Estonia and then to Czechoslovakia.
Bers received his Ph.D. in 1938 from the University of Prague. He had begun his studies in Prague with Rudolf Carnap, but when Carnap moved to the US he switched to Charles Loewner, who would eventually become his thesis advisor. In Prague, he lived with an aunt, and married his wife Mary (née Kagan) whom he had met in elementary school and who had followed him from Riga. Having applied for postdoctoral studies in Paris, he was given a visa to go to France soon after the Munich Agreement, by which Nazi Germany annexed the Sudetenland. He and his wife Mary had a daughter in Paris. They were unable to obtain a visa there to emigrate to the US, as the Latvian quota had filled, so they escaped to the south of France ten days before the fall of Paris, and eventually obtained an emergency US visa in Marseilles, one of a group of 10,000 visas set aside for political refugees by Eleanor Roosevelt. The Bers family rejoined Bers' mother, who had by then moved to New York City and become a psychoanalyst, married to thespian Beno Tumarin. At this time, Bers worked for the YIVO Yiddish research agency.
Bers spent World War II teaching mathematics as a research associate at Brown University, where he was joined by Loewner. After the war, Bers found an assistant professorship at Syracuse University (1945–1951), before moving to New York University (1951–1964) and then Columbia University (1964–1982), where he became the Davies Professor of Mathematics, and where he chaired the mathematics department from 1972 to 1975. His move to NYU coincided with a move of his family to New Rochelle, New York, where he joined a small community of émigré mathematicians. He was a visiting scholar at the Institute for Advanced Study in 1949–51. He was a Vice-President (1963–65) and a President (1975–77) of the American Mathematical Society, chaired the Division of Mathematical Sciences of the United States National Research Council from 1969 to 1971, chaired the U.S. National Committee on Mathematics from 1977 to 1981, and chaired the Mathematics Section of the National Academy of Sciences from 1967 to 1970.
Late in his life, Bers suffered from Parkinson's disease and strokes. He died on October 29, 1993.
Mathematical research
Bers' doctoral work was on the subject of potential theory. While in Paris, he worked on Green's function and on integral representations. After first moving to the US, while working for YIVO, he researched Yiddish mathematics textbooks rather than pure mathematics.
At Brown, he began working on problems of fluid dynamics, and in particular on the two-dimensional subsonic flows associated with cross-sections of airfoils. At this time, he began his work with Abe Gelbart on what would eventually develop into the theory of pseudoanalytic functions. Through the 1940s and 1950s he continued to develop this theory, and to use it to study the planar elliptic partial differential equations associated with subsonic flows. Another of his major results in this time concerned the singularities of the partial differential equations defining minimal surfaces. Bers proved an extension of Riemann's theorem on removable singularities, showing that any isolated singularity of a pencil of minimal surfaces can be removed; he spoke on this result at the 1950 International Congress of Mathematicians and published it in Annals of Mathematics.
Later, beginning with his visit to the Institute for Advanced Study, Bers "began a ten-year odyssey that took him from pseudoanalytic functions and elliptic equations to quasiconformal mappings, Teichmüller theory, and Kleinian groups". With Lars Ahlfors, he solved the "moduli problem", of finding a holomorphic parameterization of the Teichmüller space, each point of which represents a compact Riemann surface of a given genus. During this period he also coined the popular phrasing of a question on eigenvalues of planar domains, "Can one hear the shape of a drum?", used as an article title by Mark Kac in 1966 and finally answered negatively in 1992 by an academic descendant of Bers. In the late 1950s, by way of adding a coda to his earlier work, Bers wrote several major retrospectives of flows, pseudoanalytic functions, fixed point methods, Riemann surface theory prior to his work on moduli, and the theory of several complex variables. In 1958, he presented his work on Riemann surfaces in a second talk at the International Congress of Mathematicians.
Bers' work on the parameterization of Teichmüller space led him in the 1960s to consider the boundary of the parameterized space, whose points corresponded to new types of Kleinian groups, eventually to be called singly-degenerate Kleinian groups. He applied Eichler cohomology, previously developed for applications in number theory and the theory of Lie groups, to Kleinian groups. He proved the Bers area inequality, an area bound for hyperbolic surfaces that became a two-dimensional precursor to William Thurston's work on geometrization of 3-manifolds and 3-manifold volume, and in this period Bers himself also studied the continuous symmetries of hyperbolic 3-space.
Quasi-Fuchsian groups may be mapped to a pair of Riemann surfaces by taking the quotient by the group of one of the two connected components of the complement of the group's limit set; fixing the image of one of these two maps leads to a subset of the space of Kleinian groups called a Bers slice. In 1970, Bers conjectured that the singly degenerate Kleinian surface groups can be found on the boundary of a Bers slice; this statement, known as the Bers density conjecture, was finally proven by Namazi, Souto, and Ohshika in 2010 and 2011. The Bers compactification of Teichmüller space also dates to this period.
Advising
Over the course of his career, Bers advised approximately 50 doctoral students, among them Enrico Arbarello, Irwin Kra, Linda Keen, Murray H. Protter, and Lesley Sibner. Approximately a third of Bers' doctoral students were women, a high proportion for mathematics. Having felt neglected by his own advisor, Bers met regularly for meals with his students and former students, maintained a keen interest in their personal lives as well as their professional accomplishments, and kept up a friendly competition with Lars Ahlfors over who could bring the larger number of academic descendants to mathematical gatherings.
Human rights activism
As a small child with his mother in Saint Petersburg, Bers had cheered the Russian Revolution and the rise of the Soviet Union, but by the late 1930s he had become disillusioned with communism after the assassination of Sergey Kirov and Stalin's ensuing purges. His son, Victor Bers, later said that "His experiences in Europe motivated his activism in the human rights movement," and Bers himself attributed his interest in human rights to the legacy of Menshevik leader Julius Martov. He founded the Committee on Human Rights of the National Academy of Sciences, and beginning in the 1970s worked to allow the emigration of dissident Soviet mathematicians including Yuri Shikhanovich, Leonid Plyushch, Valentin Turchin, and David and Gregory Chudnovsky. Within the U.S., he also opposed American military involvement in Vietnam and southeast Asia, and the maintenance of the U.S. nuclear arsenal during the Cold War.
Awards and honors
In 1961, Bers was elected a Fellow of the American Academy of Arts and Sciences, and in 1965 he became a Fellow of the American Association for the Advancement of Science. He joined the National Academy of Sciences in 1964. He was a member of the Finnish Academy of Sciences, and the American Philosophical Society. He received the AMS Leroy P. Steele Prize for mathematical exposition in 1975 for his paper "Uniformization, moduli, and Kleinian groups". In 1986, the New York Academy of Sciences gave him their Human Rights Award. In the early 1980s, the Association for Women in Mathematics held a symposium to honor Bers' accomplishments in mentoring women mathematicians.
Publications
Books
Bers, Lipman (1976), Calculus, Holt, Rinehart and Winston, (in collaboration with Frank Karal)
Selected articles
with Abe Gelbart:
with Shmuel Agmon:
with Leon Ehrenpreis:
References
External links
20th-century American mathematicians
20th-century Latvian mathematicians
Latvian emigrants to the United States
Scientists from Riga
Latvian Jews
New York University faculty
Columbia University faculty
Syracuse University faculty
Fellows of the American Academy of Arts and Sciences
Fellows of the American Association for the Advancement of Science
Institute for Advanced Study visiting scholars
Members of the United States National Academy of Sciences
Complex analysts
1914 births
1993 deaths
Presidents of the American Mathematical Society
People from New Rochelle, New York
Mathematical analysts
Mathematicians from New York (state) | Lipman Bers | Mathematics | 2,120 |
47,777,700 | https://en.wikipedia.org/wiki/Amphipols | Amphipols (a portmanteau of amphiphilic polymers) are a class of amphiphilic polymers designed to keep membrane proteins soluble in water without the need for detergents, which are traditionally used to this end but tend to be denaturing. Amphipols adsorb onto the hydrophobic transmembrane surface of membrane proteins thanks to their hydrophobic moieties and keep the complexes thus formed water-soluble thanks to the hydrophilic ones. Amphipol-trapped membrane proteins are, as a rule, much more stable than detergent-solubilized ones, which facilitates their study by most biochemical and biophysical approaches. Amphipols can be used to fold denatured membrane proteins to their native form and have proven particularly precious in the field of single-particle electron cryo-microscopy (cryo-EM). The properties and uses of amphipols and other non-conventional surfactants are the subject of a book by Jean-Luc Popot.
See also
Peptitergents - synthetic peptide sequences which can substitute to detergents to keep membrane proteins water-soluble.
Nanodisc - water-soluble protein-stabilized lipid discs that can trap and stabilize membrane proteins.
References
Surfactants | Amphipols | Chemistry | 266 |
73,141,922 | https://en.wikipedia.org/wiki/Pegunigalsidase%20alfa | Pegunigalsidase alfa, sold under the brand name Elfabrio, is an enzyme replacement therapy for the treatment of Fabry disease. It is a recombinant human α-galactosidase-A. It is a hydrolytic lysosomal neutral glycosphingolipid-specific enzyme.
The most common side effects are infusion-related reactions, hypersensitivity and asthenia.
Pegunigalsidase alfa was approved for medical use in both the European Union and the United States in May 2023.
Medical uses
Pegunigalsidase alfa is indicated for long-term enzyme replacement therapy in adults with a confirmed diagnosis of Fabry disease (deficiency of alpha-galactosidase).
Society and culture
Legal status
On 23 February 2023, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Elfabrio, intended for the treatment of Fabry disease. The applicant for this medicinal product is Chiesi Farmaceutici S.p.A. Elfabrio was approved for medical use in the European Union in May 2023.
References
Further reading
Drugs acting on the gastrointestinal system and metabolism
Orphan drugs
Recombinant proteins
Medical treatments | Pegunigalsidase alfa | Biology | 280 |
404,181 | https://en.wikipedia.org/wiki/Closed%20and%20exact%20differential%20forms | In mathematics, especially vector calculus and differential topology, a closed form is a differential form α whose exterior derivative is zero (dα = 0), and an exact form is a differential form, α, that is the exterior derivative of another differential form β (α = dβ). Thus, an exact form is in the image of d, and a closed form is in the kernel of d.
For an exact form α, α = dβ for some differential form β of degree one less than that of α. The form β is called a "potential form" or "primitive" for α. Since the exterior derivative of a closed form is zero, β is not unique, but can be modified by the addition of any closed form of degree one less than that of α.
Because d^2 = 0, every exact form is necessarily closed. The question of whether every closed form is exact depends on the topology of the domain of interest. On a contractible domain, every closed form is exact by the Poincaré lemma. More general questions of this kind on an arbitrary differentiable manifold are the subject of de Rham cohomology, which allows one to obtain purely topological information using differential methods.
Examples
A simple example of a form that is closed but not exact is the 1-form dθ given by the derivative of the argument θ on the punctured plane R^2 ∖ {0}. Since θ is not actually a function (see the next paragraph), dθ is not an exact form. Still, dθ has vanishing derivative and is therefore closed.
Note that the argument θ is only defined up to an integer multiple of 2π, since a single point p can be assigned different arguments θ, θ + 2π, etc. We can assign arguments in a locally consistent manner around p, but not in a globally consistent manner. This is because if we trace a loop from p counterclockwise around the origin and back to p, the argument increases by 2π. Generally, the argument changes by

\oint d\theta

over a counter-clockwise oriented loop.
Even though the argument θ is not technically a function, the different local definitions of θ at a point p differ from one another by constants. Since the derivative at p only uses local data, and since functions that differ by a constant have the same derivative, the argument has a globally well-defined derivative dθ.
The upshot is that dθ is a one-form on R^2 ∖ {0} that is not actually the derivative of any well-defined function θ. We say that dθ is not exact. Explicitly, dθ is given as:

d\theta = \frac{-y\, dx + x\, dy}{x^2 + y^2},

which by inspection has derivative zero. Because dθ has vanishing derivative, we say that it is closed.
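The failure of exactness can be seen numerically: the integral of dθ around a loop enclosing the origin is 2π, not 0, whereas the integral of any exact form around a closed loop vanishes. A minimal check (my own sketch; the parameterization and step count are arbitrary choices):

```python
from math import cos, sin, pi

# Integrate dtheta = (-y dx + x dy) / (x^2 + y^2) once around the unit circle,
# parameterized as (cos t, sin t) for t in [0, 2*pi], by a Riemann sum.
N = 100_000
dt = 2 * pi / N
total = 0.0
for k in range(N):
    t = k * dt
    x, y = cos(t), sin(t)
    dx_dt, dy_dt = -sin(t), cos(t)
    total += (-y * dx_dt + x * dy_dt) / (x * x + y * y) * dt
print(total)   # ~6.283185 = 2*pi, nonzero, so dtheta cannot be exact
```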
On the other hand, for the one-form
.
Thus is not even closed, never mind exact.
The form dθ generates the de Rham cohomology group H^1_{dR}(R^2 ∖ {0}) ≅ R, meaning that any closed form ω is the sum of an exact form df and a multiple of dθ: ω = df + k dθ, where k = (1/2π) \oint ω accounts for a non-trivial contour integral around the origin, which is the only obstruction to a closed form on the punctured plane (locally the derivative of a potential function) being the derivative of a globally defined function.
Examples in low dimensions
Differential forms in R^2 and R^3 were well known in the mathematical physics of the nineteenth century. In the plane, 0-forms are just functions, and 2-forms are functions times the basic area element dx ∧ dy, so that it is the 1-forms

\sigma = A\, dx + B\, dy

that are of real interest. The formula for the exterior derivative d here is

d\sigma = (B_x - A_y)\, dx \wedge dy,

where the subscripts denote partial derivatives. Therefore the condition for σ to be closed is

A_y = B_x.

In this case if h(x, y) is a function then

dh = h_x\, dx + h_y\, dy.

The implication from 'exact' to 'closed' is then a consequence of the symmetry of second derivatives, with respect to x and y.
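These conditions are mechanical to check with a computer algebra system. A small sketch using sympy (an assumed dependency; the helper name is my own):

```python
import sympy as sp

x, y = sp.symbols("x y")

def is_closed(A, B):
    """d(A dx + B dy) = (B_x - A_y) dx ^ dy, so closed means B_x = A_y."""
    return sp.simplify(sp.diff(B, x) - sp.diff(A, y)) == 0

# An exact form dh = h_x dx + h_y dy is automatically closed,
# by the symmetry of second derivatives:
h = x**2 * sp.sin(y)
print(is_closed(sp.diff(h, x), sp.diff(h, y)))   # True

# A form that is not closed (so certainly not exact): -y dx
print(is_closed(-y, sp.Integer(0)))              # False: B_x - A_y = 1
```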
The gradient theorem asserts that a 1-form is exact if and only if the line integral of the form depends only on the endpoints of the curve, or equivalently,
if the integral around any smooth closed curve is zero.
Vector field analogies
On a Riemannian manifold, or more generally a pseudo-Riemannian manifold, k-forms correspond to k-vector fields (by duality via the metric), so there is a notion of a vector field corresponding to a closed or exact form.
In 3 dimensions, an exact vector field (thought of as a 1-form) is called a conservative vector field, meaning that it is the derivative (gradient) of a 0-form (smooth scalar field), called the scalar potential. A closed vector field (thought of as a 1-form) is one whose derivative (curl) vanishes, and is called an irrotational vector field.
Thinking of a vector field as a 2-form instead, a closed vector field is one whose derivative (divergence) vanishes, and is called an incompressible flow (sometimes solenoidal vector field). The term incompressible is used because a non-zero divergence corresponds to the presence of sources and sinks in analogy with a fluid.
The concepts of conservative and incompressible vector fields generalize to n dimensions, because gradient and divergence generalize to n dimensions; curl is defined only in three dimensions, thus the concept of irrotational vector field does not generalize in this way.
Poincaré lemma
The Poincaré lemma states that if B is an open ball in R^n, any closed p-form ω defined on B is exact, for any integer p with 1 ≤ p ≤ n.
More generally, the lemma states that on a contractible open subset of a manifold (e.g., R^n), a closed p-form, p > 0, is exact.
Formulation as cohomology
When the difference of two closed forms is an exact form, they are said to be cohomologous to each other. That is, if ζ and η are closed forms, and one can find some β such that

ζ − η = dβ,
then one says that ζ and η are cohomologous to each other. Exact forms are sometimes said to be cohomologous to zero. The set of all forms cohomologous to a given form (and thus to each other) is called a de Rham cohomology class; the general study of such classes is known as cohomology. It makes no real sense to ask whether a 0-form (smooth function) is exact, since d increases degree by 1; but the clues from topology suggest that only the zero function should be called "exact". The cohomology classes are identified with locally constant functions.
Using contracting homotopies similar to the one used in the proof of the Poincaré lemma, it can be shown that de Rham cohomology is homotopy-invariant.
Relevance to thermodynamics
Consider a thermodynamic system whose equilibrium states are specified by thermodynamic variables (such as pressure, volume, and temperature). The first law of thermodynamics can be stated as follows: In any process that results in an infinitesimal change of state where the internal energy of the system changes by an amount dU and an amount of work δW is done on the system, one must also supply an amount of heat

δQ = dU − δW.

The second law of thermodynamics is an empirical law of nature which implies that the differential form δQ is not closed. Caratheodory's theorem further states that there exists an integrating denominator T such that

dS = δQ / T

is a closed 1-form. The integrating denominator T is the temperature, and the state function S is the equilibrium entropy.
Application in electrodynamics
In electrodynamics, the case of the magnetic field B produced by a stationary electrical current is important. There one deals with the vector potential A of this field. This case corresponds to k = 2, and the defining region is the full R^3. The current-density vector is j. It corresponds to the current two-form

\mathbf{I} := j_1\, dx_2 \wedge dx_3 + j_2\, dx_3 \wedge dx_1 + j_3\, dx_1 \wedge dx_2.

For the magnetic field B one has analogous results: it corresponds to the induction two-form

\Phi_B := B_1\, dx_2 \wedge dx_3 + B_2\, dx_3 \wedge dx_1 + B_3\, dx_1 \wedge dx_2,

and can be derived from the vector potential A, or the corresponding one-form

\mathbf{A} = A_1\, dx_1 + A_2\, dx_2 + A_3\, dx_3, \qquad \Phi_B = d\mathbf{A}.

Thereby the vector potential A corresponds to the potential one-form \mathbf{A}.
The closedness of the magnetic-induction two-form corresponds to the property of the magnetic field that it is source-free, div B = 0: i.e., that there are no magnetic monopoles.
In a special gauge, div A = 0, this implies, for i = 1, 2, 3,

A_i(\mathbf{r}) = \frac{\mu_0}{4\pi} \int \frac{j_i(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d^3 r'.

(Here \mu_0 is the magnetic constant.)
This equation is remarkable, because it corresponds completely to a well-known formula for the electrical field E, namely for the electrostatic Coulomb potential φ of a charge density ρ. At this place one can already guess that E and B, φ and A, and ρ and j can be unified to quantities with six resp. four nontrivial components, which is the basis of the relativistic invariance of the Maxwell equations.
If the condition of stationarity is dropped, then on the left-hand side of the above-mentioned equation one must add, in the equations for A, the time t as a fourth variable to the three space coordinates, whereas on the right-hand side the so-called "retarded time" must be used, i.e. it is added to the argument of the current density. Finally, as before, one integrates over the three primed space coordinates. (As usual c is the vacuum velocity of light.)
Notes
Citations
References
Differential forms
Lemmas in analysis | Closed and exact differential forms | Mathematics,Engineering | 1,848 |
1,575,643 | https://en.wikipedia.org/wiki/Mass%20flow%20meter | A mass flow meter, also known as an inertial flow meter, is a device that measures mass flow rate of a fluid traveling through a tube. The mass flow rate is the mass of the fluid traveling past a fixed point per unit time.
The mass flow meter does not measure the volume per unit time (e.g. cubic meters per second) passing through the device; it measures the mass per unit time (e.g. kilograms per second) flowing through the device. Volumetric flow rate is the mass flow rate divided by the fluid density. If the density is constant, then the relationship is simple. If the fluid has varying density, then the relationship is not simple. For example, the density of the fluid may change with temperature, pressure, or composition. The fluid may also be a combination of phases such as a fluid with entrained bubbles. The actual density can be determined, for example, from the dependence of the speed of sound on the concentration of the liquid being measured.
Operating principle of a Coriolis flow meter
The Coriolis flow meter is based on the Coriolis force, which bends rotating objects depending on their velocity.
There are two basic configurations of Coriolis flow meter: the curved tube flow meter and the straight tube flow meter. This article discusses the curved tube design.
The animations on the right do not represent an actually existing Coriolis flow meter design. The purpose of the animations is to illustrate the operating principle, and to show the connection with rotation.
Fluid is being pumped through the mass flow meter. When there is mass flow, the tube twists slightly. The arm through which fluid flows away from the axis of rotation must exert a force on the fluid, to increase its angular momentum, so it bends backwards. The arm through which fluid is pushed back to the axis of rotation must exert a force on the fluid to decrease the fluid's angular momentum again, hence that arm will bend forward. In other words, the inlet arm (containing an outwards directed flow), is lagging behind the overall rotation, the part which in rest is parallel to the axis is now skewed, and the outlet arm (containing an inwards directed flow) leads the overall rotation.
The animation on the right represents how curved tube mass flow meters are designed. The fluid is led through two parallel tubes. An actuator (not shown) induces equal counter vibrations on the sections parallel to the axis, to make the measuring device less sensitive to outside vibrations. The actual frequency of the vibration depends on the size of the mass flow meter, and ranges from 80 to 1000 Hz. The amplitude of the vibration is too small to be seen, but it can be felt by touch.
When no fluid is flowing, the motion of the two tubes is symmetrical, as shown in the left animation. The animation on the right illustrates what happens during mass flow: some twisting of the tubes. The arm carrying the flow away from the axis of rotation must exert a force on the fluid to accelerate the flowing mass to the vibrating speed of the tubes at the outside (increase of absolute angular momentum), so it is lagging behind the overall vibration. The arm through which fluid is pushed back towards the axis of movement must exert a force on the fluid to decrease the fluid's absolute angular speed (angular momentum) again, hence that arm leads the overall vibration.
The inlet arm and the outlet arm vibrate with the same frequency as the overall vibration, but when there is mass flow the two vibrations are out of sync: the inlet arm is behind, the outlet arm is ahead. The two vibrations are shifted in phase with respect to each other, and the degree of phase-shift is a measure for the amount of mass that is flowing through the tubes and line.
Density and volume measurements
The mass flow of a U-shaped Coriolis flow meter is given as:

Q_m = \frac{(K_u - I_u \omega^2)\, \tau}{2 K d^2},

where K_u is the temperature-dependent stiffness of the tube, K is a shape-dependent factor, d is the width, τ is the time lag, ω is the vibration frequency, and I_u is the inertia of the tube. As the inertia of the tube depends on its contents, knowledge of the fluid density is needed for the calculation of an accurate mass flow rate.
If the density changes too often for manual calibration to be sufficient, the Coriolis flow meter can be adapted to measure the density as well. The natural vibration frequency of the flow tubes depends on the combined mass of the tube and the fluid contained in it. By setting the tube in motion and measuring the natural frequency, the mass of the fluid contained in the tube can be deduced. Dividing the mass on the known volume of the tube gives us the density of the fluid.
An instantaneous density measurement allows the calculation of flow in volume per time by dividing mass flow with density.
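Putting the pieces together, a toy calculation (purely illustrative: every constant below is made up, and real meters are calibrated against the actual tube geometry) shows how the phase lag gives mass flow, the natural frequency gives density, and their combination gives volumetric flow:

```python
from math import pi

# Hypothetical meter constants (not from any datasheet).
K_u = 1.0e6      # temperature-corrected tube stiffness
K   = 0.8        # shape-dependent factor
d   = 0.05       # tube width, m
I_u = 0.02       # tube inertia, assumed known from calibration

def mass_flow(tau, omega):
    """Q_m = (K_u - I_u * omega**2) * tau / (2 * K * d**2)."""
    return (K_u - I_u * omega ** 2) * tau / (2 * K * d ** 2)

def fluid_density(f_measured, k_spring=1.0e6, m_tube=0.5, tube_volume=2.0e-4):
    """Model the tube as a spring-mass oscillator: f = (1/2pi) sqrt(k / m_total),
    so the measured frequency yields the fluid mass, hence its density."""
    m_total = k_spring / (2 * pi * f_measured) ** 2
    return (m_total - m_tube) / tube_volume

f = 180.0                                        # measured vibration frequency, Hz
rho = fluid_density(f)                           # ~1410 kg/m^3
q_m = mass_flow(tau=4.0e-9, omega=2 * pi * f)    # ~0.97 kg/s
print(f"density {rho:.0f} kg/m^3, mass flow {q_m:.3f} kg/s, "
      f"volume flow {q_m / rho * 1000:.3f} L/s")
```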
Calibration
Both mass flow and density measurements depend on the vibration of the tube. Calibration is affected by changes in the rigidity of the flow tubes.
Changes in temperature and pressure will cause the tube rigidity to change, but these can be compensated for through pressure and temperature zero and span compensation factors.
Additional effects on tube rigidity will cause shifts in the calibration factor over time due to degradation of the flow tubes. These effects include pitting, cracking, coating, erosion or corrosion. It is not possible to compensate for these changes dynamically, but efforts to monitor the effects may be made through regular meter calibration or verification checks. If a change is deemed to have occurred, but is considered to be acceptable, the offset may be added to the existing calibration factor to ensure continued accurate measurement.
See also
Coriolis effect
Flow measurement
Gaspard-Gustave Coriolis
Oscillating U-tube
References
External links
Lecture slides on flow measurement, University of Minnesota
Classical mechanics
Flow meters
Mass | Mass flow meter | Physics,Chemistry,Mathematics,Technology,Engineering | 1,191 |
4,743,663 | https://en.wikipedia.org/wiki/Comparison%20of%20iPod%20file%20managers | This is a list of iPod file managers, i.e. software that permits the transferring of media files. In the case of iPod file managers, this takes place between an iPod and a computer or vice versa.
iTunes is the official iPod managing software, but third parties have created alternatives to work around restrictions in the program, or for those avoiding known issues with iTunes.
General
Media organization and transfer features
iPod syncing and maintenance features
iPhone & iPod Touch compatibility
See also
iPod
iTunes
iPhone
References
iPod Managers
ITunes | Comparison of iPod file managers | Technology | 102 |
22,205,229 | https://en.wikipedia.org/wiki/Field%20of%20Streams | The Field of Streams is a patch of sky where several stellar streams are visible and crisscross.
It was discovered by Vasily Belokurov and Daniel Zucker's team in 2006 by analyzing the Sloan Digital Sky Survey II (SDSS-II) data. The team named the area Field of Streams because of so many crisscrossing trails of stars.
The Sagittarius Stream of the Sagittarius Dwarf Elliptical Galaxy (SagDEG) dominates the Field. It has a split trail within the area of the Field of Streams, because SagDEG has wrapped around the Milky Way Galaxy multiple times, which has resulted in overlapping trails. The forking of the trail has made it possible to infer the organization of dark matter in the inner halo of the Milky Way Galaxy, resulting in the determination that it is distributed in a round spherical manner, as opposed to the expected flattened spheroid. The shape of the streams also implies that the dark matter is very cold: the trails are thin and have persisted over time.
Also appearing in the Field is the Monoceros Ring, which was discovered before the Field.
See also
Stellar stream
List of stellar streams
References
Sources
The Astrophysical Journal, Volume 658, Issue 1, pp. 337–344. "An Orphan in the Field of Streams"; 2007 March; Belokurov, V. et al.
The Astrophysical Journal, Volume 642, Issue 2, pp. L137–L140. "The Field of Streams: Sagittarius and Its Siblings"; 2006 May; Belokurov, V. et al.
Monthly Notices of the Royal Astronomical Society, Volume 375, Issue 4, pp. 1171–1179. "Is Ursa Major II the progenitor of the Orphan Stream?"; M. Fellhauer et al.
Science News, "Galactic de Gustibus: Milky Way's snacks shed light on dark matter and galaxy growth", Ron Cowen, 2006 July 1
Christian Science Monitor, "Big galaxies, it seems, eat little ones", Robert C. Cowen, 2006 May 18 (accessed 2009 March 29)
New Scientist, "Our galaxy's halo is round not squashed", Maggie McKee, 2006 May 9 (accessed 2009 March 29)
Milky Way
Stellar streams
Sky regions
Astronomical objects discovered in 2006 | Field of Streams | Astronomy | 490 |
406,922 | https://en.wikipedia.org/wiki/Complete%20Fermi%E2%80%93Dirac%20integral | In mathematics, the complete Fermi–Dirac integral, named after Enrico Fermi and Paul Dirac, for an index j is defined by

F_j(x) = \frac{1}{\Gamma(j+1)} \int_0^\infty \frac{t^j}{e^{t-x} + 1}\, dt, \qquad j > -1.

This equals

F_j(x) = -\operatorname{Li}_{j+1}(-e^x),

where \operatorname{Li}_s(z) is the polylogarithm.
Its derivative is

\frac{d F_j(x)}{dx} = F_{j-1}(x),

and this derivative relationship is used to define the Fermi–Dirac integral for nonpositive indices j. Differing notation for F_j appears in the literature, for instance some authors omit the factor 1/\Gamma(j+1). The definition used here matches that in the NIST DLMF.
Special values
The closed form of the function exists for j = 0:

F_0(x) = \ln(1 + e^x).

For x = 0, the result reduces to

F_j(0) = \eta(j+1),

where \eta(s) is the Dirichlet eta function.
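Both identities are easy to confirm numerically from the defining integral (a self-contained sketch; the truncation point T and step count n are arbitrary accuracy knobs):

```python
from math import exp, log, pi, gamma

def fermi_dirac(j, x, T=60.0, n=200_000):
    """F_j(x) = (1/Gamma(j+1)) * integral_0^infinity t^j / (exp(t-x) + 1) dt,
    approximated by the midpoint rule on [0, T] (the tail beyond T is tiny)."""
    h = T / n
    s = sum(((k + 0.5) * h) ** j / (exp((k + 0.5) * h - x) + 1) for k in range(n))
    return s * h / gamma(j + 1)

# Closed form for j = 0: F_0(x) = ln(1 + e^x)
for x in (-1.0, 0.0, 2.0):
    print(x, fermi_dirac(0, x), log(1 + exp(x)))

# At x = 0 and j = 1: F_1(0) = eta(2) = pi^2 / 12
print(fermi_dirac(1, 0.0), pi ** 2 / 12)
```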
See also
Incomplete Fermi–Dirac integral
Gamma function
Polylogarithm
References
External links
GNU Scientific Library - Reference Manual
Fermi-Dirac integral calculator for iPhone/iPad
Notes on Fermi-Dirac Integrals
Section in NIST Digital Library of Mathematical Functions
npplus: Python package that provides (among others) Fermi-Dirac integrals and inverses for several common orders.
Wolfram's MathWorld: Definition given by Wolfram's MathWorld.
Special functions | Complete Fermi–Dirac integral | Mathematics | 233 |
30,979,615 | https://en.wikipedia.org/wiki/Ahlfors%20finiteness%20theorem | In the mathematical theory of Kleinian groups, the Ahlfors finiteness theorem describes the quotient of the domain of discontinuity by a finitely generated Kleinian group. The theorem was proved by Lars Ahlfors in 1964, apart from a gap that was filled by Leon Greenberg in 1967.
The Ahlfors finiteness theorem states that if Γ is a finitely-generated Kleinian group with region of discontinuity Ω, then
Ω/Γ has a finite number of components, each of which is a compact Riemann surface with a finite number of points removed.
Bers area inequality
The Bers area inequality is a quantitative refinement of the Ahlfors finiteness theorem proved by . It states that if Γ is a non-elementary finitely-generated Kleinian group with N generators and with region of discontinuity Ω, then
Area(Ω/Γ) ≤ 4π(N − 1),
with equality only for Schottky groups. (The area is given by the Poincaré metric in each component.)
Moreover, if Ω1 is an invariant component then
Area(Ω/Γ) ≤ 2Area(Ω1/Γ)
with equality only for Fuchsian groups of the first kind (so in particular there can be at most two invariant components).
References
Discrete groups
Lie groups
Kleinian groups
Theorems in analysis | Ahlfors finiteness theorem | Mathematics | 265 |
60,630,955 | https://en.wikipedia.org/wiki/Annie%20Birch%20House | The Annie Birch House, near Hoytsville, Utah, was built in 1875. It was listed on the National Register of Historic Places in 1984.
It is located at approximately 900 S. West Hoytsville Rd., off I-80.
It is a vernacular Pair House.
References
Pair-houses
National Register of Historic Places in Summit County, Utah
Houses completed in 1875 | Annie Birch House | Engineering | 77 |
44,024,971 | https://en.wikipedia.org/wiki/Matja%C5%BE%20Perc | Matjaž Perc is Professor of Physics at the University of Maribor in Slovenia, and director of the Complex Systems Center Maribor. He is a member of Academia Europaea and among the top 1% most cited physicists according to Thomson Reuters Highly Cited Researchers. He is an Outstanding Referee of the Physical Review and Physical Review Letters journals, and a Distinguished Referee of EPL. He received the Young Scientist Award for Socio- and Econophysics in 2015. His research has been widely reported in the media and professional literature.
Biography
Matjaž Perc studied physics at the University of Maribor. He completed his doctoral thesis on noise-induced pattern formation in spatially extended systems with applications to the nervous system, game-theoretical models, and social complexity. In 2009 he received the Zois Certificate of Recognition for outstanding research achievements in theoretical physics. In 2010 he became head of the Institute of Physics at the University of Maribor, and in 2011 he became full Professor of Physics. In 2015, Matjaž Perc established the Complex Systems Center Maribor. His research on complex systems covers evolutionary game theory, agent-based modeling, data analysis, and network science.
Research
Matjaž Perc is an expert on the theory of cooperation on networks. He has applied Monte Carlo simulations and dynamical mean field theory to discover that stochastic perturbations resolve social dilemmas in a resonance-like manner. He has also pioneered self-organization as a way of stabilizing reward and punishment in structured populations, and he has proposed the introduction of discrete strategies in ultimatum games, which has contributed to the understanding of the complexity behind human bargaining. His research has helped to reveal the full potential of methods of non-equilibrium statistical physics in evolutionary game theory.
He has done research on the evolution of moral and double moral standards, the evolution of the most common English words and phrases, and the rise and fall of new words. He has discovered self-organization in the way major scientific ideas propagate across the physics literature, which culminated in a simple mathematical regularity that can identify scientific memes.
In addition to his various original contributions, Matjaž Perc has provided the research community with several reviews and introductory articles on evolutionary games, the emergence of organized crime, collective phenomena in socio-economic systems, energy-saving mechanisms in nature, and the Matthew effect. By the number of citations, he is the most cited author of several physics journals including New Journal of Physics, Biosystems, Journal of the Royal Society Interface, Physical Review E.
Awards
2018 received the Zois Award (named after Sigmund Zois).
2017 received USERN Prize in social sciences, for "Transitions Towards Cooperation in Human Societies".
2009 received the Zois Certificate of Recognition. Aged only 30 at the time, he was the youngest-ever recipient of the Zois certificate.
Publications
For a full list see Matjaž Perc's ORCID page.
Spatial coherence resonance in excitable media, Matjaž Perc, Phys. Rev. E 72, 016207 (2005)
Coherence resonance in a spatial prisoner's dilemma game, Matjaž Perc, New J. Phys. 8, 22 (2006)
Transition from Gaussian to Lévy distributions of stochastic payoff variations in the spatial prisoner's dilemma game, Matjaž Perc, Phys. Rev. E 75, 022101 (2007)
Noise-guided evolution within cyclical interactions, Matjaž Perc and Attila Szolnoki, New J. Phys. 9, 267 (2007)
Social diversity and promotion of cooperation in the spatial prisoner's dilemma game, Matjaž Perc and Attila Szolnoki, Phys. Rev. E 77, 011904 (2008)
Making new connections towards cooperation in the prisoner's dilemma game, Attila Szolnoki, Matjaž Perc and Zsuzsa Danku, EPL 84, 50007 (2008)
Topology-independent impact of noise on cooperation in spatial public goods games, Attila Szolnoki, Matjaž Perc and György Szabó, Phys. Rev. E 80, 056109 (2009)
Resolving social dilemmas on evolving random networks, Attila Szolnoki and Matjaž Perc, EPL 86, 30007 (2009)
Coevolutionary games - A mini review, Matjaž Perc and Attila Szolnoki, BioSystems 99, 109-125 (2010)
Does strong heterogeneity promote cooperation by group interactions?, Matjaž Perc, New J. Phys. 13, 123027 (2011)
Defense mechanisms of empathetic players in the spatial ultimatum game, Attila Szolnoki, Matjaž Perc and György Szabó, Phys. Rev. Lett. 109, 078701 (2012)
Evolution of the most common English words and phrases over the centuries, Matjaž Perc, J. R. Soc. Interface 9, 3323-3328 (2012) (tables containing the most common English words and phrases are here)
Self-organization of punishment in structured populations, Matjaž Perc and Attila Szolnoki, New J. Phys. 14, 043013 (2012)
Evolutionary advantages of adaptive rewarding, Attila Szolnoki and Matjaž Perc, New J. Phys. 14, 093016 (2012)
Correlation of positive and negative reciprocity fails to confer an evolutionary advantage: Phase transitions to elementary strategies, Attila Szolnoki and Matjaž Perc, Phys. Rev. X 3, 041021 (2013)
Evolutionary dynamics of group interactions on structured populations: A review, Matjaž Perc, Jesús Gómez Gardeñes, Attila Szolnoki, Luis M. Floría and Yamir Moreno, J. R. Soc. Interface 10, 20120997 (2013)
Inheritance patterns in citation networks reveal scientific memes, Tobias Kuhn, Matjaž Perc and Dirk Helbing, Phys. Rev. X 4, 041036 (2014) (see also the Physics Focus story here)
Interdependent network reciprocity in evolutionary games, Zhen Wang, Attila Szolnoki and Matjaž Perc, Scientific Reports 3, 1183 (2013)
Self-organization of progress across the century of physics, Matjaž Perc, Scientific Reports 3, 1720 (2013) (the n-gram viewer for publications of the American Physical Society is here)
Antisocial pool rewarding does not deter public cooperation, Attila Szolnoki and Matjaž Perc, Proc. R. Soc. B 282, 20151975 (2015)
The Matthew effect in empirical data, Matjaž Perc, J. R. Soc. Interface 11, 20140378 (2014)
Saving human lives: What complexity science and information systems can contribute, Dirk Helbing, Dirk Brockmann, Thomas Chadefaux, Karsten Donnay, Ulf Blanke, Olivia Woolley-Meza, Mehdi Moussaid, Anders Johansson, Jens Krause, Sebastian Schutte and Matjaž Perc, J. Stat. Phys. 158, 735-781 (2015)
Statistical physics of crime: A review, Maria R. D'Orsogna and Matjaž Perc, Phys. Life Rev. 12, 1-21 (2015)
Editorial work
Matjaž Perc is editorial board member at Physical Review E, New Journal of Physics, EPL, European Physical Journal B, Advances in Complex Systems, Frontiers in Interdisciplinary Physics, International Journal of Bifurcation and Chaos, PLOS ONE, Scientific Reports, Royal Society Open Science, Chaos, Solitons & Fractals, and Applied Mathematics and Computation. He was also guest editor for the Proceedings of the National Academy of Sciences of the United States of America.
References and notes
1979 births
21st-century physicists
Slovenian physicists
Living people
Academic staff of the University of Maribor
Statistical physicists
Fellows of the American Physical Society | Matjaž Perc | Physics | 1,698 |
9,733,327 | https://en.wikipedia.org/wiki/Blackwell%20channel | The Blackwell channel is a deterministic broadcast channel model used in coding theory and information theory. It was first proposed by mathematician David Blackwell. In this model, a transmitter transmits one of three symbols to two receivers. For two of the symbols, both receivers receive exactly what was sent; the third symbol, however, is received differently at each of the receivers. This is one of the simplest examples of a non-trivial capacity result for a non-stochastic channel.
Definition
The Blackwell channel is composed of one input (transmitter) and two outputs (receivers). The channel input is ternary (three symbols) and is selected from {0, 1, 2}. This symbol is broadcast to the receivers; that is, the transmitter sends one symbol simultaneously to both receivers. Each of the channel outputs is binary (two symbols), labeled {0, 1}.
Whenever a 0 is sent, both outputs receive a 0. Whenever a 1 is sent, both outputs receive a 1. When a 2 is sent, however, the first output is 0 and the second output is 1. Each receiver therefore confuses the symbol 2 with a different symbol: the first with 0, the second with 1.
The operation of the channel is memoryless and completely deterministic.
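As a concrete illustration, the channel law can be written out as a three-entry deterministic map; this is our own minimal sketch, not code from any standard library:

```python
# Our own sketch of the Blackwell channel: one ternary input,
# two deterministic binary outputs.
def blackwell_channel(x):
    """Map the input symbol x in {0, 1, 2} to (output 1, output 2)."""
    if x == 0:
        return (0, 0)  # both receivers see 0
    if x == 1:
        return (1, 1)  # both receivers see 1
    if x == 2:
        return (0, 1)  # receiver 1 sees 0, receiver 2 sees 1
    raise ValueError("input symbol must be 0, 1 or 2")

print([blackwell_channel(x) for x in (0, 1, 2)])
# [(0, 0), (1, 1), (0, 1)]
```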
Capacity of the Blackwell channel
The capacity of the channel was found by S. I. Gel'fand. It is defined by the region:
1. R1 = 1, 0 ≤ R2 ≤ 1/2
2. R1 = H(a), R2 = 1 − a, for 1/3 ≤ a ≤ 1/2
3. R1 + R2 = log2 3, 2/3 ≤ R1 ≤ log2 3 − 2/3
4. R1 = 1 − a, R2 = H(a), for 1/3 ≤ a ≤ 1/2
5. 0 ≤ R1 ≤ 1/2, R2 = 1

Here H(a) = −a log2 a − (1 − a) log2(1 − a) is the binary entropy function.
A solution was also found by Pinsker et al. (1995).
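The curved portions of this boundary are easy to tabulate. The sketch below is our own illustration (the segment parametrisations are taken from the list above):

```python
# Our own sketch: sample points on the capacity boundary of the
# Blackwell channel from segments 2 and 3 listed above.
import math

def H(a):
    # binary entropy function in bits
    return -a * math.log2(a) - (1 - a) * math.log2(1 - a)

points = []
# Segment 2: R1 = H(a), R2 = 1 - a, for 1/3 <= a <= 1/2.
for i in range(6):
    a = 1/3 + (1/2 - 1/3) * i / 5
    points.append((H(a), 1 - a))
# Segment 3: R1 + R2 = log2(3), for 2/3 <= R1 <= log2(3) - 2/3.
for i in range(6):
    r1 = 2/3 + (math.log2(3) - 4/3) * i / 5
    points.append((r1, math.log2(3) - r1))

for r1, r2 in sorted(points):
    print(f"R1 = {r1:.3f}, R2 = {r2:.3f}")
```

The two segments meet at (R1, R2) = (log2 3 − 2/3, 2/3) ≈ (0.918, 0.667), since H(1/3) = log2 3 − 2/3.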
References
Coding theory | Blackwell channel | Mathematics | 387 |
2,670,705 | https://en.wikipedia.org/wiki/Lambda2%20Sculptoris |
Lambda2 Sculptoris is an orange-hued star in the southern constellation of Sculptor. On dark nights it is faintly visible to the naked eye, having an apparent visual magnitude of +5.90. Based upon an annual parallax shift of 9.63 mas as measured from Earth, it is located roughly 340 light-years from the Sun. It has a relatively large proper motion across the sky.
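The quoted distance follows from the standard parallax relation d [pc] = 1 / p [arcsec]; a one-line check (our own arithmetic, using 1 pc ≈ 3.2616 light-years):

```python
# Distance from parallax: d [pc] = 1 / (parallax in arcseconds).
parallax_mas = 9.63
distance_pc = 1000.0 / parallax_mas      # ~103.8 parsecs
print(distance_pc * 3.2616)              # ~339 light-years, i.e. roughly 340
```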
At an age of about 3.58 billion years, Lambda2 Sculptoris is an evolved red-clump giant star with a stellar classification of K0 III. It is presently on the horizontal branch and is generating energy through the nuclear fusion of helium at its core. The star has an estimated 1.49 times the mass of the Sun and has expanded to about 14 times the Sun's radius. It is radiating 63 times the solar luminosity from its photosphere.
References
K-type giants
Horizontal-branch stars
Sculptoris, Lambda-2
Sculptor (constellation)
004211
003456
0195
Durchmusterung objects | Lambda2 Sculptoris | Astronomy | 236 |
55,081,903 | https://en.wikipedia.org/wiki/Nigroporus%20macroporus | Nigroporus macroporus is a species of poroid fungus in the family Steccherinaceae. It was described as new to science in 2003 by mycologists Leif Ryvarden and Teresa Iturriaga. Found in Venezuela and Brazil, it is a wood-decay fungus that causes a white rot in the hardwood Dimorphandra macrostachya.
Description
Fruit bodies of the fungus are bracket-like, with a semicircular dark brown to black cap. They have a leathery texture when fresh that becomes dry and woody with age. The dark brown pore surface comprises pores numbering 1–2 per millimetre, or fewer on decurrent parts where the pores become stretched out. The spores of N. macroporus are cylindrical, smooth, thin-walled, and hyaline, and measure 5–6 by 1.7–2 μm.
The neotropical species Nigroporus rigidus is quite similar in appearance to N. macroporus, but the former species has much smaller pores, numbering 7–9 per millimetre.
References
Fungi described in 2003
Fungi of South America
Steccherinaceae
Taxa named by Leif Ryvarden
Fungus species | Nigroporus macroporus | Biology | 260 |
26,989,304 | https://en.wikipedia.org/wiki/Structural%20repairs | In construction, structural repairs is a technical term for maintenance of a property's structure in order to bring it up to local health and safety standards. It is contrasted with renovations, or non-structural repairs. Unlike renovations, structural repairs add relatively little value to a property.
Leases often include provisions that define what types of changes amount to structural repairs and assign responsibility to either the tenant or the landlord.
References
Construction | Structural repairs | Engineering | 81 |
59,158,616 | https://en.wikipedia.org/wiki/Li%C3%B1%C3%A1n%27s%20equation | In the study of diffusion flames, Liñán's equation is a second-order nonlinear ordinary differential equation which describes the inner structure of the diffusion flame, first derived by Amable Liñán in 1974. The equation reads as

d²y/dx² = δ (y² − x²) e^(−(y + γx))

subjected to the boundary conditions

dy/dx → ±1 as x → ±∞,

where δ is the reduced or rescaled Damköhler number and γ (with |γ| ≤ 1) is the ratio of excess heat conducted to one side of the reaction sheet to the total heat generated in the reaction zone. If γ > 0, more heat is transported to the oxidizer side, thereby reducing the reaction rate on the oxidizer side (since the reaction rate depends on the temperature), and consequently a greater amount of fuel is leaked into the oxidizer side. Conversely, if γ < 0, more heat is transported to the fuel side of the diffusion flame, thereby reducing the reaction rate on the fuel side of the flame and increasing the oxidizer leakage into the fuel side. When γ = 1 (γ = −1), all the heat is transported to the oxidizer (fuel) side and the flame therefore sustains an extremely large amount of fuel (oxidizer) leakage.
The equation is, in some respects, universal (it is also called the canonical equation of the diffusion flame): although Liñán derived it for stagnation-point flow assuming unity Lewis numbers for the reactants, the same equation is found to represent the inner structure of general laminar flamelets with arbitrary Lewis numbers.
Existence of solutions
Near the extinction of the diffusion flame, δ is of order unity. The equation has no solution for δ < δE, where δE is the extinction Damköhler number. For δE < δ < δI, the equation possesses two solutions, of which one is unstable; at δ = δE itself the two branches merge and the solution is unique. The solution is likewise unique for δ > δI, where δI is the ignition Damköhler number.
Liñán also gave a correlation formula for the extinction Damköhler number, which is increasingly accurate as |γ| → 1:

δE = e [(1 − |γ|) − (1 − |γ|)² + 0.26 (1 − |γ|)³ + 0.055 (1 − |γ|)⁴],

where e is the base of the natural logarithm.
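For illustration, the correlation is straightforward to evaluate; the following is our own sketch (the coefficients are those of the correlation quoted above):

```python
# Our own sketch: tabulate the extinction Damkohler number delta_E
# from Linan's correlation for several values of |gamma|.
import math

def delta_E(gamma):
    u = 1.0 - abs(gamma)
    return math.e * (u - u**2 + 0.26 * u**3 + 0.055 * u**4)

for g in (0.0, 0.25, 0.5, 0.75, 0.95):
    print(f"|gamma| = {g:.2f}  delta_E = {delta_E(g):.4f}")
```

The predicted δE decreases monotonically and vanishes as |γ| → 1.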
Generalized Liñán's equation
The generalized Liñán's equation is given by

d²y/dx² = δ (y + x)^m (y − x)^n e^(−(y + γx))

where m and n are constant reaction orders of fuel and oxidizer, respectively; for m = n = 1 it reduces to the equation above.
Large Damköhler number limit
In the Burke–Schumann limit, δ → ∞. After rescaling x and y with δ^(1/3), under which the exponential factor tends to unity, the equation reduces to

d²y/dx² = y² − x²
An approximate solution to this equation was developed by Liñán himself using an integral method in 1963 for his thesis. The approximate profile is expressed in terms of the error function; it is characterized by the location at which y attains its minimum value, and far from the reaction zone it approaches the Burke–Schumann solution y = |x|.
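A rough numerical solution of this reduced problem can be sketched with SciPy's boundary-value solver; this is our own illustration (not from the original references), with the conditions at infinity replaced by dy/dx = ±1 at x = ±L on a truncated domain:

```python
# Our own sketch: solve y'' = y^2 - x^2 on a truncated domain,
# with y'(-L) = -1 and y'(L) = +1 standing in for the far-field conditions.
import numpy as np
from scipy.integrate import solve_bvp

L = 8.0

def rhs(x, y):
    # y[0] = y, y[1] = dy/dx
    return np.vstack([y[1], y[0] ** 2 - x ** 2])

def bc(ya, yb):
    return np.array([ya[1] + 1.0, yb[1] - 1.0])

x = np.linspace(-L, L, 401)
y_guess = np.vstack([np.sqrt(x**2 + 1.0), x / np.sqrt(x**2 + 1.0)])  # smoothed |x|
sol = solve_bvp(rhs, bc, x, y_guess)
print("converged:", sol.status == 0, " y_min:", sol.y[0].min())
```

The computed profile hugs y = |x| away from the origin, with a smooth minimum near x = 0.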
See also
Liñán's diffusion flame theory
References
Fluid dynamics
Combustion
Ordinary differential equations | Liñán's equation | Chemistry,Engineering | 480 |