| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
30,030,151 | https://en.wikipedia.org/wiki/Flight%20dynamics | Flight dynamics, in aviation and spacecraft, is the study of the performance, stability, and control of vehicles flying through the air or in outer space. It is concerned with how forces acting on the vehicle determine its velocity and attitude with respect to time.
For a fixed-wing aircraft, its changing orientation with respect to the local air flow is represented by two critical angles, the angle of attack of the wing ("alpha") and the angle of attack of the vertical tail, known as the sideslip angle ("beta"). A sideslip angle will arise if an aircraft yaws about its centre of gravity and if the aircraft sideslips bodily, i.e. the centre of gravity moves sideways. These angles are important because they are the principal source of changes in the aerodynamic forces and moments applied to the aircraft.
Spacecraft flight dynamics involve three main forces: propulsive (rocket engine), gravitational, and atmospheric resistance. Propulsive force and atmospheric resistance have significantly less influence over a given spacecraft compared to gravitational forces.
Aircraft
Flight dynamics is the science of air-vehicle orientation and control in three dimensions. The critical flight dynamics parameters are the angles of rotation about the aircraft's three principal axes, which pass through its center of gravity, known as roll, pitch and yaw.
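As a rough illustration of how the three angles combine in practice, the following minimal sketch (not part of the source text; the function name and the yaw-pitch-roll rotation order are assumptions chosen for the example) builds a direction cosine matrix from roll, pitch and yaw angles.

```python
import numpy as np

def body_to_earth_dcm(roll, pitch, yaw):
    """Direction cosine matrix for a yaw-pitch-roll (Z-Y-X) rotation sequence.

    Angles are in radians; the matrix maps body-frame vectors to the
    Earth-fixed frame under a common aerospace convention.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)

    # Elementary rotations about the z (yaw), y (pitch) and x (roll) axes.
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])

    return Rz @ Ry @ Rx

# Example: 10 degrees of roll, 5 degrees of pitch, 30 degrees of yaw.
dcm = body_to_earth_dcm(np.radians(10), np.radians(5), np.radians(30))
print(dcm.round(3))
```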
Aircraft engineers develop control systems for a vehicle's orientation (attitude) about its center of gravity. The control systems include actuators, which exert forces in various directions, and generate rotational forces or moments about the center of gravity of the aircraft, and thus rotate the aircraft in pitch, roll, or yaw. For example, a pitching moment is a vertical force applied at a distance forward or aft from the center of gravity of the aircraft, causing the aircraft to pitch up or down.
Roll, pitch and yaw refer, in this context, to rotations about the respective axes starting from a defined equilibrium state. The equilibrium roll angle is known as wings level or zero bank angle, equivalent to a level heeling angle on a ship. Yaw is known as "heading".
A fixed-wing aircraft increases or decreases the lift generated by the wings when it pitches nose up or down by increasing or decreasing the angle of attack (AOA). The roll angle is also known as bank angle on a fixed-wing aircraft, which usually "banks" to change the horizontal direction of flight. An aircraft is streamlined from nose to tail to reduce drag, making it advantageous to keep the sideslip angle near zero, though aircraft are deliberately "side-slipped" when landing in a cross-wind, as explained in slip (aerodynamics).
Spacecraft and satellites
The forces acting on space vehicles are of three types: propulsive force (usually provided by the vehicle's engine thrust); gravitational force exerted by the Earth and other celestial bodies; and aerodynamic lift and drag (when flying in the atmosphere of the Earth or another body, such as Mars or Venus). The vehicle's attitude must be controlled during powered atmospheric flight because of its effect on the aerodynamic and propulsive forces. There are other reasons, unrelated to flight dynamics, for controlling the vehicle's attitude in non-powered flight (e.g., thermal control, solar power generation, communications, or astronomical observation).
The flight dynamics of spacecraft differ from those of aircraft in that the aerodynamic forces are of very small, or vanishingly small effect for most of the vehicle's flight, and cannot be used for attitude control during that time. Also, most of a spacecraft's flight time is usually unpowered, leaving gravity as the dominant force.
See also
References
Aerospace engineering
Aerodynamics
Spaceflight concepts | Flight dynamics | Chemistry,Engineering | 742 |
58,817,314 | https://en.wikipedia.org/wiki/Emma%20Kowal | Emma Kowal is an Australian cultural and medical anthropologist, physician and scholar of science and technology studies. She is best known for her book Trapped in the Gap: Doing Good in Indigenous Australia, and for the co-edited volumes Force, Movement, Intensity: The Newtonian Imagination in the Humanities and Social Sciences (with Ghassan Hage) and Cryopolitics: Frozen Life in a Melting World (with Joanna Radin).
Early life and education
She received her Bachelor of Medicine and Bachelor of Surgery and a Bachelor of Arts in history and philosophy of science from the University of Melbourne in 2000 and worked for a few years as a physician and public health professional in the Northern Territory of Australia. She returned to the University of Melbourne to complete her PhD in public health anthropology in 2007. She is currently a professor of anthropology at Deakin University and Convenor of the Deakin Science and Society Network.
Career
In 2014, she received the Paul Bourke Award for Early Career Research from the Academy of the Social Sciences in Australia. She was the deputy director of the National Centre for Indigenous Genomics at the Australian National University between 2013 and 2017. In 2019, she was elected to the Fellowship of the Academy of the Social Sciences in Australia. Since 2021, Kowal has been president of the Society for Social Studies of Science. She was elected a Fellow of the Australian Academy of Health and Medical Sciences in 2022.
Publications
Emma Kowal has contributed to a large number of scholarly articles.
References
Australian anthropologists
Science and technology studies scholars
Australian women anthropologists
Medical anthropologists
Living people
University of Melbourne alumni
Academic staff of Deakin University
Year of birth missing (living people)
Fellows of the Academy of the Social Sciences in Australia
Fellows of the Australian Academy of Health and Medical Sciences | Emma Kowal | Technology | 356 |
21,241,734 | https://en.wikipedia.org/wiki/Richard%20J.%20Wood | Richard J. Wood is a mathematics professor at Dalhousie University in Halifax, Nova Scotia, Canada. He graduated from McMaster University in 1972 with his M.Sc. and then later went on to do his Ph.D. at Dalhousie University. He is interested in category theory and lattice theory.
References
Publications
External links
Year of birth missing (living people)
Living people
Canadian mathematicians
Category theorists
Lattice theorists | Richard J. Wood | Mathematics | 87 |
6,222,875 | https://en.wikipedia.org/wiki/Data%20governance | Data governance is a term used on both a macro and a micro level. The former is a political concept and forms part of international relations and Internet governance; the latter is a data management concept and forms part of corporate data governance.
Macro level
Data governance at the macro level involves regulating cross-border data flows among countries, which is more precisely termed international data governance. This field formed in the early 2000s and consists of "norms, principles and rules governing various types of data."
There have been several international groups established by research organizations that aim to grant access to their data. These groups that enable an exchange of data are, as a result, exposed to domestic and international legal interpretations that ultimately decide how data is used. However, as of 2023, there are no international laws or agreements specifically focused on data protection.
Micro level
At the micro level, the focus is on an individual company. Here, data governance is a data management concept concerning the capability that enables an organization to ensure that high data quality exists throughout the complete lifecycle of the data, and that data controls are implemented to support business objectives. The key focus areas of data governance include availability, usability, consistency, data integrity and security, and standards compliance. The practice also includes establishing processes to ensure effective data management throughout the enterprise, such as accountability for the adverse effects of poor data quality, and ensuring that the data which an enterprise has can be utilized by the entire organization.
A data steward is a role that ensures that data governance processes are followed and that guidelines are enforced, and recommends improvements to data governance processes.
Data governance involves the coordination of people, processes, and information technology necessary to ensure consistent and proper management of an organization's data across the business enterprise. It provides all data management practices with the necessary foundation, strategy, and structure needed to ensure that data is managed as an asset and transformed into meaningful information. Goals may be defined at all levels of the enterprise and doing so may aid in acceptance of processes by those who will use them. Some goals include:
Increasing consistency and confidence in decision making
Decreasing the risk of regulatory fines
Improving data security
Defining and verifying the requirements for data distribution policies
Maximizing the income generation potential of data
Designating accountability for information quality
Enabling better planning by supervisory staff
Minimizing or eliminating re-work
Optimizing staff effectiveness
Establishing process performance baselines to enable improvement efforts
Acknowledging and holding all gain
These goals are realized by the implementation of data governance programs, or initiatives using change management techniques.
When companies seek to take charge of their data, whether by choice or necessity, they empower their employees, establish processes, and utilize technology to accomplish this objective.
Data governance drivers
While data governance initiatives can be driven by a desire to improve data quality, they are often driven by C-level leaders responding to external regulations. In a report conducted by the CIO WaterCooler community, 54% of respondents stated the key driver was efficiencies in processes, 39% cited regulatory requirements, and only 7% cited customer service. Examples of these regulations include the Sarbanes–Oxley Act, Basel I, Basel II, HIPAA, GDPR, cGMP, and a number of data privacy regulations. To achieve compliance with these regulations, business processes and controls require formal management processes to govern the data subject to these regulations. Successful programs identify drivers meaningful to both supervisory and executive leadership.
Common themes among the external regulations center on the need to manage risk. The risks can be financial misstatement, inadvertent release of sensitive data, or poor data quality for key decisions. Methods to manage these risks vary from industry to industry. Examples of commonly referenced best practices and guidelines include COBIT, ISO/IEC 38500, and others. The proliferation of regulations and standards creates challenges for data governance professionals, particularly when multiple regulations overlap the data being managed. Organizations often launch data governance initiatives to address these challenges.
Data governance initiatives (Dimensions)
Data governance initiatives improve the quality of data by assigning a team responsible for data's accuracy, completeness, consistency, timeliness, validity, and uniqueness. This team usually consists of executive leadership, project management, line-of-business managers, and data stewards. The team usually employs some form of methodology for tracking and improving enterprise data, such as Six Sigma, and tools for data mapping, profiling, cleansing, and monitoring data.
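As an illustration of how such quality dimensions can be measured in practice, the short sketch below is a hypothetical example (the function and the sample table are invented for illustration, not taken from the article) that profiles the completeness and uniqueness of a single column.

```python
import pandas as pd

def profile_column(df: pd.DataFrame, column: str) -> dict:
    """Report two common data-quality dimensions for one column."""
    series = df[column]
    total = len(series)
    completeness = series.notna().sum() / total if total else 0.0    # share of non-null values
    uniqueness = series.nunique(dropna=True) / total if total else 0.0  # share of distinct values
    return {"column": column, "completeness": completeness, "uniqueness": uniqueness}

# Example: a customer table with one missing and one duplicated e-mail address.
customers = pd.DataFrame({"email": ["a@x.com", "a@x.com", None, "b@x.com"]})
print(profile_column(customers, "email"))
# {'column': 'email', 'completeness': 0.75, 'uniqueness': 0.5}
```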
Data governance initiatives may be aimed at achieving a number of objectives including offering better visibility to internal and external customers (such as supply chain management), compliance with regulatory law, improving operations after rapid company growth or corporate mergers, or aiding the efficiency of enterprise knowledge workers by reducing confusion and error and increasing their scope of knowledge. Many data governance initiatives are also inspired by past attempts to fix information quality at the departmental level, leading to incongruent and redundant data quality processes. Most large companies have many applications and databases that cannot easily share information. Therefore, knowledge workers within large organizations often do not have access to the data they need to best do their jobs. When they do have access to the data, the data quality may be poor. By setting up a data governance practice or corporate data authority (individual or area responsible for determining how to proceed, in the best interest of the business, when a data issue arises), these problems can be mitigated.
Implementation
Implementation of a data governance initiative may vary in scope as well as origin. Sometimes, an executive mandate will arise to initiate an enterprise wide effort. Sometimes the mandate will be to create a pilot project or projects, limited in scope and objectives, aimed at either resolving existing issues or demonstrating value. Sometimes an initiative will originate lower down in the organization's hierarchy and will be deployed in a limited scope to demonstrate value to potential sponsors higher up in the organization. The initial scope of an implementation can vary greatly as well, from review of a one-off IT system, to a cross-organization initiative.
Data governance tools
Leaders of successful data governance programs declared at the Data Governance Conference in Orlando, FL, in December 2006 that data governance is about 80 to 95 percent communication. That said, many of the objectives of a data governance program must still be accomplished with appropriate tools. Many vendors now position their products as data governance tools. Due to the different focus areas of various data governance initiatives, a given tool may or may not be appropriate. Additionally, many tools that are not marketed as governance tools address governance needs and demands.
See also
Data sovereignty
Information architecture
Information governance
Information technology governance
Business semantics management
Semantics of Business Vocabulary and Business Rules
Master data management
COBIT
ISO/IEC 38500
ISO/TC 215
Operational risk management
Basel II Accord
HIPAA
Sarbanes-Oxley Act
Information technology controls
Data Protection Directive (EU)
Universal Data Element Framework
Asset Description Metadata Schema
Simulation Governance
List of datasets for machine-learning research
Data governance within data domain
References
Further reading
External links
Information technology governance
Data management | Data governance | Technology | 1,400 |
70,758,621 | https://en.wikipedia.org/wiki/Sony%20Xperia%201%20IV | The Sony Xperia 1 IV is an Android smartphone manufactured by Sony. Launched on May 11, 2022, it succeeds the Xperia 1 III as the latest flagship of Sony's Xperia series. The device was announced along with the mid-range Xperia 10 IV, with expected release dates by June 2022 (Asian markets) and as late as September 2022 for other markets including the US. US shipments were delayed and ultimately began in late October 2022.
Design
The Xperia 1 IV is designed with more professionalism in mind, while improving on the now-signature designs of its predecessors, the Xperia 1 II and Xperia 1 III. It features a grippier matte frame and rear frosted glass finish akin to the Xperia PRO-I, and a boxier design than the previous flagships. The phone has Corning Gorilla Glass Victus protection both on the front and the back as well as IP65 and IP68 certifications for water resistance.
The display still has symmetrical bezels on the top and the bottom, a hallmark Xperia design, where the front-facing dual stereo speakers and the front camera are placed. The left side of the phone is completely devoid of any controls or ports, with only antenna bands present. The microSD/SIM card combo tray is now found at the bottom (or right side if the phone is held in landscape) along with the USB-C 3.2 port and the primary microphone, while the right side contains the fingerprint reader embedded into the power button, a volume rocker, and a dedicated two-stage shutter button with an embossed finish; the customisable shortcut button of the Mark 3 has been omitted. The Xperia 1 IV is also the last Xperia 1 series phone to feature an LED notification light, as the Xperia 1 V removed the feature the following year.
The rear cameras are arranged in a vertical strip like its predecessor, with the LED flash and color spectrum sensor along the top. The phone is available in three colors: Black, White, and Purple, with only Black and Purple available in the North American market.
Specifications
Hardware
The Xperia 1 IV is powered by the 4 nm (4LPE) Qualcomm Snapdragon 8 Gen 1 SoC and an Adreno 730 GPU, accompanied by 12 GB of LPDDR5 RAM, 256 GB or 512 GB storage space (expandable up to 1 TB), and single/dual-hybrid nano-SIM card slot depending on region. The phone features a 21:9 4K CinemaWide HDR 10-bit 120 Hz OLED display first seen in the Xperia 1 III, now improved with 50% more brightness. The Xperia 1 IV's touch sampling rate is 240 Hz. The phone has a larger 5000 mAh battery (from 4500 mAh of the 1 III), and supports 30 W Fast Charging alongside Qi wireless charging with reverse wireless charging support. The phone has front-facing dual stereo speakers with redesigned drivers, and support for 360 Reality Audio. There is also a 3.5 mm stereo audio jack with support for both high-resolution audio output as well as microphone input for plugged in peripherals such as an external microphone for vlogging.
Camera
The Xperia 1 IV has an improved triple camera setup over the 1 III. All three cameras are still 12 megapixels, but sport new sensors and optics for the ultrawide and telephoto. They consist of the main 12 MP Exmor RS IMX557 sensor behind a 24 mm f/1.7 lens with optical image stabilization (OIS), an ultrawide 12 MP IMX563 sensor with a 16 mm f/2.2 lens, both of which have phase-detection autofocus, and a 0.3 MP IMX316 3D TOF depth sensor. This is also the final time the depth sensor was included in any Sony Xperia device, as the Xperia 1 V removed the 3D TOF depth sensor and the RGBC-IR sensor as well.
The highlight of the 1 IV is its continuous zoom telephoto lens, a major improvement over its predecessor's variable zoom telephoto. It uses a 12 MP 1/3.5" sensor with 1.0 μm pixels and PDAF, contained in the same periscope design as the 1 III, and it can zoom continuously from 85 mm to 125 mm without any stepping or digital zoom, much like a dedicated digital camera. There is no confirmed detail on the specific Sony IMX sensor used in the telephoto, other than insights from independent reviewers such as GSMArena, who found that it is "presumably" an IMX650, a 40 MP sensor with a 1/1.7-inch optical format last used on the Huawei P30 and P30 Pro smartphones. Whether the sensor really is an IMX650 implementing the same 12 MP crop as the Xperia PRO-I, whether the hardware information app HWiNFO is reporting incorrect data (which, according to Notebookcheck, seems unlikely), or whether the phone uses a new or unknown IMX sensor altogether remains to be seen.
All three cameras of the 1 IV use ZEISS T✻ (T-Star) anti-reflective coating on each lens and support 4K video recording at up to 120 FPS and 2K at up to 120 FPS like its predecessors, and the 20 FPS burst feature is now available on all three cameras. Digital zoom on the main camera can reach the equivalent of 300mm with the "AI super resolution zoom" first featured on the 1 III. It also has improved Realtime Tracking with enhanced Eye AF for humans, animals and birds, instantly locking focus on the subject's eyes without losing track upon sudden loss of focus from the frame.
For the first time, a new 12 MP front-facing camera with support for 4K video recording is present in the 1 IV. Surprisingly, it is the Sony IMX663 (in place of the previous Samsung ISOCELL sensor), the same sensor that was first used as the telephoto sensor for the Xperia 1 III and the Xperia PRO-I, making it on-par with the likes of Google's Pixel 6 Pro smartphone and marking another improvement over its predecessors' outdated 8 MP-resolution front cameras.
Software
The Xperia 1 IV runs on the latest Android 12, with promise for 2 major Android software revisions and 3 years of software support. It is also equipped with 3 different camera apps specifically made to take advantage of the 1 IV's camera hardware: "Photo Pro", developed by Sony's α (Alpha) camera division, focuses on the full manual control setup and configuration commonly seen on Sony Alpha line of professional cameras; the professional movie-oriented "Cinema Pro", developed by Sony's cinematography division CineAlta, and the "Basic Mode" first seen on the 1 III, replacing the stock camera app but with additional controls from the "Photo Pro".
See also
List of longest smartphone telephoto lenses
Notes
References
Android (operating system) devices
Flagship smartphones
Sony smartphones
Mobile phones introduced in 2022
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording | Sony Xperia 1 IV | Technology | 1,502 |
49,312,984 | https://en.wikipedia.org/wiki/Twelve-angled%20stone | The twelve-angled stone is an archeological artifact in Cusco, Peru. It was part of a stone wall of an Inca palace, and is considered to be a national heritage object. The stone is currently part of a wall in the palace of the Archbishop of Cusco.
Characteristics
The twelve-angled stone is a block of diorite recognized by its fine finishing and twelve-angled border, an example of perfectionist Incan stonework. The block is categorized as Cultural Heritage of the Nation of Peru and is located in the city of Cusco, 1,105 km from Lima. The stone illustrates the development of Inca construction knowledge. There are other stones with as many vertices, but the twelve-angled stone is the most famous.
As an example of the Incas' advanced stonework, the stone is a popular tourist attraction in Cusco and a site of pride for many locals. The perfectly cut stone is part of a wall known as the Hatun Rumiyoc, which makes up the outside of the Archbishop's palace.
See also
Inca architecture
Inca civilization
List of individual rocks
References
Ruins in Peru
Inca
Archaeological sites in Peru
Archaeological sites in Cusco Region
Tourist attractions in Cusco Region
Stones | Twelve-angled stone | Physics | 250 |
27,211,191 | https://en.wikipedia.org/wiki/Electrodeionization | Electrodeionization (EDI) is a water treatment technology that utilizes DC power, ion exchange membranes, and ion exchange resin to deionize water. EDI is typically employed as a polishing treatment following reverse osmosis (RO), and is used in the production of ultrapure water. It differs from other RO polishing methods, like chemically regenerated mixed beds, by operating continuously without chemical regeneration.
Electrodeionization can be used to produce high purity water, reaching electrical resistivity values as high as 18.2 MΩ·cm.
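For a sense of scale, conductivity is simply the reciprocal of resistivity, so 18.2 MΩ·cm corresponds to roughly 0.055 µS/cm. The short, illustrative conversion below is added here as an example and is not part of the source article.

```python
def resistivity_to_conductivity_uS_per_cm(resistivity_Mohm_cm: float) -> float:
    """Convert water resistivity in MΩ·cm to conductivity in µS/cm."""
    ohm_cm = resistivity_Mohm_cm * 1e6      # MΩ·cm -> Ω·cm
    siemens_per_cm = 1.0 / ohm_cm           # conductivity is the reciprocal of resistivity
    return siemens_per_cm * 1e6             # S/cm -> µS/cm

print(round(resistivity_to_conductivity_uS_per_cm(18.2), 4))  # ≈ 0.0549 µS/cm
```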
Electrodeionization (EDI) integrates three distinct processes:
Electrolysis: A continuous DC current directs positive and negative ions toward electrodes with opposing electrical charges. The electrical potential draws anions and cations from diluting chambers, through cation or anion exchange membranes, into concentrating chambers.
Ion exchange: An ion exchange resin fills the diluting chambers. As water flows through the resin bed, cations and anions become affixed to resin sites.
Electrochemical regeneration: Unlike chemically regenerated mixed beds, EDI accomplishes regeneration through water splitting induced by the continuous electric current. Water splits from H2O into H+ and OH− to effectively regenerate the resin without the need for external chemical additives.
EDI is sometimes labeled "continuous electrodeionization" (CEDI) because the electric current continually regenerates the ion exchange resin mass.
Quality of the feed
To maximize the purity of product water, EDI feedwater needs pre-treatment, usually done via reverse osmosis. When fed with feedwater that is low in total dissolved solids (e.g., purified by RO), the product can reach very high purity levels. The contents of the feedwater must be kept within certain parameters to prevent damage to the EDI instrument.
Common feedwater quality concerns are:
Hardness, which is often limited to 1 part per million (ppm) of CaCO3 or corresponding molecule, with limited exceptions up to 2 ppm.
Silica content (SiO2), which generally must be no more than 1 ppm in most EDI cells or 2 ppm in thin-cell modules.
CO2, which must be monitored to prevent excessive loading of anion exchange resin.
TOC, which can foul resins and membranes, must be minimized.
Chlorine, ozone, and other oxidizers can oxidize resins and membranes and create permanent damage, and must be minimized.
History
Electrodeionization was developed in the early 1950s to eliminate or minimize the concentration polarization phenomenon present in electrolysis systems of the time. A patent on the technology was filed in 1953, and subsequent publications popularized the technology.
The technology was limited in application because of the low tolerance of total dissolved solids, hardness and organics. During the 1970s and 1980s, reverse osmosis became a preferred technology to ion exchange resin for high TDS waters. As RO gained popularity, EDI emerged as a suitable polishing technology. Packaged RO and EDI systems began to displace chemically regenerated ion exchange systems.
In 1986 and 1989, several companies developed new EDI devices. The initial devices were large, costly, and often unreliable. However, in the 1990s, smaller and less costly modular designs were introduced. Nonetheless, these designs and their contemporary descendants still face limitations such as cost and limited operational envelope.
Applications
In the electronics industry, deionized water is used to rinse components during manufacturing. This is necessary to avoid potential short circuits that could destroy electronic chips. As electronic chips are very small, there is little free space between component elements and unwanted electricity may conduct across components via even a small number of ions, causing a short circuit. Using deionized water to clean the components helps minimize the ions on their surfaces and thus minimizes short circuits.
In the pharmaceutical industry, the presence of unwanted ions in water used in drug development can lead to unwanted side reactions and introduce harmful impurities.
In power generation, the presence of ions in boiler feedwater can lead to the buildup of solids or the degradation of boiler walls, both of which can lower boiler efficiency and present safety hazards.
Due to the large financial and safety concerns present in these three industries, their economic demand for highly pure water provides the bulk of the demand for EDI devices and development.
Electrodeionization systems have also been applied to the removal of heavy metals from different types of wastewater from mining, electroplating, and nuclear processes. The primary ions removed in these processes are chromium, copper, cobalt, and caesium, though EDI sees use in the removal of others as well.
Theory
The electrodes in an electrochemical cell are each classified as either an anode or a cathode. An anode is an electrode at which electrons leave the cell and oxidation occurs, while a cathode is an electrode at which electrons enter the cell and reduction occurs. Each electrode may become either an anode or a cathode depending on the voltage applied to the cell.
Each deionization cell consists of an electrode and an electrolyte with ions that undergo either oxidation or reduction. Because they commonly consist of ions in solution, the electrolytes are often known as "ionic solutions", but molten and solid electrolytes are also possible.
Water passes between an anode and a cathode. Ion-selective membranes allow positive ions to separate from the water toward the negative electrode and negative ions toward the positive electrode. As a result, the ions cannot escape the cell and deionized water is produced.
When using a current that is higher than necessary for the movement of the ions, a portion of the incident water will be split, forming hydroxide (OH−) anions and hydrogen (H+) cations. These species will replace the impurity anions and cations in the resin. This process is called "in situ regeneration" of the resin. Because this replacement occurs alongside the deionization process it allows for continuous purification, as opposed to deionization techniques that require a pause in operation to chemically regenerate ion exchange resins.
The purpose of the ion exchange resin is to maintain a stable conductance across the feedwater. Without the resin, ions could be removed initially, but the conductance would drop dramatically as the concentration of ions decreases. With lower conductance, the electrodes would become less able to efficiently direct the flow of electrons across the cell, whereas with the addition of resin and thus a steady conductance, electron flow remains steady and ensures a steady rate of ion removal. With a resin, therefore, the final remaining ion concentrations in the processed water can be lower by orders of magnitude.
Installation scheme
The typical EDI installation has the following components: electrodes, anion exchange membranes, cation exchange membranes, and resin. The simplest configurations comprise three compartments. To increase production intensity or efficiency, the number of compartments or cells can be increased as desired.
Once the system is installed and feedwater begins to flow through it, cations flow toward the cathode and anions flow toward the anode. Only anions can go through the anion exchange membrane, and only cations can go through the cation exchange membrane. This configuration allows anions and cations to flow in only one direction because of the selectivity of the membranes and the electrical forces, rendering the feedwater relatively free of ions. It also allows for the separate collection of cation and anion concentration flows, creating the opportunity for more selective waste disposal, recycling, or reuse; this is especially useful in the removal of heavy metal cations.
See also
Electrodialysis
Ionization
Purified water
Water purification
Water treatment
References
External links
Electrodeionization Systems.
Advanced Electrodeionization Technology for Product Desalting, Argonne National Laboratory
Understanding the working principle of electrodeionization.
Water treatment
Ions
Physical chemistry
Separation processes
Industrial water treatment | Electrodeionization | Physics,Chemistry,Engineering,Environmental_science | 1,620 |
73,081,028 | https://en.wikipedia.org/wiki/Gaia%20BH2 | Gaia BH2 (Gaia DR3 5870569352746779008) is a binary system consisting of a red giant and what is very likely a stellar-mass black hole. Gaia BH2 is located about 3,800 light years (roughly 1.2 kiloparsecs) away in the constellation of Centaurus, making it the third-closest known black hole system to Earth. Gaia BH2 is the second black hole discovered from Gaia DR3 astrometric data.
The black hole and red giant orbit the system barycentre every 1,277 days, or around 3.5 years, with a moderate eccentricity of 0.518. The black hole's mass is around 9 solar masses, which means its Schwarzschild radius should be about 26 km. The red giant has a mass of roughly 1 solar mass and a radius of around 8 solar radii. Its temperature is estimated at about 4,600 K.
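As an illustration of the Schwarzschild-radius arithmetic mentioned above, the sketch below evaluates r_s = 2GM/c². It is added here purely as an example; the ~9 solar-mass input is an assumption based on the approximate value reported for Gaia BH2.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius_km(mass_solar: float) -> float:
    """Schwarzschild radius r_s = 2GM/c^2, returned in kilometres."""
    mass_kg = mass_solar * M_SUN
    return 2 * G * mass_kg / C**2 / 1000.0

# Assumed ~9 solar-mass black hole, roughly the value reported for Gaia BH2.
print(round(schwarzschild_radius_km(9.0), 1))  # ≈ 26.6 km
```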
Discovery
Gaia BH2 was originally discovered as a black hole binary candidate in 2022, found via astrometric observations with Gaia, along with Gaia BH1. At that time it was not clear if Gaia BH2 did definitely harbour a black hole, but it was the only plausible candidate in the Gaia data other than Gaia BH1. Later radial velocity observations confirmed this black hole system and refined its orbital parameters.
References
Centaurus
Red giants
Stellar black holes
Astrometric binaries
20230215 | Gaia BH2 | Physics,Astronomy | 286 |
58,446,396 | https://en.wikipedia.org/wiki/Keratinophyton%20durum | Keratinophyton durum is a keratinophilic fungus that grows on keratin found in decomposing or shed animal hair and bird feathers. Various studies conducted in Canada, Japan, India, Spain, Poland, Ivory Coast and Iraq have isolated this fungus from decomposing animal hair and bird feathers using SDA and the hair-bait technique. The presence of the fungus in soil sediments and its ability to decompose hair make it a potential human pathogen.
History and taxonomy
Keratinophyton durum was first described by Hugo Zukal in 1890 as Gymnoascus durus, and subsequently has been the subject of taxonomic confusion. Nearly 100 years after Zukal's description, Currah transferred the fungus from genus Gymnoascus to Keratinophyton (1985) and eventually to genus Aphanoascus (1986) due to similarity in their ascospores. At the same time, Guého and Vroey treated the fungus under the name, Anixiopsis biplanata (1986). The genera Aphanoascus and Anixiopsis have ascospores of similar appearance. This led to a debate within the mycological community upon its naming, wherein some mycologists used Anixiopsis biplanata and others used Aphanoascus durus to describe this species. Amidst this debate, the fungus was misclassified under the genus Ascocalvatia by von Arx in 1986. Ascocalvatia has cylindrical ascospores whereas Aphanoascus spores are flat and circular (oblate) in shape. Finally, in 1990 it was re-classified as Aphanoascus durus by Cano and Guarro. The name Keratinophyton durum was originally applied to the sexual state of the fungus (teleomorph), but now is used to include all life stages. The correct name for this species is K. durum.
Genomic information
Listed below are identified gene sequences for this fungus.
Morphology and growth
Ascospores, found within the ascoma, are oblate (shaped like an M&M candy) and yellow to yellowish brown in color when observed by transmitted light microscopy. Typically, the lateral sides of these ascospores are smooth with a bumpy, pitted equatorial edge. Ascospores of this fungus are similar in appearance (morphology) to those of A. terreus (Randhawa & Sandhu) Apinis, A. clathratus Cano & Guarro and A. hispanicus Cano & Guarro. Ascospores of A. terreus and A. hispanicus are small, pitted and diamond-shaped (rhomboid), whereas A. clathratus has circular (oblate) ascospores with reduced pitting. Ascomata are spherical (globose) to oval (subglobose), pale yellowish brown to dark reddish brown in color, and range from 280 to 800 μm in diameter. Ascomata are encased in white aerial hyphae and conidia.
In the laboratory, this fungus can be grown on 2% malt agar, potato-carrot agar (PCA), phytone yeast extract agar (PYE) and yeast-starch agar (YpSs) growth media. On 2% malt agar, after a two-week incubation, K. durum colonies reach up to 35–40 mm in diameter. Colonies appear fluffy and white but grow unevenly; they are mostly thinly spread, denser at the centre, and can reach 2 mm in height. Hyphae are usually branched, hyaline, smooth-walled and septate. Colonies also contain aleurioconidia. On potato-carrot agar (PCA), rapid growth is observed at 25 °C. Within 14 days of PCA culturing, circular colonies measuring 53–67 mm in diameter can be seen. Initial colonies appear white, but later change colour and appear greenish grey. On this growth medium, the resulting ascomata are scattered and there is limited production of conidia. On phytone yeast extract agar (PYE), the fungus grows rapidly into white to yellowish-white colonies. While conidiogenesis is prominent, ascomata are not produced. On YpSs growth medium, under dark conditions at 28 °C, it grows at a rate of 2–3 mm per day. Cream-coloured colonies with smooth, septate, hyaline hyphae can be observed. Hyphae are thin-walled and wide, measuring 1.7–2.5 μm in width. Ascoma maturation can be observed in 20–23 days. Ascomata are encased in round, dark-brown aerial mycelium measuring 500–1050 μm in diameter.
Additionally, the fungus can be isolated using the hair-baiting technique followed by incubation on Sabouraud's dextrose agar (SDA). Isolation on Sabouraud's dextrose agar supplemented with chloramphenicol (50 mg/L) and cycloheximide (500 mg/L) requires a 5–10 day incubation at room temperature. Similarly, the hair-baiting technique, using sterile human or horse hair, can be used to isolate this fungus from wet soils (rivers and lakes). Pocket-like surface erosion in human hair caused by this fungus can be observed under a light microscope following staining with lactophenol cotton blue.
Habitat and ecology
Typically, it has been isolated from a depth of 3–5 cm in soils containing decomposing feathers and animal hair. K. durum is known from soils of Gir Forest National Park (India), Sanjay Gandhi National Park, Kaziranga National Park, Lonar crater lake, as well as the Shatt Al-Arab river (Iraq). In terrestrial ecosystems, this fungus is predominantly found in areas with high animal and bird densities. In underwater sediments, it prefers alkaline pH conditions. A study conducted in the Shatt Al-Arab river by Abdullah and Hassan found that the fungus occurs in soil with a pH of about 6.9. It is also found in Lonar lake sediments, where the water is highly alkaline (pH of 10.5–11.2) with increased concentrations of sodium chloride, fluorides and bicarbonates.
Pathogenicity
In their study, Deshmukh & Verekar determined that it releases keratin at a rate of 234.6 μg/mL and is capable of decomposing 26.4% of human hair within four weeks of incubation. Since this fungus occurs in close proximity to animals and birds, it may be pathogenic to animals and humans.
References
Onygenales
Taxa named by Hugo Zukal
Fungus species | Keratinophyton durum | Biology | 1,432 |
11,420,764 | https://en.wikipedia.org/wiki/Cripavirus%20internal%20ribosome%20entry%20site | The Cripavirus internal ribosome entry site (CrPV IRES) is an RNA element required for the production of capsid proteins through ribosome recruitment to an intergenic region IRES (IGR IRES).
See also
Cricket paralysis virus
Internal ribosome entry site (IRES)
References
External links
Cis-regulatory RNA elements
Dicistroviridae | Cripavirus internal ribosome entry site | Chemistry | 79 |
36,953,973 | https://en.wikipedia.org/wiki/Power%20residue%20symbol | In algebraic number theory the n-th power residue symbol (for an integer n > 2) is a generalization of the (quadratic) Legendre symbol to n-th powers. These symbols are used in the statement and proof of cubic, quartic, Eisenstein, and related higher reciprocity laws.
Background and notation
Let k be an algebraic number field with ring of integers $\mathcal{O}_k$ that contains a primitive n-th root of unity $\zeta_n$.
Let $\mathfrak{p} \subset \mathcal{O}_k$ be a prime ideal and assume that n and $\mathfrak{p}$ are coprime (i.e. $n \notin \mathfrak{p}$).
The norm of $\mathfrak{p}$ is defined as the cardinality of the residue class ring (note that since $\mathfrak{p}$ is prime the residue class ring is a finite field): $\mathrm{N}\mathfrak{p} = |\mathcal{O}_k / \mathfrak{p}|$.
An analogue of Fermat's theorem holds in $\mathcal{O}_k$: if $\alpha \in \mathcal{O}_k \setminus \mathfrak{p}$, then $\alpha^{\mathrm{N}\mathfrak{p} - 1} \equiv 1 \pmod{\mathfrak{p}}$.
And finally, suppose $\mathrm{N}\mathfrak{p} \equiv 1 \pmod{n}$. These facts imply that
$\alpha^{(\mathrm{N}\mathfrak{p} - 1)/n} \equiv \zeta_n^s \pmod{\mathfrak{p}}$
is well-defined and congruent to a unique n-th root of unity $\zeta_n^s$.
Definition
This root of unity is called the n-th power residue symbol for $\mathcal{O}_k$, and is denoted by $\left(\frac{\alpha}{\mathfrak{p}}\right)_n = \zeta_n^s \equiv \alpha^{(\mathrm{N}\mathfrak{p} - 1)/n} \pmod{\mathfrak{p}}$.
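To make the defining congruence concrete in the simplest setting, the sketch below (added here for illustration, not part of the original article) works directly in the residue field for a rational prime p with p ≡ 1 (mod n): the value α^((p−1)/n) mod p is an n-th root of unity that equals 1 exactly when α is a nonzero n-th power modulo p.

```python
def power_residue_character(alpha: int, p: int, n: int) -> int:
    """Return alpha^((p-1)/n) mod p for a prime p with p ≡ 1 (mod n).

    The result is an n-th root of unity in the residue field Z/pZ:
    it equals 1 exactly when alpha is a nonzero n-th power mod p,
    and 0 when p divides alpha.
    """
    assert (p - 1) % n == 0, "n must divide p - 1"
    return pow(alpha, (p - 1) // n, p)

# p = 13, n = 3: the nonzero cubes mod 13 are {1, 5, 8, 12}.
print(power_residue_character(5, 13, 3))   # 1, since 5 ≡ 7^3 (mod 13)
print(power_residue_character(2, 13, 3))   # 3, a nontrivial cube root of unity (3^3 ≡ 1 mod 13)
```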
Properties
The n-th power symbol has properties completely analogous to those of the classical (quadratic) Jacobi symbol ($\zeta$ is a fixed primitive n-th root of unity):
$$\left(\frac{\alpha}{\mathfrak{p}}\right)_n = \begin{cases} 0 & \text{if } \alpha \in \mathfrak{p} \\ 1 & \text{if } \alpha \notin \mathfrak{p} \text{ and there is an } \eta \in \mathcal{O}_k \text{ with } \alpha \equiv \eta^n \pmod{\mathfrak{p}} \\ \zeta & \text{if } \alpha \notin \mathfrak{p} \text{ and there is no such } \eta \end{cases}$$
In all cases (zero and nonzero) $\left(\frac{\alpha}{\mathfrak{p}}\right)_n \equiv \alpha^{(\mathrm{N}\mathfrak{p} - 1)/n} \pmod{\mathfrak{p}}$.
All power residue symbols mod n are Dirichlet characters mod n, and the m-th power residue symbol only contains the m-th roots of unity; the m-th power residue symbol mod n exists if and only if m divides $\lambda(n)$ (the Carmichael lambda function of n).
Relation to the Hilbert symbol
The n-th power residue symbol is related to the Hilbert symbol $(\cdot,\cdot)_{\mathfrak{p}}$ for the prime $\mathfrak{p}$ by
$\left(\frac{\alpha}{\mathfrak{p}}\right)_n = (\pi, \alpha)_{\mathfrak{p}}$ in the case $\mathfrak{p}$ coprime to n, where $\pi$ is any uniformising element for the local field $K_{\mathfrak{p}}$.
Generalizations
The n-th power symbol may be extended to take non-prime ideals or non-zero elements as its "denominator", in the same way that the Jacobi symbol extends the Legendre symbol.
Any ideal $\mathfrak{a} \subset \mathcal{O}_k$ is the product of prime ideals, and in one way only: $\mathfrak{a} = \mathfrak{p}_1 \cdots \mathfrak{p}_g$.
The n-th power symbol is extended multiplicatively: $\left(\frac{\alpha}{\mathfrak{a}}\right)_n = \prod_{i=1}^{g} \left(\frac{\alpha}{\mathfrak{p}_i}\right)_n$.
For $0 \neq \beta \in \mathcal{O}_k$ we then define $\left(\frac{\alpha}{\beta}\right)_n := \left(\frac{\alpha}{(\beta)}\right)_n$,
where $(\beta)$ is the principal ideal generated by $\beta$.
Analogous to the quadratic Jacobi symbol, this symbol is multiplicative in the top and bottom parameters.
If $\alpha \equiv \beta \pmod{\mathfrak{a}}$, then $\left(\frac{\alpha}{\mathfrak{a}}\right)_n = \left(\frac{\beta}{\mathfrak{a}}\right)_n$.
Since the symbol is always an n-th root of unity, because of its multiplicativity it is equal to 1 whenever one parameter is an n-th power; the converse is not true.
If $\alpha \equiv \eta^n \pmod{\mathfrak{a}}$ for some $\eta \in \mathcal{O}_k$, then $\left(\frac{\alpha}{\mathfrak{a}}\right)_n = 1$.
If $\left(\frac{\alpha}{\mathfrak{a}}\right)_n \neq 1$, then $\alpha$ is not an n-th power modulo $\mathfrak{a}$.
If $\left(\frac{\alpha}{\mathfrak{a}}\right)_n = 1$, then $\alpha$ may or may not be an n-th power modulo $\mathfrak{a}$.
Power reciprocity law
The power reciprocity law, the analogue of the law of quadratic reciprocity, may be formulated in terms of the Hilbert symbols as
$\left(\frac{\alpha}{\beta}\right)_n \left(\frac{\beta}{\alpha}\right)_n^{-1} = \prod_{\mathfrak{p} \mid n\infty} (\alpha, \beta)_{\mathfrak{p}}$, whenever $\alpha$ and $\beta$ are coprime.
See also
Modular arithmetic § Residue class
Quadratic residue § Prime power modulus
Artin symbol
Gauss's lemma
Notes
References
Algebraic number theory | Power residue symbol | Mathematics | 589 |
50,244,489 | https://en.wikipedia.org/wiki/Scotomization | Scotomization is a psychological term for the mental blocking of unwanted perceptions, analogous to the visual blindness of an actual scotoma.
Controversies
This term initially was used by Charcot in connection with hysteria.
Psychoanalysis
Reviving this term in the 1920s, René Laforgue and Édouard Pichon introduced the idea of scotomization into psychoanalysis – a move initially welcomed by Freud in 1926 as a useful description of the hysterical avoidance of distressing perceptions. The following year, however, he attacked the term for suggesting that the perception was wholly blotted out (as with a retina's blind spot), whereas his clinical experience showed that on the contrary intense psychic measures had to be taken to keep the unwanted perception out of consciousness. A debate followed between Freud and Laforgue, further illuminated by Pichon's 1928 article on 'The Psychological Significance of Negation in French', where he argued that "The French language expresses the desire for scotomisation through the forclusif."
Lacan
Decades later, in the 1950s, the question of scotomization re-emerged in a phenomenological context under the influence of Jacques Lacan. Lacan used scotomization to represent the ego's relationship to the unconscious – speaking of "everything that the ego neglects, scotomizes, misconstrues in...reality" – as well as to challenge Sartre's concept of the gaze. Most significantly of all, however, he developed it into his influential update of Pichon's concept of foreclosure, thus endowing that idea with a conflation of visual and verbal elements.
See also
References
External links
Scotomization
Abnormal psychology
Defence mechanisms | Scotomization | Biology | 356 |
23,395,389 | https://en.wikipedia.org/wiki/Walden%207 | Walden 7 is an apartment building designed by Ricardo Bofill Taller de Arquitectura and located in Sant Just Desvern near Barcelona, in Catalonia, Spain. It was built in 1975.
Name
The name of the building is inspired by B. F. Skinner's novel, Walden Two, which depicts a utopian community and itself is a reference to Henry David Thoreau's novel Walden. It is noted for its use of modules to create apartments and many public community spaces.
Structure and usage
The original project includes 446 residences. With a budget lower than the norm for subsidized housing at the time, Walden 7 was built in the area to the west of Barcelona. It was originally designed as one of five similar blocks. The building is composed of 18 towers which are displaced from their base, forming a curve and coming into contact with the neighbouring towers, described as a "vertical labyrinth with seven interconnecting interior courtyards." The area originally devoted to communal uses was reduced to allow an increased number of apartments. These apartments are formed on the basis of one or more modules, which creates, on different levels, dwellings that range from single-module studios to large multiple-module apartments. Partitions between modules may be modified, designed to shift as family structures shift.
Walden 7 was designed with small, uniform windows and no central heating. Its original design included a bath in the middle of the room, which most residents removed. The original exterior façade was covered with small, red ceramic tiles backed with the wrong adhesive, creating a pedestrian hazard as the tiles fell off the building. The local government began repairing the structure in the 1990s, and a 1995 refurbishment removed most of the tiles and replaced them with red paint. The only remaining tiles exist on the small balconies. The interior is painted in blue, purple, and yellow. It is accessible by tram; the stop near the building is called Walden.
Although the building is a private apartment complex, it offers public tours.
Reception
Walden 7 became instantly iconic and made the cover of the prestigious Architectural Design magazine in July 1975.
In an article for Architectural Digest, architectural historian Vincent Scully described Walden 7 as "a wildly expressionistic apartment house, part Gaudí, part Archigram."
In popular culture
In the 1993 film The Bilingual Lover, two characters live in Walden 7. The falling tiles are mentioned.
See also
Unité d'habitation
La Muralla Roja
Les Espaces d'Abraxas
Antigone, Montpellier
List of works by Ricardo Bofill Taller de Arquitectura
Gallery
Notes
External links
ArchDaily
Official website
Walden 7
1975 establishments in Spain
Ricardo Bofill buildings
Buildings and structures in Catalonia
Modernist architecture in Barcelona
Postmodern architecture
Apartment buildings in Spain | Walden 7 | Engineering | 562 |
7,454,585 | https://en.wikipedia.org/wiki/Epideme | "Epideme" is the seventh episode of science fiction comedy series Red Dwarf VII and the 43rd in the series run. It was first broadcast on the British television channel BBC2 on 28 February 1997. Written by Paul Alexander and Doug Naylor, and directed by Ed Bye, the episode involves Lister contracting an intelligent, but deadly, virus.
Plot
The crew encounters an abandoned ship, the Leviathan, which is buried in the middle of an ice planetoid. In it, they find the frozen body of Caroline Carmen, one of Lister's former crushes. She is taken on board the Starbug, where the crew attempts to thaw her out, but they are unable to melt the ice. That night, Carmen defrosts of her own accord and turns out to be in an advanced state of decomposition. She attacks Lister and spits part of her jaw and tongue down his throat, infecting him with Epideme, an intelligent virus (with an annoying personality) that was supposed to cure nicotine addiction, but in practice kills its victims within 48 hours, then reanimates their corpse to find a new victim to transfer itself to.
Lister tries reasoning with Epideme directly through a communication link, but has no luck in convincing the virus to leave. Kochanski comes up with a drastic plan to save Lister's life: coax the virus to move down toward Lister's hand and then cut off the hand, isolating the virus outside his body. However, they end up cutting off Lister's right arm instead of the left one as he had requested, and they only manage to dispose of part of the Epideme virus, with the result that they only succeed in prolonging Lister's life by an hour. Lister sneaks aboard the Leviathan with some explosives, intending to kill both himself and Epideme, but the virus talks him out of it by revealing that the destination of the Leviathan was Delta VII, a research base that might have a cure.
When Starbug arrives at Delta VII, it turns out that the planet has been destroyed in order to deal with a massive Epideme outbreak – a fact that the virus was fully aware of, and used in its attempt to prevent Lister from killing himself. With Lister on the verge of death, Kochanski injects Lister with a drug that stops his heart, then gets his corpse to bite her left hand, infecting it. After amputating her left arm she reveals that it was actually Caroline Carmen's arm, and that her own left arm is intact. Kryten and Kochanski then revive the now virus-free but now one-armed Lister.
Production
For Paul Alexander's second script, he used an old Jasper Carrott joke for the premise of the plot – "What if your flu could talk to you? Wouldn't it just say that it was doing its job?" Again, Naylor helped out with the script, tweaking it to conform to the Red Dwarf universe.
An alternate ending was scripted and filmed for the episode – involving the dead arm, containing the Epideme virus, flying through space and then towards the camera – but it was decided to end the episode just before this scene.
Among the many new props needed for the new series was a laser bone-saw, used for the scenes of severing the Epideme-infected arm. For the scene, Chloë Annett needed several attempts to cut the arm off.
Voice artist Gary Martin played the talking virus Epideme. He was recommended by Danny John-Jules, his friend of many years' standing, and had even been with Danny when he auditioned for the role of the Cat in the mid-eighties.
Nicky Leatherbarrow also appeared, in heavy make-up, as Caroline Carmen – the initial carrier of the Epideme virus.
Reception
Originally broadcast on the British television channel BBC2 on 28 February 1997 in the 9:00 pm evening slot, this episode's television ratings were high. Although Series VII as a whole received a mixed response from fans and critics alike, this was considered one of the better episodes.
DVDActive thought the episode was "a nice idea, and one that is well-executed ... the final scene is one of the funniest of the series." DVD Verdict thought that this episode was the first in which the character of Kochanski finally "reached her stride" after all the "attitude and aggravation during those first few shows". Sci-Fi Online noted that the episode was "particularly reminiscent of Confidence and Paranoia, since it deals with a talking disease."
References
External links
Series VII episode guide at www.reddwarf.co.uk
Red Dwarf VII episodes
1997 British television episodes
Fictional viruses
Fictional microorganisms | Epideme | Biology | 983 |
23,156,336 | https://en.wikipedia.org/wiki/Camber%20thrust | Camber thrust and camber force are terms used to describe the force generated perpendicular to the direction of travel of a rolling tire due to its camber angle and finite contact patch. Camber thrust is generated when a point on the outer surface of a leaned and rotating tire, that would normally follow a path that is elliptical when projected onto the ground, is forced to follow a straight path while coming in contact with the ground, due to friction. This deviation towards the direction of the lean causes a deformation in the tire tread and carcass that is transmitted to the vehicle as a force in the direction of the lean.
Camber thrust is approximately linearly proportional to camber angle for small angles, reaches its steady-state value nearly instantaneously after a change in camber angle, and so does not have an associated relaxation length. Bias-ply tires have been found to generate more camber thrust than radial tires. Camber stiffness is a parameter used to describe the camber thrust generated by a tire, and it is influenced by inflation pressure and normal load. The net camber thrust is usually in front of the center of the wheel and so generates a camber torque, twisting torque, or twisting moment. The orientation of this torque is such that it tends to steer a tire towards the direction that it is leaned. An alternate explanation for this torque is that the two sides of the contact patch are at different radii from the axle, and so would travel forward at different rates unless constrained by friction with the pavement.
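The small-angle linear relationship between camber angle and camber thrust can be illustrated with a short sketch. The function and the stiffness value below are purely illustrative assumptions, not measured figures for any real tire.

```python
import math

def lateral_force(camber_stiffness, camber_angle, cornering_stiffness=0.0, slip_angle=0.0):
    """Small-angle linear tire model: lateral force from camber and slip.

    Stiffnesses are in newtons per radian and angles in radians; the values
    used below are purely illustrative.
    """
    camber_thrust = camber_stiffness * camber_angle
    cornering_force = cornering_stiffness * slip_angle
    return camber_thrust + cornering_force

# A tire leaned 5 degrees with an assumed camber stiffness of 1500 N/rad.
print(round(lateral_force(1500.0, math.radians(5.0)), 1))  # ≈ 130.9 N of camber thrust
```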
On bikes
On bicycles and motorcycles, camber thrust, along with cornering force due to slip angle, contributes to the centripetal force necessary to cause the vehicle to deviate from a straight path; it can be the largest contributor, and in some cases is the sole contributor. Camber thrust contributes to the ability of bikes to negotiate a turn with the same radius as automobiles but with a smaller steering angle. When a bike is steered and leaned in the same direction, the camber angle of the front tire is greater than that of the rear and so can generate more camber thrust, all else being equal.
On automobiles
On automobiles, camber thrust may contribute to or subtract from the total centripetal force generated by the tire, depending on the camber angle. On a well-aligned vehicle, camber thrust from the tires on each side balances out. On a surface rough enough for one front tire to momentarily lose traction, camber thrust from the other front tire can cause the vehicle to wander or feel skittish.
See also
Bicycle and motorcycle dynamics
Cornering force
Motorcycle tires
Relaxation length
Vehicle dynamics
References
Tires
Motorcycle technology
Motorcycle dynamics
Automotive steering technologies
Vehicle technology | Camber thrust | Engineering | 550 |
56,398,645 | https://en.wikipedia.org/wiki/Valentine%20Diner | A Valentine Diner was a prefabricated mail order small diner produced in Wichita, Kansas after the Great Depression. The concept was created by Arthur Valentine in the 1930s, who had experience operating lunch rooms. Originally the diners were manufactured by the Ablah Hotel Supply Company.
In 1947, manufacturing was taken over by the Valentine Manufacturing Company. After World War II and the implementation of the Interstate Highway System in the U.S. in the late 1950s, prefabricated diners saw a boom in business as motorists took to the roads in greater numbers for longer journeys and would stop for a meal. Valentine Diners were produced until the 1970s, and several survive as operating businesses (sometimes as restaurants, sometimes as other businesses) around the United States today. A few have become historical roadside attractions, such as along historic Route 66.
At least twelve different Valentine Diners styles were produced. Diners can be identified by either their wall safe, which will have a Valentine logo (a heart with an arrow through it), or the Valentine diner steel serial number plate, which has the word “Valentine” written on it.
References
Diner manufacturers
Prefabricated buildings
Great Depression
History of Wichita, Kansas
Manufactured goods | Valentine Diner | Engineering | 246 |
18,819,901 | https://en.wikipedia.org/wiki/Research%20Institute%20for%20Symbolic%20Computation | The Research Institute for Symbolic Computation (RISC Linz) is a research institute in the area of symbolic computation, including automated theorem proving and computer algebra. It is located in Schloß Hagenberg in Hagenberg near Linz in Austria. RISC was founded in 1987 under Bruno Buchberger and moved to Hagenberg in 1989. The present chairman of RISC is Peter Paule.
External links
RISC Linz
Softwarepark Hagenberg
Computer science organizations | Research Institute for Symbolic Computation | Technology | 95 |
27,561,554 | https://en.wikipedia.org/wiki/Nebulae%20%28computer%29 | Nebulae is a petascale supercomputer located at the National Supercomputing Center in Shenzhen, Guangdong, China. Built from a Dawning TC3600 Blade system with Intel Xeon X5650 processors and Nvidia Tesla C2050 GPUs, it achieves a performance of 1.271 petaflops on the LINPACK benchmark suite. Nebulae was ranked the second most powerful computer in the world in the June 2010 list of the fastest supercomputers according to TOP500. Nebulae has a theoretical peak performance of 2.9843 petaflops. This computer is used for multiple applications requiring advanced processing capabilities. It was ranked 10th in the June 2012 TOP500 list.
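As a quick check on the two performance figures quoted above, the ratio of measured LINPACK performance to theoretical peak gives the system's LINPACK efficiency. The sketch below merely illustrates that arithmetic using the numbers from the text.

```python
# Figures quoted in the text for Nebulae (petaflops).
r_max = 1.271    # measured LINPACK performance
r_peak = 2.9843  # theoretical peak performance

efficiency = r_max / r_peak
print(f"LINPACK efficiency: {efficiency:.1%}")  # ≈ 42.6%
```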
See also
Computer science
Computing
Supercomputer centers in China
TOP500
References
Further reading
"National Supercomputing Center starts construction in Shenzhen", People's Daily, November 17, 2009
Fildes, Jonathan, "China aims to become supercomputer superpower", BBC News, 31 May 2010
GPGPU supercomputers
One-of-a-kind computers
Petascale computers
Supercomputing in China
X86 supercomputers | Nebulae (computer) | Technology | 251 |
20,716,195 | https://en.wikipedia.org/wiki/Salt%20well | A salt well (or brine well) is used to mine salt from caverns or deposits. Water is used as a solution to dissolve the salt or halite deposits so that they can be extracted by pipe to an evaporation process, which results in either a brine or a dry product for sale or local use. In the United States during the 19th century, salt wells were a significant source of income for operators and the government. Locating underground salt deposits was usually based on locations of existing salt springs.
In mountainous areas, a similar technique called sink works (from German sinkwerk) is used.
History
The Chinese have been using brine wells and a form of salt solution mining as part of their civilization for more than 2000 years. The first recorded salt well in China was dug in the Sichuan province around 2,250 years ago. This was the first time that ancient water well technology was applied successfully for the exploitation of salt, and marked the beginning of Sichuan's salt drilling industry. Shaft wells were sunk as early as 220 BC in the Sichuan and Yunnan Provinces. By 1035 AD, Chinese in the Sichuan area were using percussion drilling to recover deep brines, a technique that would not be introduced to the West for another 600 to 800 years. Medieval and modern European travelers to China between 1400 and 1700 AD reported salt and natural gas production from dense networks of brine wells. Song dynasty salt drilling tools recovered by archaeologists are kept and displayed in the Zigong Salt Industry Museum. Many of the wells were sunk deeper than 450 m and at least one well was more than 1000 meters deep. The medieval Venetian traveler to China, Marco Polo, reported an annual production in a single province of more than 30,000 tonnes of brine during his time there. According to Salt: A World History, a Qing dynasty well, also in Zigong, "continued down to 3,300 feet (1,000 m) making it at the time the deepest drilled well in the world."
References
Chinese inventions
History of Sichuan
Mining techniques
Salt production | Salt well | Chemistry | 415 |
35,766,545 | https://en.wikipedia.org/wiki/5-Methylcytidine | 5-Methylcytidine is a modified nucleoside derived from 5-methylcytosine. It is found in ribonucleic acids of animal, plant, and bacterial origin.
References
Nucleosides
Pyrimidones
Hydroxymethyl compounds | 5-Methylcytidine | Chemistry | 58 |
47,135,293 | https://en.wikipedia.org/wiki/List%20of%20monuments%20damaged%20by%20conflict%20in%20the%20Middle%20East%20during%20the%2021st%20century | This is a list of monuments suffering damage from conflict in the Middle East during the 21st century. It is sorted by country.
Egypt
The Museum of Islamic Art in Cairo is home to one of the world's most impressive collections of Islamic art. It includes over 100,000 pieces that cover the entirety of Islamic history. The Cairo site was first built in 1881 and underwent a multi-million dollar renovation between 2003 and 2010.
On 24 January 2014, a car bomb attack targeting the Cairo police headquarters across the street caused considerable damage to the museum and destroyed many artifacts. It is estimated that 20–30% of the artifacts will need restoration. The blast also severely damaged the building's facade, wiping out intricate designs in the Islamic style. The Egyptian National Library and Archives in the same building was also affected.
Iraq
Dair Mar Elia, also known as Saint Elijah's monastery. The Christian monastery near Mosul was founded in the late 6th century, and its sanctuary was built in the 11th century. The monastery was damaged during the invasion of 2003, before being completely destroyed by ISIL in 2014.
Nimrud. The ancient Assyrian city in Nineveh Province, Iraq was home to countless treasures of the empire, including statues, monuments, and jewels. Following the 2003 invasion the site has been devastated by looting, with many of the stolen pieces finding homes in museums abroad.
Great Mosque of Samarra. Once the largest mosque in the world, built in the 9th century on the Tigris River north of Baghdad. The mosque is famous for Malwiya Tower, a 52-meter minaret with spiraling ramps for worshipers to climb. The site was bombed in 2005, in an insurgent attack on a NATO position, destroying the top of the minaret and surrounding walls.
Al-Askari Shrine was severely damaged in a bombing in 2006 by unknown, masked assailants which resulted in the complete destruction of its golden dome.
Tomb of Jonah. The purported resting place of the biblical prophet Jonah, along with a tooth believed by some to be from the whale that consumed him in the story. The site dated to the 8th century BC and was of great importance to both the Christian and Muslim faiths. It was entirely blown up by ISIL militants in 2014 as part of their campaign against perceived apostasy.
Lebanon
Old Beirut suffered through a brutal 15-year civil war, successive battles with Israel, and sweeping urban development. It is referred to as the "Paris of the Middle East" and is known for its impressive landscape of Ottoman, French, and Art Deco architecture. Officials report that just 400 of 1,200 protected historic buildings remain.
Tibnin Castle was damaged during the Israeli invasion of Lebanon in 2024 and one of its walls collapsed.
Libya
Cyrene (Libya). A key city for the Greeks and Romans, established in 630 BC. Famed as the basis for enduring myths and legends, such as that of the huntress heroine of the same name and bride of Apollo. The ruins were some of the best preserved from that period.
In May 2011, a number of objects excavated from Cyrene in 1917 and held in the vault of the National Commercial Bank in Benghazi were stolen. Looters tunnelled into the vault and broke into two safes that held the artefacts, which were part of the so-called 'Benghazi Treasure'. The whereabouts of these objects are currently unknown.
Parts of the UNESCO World Heritage Site of Cyrene were destroyed in August 2013 by locals to make way for homes and shops. Approximately 200 vaults and tombs were leveled, as well as a section of a viaduct dating to the third century BC. Artifacts were thrown into a nearby river.
Palestine
Al-Omari Mosque, Gaza. Ancient monument in the heart of Jabalya's old town that dates back to the Mamluk Era. The walls, dome and roof were destroyed by Israeli airstrikes during the 2015 fighting in Gaza, along with dozens more historic sites. According to tradition, the mosque stands on the site of the Philistine temple dedicated to Dagon, the god of fertility, which Samson toppled in the Book of Judges. Later, a temple dedicated to Marnas, god of rain and grain, was erected. Local legend today claims that Samson is buried under the present mosque. The mosque is well known for its minaret, which is square-shaped in its lower half and octagonal in its upper half, typical of Mamluk architectural style. The minaret is constructed of stone from the base to the upper, hanging balcony, including the four-tiered upper half. The pinnacle is mostly made of woodwork and tiles, and is frequently renewed. A simple cupola springs from the octagonal stone drum and is of light construction similar to most mosques in the Levant.
Syria
The ancient city of Bosra. Continuously inhabited for 2,500 years, the city became the capital of the Romans' Arabian province. The centerpiece is a magnificent Roman theatre dating back to the second century that survived intact until the current century. Archeologists have revealed that the site is now severely damaged from mortar shelling in 2011–2012 during the Arab Spring.
Citadel of Aleppo. The fortress spans at least four millennia, from the days of Alexander the Great, through Roman, Mongol, and Ottoman rule. The site has barely changed since the 16th century and is one of Syria's most popular World Heritage sites.
In August 2012, during the Battle of Aleppo of the Syrian civil war, the external gate of the citadel was damaged after being shelled during a clash between the Free Syrian Army and the Syrian Army to gain control over the citadel.
During the conflict, the Syrian Army used the Citadel as a military base, with the walls acting as cover while shelling surrounding areas and ancient arrow slits in walls being used by snipers to target rebels. As a result of this contemporary usage, the Citadel has received significant damage.
Armenian Genocide Memorial Church (Der Zor). Memorial site to the 1.5 million killed between 1915 and 1923, the Deir Ez-zor became a yearly destination for pilgrims from around the world. The site included a church, museum, and fire that burned continuously. On 21 September 2014, the memorial complex was blown up by militants of the Islamic State of Iraq and the Levant.
Al-Madina Souq. The covered markets in the Old City are a famous trade center for the region's finest produce, with dedicated sub-souks for fabrics, food, and accessories. The tunnels became the scene of fierce fighting and many of the oldest are now damaged beyond recognition. This was described by UNESCO as a tragedy.
Deir ez-Zor suspension bridge. This French-built suspension bridge was a popular pedestrian crossing and vantage point for its views of the Euphrates River. The bridge was destroyed by Free Syrian Army militiamen during the Syrian civil war in May 2013. Deir Ez-zor's Siyasiyeh Bridge was also destroyed.
Khalid ibn al-Walid Mosque. Among Syria's most famous Ottoman-style mosques, which also shows Mamluk influence through its light and dark contrasts. As of 2007, activities in the mosque were organized by shaykhs Haytham al-Sa'id and Ahmad Mithqan. Stamps depicting the mosque have been issued in several denominations.
The Khalid ibn al-Walid Mosque has been a symbol of anti-government rebels during the Syrian civil war. According to The New York Times, Syrian security forces killed 10 protesters participating in a funeral procession as they were leaving the mosque on 18 July 2011. The mosque, which the Syrian government stated had been turned by the rebels into an "arms and ammunition depot", was abandoned by the rebels on 27 July 2013. Shelling by government forces damaged Khalid's tomb inside the mosque. Following its capture by the Syrian Army, state media showed heavy damage inside the mosque, including some parts of it being burned, and the door to the tomb destroyed.
Krak des Chevaliers. The Crusader castle from the 11th century survived centuries of battles and natural disasters, becoming a World Heritage Site in 2006 along with the adjacent castle of Qal'at Salah El-Din.
During the Syrian Civil War, which began in 2011, UNESCO voiced concerns that the conflict might lead to damage of important cultural sites such as Krak des Chevaliers. The castle was reportedly shelled by the Syrian Arab Army in August 2012, damaging the Crusader chapel. It was reported to have been damaged again in July 2013 by an airstrike during the Siege of Homs, and once more on 18 August 2013, although the extent of that damage is unknown. The Syrian Arab Army recaptured the castle and the village of al-Hosn from rebel forces during the Battle of Hosn on 20 March 2014, although the extent of damage from earlier mortar hits remained unclear.
Palmyra. An "oasis in the Syrian desert" according to UNESCO, this Aramaic city has stood since the second millennium BC and featured some of the most advanced architecture of the period. The site subsequently evolved through Greco-Roman and Persian periods, providing unique historic insight into those cultures.
As a result of the Syrian Civil War, Palmyra experienced widespread looting and damage by combatants. During the summer of 2012, concerns about looting in the museum and the site increased when an amateur video of Syrian soldiers carrying funerary stones was posted. However, according to France 24's report, "From the information gathered, it is impossible to determine whether pillaging was taking place." The following year the facade of the temple of Bel sustained a large hole from mortar fire, and colonnade columns have been damaged by shrapnel. According to Maamoun Abdulkarim, director of antiquities and museums at the Syrian Ministry of Culture, the Syrian Army positioned its troops in some archaeological-site areas, while Syrian opposition soldiers stationed themselves in gardens around the city.
On 13 May 2015, the ISIL launched an attack on the modern town, sparking fears that the iconoclastic group would destroy the site. On 21 May, ISIL forces entered the World Heritage Site. Local residents reported that the Syrian air force bombed the site on 13 June, damaging the northern wall next to the Temple of Baalshamin. The Temple of Baalshamin and the Temple of Bel were demolished by ISIL in August 2015.
The Great Mosque of Aleppo. A World Heritage Site originally built in 715 by the Umayyad dynasty, ranking it among the oldest mosques in the world. The epic structure evolved through successive eras, gaining its famous minaret in the late 11th century.
On 13 October 2012 the mosque was seriously damaged during clashes between the armed groups of the Free Syrian Army and the Syrian Army forces. President Bashar al-Assad issued a presidential decree to form a committee to repair the mosque by the end of 2013.
The mosque was seized by rebel forces in early 2013 and, as of April 2013, lay within an area of heavy fighting, with government forces stationed a short distance away.
On 24 April 2013 the minaret of the mosque was reduced to rubble during an exchange of heavy weapons fire between government forces and rebels during the ongoing Syrian civil war. The Syrian Arab News Agency (SANA) reported that members of Jabhat al-Nusra detonated explosives inside the minaret, while opposition activists said that the minaret was destroyed by Syrian Army tank fire as part of an offensive. Countering assertions by the state media of Jabhat al-Nusra's involvement, opposition sources described them as rebels from the Tawhid Brigades who were fighting government forces around the mosque. The opposition's main political bloc, the Syrian National Coalition (SNC), condemned the minaret's destruction, calling it "an indelible disgrace" and "a crime against human civilization."
Yemen
Sana'a old city. Yemen's capital city of Sana'a has been struck by suicide bombings (for which ISIL has claimed responsibility) and air-strikes by the Saudi-led coalition. These have affected the old fortified city—inscribed on UNESCO's World Heritage List since 1986—and the archaeological site of the pre-Islamic walled city of Baraqish, causing, according to UNESCO, "severe damage".
See also
Destruction of art
List of heritage sites damaged during the Syrian Civil War
List of World Heritage in Danger
Lost artworks
List of destroyed heritage
Destruction of Art in Afghanistan
References
Architecture lists
Middle East
Cultural lists
Lists of demolished buildings and structures
21st century-related lists
Middle East-related lists
Lists of monuments and memorials
Monuments and memorials in Asia | List of monuments damaged by conflict in the Middle East during the 21st century | Engineering | 2,576 |
447,151 | https://en.wikipedia.org/wiki/Dividend%20yield | The dividend yield or dividend–price ratio of a share is the dividend per share divided by the price per share. It is also a company's total annual dividend payments divided by its market capitalization, assuming the number of shares is constant. It is often expressed as a percentage.
Dividend yield is used to calculate the dividend earning on investments.
Analysis
Historically, a higher dividend yield has been considered desirable by many investors. A high dividend yield can be taken as evidence that a stock is underpriced, or that the company has fallen on hard times and future dividends will not be as high as previous ones. Similarly, a low dividend yield can be taken as evidence that the stock is overpriced or that future dividends might be higher. Some investors may find a higher dividend yield attractive, for instance as an aid to marketing a fund to retail investors, or because they cannot access the capital, which may be tied up in a trust arrangement. In contrast, some investors may find a higher dividend yield unattractive, perhaps because it increases their tax bill.
Dividend yield fell out of favor somewhat during the 1990s because of an increasing emphasis on price appreciation over dividends as the main form of return on investments.
The importance of the dividend yield in determining investment strength is still a debated topic; most recently, Foye and Valentincic (2017) suggested that high dividend yield stocks tend to outperform. The persistent historic low in the Dow Jones dividend yield during the early 21st century is considered by some investors as indicative that the market is still overvalued.
Dow Industrials
The dividend yield of the Dow Jones Industrial Average, which is obtained from the annual dividends of all 30 companies in the average divided by their cumulative stock price, has also been considered to be an important indicator of the strength of the U.S. stock market. Historically, the Dow Jones dividend yield has fluctuated between 3.2% (during market highs, for example in 1929) and around 8.0% (during typical market lows). The highest ever Dow Jones dividend yield occurred in 1932 when it yielded over 15%, which was years after the famous stock market collapse of 1929, when it yielded only 3.1%.
With the decreased emphasis on dividends since the mid-1990s, the Dow Jones dividend yield has fallen well below its historical low-water mark of 3.2% and reached as low as 1.4% during the stock market peak of 2000.
The Dogs of the Dow is a popular investment strategy which invests in the ten highest dividend yield Dow stocks at the beginning of each calendar year.
S&P 500
In 1982 the dividend yield on the S&P 500 Index reached 6.7%. Over the following 16 years, the dividend yield declined to just 1.4% during 1998, because stock prices increased faster than dividend payments from earnings, and public company earnings increased more slowly than stock prices. During the 20th century, the highest growth rates for earnings and dividends over any 30-year period were 6.3% annually for dividends, and 7.8% for earnings.
Overview
Terminology
Current yield
The current yield is the ratio of the annual dividend to the current market price, which will vary over time.
Trailing and forward
Trailing dividend yield gives the dividend percentage paid over a prior period, typically one year. A trailing twelve month dividend yield, denoted as "TTM", includes all dividends paid during the past year in order to calculate the dividend yield. While a trailing dividend can be indicative of future dividends, it can be misleading as it does not account for dividend increases or cuts, nor does it account for a special dividend that may not occur again in the future.
Forward dividend yield is some estimation of the future yield of a stock. This may be an analyst's estimate, or just using the company's guidance. For example, if a company has announced a dividend increase, even though nothing has been paid, this may be assumed to be the payment for the next year. Similarly, if a company has said that it will suspend its dividend, the yield would be assumed to be zero.
The calculation is done by taking the most recent dividend payment, annualizing it, and dividing that figure by the current stock price. For example, for a company paying quarterly dividends, the latest quarterly dividend is multiplied by four and divided by the current share price to give the forward dividend yield.
The trailing dividend yield is calculated in reverse, by taking the last dividend actually paid, annualizing it, and dividing by the current stock price.
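A minimal sketch of the two calculations just described, using the trailing-twelve-month (TTM) convention mentioned above for the trailing figure; the dividend and price numbers are hypothetical, not from the article:

```python
def forward_dividend_yield(latest_quarterly_dividend: float, price: float) -> float:
    """Annualize the most recent quarterly dividend and divide by the current share price."""
    return (latest_quarterly_dividend * 4) / price


def trailing_dividend_yield(dividends_last_12_months: list[float], price: float) -> float:
    """Sum all dividends paid over the trailing twelve months (TTM) and divide by the current share price."""
    return sum(dividends_last_12_months) / price


# Hypothetical figures: $0.50 latest quarterly dividend, $40 share price.
print(f"forward:  {forward_dividend_yield(0.50, 40.0):.2%}")                      # 5.00%
print(f"trailing: {trailing_dividend_yield([0.45, 0.45, 0.50, 0.50], 40.0):.2%}")  # 4.75%
```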
Yield on cost
Yield is sometimes computed based on the amount paid for a stock. For example, if stock X was bought for $20 per share, it split 2:1 three times (resulting in 8 total shares), it is now trading for $50 per share ($400 for the 8 shares), and it pays a dividend of $2 per share per year, then the yield on cost is 80% (8 shares × $2 = $16 per year paid over the $20 invested -> 16/20 = 0.8). The yield at the current price is 4% ($2 over the $50 share price -> 2/50 = 0.04).
Common shares
Unlike preferred stock, there is no stipulated dividend for common stock ("ordinary shares" in the UK). Instead, dividends paid to holders of common stock are set by management, usually in relation to the company's earnings. There is no guarantee that future dividends will match past dividends or even be paid at all. The historic yield is calculated using the following formula:
historic dividend yield = (total dividends paid per share over the previous year) / (current share price)
For example, take a company which paid dividends totaling per share last year and whose shares currently sell for . Its dividend yield would be calculated as follows:
The yield for the S&P 500 is reported this way. US newspaper and web listings of common stocks apply a somewhat different calculation: They report the latest quarterly dividend multiplied by 4, divided by the current price. Others try to estimate the next year's dividend and use it to derive a prospective dividend yield. Such a scheme is used for the calculation of the FTSE UK Dividend+ Index. Estimates of future dividend yields are by definition uncertain.
Preferred shares
Nominal yield
Dividend payments on preferred stocks are set out in the prospectus. The name of the preferred share will typically include its nominal yield relative to the issue price: for example, a 6% preferred share. However, the dividend may under some circumstances be passed or reduced.
Yield to call
The yield to call figure for a callable preferred share is the effective current yield, assuming that the issuer will exercise the call contingency immediately on the call date. The yield to call is implicitly a current measure of a future value, accounting for the difference between the future call price versus the current market price. Since the current market price may be above or below the call price, the yield to call may be below or above the current yield.
Yield to worst
For callable preferred stocks, the yield to worst is the lesser of the current yield and the yield to call. Yield to worst represents the minimum of the various yield measures, across the returns resulting from various contingent future events. This amounts to the worst case outcome from the investor's position.
Preferred issues that are not callable, or whose call date has already arrived, do not have a yield to call or yield to worst. The only present yield measure in such cases is the current yield.
Related measures
The reciprocal of the dividend yield is the price/dividend ratio. The dividend yield is related to the earnings yield via:
earnings yield = dividend yield · dividend cover
dividend yield = earnings yield · dividend payout ratio.
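A small sketch checking the two identities above with hypothetical per-share figures:

```python
# dividend yield = earnings yield x dividend payout ratio
# earnings yield = dividend yield x dividend cover
earnings_per_share = 4.00    # hypothetical
dividends_per_share = 1.00   # hypothetical
price = 50.00                # hypothetical

earnings_yield = earnings_per_share / price                 # 8%
payout_ratio = dividends_per_share / earnings_per_share     # 25%
dividend_cover = earnings_per_share / dividends_per_share   # 4x

dividend_yield = earnings_yield * payout_ratio
assert abs(dividend_yield - dividends_per_share / price) < 1e-12
assert abs(earnings_yield - dividend_yield * dividend_cover) < 1e-12
print(f"dividend yield = {dividend_yield:.2%}")  # 2.00%
```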
See also
Cost of capital
Dividend payout ratio
Earnings yield
Liquidating dividend
P/E ratio
List of finance topics
References
External links
Understanding dividend yields by Investopedia.
Dividend Yields by Yahoo Education Center.
Dividend Yield Calculator
Further reading
Cohen, R.D. (2002, November) "The Relationship Between the Equity Risk Premium, Duration and Dividend Yield" Wilmott Magazine, pp 84–97.
Yield
Financial ratios | Dividend yield | Mathematics | 1,678 |
76,390,348 | https://en.wikipedia.org/wiki/Ytterbium%28III%29%20iodate | Ytterbium(III) iodate is an inorganic compound with the chemical formula Yb(IO3)3. Its dihydrate can be prepared by reacting ytterbium sulfate and iodic acid in water at 200 °C. It crystallizes in the P21/c space group, with unit cell parameters a=8.685, b=6.066, c=16.687 Å, β=115.01°.
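As an illustration (the formula is not stated in the article), the quoted lattice parameters give the monoclinic unit-cell volume via the standard relation V = a·b·c·sin β; a short sketch:

```python
import math

# Unit-cell parameters for Yb(IO3)3 dihydrate from the text above.
a, b, c = 8.685, 6.066, 16.687   # angstroms
beta_deg = 115.01                # degrees

# Volume of a monoclinic cell: V = a * b * c * sin(beta)
volume = a * b * c * math.sin(math.radians(beta_deg))
print(f"Unit-cell volume ≈ {volume:.1f} Å³")  # roughly 797 Å³
```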
References
Ytterbium(III) compounds
Iodates | Ytterbium(III) iodate | Chemistry | 109 |
52,171,006 | https://en.wikipedia.org/wiki/Nick%20Baldwin | Nicholas Peter Baldwin (born 17 December 1952) is a British businessman, Chairman of the Office for Nuclear Regulation, and a former Chief Executive of Powergen (E.ON UK since July 2004).
Early life
He was born in Gosport.
He gained a BSc in Mechanical Engineering from City University (now City, University of London), studying from 1971 to 1975. He later gained an MSc in economics from Birkbeck College.
Career
Powergen
He joined Powergen in 1989. From 2001 to 2002 he was Chief Executive of Powergen. Powergen was sold to E.ON of Germany for £9.6bn, completed in January 2002. The UK business of National Power was bought by RWE for £3.1bn in 2000.
Nuclear Decommissioning Authority
He was Interim Chairman from 2007 to 2008 of the Nuclear Decommissioning Authority. He worked there from 2004 to 2011.
Office for Nuclear Regulation
He became Chairman of the Office for Nuclear Regulation in 2011. The ONR became an independent public organisation in April 2014.
Baldwin was appointed Commander of the Order of the British Empire (CBE) in the 2017 Birthday Honours for services to nuclear safety and security and to the charitable sector.
He was replaced in the role by Mark McAllister on 1 April 2019.
Personal life
He married Adrienne Plunkett in March 2002 in Evesham. They have a son and daughter. He lives in Worcester. In September 2000 he was hit by lightning, and suffered serious burns and head injuries, whilst sheltering under a tree at Bryce Canyon National Park in Utah, when on a horse-riding holiday; he was taken to hospital in Salt Lake City, where he was in intensive care for three days. A lightning bolt can carry up to one million volts. He was with his wife and two children at the time, and has no memory of the incident or of the two days that followed.
See also
Nuclear power in the United Kingdom
Wulf Bernotat, former Chief Executive of E.ON (in Essen), and former Chairman of Powergen
References
External links
ONR
1952 births
Alumni of Birkbeck, University of London
Alumni of City, University of London
British chief executives in the energy industry
Businesspeople in nuclear power
E.ON
Fellows of the Institution of Engineering and Technology
Fellows of the Institution of Mechanical Engineers
Injuries from lightning strikes
Nuclear energy in the United Kingdom
People from Gosport
Businesspeople from Worcester, England
Living people
Commanders of the Order of the British Empire | Nick Baldwin | Engineering | 509 |
49,055,689 | https://en.wikipedia.org/wiki/IEEE%20802.11ay | IEEE 802.11ay, Enhanced Throughput for Operation in License-exempt Bands above 45 GHz, is a follow-up to the IEEE 802.11ad (WiGig) standard that quadruples the bandwidth and adds MIMO with up to 8 streams. Development started in 2015 and the final standard, IEEE 802.11ay-2021, was approved in March 2021.
Technical details
802.11ay is a standard in the IEEE 802.11 family of Wi-Fi WLAN technologies. It is an improvement on IEEE 802.11ad rather than an entirely new standard. It uses the 60 GHz band, has a transmission rate of 20–40 Gbit/s, and extends the transmission distance to 300–500 meters. It includes mechanisms for channel bonding and MU-MIMO technologies. It was originally expected to be released in 2017, but was delayed until 2021.
Where 802.11ad uses a maximum of 2.16 GHz bandwidth, 802.11ay bonds four of those channels together for a maximum bandwidth of 8.64 GHz. MIMO is also added with a maximum of four streams. The link-rate per stream is 44 Gbit/s, with four streams this goes up to 176 Gbit/s. Higher order modulation is also added, probably up to 256-QAM.
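A trivial sketch of the aggregate figures implied by this paragraph (four bonded 2.16 GHz channels and up to four MIMO streams at 44 Gbit/s each); the variable names are just for illustration:

```python
# Channel bonding and MIMO aggregates for 802.11ay, per the figures above.
channel_bandwidth_ghz = 2.16
bonded_channels = 4
streams = 4
per_stream_rate_gbps = 44

total_bandwidth_ghz = channel_bandwidth_ghz * bonded_channels  # 8.64 GHz
peak_rate_gbps = per_stream_rate_gbps * streams                # 176 Gbit/s

print(f"{total_bandwidth_ghz:.2f} GHz bonded bandwidth, {peak_rate_gbps} Gbit/s peak link rate")
```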
Applications could include replacement for Ethernet and other cables within offices or homes, and provide backhaul connectivity outside for service providers.
802.11ay should not be confused with the similarly named 802.11ax that was officially approved in 2021. The 802.11ay standard is designed to run at much higher frequencies. The lower frequency of 802.11ax enables it to penetrate walls somewhat, while 802.11ay is generally blocked by walls.
Draft versions
Draft version 0.1 of 802.11ay was released in January 2017, followed by draft version 0.2 in March 2017. Draft version 1.0 was made available in November 2017, and draft 1.2 was available as of April 2018.
Draft version 7.0 was released in December 2020 and the Final 802 Working Group Approval was received in February 2021.
See also
List of WLAN channels
IEEE
References
External links
IEEE 802.11ay-2021 — IEEE Standard for Information Technology — Telecommunications and Information Exchange between Systems Local and Metropolitan Area Networks — Specific Requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications Amendment 2: Enhanced Throughput for Operation in License-exempt Bands above 45 GHz.
FAQ: What is 802.11ay wireless technology?
Status of IEEE 802.11 Next Generation 60 GHz (NG60) Study Group" at IEEE802
Wireless networking | IEEE 802.11ay | Technology,Engineering | 537 |
2,970,014 | https://en.wikipedia.org/wiki/Beam%20emittance | In accelerator physics, emittance is a property of a charged particle beam. It refers to the area occupied by the beam in a position-and-momentum phase space.
Each particle in a beam can be described by its position and momentum along each of three orthogonal axes, for a total of six position and momentum coordinates. When the position and momentum for a single axis are plotted on a two dimensional graph, the average spread of the coordinates on this plot is the emittance. As such, a beam will have three emittances, one along each axis, which can be described independently. As particle momentum along an axis is usually described as an angle relative to that axis, an area on a position-momentum plot will have dimensions of length × angle (for example, millimeters × milliradian).
Emittance is important for analysis of particle beams. As long as the beam is only subjected to conservative forces, Liouville's theorem shows that emittance is a conserved quantity. If the distribution over phase space is represented as a cloud in a plot (see figure), emittance is the area of the cloud. A variety of more exact definitions handle the fuzzy borders of the cloud and the case of a cloud that does not have an elliptical shape. In addition, the emittance along each axis is independent unless the beam passes through beamline elements (such as solenoid magnets) which correlate them.
A low-emittance particle beam is a beam where the particles are confined to a small distance and have nearly the same momentum, which is a desirable property for ensuring that the entire beam is transported to its destination. In a colliding beam accelerator, keeping the emittance small means that the likelihood of particle interactions will be greater resulting in higher luminosity. In a synchrotron light source, low emittance means that the resulting x-ray beam will be small, and result in higher brightness.
Definitions
The coordinate system used to describe the motion of particles in an accelerator has three orthogonal axes, but rather than being centered on a fixed point in space, they are oriented with respect to the trajectory of an "ideal" particle moving through the accelerator with no deviation from the intended speed, position, or direction. Motion along this design trajectory is referred to as the longitudinal axis, and the two axes perpendicular to this trajectory (usually oriented horizontally and vertically) are referred to as transverse axes. The most common convention is for the longitudinal axis to be labelled z and the transverse axes to be labelled x and y.
Emittance has units of length, but is usually referred to as "length × angle", for example, "millimeter × milliradians". It can be measured in all three spatial dimensions.
Geometric transverse emittance
When a particle moves through a circular accelerator or storage ring, the position and angle of the particle in the x direction will trace an ellipse in phase space. (All of this section applies equivalently to y and y′.) This ellipse can be described by the following equation:
γ x² + 2α x x′ + β x′² = ε
where x and x′ are the position and angle of the particle, and α, β and γ are the Courant–Snyder (Twiss) parameters, calculated from the shape of the ellipse.
The emittance ε is given by the area of this ellipse divided by π, and has units of length × angle. However, many sources will move the factor of π into the units of emittance rather than including the specific value, giving units of "length × angle × π."
This formula is the single particle emittance, which describes the area enclosed by the trajectory of a single particle in phase space. However, emittance is more useful as a description of the collective properties of the particles in a beam, rather than of a single particle. Since beam particles are not necessarily distributed uniformly in phase space, definitions of emittance for an entire beam will be based on the area of the ellipse required to enclose a specific fraction of the beam particles.
If the beam is distributed in phase space with a Gaussian distribution, the emittance of the beam may be specified in terms of the root mean square beam width and the fraction of the beam to be included in the emittance.
The equation for the emittance of a Gaussian beam is:
ε(F) = −2 ln(1 − F) · (σ²/β) · π
where σ is the root mean square width of the beam, β is the Courant–Snyder beta function, and F is the fraction of the beam to be enclosed in the ellipse, given as a number between 0 and 1. Here the factor of π is shown on the right of the equation, and would often be included in the units of emittance, rather than being multiplied into the computed value.
The value chosen for F will depend on the application and the author, and a number of different choices exist in the literature. Some common choices and their equivalent definition of emittance are:
{| class="wikitable"
|-
! Emittance ε !! Fraction enclosed F
|-
| 1 · σ²/β || 0.15
|-
| π · σ²/β || 0.39
|-
| 4π · σ²/β || 0.87
|-
| 6π · σ²/β || 0.95
|}
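A small numerical check of the table, assuming the Gaussian-beam relation reconstructed above, ε(F) = −2 ln(1 − F)·(σ²/β)·π, which rearranges to F = 1 − exp(−ε/(2π σ²/β)); this is a sketch under that assumption, not text from the article:

```python
import math

def enclosed_fraction(emittance_in_sigma2_over_beta: float) -> float:
    """Fraction of a Gaussian beam enclosed by an ellipse of the given emittance,
    with the emittance expressed in units of sigma**2/beta (pi included in the value)."""
    return 1.0 - math.exp(-emittance_in_sigma2_over_beta / (2.0 * math.pi))

for eps in (1.0, math.pi, 4.0 * math.pi, 6.0 * math.pi):
    print(f"epsilon = {eps:5.2f} sigma^2/beta  ->  F = {enclosed_fraction(eps):.2f}")
# Prints roughly 0.15, 0.39, 0.86, 0.95, matching the table above (0.87 there is a rounding difference).
```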
While the x and y axes are generally equivalent mathematically, in horizontal rings where the x coordinate represents the plane of the ring, consideration of dispersion can be added to the equation of the emittance. Because the magnetic force of a bending magnet is dependent on the energy of the particle being bent, particles of different energies will be bent along different trajectories through the magnet, even if their initial position and angle are the same. The effect of this dispersion on the beam emittance is given by:
ε = (σ² − (D σp/p0)²) / β
where D is the dispersion at location s, p0 is the ideal particle momentum, and σp is the root mean square of the momentum difference of the particles in the beam from the ideal momentum. (This definition assumes F = 0.15.)
In mountainous areas, a similar technique called sink works (from German sinkwerk) is used.
Longitudinal emittance
The geometrical definition of longitudinal emittance is more complex than that of transverse emittance. While the x and y coordinates represent deviation from a reference trajectory which remains static, the z coordinate represents deviation from a reference particle, which is itself moving with a specified energy. This deviation can be expressed in terms of distance along the reference trajectory, time of flight along the reference trajectory (how "early" or "late" the particle is compared to the reference), or phase (for a specified reference frequency).
In turn, the z′ coordinate is generally not expressed as an angle. Since z′ represents the change in z over time, it corresponds to the forward motion of the particle. This can be given in absolute terms, as a velocity, momentum, or energy, or in relative terms, as a fraction of the position, momentum, or energy of the reference particle.
However, the fundamental concept of emittance is the same—the positions of the particles in a beam are plotted along one axis of a phase space plot, the rate of change of those positions over time is plotted on the other axis, and the emittance is a measure of the area occupied on that plot.
One possible definition of longitudinal emittance is given by:
where the integral is taken along a path which tightly encloses the beam particles in phase space. Here is the reference frequency and the longitudinal coordinate is the phase of the particles relative to a reference particle. Longitudinal equations such as this one often must be solved numerically, rather than analytically.
RMS emittance
The geometric definition of emittance assumes that the distribution of particles in phase space can be reasonably well characterized by an ellipse. In addition, the definitions using the root mean square of the particle distribution assume a Gaussian particle distribution.
In cases where these assumptions do not hold, it is still possible to define a beam emittance using the moments of the distribution. Here, the RMS emittance (εrms) is defined to be
εrms = sqrt( ⟨x²⟩⟨x′²⟩ − ⟨x x′⟩² )
where ⟨x²⟩ is the variance of the particle's position, ⟨x′²⟩ is the variance of the angle a particle makes with the direction of travel in the accelerator (x′ = dx/ds, with s along the direction of travel), and ⟨x x′⟩ represents the angle–position correlation of particles in the beam. This definition is equivalent to the geometric emittance in the case of an elliptical particle distribution in phase space.
The emittance may also be expressed as the determinant of the variance–covariance matrix of the beam's phase space coordinates, from which it becomes clear that this quantity describes an effective area occupied by the beam in terms of its second-order statistics.
Depending on context, some definitions of RMS emittance will add a scaling factor to correspond to a fraction of the total distribution, to facilitate comparison with geometric emittances using the same fraction.
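A minimal sketch of the RMS-emittance definition above, evaluated directly from sampled particle coordinates; the bunch parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bunch: 1 mm RMS size, 0.5 mrad uncorrelated divergence, plus a mild x-x' correlation.
n = 100_000
x = rng.normal(0.0, 1.0e-3, n)              # position [m]
xp = 0.1 * x + rng.normal(0.0, 0.5e-3, n)   # angle [rad]

# RMS emittance: sqrt(<x^2><x'^2> - <x x'>^2), i.e. the square root of the
# determinant of the 2x2 covariance matrix of (x, x').
cov = np.cov(x, xp)
eps_rms = np.sqrt(np.linalg.det(cov))
print(f"RMS emittance ≈ {eps_rms * 1e6:.2f} mm·mrad")  # about 0.5 mm·mrad for these numbers
```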
RMS emittance in higher dimensions
It is sometimes useful to talk about phase space area for either the four dimensional transverse phase space (i.e. x, x′, y, y′) or the full six dimensional phase space of particles (i.e. x, x′, y, y′, z, z′). The RMS emittance generalizes to the full three dimensional case as shown:
ε6D,rms = sqrt( det Σ )
where Σ is the 6 × 6 covariance matrix of the coordinates (x, x′, y, y′, z, z′).
In the absence of correlations between different axes in the particle accelerator, most of these matrix elements become zero and we are left with a product of the emittances along each axis.
Normalized emittance
Although the previous definitions of emittance remain constant for linear beam transport, they do change when the particles undergo acceleration (an effect called adiabatic damping). In some applications, such as for linear accelerators, photoinjectors, and the accelerating sections of larger systems, it becomes important to compare beam quality across different energies. Normalized emittance, which is invariant under acceleration, is used for this purpose.
Normalized emittance in one dimension is given by:
εn = sqrt( ⟨x²⟩⟨(px/mc)²⟩ − ⟨x px/mc⟩² )
The angle x′ in the prior definition has been replaced with the normalized transverse momentum px/(mc) = γβx, where γ is the Lorentz factor and βx = vx/c is the normalized transverse velocity.
Normalized emittance is related to the previous definitions of emittance through γ and the normalized velocity in the direction of the beam's travel (βz):
εn = βz γ ε
The normalized emittance does not change as a function of energy and so can be used to indicate beam degradation if the particles are accelerated. For speeds close to the speed of light, where βz is close to one, the emittance is approximately inversely proportional to the energy. In this case, the physical width of the beam will vary inversely with the square root of the energy.
Higher dimensional versions of the normalized emittance can be defined in analogy to the RMS version by replacing all angles with their corresponding momenta.
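A short sketch of the relation εn = βzγε for an electron beam; the beam energy and geometric emittance are hypothetical:

```python
import math

# Convert a geometric emittance to a normalized emittance: eps_n = beta_z * gamma * eps.
kinetic_energy_mev = 100.0     # hypothetical electron kinetic energy [MeV]
electron_rest_mev = 0.511      # electron rest energy [MeV]

gamma = 1.0 + kinetic_energy_mev / electron_rest_mev
beta_z = math.sqrt(1.0 - 1.0 / gamma**2)

eps_geometric = 5.0e-9         # hypothetical geometric emittance [m·rad]
eps_normalized = beta_z * gamma * eps_geometric
print(f"gamma ≈ {gamma:.1f}, normalized emittance ≈ {eps_normalized * 1e6:.2f} mm·mrad")
```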
Measurement
Quadrupole scan technique
One of the most fundamental methods of measuring beam emittance is the quadrupole scan method. The emittance of the beam for a particular plane of interest (i.e., horizontal or vertical) can be obtained by varying the field strength of a quadrupole (or quadrupoles) upstream of a monitor (i.e., a wire or a screen).
The properties of a beam can be described by the following beam matrix:
σ = [[σ11, σ12], [σ12, σ22]] = [[⟨x²⟩, ⟨x x′⟩], [⟨x x′⟩, ⟨x′²⟩]]
where x′ is the derivative of x with respect to the longitudinal coordinate. The forces experienced by the beam as it travels down the beam line and passes through the quadrupole(s) are described using the transfer matrix R of the beam line (see transfer map), including the quadrupole(s) and other beam line components such as drifts:
R = R2 RQ R1
Here R1 is the transfer matrix between the original beam position and the quadrupole(s), RQ is the transfer matrix of the quadrupole(s), and R2 is the transfer matrix between the quadrupole(s) and the monitor screen. During the quadrupole scan process, R1 and R2 stay constant, and RQ changes with the field strength of the quadrupole(s).
The final beam, when it reaches the monitor screen at distance s from its original position, can be described by another beam matrix σ(s).
The final beam matrix can be calculated from the original beam matrix by matrix multiplication with the beam line transfer matrix R:
σ(s) = R σ Rᵀ
where Rᵀ is the transpose of R.
Now, focusing on the (1,1) element of the final beam matrix throughout the matrix multiplications, we get the equation:
σ11(s) = R11² σ11 + 2 R11 R12 σ12 + R12² σ22
where Rij are the elements of the total transfer matrix R. Here the middle term has a factor of 2 because the beam matrix is symmetric (σ12 = σ21).
Now divide both sides of the above equation by R12²; the equation becomes:
σ11(s)/R12² = σ11 (R11/R12)² + 2 σ12 (R11/R12) + σ22
which is a quadratic equation in the variable R11/R12. Since the RMS emittance εrms is defined to be
εrms = sqrt( σ11 σ22 − σ12² )
the RMS emittance of the original beam can be calculated directly from its beam matrix elements.
To obtain the emittance measurement, the following procedure is employed:
For each value (or value combination) of the quadrupole(s), the beam line transfer matrix is calculated to determine the values of R11 and R12.
The beam propagates through the varied beam line, and is observed at the monitor screen, where the beam size is measured.
Repeat steps 1 and 2 to obtain a series of values for R11/R12 and σ11(s)/R12², and fit the results with a parabola y = A x² + B x + C, where x = R11/R12 and y = σ11(s)/R12².
Equate the parabola fit parameters with the original beam matrix elements: σ11 = A, σ12 = B/2, σ22 = C.
Calculate the RMS emittance of the original beam: εrms = sqrt( A C − B²/4 ).
If the length of the quadrupole is short compared to its focal length f = 1/(kL), where k is the field strength of the quadrupole and L its length, its transfer matrix can be approximated by the thin lens approximation:
RQ ≈ [[1, 0], [−1/f, 1]]
Then the RMS emittance can be calculated by fitting a parabola to values of measured beam size versus quadrupole strength k.
By adding additional quadrupoles, this technique can be extended to a full 4-D reconstruction.
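A sketch of the thin-lens quadrupole-scan procedure described above: simulated screen beam sizes are generated for a range of quadrupole strengths, a parabola is fitted, and the emittance is recovered. The beam matrix, drift length, and scan range are hypothetical, and the σ11/σ12/σ22 and R-matrix notation follows the reconstruction above; the quadrupole strength is expressed directly as an inverse focal length:

```python
import numpy as np

# Hypothetical initial beam matrix elements at the quadrupole (thin-lens model).
sig11, sig12, sig22 = 4.0e-6, -1.0e-6, 1.0e-6   # <x^2> [m^2], <x x'> [m rad], <x'^2> [rad^2]
eps_true = np.sqrt(sig11 * sig22 - sig12**2)
sigma = np.array([[sig11, sig12], [sig12, sig22]])

L = 2.0                            # drift from quadrupole to screen [m]
ks = np.linspace(-2.0, 2.0, 21)    # scanned quadrupole strengths, 1/f [1/m]

# Transport: R = R_drift @ R_quad; the measured sigma11 at the screen is (R sigma R^T)[0, 0].
meas = []
for k in ks:
    R = np.array([[1.0, L], [0.0, 1.0]]) @ np.array([[1.0, 0.0], [-k, 1.0]])
    meas.append((R @ sigma @ R.T)[0, 0])

# sigma11(screen) / R12^2 is quadratic in R11/R12 -> fit a parabola A x^2 + B x + C.
R11 = 1.0 - L * ks
R12 = np.full_like(ks, L)
x = R11 / R12
y = np.array(meas) / R12**2
A, B, C = np.polyfit(x, y, 2)

# Recover beam matrix elements and emittance: sig11 = A, sig12 = B/2, sig22 = C.
eps_fit = np.sqrt(A * C - (B / 2.0)**2)
print(f"true emittance {eps_true:.3e}, fitted {eps_fit:.3e} m·rad")
```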
Mask-based reconstruction
Another fundamental method for measuring emittance is by using a predefined mask to imprint a pattern on the beam and sample the remaining beam at a screen downstream. Two such masks are pepper pots and TEM grids.
By using the knowledge of the spacing of the features in the mask one can extract information about the beam size at the mask plane. By measuring the spacing between the same features on the sampled beam downstream, one can extract information about the angles in the beam. The quantities of merit can be extracted as described in Marx et al.
The choice of mask is generally dependent on the charge of the beam; low-charge beams are better suited to the TEM grid mask over the pepper pot, as more of the beam is transmitted.
Emittance of electrons versus heavy particles
To understand why the RMS emittance takes on a particular value in a storage ring, one needs to distinguish between electron storage rings and storage rings with heavier particles (such as protons). In an electron storage ring, radiation is an important effect, whereas when other particles are stored, it is typically a small effect. When radiation is important, the particles undergo radiation damping (which slowly decreases emittance turn after turn) and quantum excitation causing diffusion which leads to an equilibrium emittance. When no radiation is present, the emittances remain constant (apart from impedance effects and intrabeam scattering). In this case, the emittance is determined by the initial particle distribution. In particular if one injects a "small" emittance, it remains small, whereas if one injects a "large" emittance, it remains large.
Acceptance
The acceptance, also called admittance, is the maximum emittance that a beam transport system or analyzing system is able to transmit. This is the size of the chamber transformed into phase space and does not suffer from the ambiguities of the definition of beam emittance.
Conservation of emittance
Lenses can focus a beam, reducing its size in one transverse dimension while increasing its angular spread, but cannot change the total emittance. This is a result of Liouville's theorem. Ways of reducing the beam emittance include radiation damping, stochastic cooling, and electron cooling.
Emittance and brightness
Emittance is also related to the brightness of the beam. In microscopy brightness is very often used, because it includes the current in the beam and most systems are circularly symmetric. Consider the brightness of the incident beam at the sample,
where indicates the beam current and represents the total emittance of the incident beam and the wavelength of the incident electron.
The intrinsic emittance , describing a normal distribution in the initial phase space, is diffused by the emittance introduced by aberrations . The total emittance is approximately the sum in quadrature. Under the assumption of uniform illumination of the aperture with current per unit angle , we have the following emittance-brightness relation,
See also
Accelerator physics
Etendue
Mean transverse energy
References
Accelerator physics | Beam emittance | Physics | 3,384 |
390,335 | https://en.wikipedia.org/wiki/List%20of%20computer%20term%20etymologies | This is a list of the origins of computer-related terms or terms used in the computing world (i.e., a list of computer term etymologies). It relates to both computer hardware and computer software.
Names of many computer terms, especially computer applications, often relate to the function they perform, e.g., a compiler is an application that compiles (programming language source code into the computer's machine language). However, there are other terms with less obvious origins, which are of etymological interest. This article lists such terms.
A
ABEND – originally from an IBM System/360 error message, short for "abnormal end". Jokingly reinterpreted as German Abend ("evening"), because "it is what system operators do to the machine late on Friday when they want to call it a day."
Ada – named after Ada Lovelace, who is considered by many to be the first programmer.
Apache – originally chosen out of respect for the Native American Apache tribe. It was suggested that the name was appropriate, as Apache began as a series of patches to code written for NCSA's HTTPd daemon. The result was "a patchy" server.
AWK – composed of the initials of its authors Aho, Weinberger, and Kernighan.
B
B – probably a contraction of "BCPL", reflecting Ken Thompson's efforts to implement a smaller BCPL in 8 KB of memory on a DEC PDP-7. Or, named after Bon.
biff – named after a dog known by the developers at Berkeley, who – according to the UNIX manual page – died on 15 August 1993, at the age of 15, and belonged to a certain Heidi Stettner. Some sources report that the dog would bark at the mail carrier, making it a natural choice for the name of a mail notification system. The Jargon File contradicts this description, but confirms at least that the dog existed.
bit – first used by Claude E. Shannon in his seminal 1948 paper "A Mathematical Theory of Communication". Shannon's "bit" is a portmanteau of "binary digit". He attributed its origin to John W. Tukey, who had used the word in a Bell Labs memo of 9 January 1947.
Bon – created by Ken Thompson and named either after his wife Bonnie, or else after "a religion whose rituals involve the murmuring of magic formulas" (a reference to the Tibetan native religion Bön).
booting or bootstrapping – from the phrase "to pull oneself up by one's bootstraps", originally used as a metaphor for any self-initiating or self-sustaining process. Used in computing due to the apparent paradox that a computer must run code to load anything into memory, but code cannot be run until it is loaded.
bug – often (but erroneously) credited to Grace Hopper. In 1946, she joined the Harvard Faculty at the Computation Laboratory where she traced an error in the Harvard Mark II to a moth trapped in a relay. This bug was carefully removed and taped to the log book. However, use of the word 'bug' to describe defects in mechanical systems dates back to at least the 1870s, perhaps especially in Scotland. Thomas Edison, for one, used the term in his notebooks and letters.
byte – coined by Werner Buchholz in June 1956 during the early design phase for the IBM Stretch computer.
C
C – a programming language.
Dennis Ritchie, having improved on the B language, named his creation New B. He later renamed it C. (See also D).
C++ – an object-oriented programming language, a successor to the C programming language.
C++ creator Bjarne Stroustrup named his new language "C with Classes" and then "new C". The original language began to be called "old C" which was considered insulting to the C community. At this time Rick Mascitti suggested the name C++ as a successor to C. In C the '++' operator increments the value of the variable it is appended to, thus C++ would increment the value of C.
computer – from the human computers who carried out calculations mentally and possibly with mechanical aids, now replaced by electronic programmable computers.
cookie – a packet of information that travels between a browser and the web server.
The term was coined by web browser programmer Lou Montulli after the term "magic cookies" used by Unix programmers. The term "magic cookie" in turn derives from "fortune cookie", a cookie with an embedded message.
Cursor (user interface) - Cursor is Latin for 'runner.' A cursor is the name given to the transparent slide engraved with a hairline that is used for marking a point on a slide rule. The term was then transferred to computers through analogy.
D
D – a programming language.
Designed by Walter Bright as an improved C, avoiding many of the design problems of C (e.g., extensive pointer manipulation, unenforced array boundaries, etc.).
daemon – a process in an operating system that runs in the background.
It is not an acronym for Disk And Execution Monitor: according to the original team that introduced the concept, the use of the word daemon was inspired by the Maxwell's demon of physics and thermodynamics (an imaginary agent which helped sort molecules with differing velocities and worked tirelessly in the background) The term was embraced, and possibly popularized, by the Unix operating systems which supported multiple background processes: various local (and later Internet) services were provided by daemons. This is exemplified by the BSD mascot, John Lasseter's drawing of a friendly imp.
Dashboard - Originally, the word dashboard applied to a barrier of wood or leather fixed at the front of a horse-drawn carriage or sleigh to protect the driver from mud or other debris "dashed up" (thrown up) by the horses' hooves.[1] The first known use of the term (hyphenated as dash-board, and applied to sleighs) dates from 1847.[2] Commonly these boards did not perform any additional function other than providing a convenient handhold for ascending into the driver's seat, or a small clip with which to secure the reins when not in use.
Debian – a Linux distribution.
A portmanteau of the names Ian Murdock, the Debian Project creator, and Debra Lynn, Ian's then girlfriend and future wife.
default – an initial value for a variable or user setting.
The original meaning of the word 'default' is 'failure to fulfill an obligation'. The obligation here is to provide an input that is required by a program. In the early days of programming, if an input value was missing, or 'null', the program would almost certainly crash. This is often to do with variable 'typing' – for example, a simple calculation program would expect a number as an input: any other type of input such as a text string or even a null (no value), would make any mathematical operation such as multiplication impossible. In order to guard against this possibility, programmers defined initial values that would be used if the user *defaulted* or failed to fulfill the obligation of providing the correct input value. Over time, the term 'default' has come to refer to the initial value itself.
E
Ethernet – a computer networking technology.
According to Robert Metcalfe (one of its initial developers), he devised the name in an early company memo as an endocentric compound of "luminiferous ether"—the "substance" that was widely believed to be the medium through which electromagnetic radiation propagated in the late 19th century—and "net", short for "network". When the networking team would describe data flowing into the network infrastructure, they would routinely describe it as data packets going "up into the ether".
F
finger – Unix command that provides information about users logged into a system.
Les Earnest wrote the finger program in 1971 to provide for users who wanted information about other users on a network or system. According to Earnest, it was named after the act of pointing, because it "bypassed the need to point to a user ID and ask, 'Who is that?'"
foobar – from the U.S. Army slang acronym, FUBAR. Both foo and bar are commonly used as metasyntactic variables.
G
Gentoo – a Linux distribution.
Named after a variety of penguin, the universal Linux mascot.
Git – a distributed version control system.
In the project's initial README file, Linus Torvalds wrote that "'git' can mean anything, depending on your mood", and offers several definitions:
A random three-letter combination which is pronounceable and not a preexisting Unix command
British English slang, meaning a stupid or contemptible person
An acronym for "global information tracker" (when it works)
An acronym for "goddamn idiotic truckload of sh*t" (when it breaks)
When asked about the origin of the name, Torvalds jokingly stated, "I'm an egotistical bastard, and I name all my projects after myself."
GNU – a project with an original goal of creating a free operating system.
Gnu (also called wildebeest) are a genus of African antelopes resembling cattle. The founder of the GNU project Richard Stallman liked the name because of the humour associated with its pronunciation (officially, ), and was also influenced by The Gnu Song, by Flanders and Swann, which is sung by a gnu. It is also an early example of a recursive acronym: "GNU's Not Unix".
Google – a search engine.
The name started as an exaggerated boast about the amount of information the search engine would be able to search. It was originally named 'Googol', a word for the number represented by 1 followed by 100 zeros. The word was originally invented by Milton Sirotta, nephew of mathematician Edward Kasner, in 1938 during a discussion of large numbers and exponential notation.
Gopher – an early protocol for distributing documents over a network. Declined in favor of the World Wide Web.
The name was coined by developer Farhad Anklesaria, as a play on "gofer", an assistant who fetches things, and a gopher, which digs, as if through nested hierarchies. The name was also inspired by Goldy Gopher, the mascot for the University of Minnesota, where the protocol was developed.
grep – a Unix command line utility
The name comes from a command in the Unix text editor ed that takes the form g/re/p meaning search globally for a regular expression and print lines where instances are found. "Grep" like "Google" is often used as a verb, meaning "to search".
H
Hotmail – free email service, now named Outlook.com.
Founder Jack Smith got the idea of accessing e-mail via the web from a computer anywhere in the world. When Sabeer Bhatia came up with the business plan for the mail service, he tried all kinds of names ending in 'mail' and finally settled for Hotmail as it included the letters "HTML" – the markup language used to write web pages. It was initially referred to as HoTMaiL with selective upper casing.
I
i18n – short for "internationalization".
"18" is for the number of letters between the i and the n. Related, less common terms include l10n (for localization), g11n (for globalization) and a11y (for accessibility).
ICQ – an instant messaging service.
ICQ is not an initialism. It is a play on the phrase "I seek you" or "Internet seek you" (similar to CQ in ham radio usage).
ID10T – pronounced "ID ten T" – is a code frequently used by a customer service representative (CSR) to annotate their notes and identify the source of a problem as the person who is reporting the problem rather than the system being blamed. This is a thinly veiled reference to the CSR's opinion that the person reporting the problem is an IDIOT. Example: Problem reported caused by ID10T, no resolution possible. See also PEBKAC.
J
Jakarta Project – a project constituted by Sun and Apache to create a web server for Java servlets and JSPs.
Jakarta was the name of the conference room at Sun where most of the meetings between Sun and Apache took place. The conference room was most likely named after Jakarta, the capital city of Indonesia, which is located on the northwest coast of the island of Java.
Java – a programming language by Sun Microsystems, later acquired by Oracle.
Named after Java coffee, a blend of coffee from the island of Java, and also used as slang for coffee in general. The language was initially called "Greentalk" and later "Oak", but this was already trademarked by Oak Technologies, so the developers had to choose another name shortly before release. Other suggested names were "WebRunner", "DNA", and "Silk".
JavaScript – a programming language.
It was originally developed by Brendan Eich of Netscape under the name "Mocha", which was later renamed to "LiveScript", and finally to "JavaScript". The change of name from LiveScript to JavaScript roughly coincided with Netscape adding support for Java technology in its Netscape Navigator web browser. JavaScript was first introduced and deployed in the Netscape browser version 2.0B3 in December 1995. The naming has caused confusion, giving the impression that the language is a spin-off of Java, and it has been characterized by many as a marketing ploy by Netscape to give JavaScript the cachet of what was then the hot new web-programming language.
K
Kerberos – a computer network authentication protocol that is used by both Windows 2000 and Windows XP as their default authentication method.
When programmers at MIT created it in the 1980s, they wanted a name that suggested high security for the project, so they named it after Kerberos, the three-headed dog of Greek mythology that guards the gates of Hades. The reference to Greek mythology is most likely because Kerberos was developed as part of Project Athena.
L
Linux – an operating system kernel, and the common name for many of the operating systems which use it.
Linux creator Linus Torvalds originally used the MINIX operating system on his computer, didn't like it, liked DOS less, and started a project to develop an operating system that would address the problems of MINIX. Hence the working name was Linux (Linus' Minix). Originally, however, Linus had planned to have it named Freax (free + freak + x). His friend Ari Lemmke encouraged Linus to upload it to a network so it could be easily downloaded. Ari gave Linus a directory named linux on his FTP server, as he did not like the name Freax.
Lisa – A personal computer designed at Apple Computer during the early 1980s.
Apple stated that Lisa was an acronym for Local Integrated Software Architecture; however, it is often inferred that the machine was originally named after the daughter of Apple co-founder Steve Jobs, and that this acronym was invented later to fit the name. Accordingly, two humorous suggestions for expanding the acronym included Let's Invent Some Acronyms, and Let's Invent Silly Acronyms.
liveware – computer personnel.
A play on the terms "software" and "hardware". Coined in 1966, the word indicates that sometimes the computer problem is not with the computer itself, but with the user.
Lotus Software – Lotus founder Mitch Kapor got the name for his company from 'The Lotus Position' ('Padmasana' in Sanskrit). Kapor used to be a teacher of Transcendental Meditation technique as taught by Maharishi Mahesh Yogi.
M
Macintosh, Mac – a personal computer from Apple Computer.
From McIntosh, a popular type of apple.
N
Nerd – A colloquial term for a computer person, especially an obsessive, singularly focused one. The word was originally coined by Dr. Seuss in his book If I Ran the Zoo.
O
Oracle – a relational database management system (RDBMS).
Larry Ellison, Ed Oates and Bob Miner were working on a consulting project for the CIA (Central Intelligence Agency). The code name for the project was Oracle (the CIA evidently saw this as a system that would give answers to all questions). The project was designed to use the newly written SQL database language from IBM. The project eventually was terminated but they decided to finish what they started and bring it to the world. They kept the name Oracle and created the RDBMS engine.
P
Pac-Man – a video arcade game.
The term comes from paku paku, a Japanese onomatopoeia for noisy eating, similar to "chomp chomp". The game was released in Japan under the name Puck-Man and released in the US as Pac-Man, out of concern that kids might deface a Puck-Man cabinet by changing the P to an F.
Patch – A set of changes to a computer program or its supporting data designed to update, fix, or improve it.
Historically, software suppliers distributed patches on paper tape or on punched cards, expecting the recipient to cut out the indicated part of the original tape (or deck) and patch in (hence the name) the replacement segment.
PCMCIA – the standards body for PC card and ExpressCard, expansion card form factors.
The Personal Computer Memory Card International Association is an international standards body that defines and promotes standards for expansion devices such as modems and external hard disk drives to be connected to notebook computers. Over time, the acronym PCMCIA has been used to refer to the PC card form factor used on notebook computers. A twist on the acronym is People Can't Memorize Computer Industry Acronyms.
PEBKAC – an acronym for "Problem Exists Between Keyboard And Chair", which is a code frequently used by a customer service representative (CSR) to annotate their notes and identify the source of a problem as the person who is reporting the problem rather than the system being blamed. This is a thinly veiled reference to the CSR's opinion that the person reporting the problem is the problem. Example: PEBKAC, no resolution possible. See also ID10T.
Pentium – a series of microprocessors from Intel.
The fifth microprocessor in the 80x86 series. It would have been named i586 or 80586, but Intel decided to name it Pentium (penta = five) after it lost a trademark infringement lawsuit against AMD due to a judgment that numbers like "286", "386", and "486" cannot be trademarked. According to Intel, Pentium conveys a meaning of strength, like titanium.
Since some early Pentium chips contained a mathematical precision error, it has been jokingly suggested that the reason for the chip being named Pentium rather than 586 was that Intel chips would calculate 486 + 100 = 585.99999948.
Perl – an interpreted scripting language.
Perl was originally named Pearl, after the "pearl of great price" of Matthew 13:46. Larry Wall, the creator of Perl, wanted to give the language a short name with positive connotations and claims to have looked at (and rejected) every three- and four-letter word in the dictionary. He even thought of naming it after his wife Gloria. Before the language's official release Wall discovered that there was already a programming language named Pearl, and changed the spelling of the name. Although the original manuals suggested the backronyms "Practical Extraction and Report Language" and "Pathologically Eclectic Rubbish Lister", these were intended humorously.
PHP – a server-side scripting language
Originally named "Personal Home Page Tools" by creator Rasmus Lerdorf, it was rewritten by developers Zeev Suraski and Andi Gutmans, who gave it the recursive name "PHP Hypertext Preprocessor". Lerdorf now insists the name should not be thought of as standing for anything: he chose "Personal Home Page" at a time when he did not foresee PHP evolving into a general-purpose programming language.
Pine – e-mail client.
Many people believe that Pine stands for "Pine Is Not Elm". However, one of its original authors, Laurence Lundblade, insists this was never the case and that it started off simply as a word and not an acronym; his first choice of a backronym for pine would be "Pine Is Nearly Elm". Over time it was changed to mean Program for Internet News and E-mail.
ping – a computer network tool used to detect hosts.
The author of ping, Mike Muuss, named it after the pulse of sound emitted by sonar, known as a "ping". Later, Dave Mills provided the backronym "Packet Internet Groper".
Python – an interpreted scripting programming language.
Named after the television series Monty Python's Flying Circus.
R
Radio button – a GUI widget used for making selections.
Radio buttons got their name from the preset buttons in radio receivers. When one used to select preset stations on a radio receiver physically instead of electronically, depressing one preset button would pop out whichever other button happened to be pushed in.
Red Hat Linux – a Linux distribution from Red Hat.
Company founder Marc Ewing was given the Cornell lacrosse team cap (with red and white stripes) by his grandfather while at college. People would turn to him to solve their problems, and he was referred to as "that guy in the red hat". He lost the cap and had to search for it desperately. The manual of the beta version of Red Hat Linux had an appeal to readers to return the hat if found by anyone.
RSA – an asymmetric algorithm for public key cryptography.
Based on the surnames of the authors of this algorithm – Ron Rivest, Adi Shamir and Len Adleman.
S
Samba – a free implementation of Microsoft's networking protocol.
The name samba comes from inserting two vowels into the name of the standard protocol used by the Microsoft Windows network file system, Server Message Block (SMB). The author searched a dictionary using grep for words containing the letters S, M and B in that order; the only matches were "samba" and "salmonberry".
shareware – coined by Bob Wallace to describe his word processor PC-Write in early 1983. Before this Jim Knopf (also known as Jim Button) and Andrew Fluegelman called their distributed software "user supported software" and "freeware" respectively, but it was Wallace's terminology that prevailed.
spam – unwanted repetitious messages, such as unsolicited bulk e-mail.
The term spam is derived from the Monty Python SPAM sketch, set in a cafe where everything on the menu includes SPAM luncheon meat. While a customer plaintively asks for some kind of food without SPAM in it, the server reiterates the SPAM-filled menu. Soon, a chorus of Vikings join in with a song: "SPAM, SPAM, SPAM, SPAM, SPAM, lovely SPAM, wonderful SPAM", over and over again, drowning out all conversation.
SPIM – a simulator for a virtual machine closely resembling the instruction set of MIPS processors, is simply MIPS spelled backwards. More recently, spim has also come to mean spam sent over instant messaging.
Swing – a graphics library for Java.
Swing was the code-name of the project that developed the new graphic components (the successor of AWT). It was named after swing, a style of dance band jazz that was popularized in the 1930s and unexpectedly revived in the 1990s. Although an unofficial name for the components, it gained popular acceptance with the use of the word in the package names for the Swing API, which begin with javax.swing.
T
Tomcat – a web server from the Jakarta Project.
Tomcat was the code-name for the JSDK 2.1 project inside Sun. Tomcat started off as a servlet specification implementation by James Duncan Davidson, a software architect at Sun. Davidson had initially hoped that the project would be made open source, and since most open-source projects had O'Reilly books with an animal on the cover, he wanted to name the project after an animal. He came up with Tomcat, reasoning that the animal represented something that could take care of and fend for itself.
troff – a document processing system for Unix.
Troff stands for "typesetter roff", although many people have speculated that it actually means "Times roff" because of the use of the Times font family in troff by default. Troff has its origins from roff, an earlier formatting program, whose name is a contraction of "run off".
Trojan horse – a malicious program that is disguised as legitimate software.
The term is derived from the classical myth of the Trojan Horse. Analogously, a Trojan horse appears innocuous (or even to be a gift), but in fact is a vehicle for bypassing security.
Tux – The penguin mascot used as the primary logo for the Linux kernel, and Linux-based operating systems.
Linus Torvalds, the creator of Linux, suggested a penguin mascot because he "likes penguins a lot", and wanted Linux to be associated with something "kind of goofy and fun". The logo was originally created by Larry Ewing in 1996 as an entry in a Linux logo competition. The name Tux was contributed by James Hughes, who suggested "(T)orvalds (U)ni(X) — TUX!"
U
Ubuntu Linux – a Debian-based Linux distribution sponsored by Canonical Ltd.
Derived from ubuntu, a Southern African philosophy often translated as "humanity towards others".
Unix – an operating system.
When Bell Labs pulled out of the MULTiplexed Information and Computing Service (MULTICS) project, which was originally a joint Bell Labs/GE/MIT project, Ken Thompson of Bell Labs, soon joined by Dennis Ritchie, wrote a simpler version of the operating system for a spare DEC minicomputer, allegedly found in a corridor. They needed an OS to run the game Space Travel, which had been compiled under MULTICS. The new OS was named UNICS – UNiplexed Information and Computing Service – by Brian Kernighan.
V
vi – a text editor.
Short for visual, a command in the ex editor that switched users from ex mode to visual mode. The first version was written by Bill Joy at UC Berkeley.
Vim – a text editor.
Originally an acronym for "Vi IMitation", since Vim started out as an imitation of the vi editor; after Vim added several features beyond vi, the name was reinterpreted as "Vi IMproved".
Virus – a piece of program code that spreads by making copies of itself.
The term virus was first used as a technical computer science term by Fred Cohen in his 1984 paper "Computer Viruses Theory and Experiments", where he credits Len Adleman with coining it. Although Cohen's use of virus may have been the first academic use, it had been in the common parlance long before that. A mid-1970s science fiction novel by David Gerrold, When H.A.R.L.I.E. was One, includes a description of a fictional computer program named VIRUS that worked just like a virus (and was countered by a program named ANTIBODY). The term "computer virus" also appears in the comic book "Uncanny X-Men" No. 158, published in 1982. A computer virus's basic function is to insert its own executable code into that of other existing executable files, literally making it the electronic equivalent to the biological virus, the basic function of which is to insert its genetic information into that of the invaded cell, forcing the cell to reproduce the virus.
W
Wiki or WikiWiki – a hypertext document collection or the collaborative software used to create it.
Coined by Ward Cunningham, the creator of the wiki concept, who named them for the "wiki wiki" or "quick" shuttle buses at Honolulu Airport. Wiki wiki was the first Hawaiian term he learned on his first visit to the islands. The airport counter agent directed him to take the wiki wiki bus between terminals.
Worm – a self-replicating program, similar to a virus.
The name 'worm' was taken from a 1970s science fiction novel by John Brunner entitled The Shockwave Rider. The book describes programs known as "tapeworms" which spread through a network for the purpose of deleting data. Researchers writing an early paper on experiments in distributed computing noted the similarities between their software and the program described by Brunner, and adopted that name.
WYSIWYG – describes a system in which content during editing appears very similar to the final product.
Acronym for What You See Is What You Get, the phrase was originated by a newsletter published by Arlene and Jose Ramos, named WYSIWYG. It was created for the emerging Pre-Press industry going electronic in the late 1970s.
X
X Window System – a windowing system for computers with bitmap displays.
X derives its name as a successor to a pre-1983 window system named the W Window System.
Y
Yahoo! – internet portal and web directory.
Yahoo!'s history site says the name is an acronym for "Yet Another Hierarchical Officious Oracle", but some remember that in its early days (mid-1990s), when Yahoo! lived on a server named akebono.stanford.edu, it was glossed as "Yet Another Hierarchical Object Organizer." The word "Yahoo!" was originally invented by Jonathan Swift and used in his book Gulliver's Travels. It represents a person who is repulsive in appearance and action and is barely human. Yahoo! founders Jerry Yang and David Filo selected the name because they considered themselves yahoos.
Z
zip – a file format, also used as a verb to mean compress.
The file format was created by Phil Katz, and given the name by his friend Robert Mahoney. The compression tool Phil Katz created was named PKZIP. Zip means "speed", and they wanted to imply their product would be faster than ARC and other compression formats of the time.
See also
Glossary of computer terms
List of company name etymologies
Lists of etymologies
References
Etymologies
Computer terms | List of computer term etymologies | Technology | 6,353 |
49,185,980 | https://en.wikipedia.org/wiki/Emission%20channeling | Emission channeling is an experimental technique for identifying the position of short-lived radioactive atoms in the lattice of a single crystal.
When the radioactive atoms decay, they emit fast charged particles (e.g., α-particles and β-particles). Because of their charge, the emitted particles interact in characteristic ways with the electrons and nuclei of the crystal atoms, giving rise to channeling and blocking directions for the particle escaping the crystal. The intensity (or yield) of the emitted particles is therefore dependent on the position of the detector relative to crystal planes and axes. This fact is used to infer the location of the radioactive species in the lattice by varying the emission angles and subsequent comparison to simulation results. For the simulations, the manybeam formalism can be employed, and resolutions below 1 Å are achievable.
Among others, the technique has been used to determine the sites of manganese impurities implanted in semiconducting gallium arsenide: 70% occupy substitutional gallium sites and 28% are located at tetrahedral interstitial sites with arsenic as nearest neighbors.
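A sketch of the comparison step described above, assuming the common approach of fitting the measured angular yield as a linear combination of simulated patterns for candidate lattice sites; all arrays below are invented placeholders, not data from the manganese study.
```python
import numpy as np

# Placeholder measured emission yield over a set of emission angles (flattened),
# and placeholder simulated patterns for two candidate lattice sites.
measured       = np.array([1.8, 1.2, 0.9, 1.1, 1.6])
substitutional = np.array([2.0, 1.3, 0.8, 1.0, 1.7])
interstitial   = np.array([1.1, 1.0, 1.0, 1.2, 1.3])

# Least-squares fit: measured ~ f_sub * substitutional + f_int * interstitial.
A = np.column_stack([substitutional, interstitial])
fractions, *_ = np.linalg.lstsq(A, measured, rcond=None)
print("fitted site fractions:", fractions)
```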
See also
Channelling (physics)
References
External links
Radioactivity
Experimental physics | Emission channeling | Physics,Chemistry | 241 |
43,309,086 | https://en.wikipedia.org/wiki/MeMZ-965 | The MeMZ-965 was a Soviet automobile engine, built by the Melitopolski Motor Plant (MeMZ).
Originally known as the NAMI-G (for the Soviet National Automotive Institute), the MeMZ-965 was designed for use in the LuAZ-967. It was an air-cooled 90° V4, producing . It had characteristics uncommon for automobile engines, including a magnesium alloy engine block, accessories mounted high (to assist in crossing rivers), and a rear-mounted oil cooler.
When the initial MD-65 engine proposed for the ZAZ-965 proved inadequate, the MeMZ engine was selected, thanks in part to it being air-cooled, like the successful VW Type 1's boxer engine. It would be developed into the MeMZ-966 and the MeMZ-968.
The MeMZ-968 was offered in the ZAZ-968M in three performance levels:
MeMZ-968E (, carbureted, low-compression for 76-octane fuel);
MeMZ-968GE (, dual carburettor); or
MeMZ-968BE (, 8.4:1 compression, for 93-octane).
In addition, the MeMZ-965 would serve as a prototype vee-twin (half an MeMZ-965), for the prototype NAMI 086.
Applications:
LuAZ-967
ZAZ-969
ZAZ-965
ZAZ-965A
ZAZ-966
ZAZ-968
ZAZ-968M
Notes
Sources
Thompson, Andy. Cars of the Soviet Union. Somerset, UK: Haynes Publishing, 2008.
ZAZ-965/965A, Avtolegendy SSSR Nr.17, DeAgostini 2009. ISSN 2071-095X.
Science and technology in the Soviet Union
Automobile engines
V4 engines | MeMZ-965 | Technology | 418 |
29,580,777 | https://en.wikipedia.org/wiki/Luxol%20fast%20blue%20stain | Luxol fast blue stain, abbreviated LFB stain or simply LFB, is a commonly used stain to observe myelin under light microscopy, created by Heinrich Klüver and Elizabeth Barrera in 1953. LFB is commonly used to detect demyelination in the central nervous system (CNS), but cannot discern myelination in the peripheral nervous system.
Procedure
Luxol fast blue is a copper phthalocyanine dye that is soluble in alcohol and is attracted to bases found in the lipoproteins of the myelin sheath.
Under the stain, myelin fibers appear blue, neuropil appears pink, and nerve cells appear purple. Tissue sections are treated over an extended period of time (usually overnight) and then differentiated with a lithium carbonate solution.
Combination methods
The combination of LFB with a variety of common staining methods provides the most useful and reliable method for the demonstration of pathological processes in the CNS. It is often combined with H&E stain (hematoxylin and eosin), which is abbreviated H-E-LFB, H&E-LFB. Other common staining methods include the periodic acid-Schiff, Oil Red O, phosphotungstic acid, and Holmes silver nitrate method.
See also
Bielschowsky stain
References
Staining
Histology
Histochemistry | Luxol fast blue stain | Chemistry,Biology | 280 |
45,449,726 | https://en.wikipedia.org/wiki/Fungistatics | Fungistatics are anti-fungal agents that inhibit the growth of fungus (without killing the fungus). The term fungistatic may be used as both a noun and an adjective. Fungistatics have applications in agriculture, the food industry, the paint industry, and medicine.
Anti-fungal medicines
Fluconazole is a fungistatic antifungal medication that is administered orally or intravenously. It is used to treat a variety of fungal infections, especially Candida infections of the vagina ("yeast infections"), mouth, throat, and bloodstream. It is also used to prevent infections in people with weak immune systems, including those with neutropenia due to cancer chemotherapy, transplant patients, and premature babies. Its mechanism of action involves interfering with synthesis of the fungal cell membrane.
Itraconazole (R51211), invented in 1984, is a triazole fungistatic antifungal agent prescribed to patients with fungal infections. The drug may be given orally or intravenously. Itraconazole has a broader spectrum of activity than fluconazole (but not as broad as voriconazole or posaconazole). In particular, it is active against Aspergillus, which fluconazole is not. The mechanism of action of itraconazole is the same as the other azole antifungals: it inhibits the fungal-mediated synthesis of ergosterol.
Anti-fungal food preservatives
Sodium benzoate and potassium sorbate are both examples of fungistatic substances that are widely used in the preservation of food and beverages.
See also
Fungicide – the other type of anti-fungal agents are fungicidal agents (fungicides)
References
Pharmaceutical sciences
Food chemistry
Biochemistry | Fungistatics | Chemistry,Biology | 368 |
28,343,292 | https://en.wikipedia.org/wiki/Lentinus%20brumalis | Lentinus brumalis is an inedible species of fungus in the family Polyporaceae. Its common name is the winter polypore. The epithet brumalis means "occurring in the winter", describing how this species tends to fruit during winter. It causes white rot on dead hardwood, and is distributed throughout the Northern Hemisphere in temperate and boreal zones.
Taxonomy
Lentinus brumalis was first described as Boletus brumalis in 1794 by Christiaan Hendrik Persoon in his work "Neuer Versuch einer systematischen Eintheilung der Schwämme" (New attempt at a systematic classification of fungi). It was transferred to the current genus, Lentinus in 2010 by Ivan V. Zmitrovich.
Description
Macroscopic characteristics
Lentinus brumalis has a round, broadly convex cap that has a diameter of and is thick. It is depressed in the middle and somewhat zoned. The surface of the cap is dry, though rarely hairy. It ranges from yellow-brown to dark brown in colour. The margin of the cap is often inrolled, particularly in young specimens.
The underside of the cap is white to cream, with pores 3 mm deep, spaced at 2–4 pores per mm². The pores are moderately wide, (0.5–)1–1.5 mm across, and roundish to almost diamond-shaped; they run a little way down the stem (decurrent) and are therefore slightly elongated there. They change in appearance from dull to lustrous when their orientation to the light is changed. The spore print is white.
The stalk is long and 2–5 mm thick. It is gray to brown, occasionally with red tints and is generally lighter than the cap. Its dry surface is either smooth or finely felted to slightly scaly. The flesh is white and its consistency is tender to elastic. It does not have a particular taste or odour.
Microscopic characteristics
The spores are elliptic to cylindrical and measure 5–7 × 1.5–2.5 μm. They are smooth and inamyloid, not changing colour when mounted in iodine. The basidia, the club-shaped structures that bear the spores, carry 4 spores each and measure 16–22 × 5–6.5 μm. Cystidia (large cells found on the fruiting bodies of some fungi) are absent.
Clamp connections are found throughout all tissue. The hyphal system is dimitic, made up of two types of hyphae. The generative hyphae of the flesh are 4–10 μm wide, colourless, thin-walled and occasionally branched. The binding hyphae of the flesh are similar in colour and width, though they can sometimes swell to 13 μm wide; they are thick-walled, nonseptate and frequently branched, with the branches tapering to 1–2 μm wide.
KOH does not affect the colour of any part of this fungus (a negative reaction). When stained with guaiac gum, the flesh turns blue over a period of 6–12 hours.
Mycochemistry
L. brumalis produces the black pigment melanin, especially under high levels of moisture content (35%-55%) in the wood substrate. Lentinus brumalis degrades lignin in wood by producing enzymes, primarily lignin peroxidase and laccase.
Growth
The stipe of Lentinus brumalis is strongly phototropic (it grows towards light) before its cap forms. For example, a 12–300 second exposure to 1500 foot-candles of light can cause the stipe to curve 5–80° within 24 hours. After the cap has formed and reached a diameter of 9 mm, the stipe stops growing towards the light and instead becomes strongly negatively geotropic (growing away from the pull of gravity).
Ecology and distribution
It is saprotrophic on dead hardwoods, in particular birch, beech and mountain ash, though in rare cases it grows on conifers such as hemlock and fir. In Uzbekistan, it grows on European nettle, willow and poplar trees as well. It grows solitary or in small groups. In North America, Lentinus brumalis is more common in the east, where it grows from June through October. In Northern Europe, however, it fruits between late October and March.
Similar species
A potential look-alike, Lentinus strictipes, can be distinguished from L. brumalis because it does not fruit until April and possesses smaller, finer pores that are rarely larger than 0.5 mm. A closer look-alike, L. arcularius (the spring polypore), differs from Lentinus brumalis in its larger pores, which are up to 2.5 mm wide and easily recognizable even on young fruiting bodies. Neofavolus alveolaris has a paler cap, larger pores and spores, and a more lateral stipe. L. longiporus has significantly longer pores and grows under willows and poplars in April and May. Cerioporus leptocephalus, Cerioporus varius and Picipes melanopus all have a dark black stipe that is not found on L. brumalis.
Research
Cultures of L. brumalis have been flown on three different spacecraft (the Salyut 5 and Salyut 6 orbital stations and the Cosmos 690 satellite) to research the effects of weightlessness, spatial orientation and light on the geotropism and formation of its fruiting bodies. In the absence of gravity and light, the stipe grew strongly twisted into a spiral or ball and caps did not form, though in the presence of light there was little anatomical difference from control samples. On Salyut 6, however, samples kept in the dark formed no fruiting bodies.
L. brumalis has been studied for its potential ability to degrade dibutyl phthalate. A study in 2007 reported that dibutyl phthalate was nearly eliminated from a culture medium of L. brumalis within 12 days, potentially through transesterification and de-esterification.
Uses
The fruiting body of L. brumalis is inedible, and it has no use as a dyestuff as it yields little to no colour.
References
External links
Polyporaceae
Fungi described in 1794
Inedible fungi
Fungus species
Taxa named by Christiaan Hendrik Persoon | Lentinus brumalis | Biology | 1,345 |
32,848,672 | https://en.wikipedia.org/wiki/Dual%20q-Krawtchouk%20polynomials | In mathematics, the dual q-Krawtchouk polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010) give a detailed list of their properties.
Definition
The polynomials are given in terms of basic hypergeometric functions by
K_n(\lambda(x); c, N \mid q) = {}_3\phi_2\!\left(q^{-n}, q^{-x}, c q^{x-N}; q^{-N}, 0; q, q\right), \qquad n = 0, 1, \ldots, N,
where
\lambda(x) = q^{-x} + c q^{x-N}, \qquad c < 0.
References
Orthogonal polynomials
Q-analogs
Special hypergeometric functions | Dual q-Krawtchouk polynomials | Mathematics | 67 |
47,735,568 | https://en.wikipedia.org/wiki/92P/Sanguin | 92P/Sanguin, also called Sanguin's Comet or Comet Sanguin, is a Jupiter-family comet discovered on October 15, 1977, by Juan G. Sanguin at Leoncito Astronomical Complex. It completes a single rotation approximately every 6 days.
The nucleus of the comet has a radius of about 1.2 kilometers based on observations by Keck, assuming a geometric albedo of 0.04.
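For background only (not a description of the Keck analysis itself), the reason an albedo must be assumed is that size is usually inferred from brightness; for an absolute magnitude H and geometric albedo p, the standard conversion to diameter D is approximately:
```latex
D\,[\mathrm{km}] \approx \frac{1329}{\sqrt{p}}\,10^{-H/5}
```
so halving the assumed albedo increases the inferred diameter by a factor of about 1.4.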
References
External links
Periodic comets
092P
0092
092P
19771015 | 92P/Sanguin | Astronomy | 104 |
7,633,216 | https://en.wikipedia.org/wiki/Long%20terminal%20repeat | A long terminal repeat (LTR) is a pair of identical sequences of DNA, several hundred base pairs long, which occur in eukaryotic genomes on either end of a series of genes or pseudogenes that form a retrotransposon or an endogenous retrovirus or a retroviral provirus. All retroviral genomes are flanked by LTRs, while there are some retrotransposons without LTRs. Typically, an element flanked by a pair of LTRs will encode a reverse transcriptase and an integrase, allowing the element to be copied and inserted at a different location of the genome. Copies of such an LTR-flanked element can often be found hundreds or thousands of times in a genome. LTR retrotransposons comprise about 8% of the human genome.
The first LTR sequences were found by A.P. Czernilofsky and J. Shine in 1977 and 1980.
Transcription
The LTR-flanked sequences are partially transcribed into an RNA intermediate, followed by reverse transcription into complementary DNA (cDNA) and ultimately dsDNA (double-stranded DNA) with full LTRs. The LTRs then mediate integration of the DNA via an LTR specific integrase into another region of the host chromosome.
Retroviruses such as human immunodeficiency virus (HIV) use this basic mechanism.
Dating retroviral insertions
As 5' and 3' LTRs are identical upon insertion, the difference between paired LTRs can be used to estimate the age of ancient retroviral insertions. This method of dating is used by paleovirologists, though it fails to take into account confounding factors such as gene conversion and homologous recombination.
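The arithmetic behind this dating method is straightforward: the two LTRs are identical at insertion, so the observed divergence between them, divided by twice the host's neutral substitution rate, gives an age estimate. A minimal sketch, with placeholder values rather than measured ones:
```python
def ltr_insertion_age(divergence, substitution_rate):
    """Estimate insertion age in years.

    divergence:        fraction of sites differing between the 5' and 3' LTRs
    substitution_rate: substitutions per site per year in the host lineage
    The factor of two reflects that both LTRs accumulate changes independently.
    """
    return divergence / (2.0 * substitution_rate)

# Example: 1% divergence at an assumed rate of 2.5e-9 substitutions/site/year.
print(ltr_insertion_age(0.01, 2.5e-9))  # about 2 million years
```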
HIV-1
The HIV-1 LTR is 634 bp in length and, like other retroviral LTRs, is segmented into the U3, R, and U5 regions. U3 and U5 have been further subdivided according to transcription factor sites and their impact on LTR activity and viral gene expression. The multi-step process of reverse transcription results in the placement of two identical LTRs, each consisting of a U3, R, and U5 region, at either end of the proviral DNA. The ends of the LTRs subsequently participate in integration of the provirus into the host genome. Once the provirus has been integrated, the LTR on the 5′ end serves as the promoter for the entire retroviral genome, while the LTR at the 3′ end provides for nascent viral RNA polyadenylation and, in HIV-1, HIV-2, and SIV, encodes the accessory protein Nef.
All of the required signals for gene expression are found in the LTRs: Enhancer, promoter (can have both transcriptional enhancers or regulatory elements), transcription initiation (such as capping), transcription terminator and polyadenylation signal.
In HIV-1, the 5' UTR region has been divided into several sub-regions on the basis of functional and structural differences:
TAR, or trans-activation response element, plays a critical role in transcriptional activation via its interaction with viral proteins. It forms a highly stable stem–loop structure consisting of 26 base pairs with a bulge in its secondary structure that interfaces with the viral transcription activator protein Tat.
Poly A plays roles both in dimerization and genome packaging since it is necessary for cleavage and polyadenylation. It has been reported that sequences upstream (U3 region) and downstream (U5 region) are needed in order to make the cleavage process efficient.
PBS, or primer binding site, is 18 nucleotides long and has a specific sequence that binds to the tRNALys primer required for initiation of reverse transcription.
Psi (Ψ), or the Psi packaging element, is a unique motif involved in regulating the packaging of the viral genome into the capsid. It is composed of four stem-loop (SL) structures with a major splicing donor site embedded in the second SL.
DIS, or dimer initiation site, is a highly conserved RNA–RNA interacting sequence constituting the SL1 stem–loop in the Psi packaging element of many retroviruses. DIS is characterized by a conserved stem and palindromic loop that forms a kissing-loop complex between HIV-1 RNA genomes to dimerize them for encapsidation.
The transcript begins at the start of R, is capped, and proceeds through U5 and the rest of the provirus, usually terminating with the addition of a poly-A tract just after the R sequence in the 3' LTR.
The finding that both HIV LTRs can function as transcriptional promoters is not surprising, since the two elements are apparently identical in nucleotide sequence. In the integrated provirus, however, the 3' LTR instead acts in transcription termination and polyadenylation. It has also been suggested that the transcriptional activity of the 5' LTR is far greater than that of the 3' LTR, a situation very similar to that of other retroviruses.
During transcription of the human immunodeficiency virus type 1 provirus, polyadenylation signals present in the 5' long terminal repeat (LTR) are disregarded while the identical polyadenylation signals present in the 3'LTR are utilized efficiently. It has been suggested that transcribed sequences present within the HIV-1 LTR U3 region act in cis to enhance polyadenylation within the 3' LTR.
See also
Direct repeat
RetrOryza
LTR retrotransposon
References
External links
Molecular genetics | Long terminal repeat | Chemistry,Biology | 1,159 |
17,244,108 | https://en.wikipedia.org/wiki/Algol%20paradox | In stellar astronomy, the Algol paradox is a paradoxical situation when elements of a binary star seem to evolve in discord with the established theories of stellar evolution. A fundamental feature of these theories is that the rate of evolution of stars depends on their mass: The greater the mass, the faster this evolution, and the more quickly it leaves the main sequence, entering either a subgiant or giant phase.
In the case of Algol and other binary stars, something completely different is observed: The less massive star is already a subgiant, while the star with much greater mass is still on the main sequence. Since the partner stars of the binary are thought to have formed at approximately the same time and so should have similar ages, this appears paradoxical. The more massive star, rather than the less massive one, should have left the main sequence.
The paradox is resolved by the fact that in many binary stars, there can be a flow of material between the two, disturbing the normal process of stellar evolution. As the flow progresses, their evolutionary stage advances, even as the relative masses change. Eventually, the originally more massive star reaches the next stage in its evolution despite having lost much of its mass to its companion.
See also
Algol variable
References
Paradoxes
Stellar astronomy | Algol paradox | Astronomy | 258 |
59,568,204 | https://en.wikipedia.org/wiki/NGC%203585 | NGC 3585 is an elliptical or a lenticular galaxy located in the constellation Hydra. It is located at a distance of circa 60 million light-years from Earth, which, given its apparent dimensions, means that NGC 3585 is about 80,000 light years across. It was discovered by William Herschel on December 9, 1784.
NGC 3585 features a red discy region in the core with a semi-major axis of circa 45 arcseconds, probably associated with diffuse dust. There are nearly 130 globular cluster candidates in the galaxy, with the total number of globular clusters estimated to be nearly 550. This number is quite low, but it is typical for field elliptical galaxies. Based on luminosity turnover of the globular clusters, it is suspected that there is a subpopulation of younger clusters. The outer isophotes of the galaxy are asymmetrical, maybe due to a tidal disruption.
In the centre of NGC 3585 lies a supermassive black hole whose mass has been estimated both from the tidal disruption rate and, at 10^(8.53 ± 0.122) solar masses, from observation of the circumnuclear ring with very-long-baseline interferometry. Based on observations by the Hubble Space Telescope to determine the stellar velocity dispersion at the core, the mass of the hole was estimated to be between 280 and 490 million solar masses by using the M–sigma relation.
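The M–sigma relation referred to here is an empirical power law linking black-hole mass to the stellar velocity dispersion σ of the host bulge. In generic form (the normalisation M₀, reference dispersion σ₀ and exponent α differ between published calibrations, with α typically quoted in the range of roughly 4–6):
```latex
M_{\mathrm{BH}} = M_{0}\left(\frac{\sigma}{\sigma_{0}}\right)^{\alpha}
```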
NGC 3585 is the most prominent member of a loose galaxy group known as the NGC 3585 group. Other members of the group are the spiral galaxies UGCA 226, ESO 502- G 016, and UGCA 230.
References
External links
NGC 3585 on SIMBAD
Elliptical galaxies
Lenticular galaxies
Hydra (constellation)
3585
34160
Astronomical objects discovered in 1784
Discoveries by William Herschel | NGC 3585 | Astronomy | 376 |
5,161,826 | https://en.wikipedia.org/wiki/MetaType1 | MetaType1, also stylized as METATYPE1, is a tool for creating Type 1 fonts using MetaPost, developed by the Polish JNS team (Bogusław Jackowski, Janusz Marian Nowacki and Piotr Strzelczyk).
Since Metafont cannot produce outline fonts (vector-based), a new tool was needed to help creating such fonts, primarily for use with TeX, although the OpenType versions of the fonts might be used in any other program. It is less powerful than Metafont since no pens can be used, only filled paths, but it still allows creation of parametric fonts.
The most important fonts produced with MetaType1 are Latin Modern, Latin Modern Math, TeX Gyre, Antykwa Toruńska, Antykwa Półtawskiego, Kurier and Iwona.
References
Yannis Haralambous, Fonts and Encodings, O'Reilly 2007, , pages 947–956
Notes
External links
Homepage of Janusz M. Nowacki with fonts produced with MetaType1
ftp with the program MetaType1 (Windows)
MetaType1 for Unices
Free TeX software
1994 software | MetaType1 | Technology | 253 |
1,279,412 | https://en.wikipedia.org/wiki/Caulerpa%20taxifolia | Caulerpa taxifolia is a species of green seaweed, an alga of the genus Caulerpa, native to tropical waters of the Pacific Ocean, Indian Ocean, and Caribbean Sea. The species name taxifolia arises from the resemblance of its leaf-like fronds to those of the yew (Taxus).
A strain of the species bred for use in aquariums has established non-native populations in waters of the Mediterranean Sea, the United States, and Australia. It is one of two species of algae listed in 100 of the World's Worst Invasive Alien Species compiled by the IUCN Invasive Species Specialist Group.
Description
C. taxifolia is light green with stolons (stems) on the sea floor, from which sparsely-branched upright fronds of approximately 20–60 cm (8–24 in) in height arise. Algae in the genus Caulerpa synthesize a mixture of toxins termed caulerpicin, believed to impart a peppery taste to the plants. The effects of the specific toxin synthesized by C. taxifolia, caulerpenyne, have been studied, with extracts from C. taxifolia being found to negatively affect P-glycoprotein-ATPase in the sea sponge G. cydonium.
Like all members of the genus Caulerpa, C. taxifolia consists of a single cell with many nuclei. The alga has been identified as the largest known single-celled organism. Wild-type C. taxifolia is monoecious.
Use in aquaria
Caulerpa species are commonly used in aquaria for their aesthetic qualities and ability to control the growth of undesired species. C. taxifolia has been cultivated for use in aquaria in western Europe since the early 1970s. A clone of the alga that was resistant to cold was observed in the tropical aquarium at the Wilhelma Zoo in Stuttgart and further bred by exposure to chemicals and ultraviolet light. The zoo distributed the strain to other aquaria, including the Oceanographic Museum of Monaco.
The aquarium strain is morphologically identical to native populations of the species. However, a 2008 study found that a population of the aquarium strain near Caloundra, Australia exhibited markedly reduced sexual reproduction, with only male plants present during some reproductive episodes. The aquarium strain can survive out of water for up to 10 days in moist conditions, with 1 cm fragments capable of producing viable plants.
Status as invasive species
Outside its native range, C. taxifolia is listed as an invasive species. It is one of two algae on the list of the world's 100 worst invasive species compiled by the IUCN Invasive Species Specialist Group. The species is able to thrive in heavily polluted waters, possibly contributing to its spread in the Mediterranean.
Presence in the Mediterranean Sea
The presence of C. taxifolia in the Mediterranean was first reported in 1984 in an area adjacent to the Oceanographic Museum of Monaco. Alexandre Meinesz, a marine biologist, attempted to alert Monégasque and French authorities to the spread of the strain in 1989, but the governments failed to respond to his concerns. The occurrence of the strain is generally believed to be due to an accidental release by the museum, but Monaco rejected the attribution and instead claimed that the observed algae was a mutant strain of C. mexicana. By 1999, scientists agreed that it was no longer possible to eliminate C. taxifolia from the Mediterranean.
A study published in 2002 found that beds of Posidonia oceanica in the Bay of Menton were not negatively affected eight years after colonization by C. taxifolia. Other published studies have shown that fish diversity and biomass are equal or greater in Caulerpa meadows than in seagrass beds and that Caulerpa had no effect on composition or richness of fish species.
Studies in 1998 and 2001 found that the strain observed in the Mediterranean was genetically identical to aquarium strains, with similarities to an additional population in Australia.
Presence in Australia
A 2007 study found that a native bivalve mollusc species was negatively affected by the presence of C. taxifolia, but that the effect was not necessarily different from that of native seagrass species. A 2010 study indicated that the effect of detritus from C. taxifolia negatively impacted abundance and species richness.
Presence in California
C. taxifolia was found in waters near San Diego, California, in 2000, where chlorine bleach was used in efforts to eradicate the strain. The strain was declared eradicated from Agua Hedionda Lagoon in 2006. California passed a law in 2001 forbidding the possession, sale, transport, or release of Caulerpa taxifolia within the state.
The Mediterranean clone of C. taxifolia was listed as a noxious weed in 1999 by the Animal and Plant Health Inspection Service, prohibiting interstate sale and transport of the strain without a permit under the Noxious Weed Act and Plant Protection Act.
Other negative effects
C. taxifolia may become entangled in fishing gear and boat propellers.
Control methods
C. taxifolia may be controlled via mechanical removal, poisoning with chlorine, or application of salt. Researchers at the University of Nice investigated possible use of a species of sea slug, Elysia subornata, as a possible natural control method, but found that it was not suitable for use in the Mediterranean due to cold winter water temperatures and insufficient population density.
Gallery
See also
Largest organisms
References
Further reading
Theodoropoulos, David. 2003. Invasion Biology: Critique of a Pseudoscience. pages 42,159. Avvar Books, Blythe, CA. 237 p.
External links
Killer Algae, 2001 BBC Documentary
In-depth article on invasions of Caulerpa taxifolia, source as escaped aquarium plant, etc.
Caulerpa Taxifolia fact sheet
An excerpt from Killer Algae by Alexandre Meinesz
Caulerpa taxifolia at the Center for Invasive Species Research
"Deep Sea Invasion" Nova (TV series) broadcast April 1, 2003
Species Profile- Caulerpa, Mediterranean Clone (Caulerpa taxifolia), National Invasive Species Information Center, United States National Agricultural Library. Lists general information and resources for Caulerpa, Mediterranean Clone.
taxifolia
Protists described in 1817
Algae of India
Biota of the Mediterranean Sea
Flora of the Indian subcontinent
Invasive species
Invasive species in the Mediterranean Sea
Chlorophyta species | Caulerpa taxifolia | Biology | 1,299 |
5,407,968 | https://en.wikipedia.org/wiki/Rubroboletus%20satanas | Rubroboletus satanas, commonly known as Satan's bolete or the Devil's bolete, is a basidiomycete fungus of the bolete family (Boletaceae) and one of its most infamous members. It was known as Boletus satanas before its transfer to the new genus Rubroboletus in 2014, based on molecular phylogenetic data. Found in broad-leaved and mixed woodland in the warmer regions of Europe, it is classified as a poisonous mushroom, known to cause violent gastroenteritis. However, reports of poisoning are rare, due to the striking coloration and unpleasant odor of the fruiting bodies, which discourage experimentation.
These squat, brightly coloured fruiting bodies are often massive and imposing, with a beige-coloured velvet-textured cap up to across, yellow to orange-red pores and a bulbous red stem. The flesh turns blue when cut or bruised, and fruit bodies often emit an unpleasant rotten odor. It is arguably the largest bolete found in Europe.
Taxonomy and phylogeny
Originally known as Boletus satanas, Satan's bolete was described by German mycologist Harald Othmar Lenz in 1831. Lenz was aware of several reports of adverse reactions from people who had consumed this fungus and apparently felt himself ill from its "emanations" while describing it, hence giving it its sinister epithet. The Greek word satanas (Satan) is derived from Hebrew śāṭān (שטן). The American mycologist Harry D. Thiers concluded that material from North America matches the species description; however, genetic testing has since confirmed that western North American collections represent Rubroboletus eastwoodiae, a different species.
Genetic analysis published in 2013 revealed that B. satanas and several other red-pored boletes are part of the "dupainii" clade (named after B. dupainii), placed far from the core group of Boletus (including B. edulis and relatives) within the Boletineae. This indicated that B. satanas and its relatives belonged to a distinct genus. The species was hence transferred to the new genus Rubroboletus in 2014, along with several allied red-pored, blue-staining bolete species. Genetic testing on several species of the genus revealed that R. satanas is most closely related to R. pulchrotinctus, a morphologically similar but much rarer species occurring in the Mediterranean region.
Common names
Both Rubroboletus satanas and Suillellus luridus are known as ayimantari ('bear mushroom') in eastern Turkey.
Description
The compact cap can reach an impressive , extraordinarily , very rarely in diameter. At first it is hemispherical with an inrolled margin, but becomes convex at maturity as the fruit body expands, while in older specimens the margin might be slightly undulating. When young, the pileus is greyish white to silvery-white or buff, but older specimens tend to develop olivaceous, ochraceous or brownish tinges. The surface of the cap is finely tomentose, becoming smooth at maturity and is often slightly viscid in wet weather. The cuticle is tightly attached to the flesh and does not peel.
The free to slightly adnate tubes are up to long, pale yellow or greenish yellow and bluing when cut. The pores (tube mouths) are rounded, yellow to orange at first, but soon turning red from the point of their attachment to the stem outwards, eventually becoming entirely purplish red or carmine-red at full maturity and instantly bluing when touched or bruised. The stipe is , extraordinarily , very rarely long, distinctly bulbous (, extraordinarily , very rarely ), and often wider than its length, becoming more ventricose as the fungus expands but remaining bulbous at the base. Its colour is golden-yellow to orange at the apex, becoming increasingly pinkish-red to reddish-orange further down and deep carmine-red to purple-red towards the base. It is decorated in a fine, yellowish to reddish hexagonal net, sometimes confined to the upper half of the stipe. The flesh is thick, spongy and whitish, but may be yellow to straw-coloured in immature specimens and is sometimes reddish at the stem base. It slowly turns a faded blue colour when cut, bluing more intensely around the apex and above the tubes. The smell is weak and pleasantly musky in young fruit bodies, but becomes increasingly putrid in older specimens, reminiscent of carrion. Young specimens have a reportedly pleasant, nutty taste. The spore print is olivaceous green.
The spores are fusiform (spindle-shaped) when viewed under a microscope and measure 10–16 × 4.5–7.5 μm. The cap cuticle is composed of interwoven septate hyphae, which are often finely incrusted.
Similar species
Satan's bolete can be confused with a number of other species:
Rubroboletus rhodoxanthus is found predominantly on acidic soil, develops pinkish tinges of the cap, has a more or less cylindrical or clavate stipe with a very dense, well-developed net and lemon-yellow flesh that distinctly stains blue only in the cap when longitudinally sliced.
Rubroboletus legaliae is also acidophilous, has pinkish tinges on the cap, flesh that stains more extensively blue when cut and narrower spores, measuring 9–15 × 4–6 μm.
Rubroboletus pulchrotinctus has a variable cap colour often featuring a pinkish band at the margin; has a dull-coloured stipe without deep red tinges, pores that remain yellow or orange even in mature fruit bodies, and somewhat narrower spores, measuring 12–15 × 4.5–6 μm.
Rubroboletus rubrosanguineus is associated with spruce (Picea) or fir (Abies), has pinkish tinges on the cap and smaller spores, measuring 10–14.5 × 4–6 μm.
Caloboletus calopus is usually associated with coniferous trees, has pores that remain persistently yellow even in overripe fruit bodies, has a more slender, cylindrical or clavate stipe and narrower spores, measuring 11–16 × 4–5.5 μm.
Distribution and habitat
Rubroboletus satanas is widely distributed throughout the temperate zone, but is rare in most of its reported localities. In Europe, it mostly occurs in the southern regions and is rare or absent in northern countries. It fruits in the summer and early autumn in warm, broad-leaved and mixed forests, forming ectomycorrhizal associations with oak (Quercus) and sweet chestnut (Castanea), with a preference for calcareous (chalky) soils. Other frequently reported hosts are hornbeam (Carpinus), beech (Fagus) and lime and linden trees (Tilia).
In the United Kingdom, this striking bolete is found only in the south of England. It is rare in Scandinavia, occurring primarily on a few islands in the Baltic Sea where conditions are favourable, with highly calcareous soil. In the eastern Mediterranean region, it has been reported from the Bar'am Forest in the Upper Galilee region of northern Israel, as well as the island of Cyprus, where it is found in association with the narrow-endemic golden oak (Quercus alnifolia). It has further been documented in the Black Sea and eastern Anatolia regions of Turkey, as well as Crimea and Ukraine, with its distribution possibly extending as far south as Iran.
In the past, R. satanas was reported from the United States; however, these sightings instead represent the closely related species Rubroboletus eastwoodiae.
Toxicity
This mushroom is moderately poisonous, especially if eaten raw. The symptoms, which are predominantly gastrointestinal in nature, include nausea, abdominal pain, and violent vomiting with bloody diarrhea that can last up to six hours.
The toxic enzyme bolesatine has been isolated from fruiting bodies of R. satanas and is implicated in the poisonings. Bolesatine is a protein synthesis inhibitor and, when given to mice, causes massive thrombosis. At lower concentrations, bolesatine is a mitogen, inducing cell division in human T lymphocytes. Muscarine has also been isolated from this fungus, but the quantities are believed to be far too small to cause toxic effects in humans. More recent studies have associated the poisoning caused by R. satanas with hyperprocalcitonemia, and classified it as a distinct syndrome among fungal poisonings.
Controversially, English mycologist John Ramsbottom reported in 1953 that R. satanas is consumed in certain parts of Italy and the former Czechoslovakia. In those regions, the fungus is reportedly eaten following prolonged boiling, which may neutralise the toxins, though this has never been proven scientifically. Similar reports exist from the San Francisco Bay Area of the United States, but these probably involve a different fungus misidentified as R. satanas. Ramsbottom speculated that there may be regional variation in its toxicity, and conceded that the fungus may not be as poisonous as widely reported. Nevertheless, R. satanas is rarely sampled casually, not least because of its foul smell, which, together with the bright red colour and blue staining, makes the fungus unappealing for human consumption.
References
Poisonous fungi
satanas
Fungi of Europe
Fungi described in 1831
Fungus species | Rubroboletus satanas | Biology,Environmental_science | 2,006 |
16,781,967 | https://en.wikipedia.org/wiki/HD%2047536%20b | HD 47536 b is an extrasolar planet located approximately 400 light-years away. Its inclination, and thereby its true mass, is being calculated via astrometry with Hubble. The astrometricians expect publication by mid-2009.
See also
HD 47536 c
References
External links
Canis Major
Giant planets
Exoplanets discovered in 2003
Exoplanets detected by radial velocity | HD 47536 b | Astronomy | 75 |
47,360,281 | https://en.wikipedia.org/wiki/Wild-type%20transthyretin%20amyloid | Wild-type transthyretin amyloid (WTTA), also known as senile systemic amyloidosis (SSA), is a disease that typically affects the heart and tendons of elderly people. It is caused by the accumulation of a wild-type (that is to say a normal) protein called transthyretin. This is in contrast to a related condition called transthyretin-related hereditary amyloidosis where a genetically mutated transthyretin protein tends to deposit much earlier than in WTTA due to abnormal conformation and bioprocessing.
It belongs to a group of diseases called amyloidosis, chronic progressive conditions linked to abnormal deposition of normal or abnormal proteins, because these proteins are misshapen and cannot be properly degraded and eliminated by the cell metabolism.
Signs and symptoms
Wild-type transthyretin amyloid accumulates mainly in the heart, where it causes stiffness and often thickening of its walls, leading consequently to shortness of breath and intolerance to exercise, called diastolic dysfunction. Excessively slow heart rate can also occur, such as in sick sinus syndrome, with ensuing fatigue and dizziness. Wild-type transthyretin deposition is also a common cause of carpal tunnel syndrome in elderly men, which may cause pain, tingling and loss of sensation in the hands. Some patients may develop carpal tunnel syndrome as an initial symptom of wild-type transthyretin amyloid.
There appears to be an increased risk of developing hematuria (blood in the urine) due to urological lesions.
Natural course
The disorder typically affects the heart and its prevalence increases in older age groups. Men are affected much more frequently than women, and up to 25% of men over the age of 80 may have evidence of WTTA.
Patients often present with increased thickness of the wall of the main heart chamber, the left ventricle. People affected by WTT amyloidosis are likely to have required a pacemaker before diagnosis and have a high incidence of a partial electrical blockage of the heart known as left bundle branch block. Low-voltage ECG signals, such as small QRS complexes, are widely considered a marker of cardiac amyloidosis.
A much better survival has been reported for patients with WTTA as opposed to cardiac AL amyloidosis.
Diagnosis
The condition is suspected in an elderly person, especially a male, presenting with symptoms of heart failure such as shortness of breath or swollen legs, and/or disease of the electrical system of the heart with ensuing slow heart rate, dizziness or fainting spells. The diagnosis is confirmed on the basis of a biopsy, which can be treated with a special stain called Congo red that is positive in this condition, and immunohistochemistry. However, this disease can now be diagnosed non-invasively with the help of Tc-99m pyrophosphate scintigraphy.
Treatment
No drug has been shown to be able to arrest or slow down the process of this condition. There is promise that two drugs, tafamidis and diflunisal, may improve the outlook, since they were demonstrated in randomized clinical trials to benefit patients affected by the related condition FAP-1, otherwise known as transthyretin-related hereditary amyloidosis. Permanent pacing can be employed in cases of symptomatic slow heart rate (bradycardia). Heart failure medications can be used to treat symptoms of difficulty breathing and congestion.
A 2021 investigational first-in-human study demonstrated that NTLA-2001, a therapeutic agent based on the CRISPR-Cas9 system, induces targeted knockout of the transthyretin protein.
Orphan drug status for transthyretin (TTR) amyloidosis
Because of preliminary data suggesting the drug may have activity, the U.S. FDA in 2013 granted tolcapone "orphan drug status" in studies aiming at the treatment of transthyretin familial amyloidosis (ATTR). However, tolcapone was not FDA approved for the treatment of this disease.
See also
Transthyretin-related hereditary amyloidosis
Amyloidosis
References
External links
The Amyloidosis Center at Boston University
Mayo Clinic Definition
A Patient Guide to Amyloidosis
Amyloidosis
Histopathology
Structural proteins | Wild-type transthyretin amyloid | Chemistry | 876 |
521,267 | https://en.wikipedia.org/wiki/Reflection%20%28physics%29 | Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves. The law of reflection says that for specular reflection (for example at a mirror) the angle at which the wave is incident on the surface equals the angle at which it is reflected.
In acoustics, reflection causes echoes and is used in sonar. In geology, it is important in the study of seismic waves. Reflection is observed with surface waves in bodies of water. Reflection is observed with many types of electromagnetic wave, besides visible light. Reflection of VHF and higher frequencies is important for radio transmission and for radar. Even hard X-rays and gamma rays can be reflected at shallow angles with special "grazing" mirrors.
Reflection of light
Reflection of light is either specular (mirror-like) or diffuse (retaining the energy, but losing the image) depending on the nature of the interface. In specular reflection the phase of the reflected waves depends on the choice of the origin of coordinates, but the relative phase between s and p (TE and TM) polarizations is fixed by the properties of the media and of the interface between them.
A mirror provides the most common model for specular light reflection, and typically consists of a glass sheet with a metallic coating where the significant reflection occurs. Reflection is enhanced in metals by suppression of wave propagation beyond their skin depths. Reflection also occurs at the surface of transparent media, such as water or glass.
In the diagram, a light ray PO strikes a vertical mirror at point O, and the reflected ray is OQ. By projecting an imaginary line through point O perpendicular to the mirror, known as the normal, we can measure the angle of incidence, θi and the angle of reflection, θr. The law of reflection states that θi = θr, or in other words, the angle of incidence equals the angle of reflection.
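As a quick numerical check of the law of reflection (a sketch, not part of the original article; the vectors and the NumPy dependency are illustrative assumptions), an incident direction d can be reflected about a unit surface normal n with the standard identity r = d - 2(d·n)n, and the angles of incidence and reflection come out equal:

```python
import numpy as np

def reflect(d, n):
    """Reflect an incident direction d about a unit surface normal n: r = d - 2(d.n)n."""
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)                 # normalise the surface normal
    return d - 2.0 * np.dot(d, n) * n

# Incident ray travelling down and to the right; mirror lies in the x-y plane (normal +z)
d = np.array([1.0, 0.0, -1.0])
n = np.array([0.0, 0.0, 1.0])
r = reflect(d, n)

# Angles measured from the normal: they agree, as the law of reflection requires
theta_i = np.degrees(np.arccos(np.dot(-d, n) / np.linalg.norm(d)))
theta_r = np.degrees(np.arccos(np.dot(r, n) / np.linalg.norm(r)))
print(r, theta_i, theta_r)                    # [1. 0. 1.] 45.0 45.0
```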
In fact, reflection of light may occur whenever light travels from a medium of a given refractive index into a medium with a different refractive index. In the most general case, a certain fraction of the light is reflected from the interface, and the remainder is refracted. Solving Maxwell's equations for a light ray striking a boundary allows the derivation of the Fresnel equations, which can be used to predict how much of the light is reflected, and how much is refracted in a given situation. This is analogous to the way impedance mismatch in an electric circuit causes reflection of signals. Total internal reflection of light from a denser medium occurs if the angle of incidence is greater than the critical angle.
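At normal incidence the Fresnel equations reduce to the reflectance R = ((n1 - n2) / (n1 + n2))^2, and the critical angle mentioned above is arcsin(n2/n1) for light going from a denser medium n1 into n2. The snippet below is a sketch with illustrative refractive indices (air, glass, water), not taken from the source:

```python
import math

def reflectance_normal(n1, n2):
    """Fraction of power reflected at normal incidence (Fresnel equations, non-absorbing media)."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def critical_angle_deg(n1, n2):
    """Critical angle for total internal reflection when going from denser n1 into n2 (n1 > n2)."""
    return math.degrees(math.asin(n2 / n1))

print(reflectance_normal(1.0, 1.5))    # air -> glass: about 0.04, i.e. roughly 4% of the light is reflected
print(critical_angle_deg(1.33, 1.0))   # water -> air: about 48.8 degrees
```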
Total internal reflection is used as a means of focusing waves that cannot effectively be reflected by common means. X-ray telescopes are constructed by creating a converging "tunnel" for the waves. As the waves interact at low angle with the surface of this tunnel they are reflected toward the focus point (or toward another interaction with the tunnel surface, eventually being directed to the detector at the focus). A conventional reflector would be useless as the X-rays would simply pass through the intended reflector.
When light reflects off a material with a higher refractive index than the medium in which it is traveling, it undergoes a 180° phase shift. In contrast, when light reflects off a material with a lower refractive index, the reflected light is in phase with the incident light. This is an important principle in the field of thin-film optics.
Specular reflection forms images. Reflection from a flat surface forms a mirror image, which appears to be reversed from left to right because we compare the image we see to what we would see if we were rotated into the position of the image. Specular reflection at a curved surface forms an image which may be magnified or demagnified; curved mirrors have optical power. Such mirrors may have surfaces that are spherical or parabolic.
Laws of reflection
If the reflecting surface is very smooth, the reflection of light that occurs is called specular or regular reflection. The laws of reflection are as follows:
The incident ray, the reflected ray and the normal to the reflection surface at the point of the incidence lie in the same plane.
The angle which the incident ray makes with the normal is equal to the angle which the reflected ray makes to the same normal.
The reflected ray and the incident ray are on the opposite sides of the normal.
These three laws can all be derived from the Fresnel equations.
Mechanism
In classical electrodynamics, light is considered as an electromagnetic wave, which is described by Maxwell's equations. Light waves incident on a material induce small oscillations of polarisation in the individual atoms (or oscillation of electrons, in metals), causing each particle to radiate a small secondary wave in all directions, like a dipole antenna. All these waves add up to give specular reflection and refraction, according to the Huygens–Fresnel principle.
In the case of dielectrics such as glass, the electric field of the light acts on the electrons in the material, and the moving electrons generate fields and become new radiators. The refracted light in the glass is the combination of the forward radiation of the electrons and the incident light. The reflected light is the combination of the backward radiation of all of the electrons.
In metals, electrons with no binding energy are called free electrons. When these electrons oscillate with the incident light, the phase difference between their radiation field and the incident field is π (180°), so the forward radiation cancels the incident light, and backward radiation is just the reflected light.
Light–matter interaction in terms of photons is a topic of quantum electrodynamics, and is described in detail by Richard Feynman in his popular book QED: The Strange Theory of Light and Matter.
Diffuse reflection
When light strikes the surface of a (non-metallic) material it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g. the grain boundaries of a polycrystalline material, or the cell or fiber boundaries of an organic material) and by its surface, if it is rough. Thus, an 'image' is not formed. This is called diffuse reflection. The exact form of the reflection depends on the structure of the material. One common model for diffuse reflection is Lambertian reflectance, in which the light is reflected with equal luminance (in photometry) or radiance (in radiometry) in all directions, as defined by Lambert's cosine law.
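A small sketch (illustrative values, not from the source) of Lambert's cosine law mentioned above: an ideal diffuse reflector returns a radiant intensity that falls off with the cosine of the angle from the surface normal, which is what makes a matte surface look equally bright from every direction:

```python
import math

def lambertian_intensity(i0, angle_deg):
    """Radiant intensity of an ideal diffuse (Lambertian) surface: I(theta) = I0 * cos(theta)."""
    return i0 * math.cos(math.radians(angle_deg))

for angle in (0, 30, 60, 90):
    print(angle, round(lambertian_intensity(1.0, angle), 3))
# 0 -> 1.0, 30 -> 0.866, 60 -> 0.5, 90 -> 0.0
```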
The light sent to our eyes by most of the objects we see is due to diffuse reflection from their surface, so that this is our primary mechanism of physical observation.
Retroreflection
Some surfaces exhibit retroreflection. The structure of these surfaces is such that light is returned in the direction from which it came.
When flying over clouds illuminated by sunlight the region seen around the aircraft's shadow will appear brighter, and a similar effect may be seen from dew on grass. This partial retro-reflection is created by the refractive properties of the curved droplet's surface and reflective properties at the backside of the droplet.
Some animals' retinas act as retroreflectors (see tapetum lucidum for more detail), as this effectively improves the animals' night vision. Since the lenses of their eyes modify reciprocally the paths of the incoming and outgoing light the effect is that the eyes act as a strong retroreflector, sometimes seen at night when walking in wildlands with a flashlight.
A simple retroreflector can be made by placing three ordinary mirrors mutually perpendicular to one another (a corner reflector). The image produced is the inverse of one produced by a single mirror.
A surface can be made partially retroreflective by depositing a layer of tiny refractive spheres on it or by creating small pyramid like structures. In both cases internal reflection causes the light to be reflected back to where it originated. This is used to make traffic signs and automobile license plates reflect light mostly back in the direction from which it came. In this application perfect retroreflection is not desired, since the light would then be directed back into the headlights of an oncoming car rather than to the driver's eyes.
Multiple reflections
When light reflects off a mirror, one image appears. Two mirrors placed exactly face to face give the appearance of an infinite number of images along a straight line. The multiple images seen between two mirrors that sit at an angle to each other lie over a circle. The center of that circle is located at the imaginary intersection of the mirrors. A square of four mirrors placed face to face gives the appearance of an infinite number of images arranged in a plane. The multiple images seen between four mirrors assembling a pyramid, in which each pair of mirrors sits at an angle to each other, lie over a sphere. If the base of the pyramid is rectangle-shaped, the images spread over a section of a torus.
Note that these are theoretical ideals, requiring perfect alignment of perfectly smooth, perfectly flat perfect reflectors that absorb none of the light. In practice, these situations can only be approached but not achieved because the effects of any surface imperfections in the reflectors propagate and magnify, absorption gradually extinguishes the image, and any observing equipment (biological or technological) will interfere.
Complex conjugate reflection
In this process (which is also known as phase conjugation), light bounces exactly back in the direction from which it came due to a nonlinear optical process. Not only the direction of the light is reversed, but the actual wavefronts are reversed as well. A conjugate reflector can be used to remove aberrations from a beam by reflecting it and then passing the reflection through the aberrating optics a second time. If one were to look into a complex conjugating mirror, it would be black because only the photons which left the pupil would reach the pupil.
Other types of reflection
Neutron reflection
Materials that reflect neutrons, for example beryllium, are used in nuclear reactors and nuclear weapons. In the physical and biological sciences, the reflection of neutrons off atoms within a material is commonly used to determine the material's internal structure.
Sound reflection
When a longitudinal sound wave strikes a flat surface, sound is reflected in a coherent manner provided that the dimension of the reflective surface is large compared to the wavelength of the sound. Note that audible sound has a very wide frequency range (from 20 to about 17000 Hz), and thus a very wide range of wavelengths (from about 20 mm to 17 m). As a result, the overall nature of the reflection varies according to the texture and structure of the surface. For example, porous materials will absorb some energy, and rough materials (where rough is relative to the wavelength) tend to reflect in many directions—to scatter the energy, rather than to reflect it coherently. This leads into the field of architectural acoustics, because the nature of these reflections is critical to the auditory feel of a space.
In the theory of exterior noise mitigation, reflective surface size mildly detracts from the concept of a noise barrier by reflecting some of the sound in the opposite direction. Sound reflection can affect the acoustic space.
Seismic reflection
Seismic waves produced by earthquakes or other sources (such as explosions) may be reflected by layers within the Earth. Study of the deep reflections of waves generated by earthquakes has allowed seismologists to determine the layered structure of the Earth. Shallower reflections are used in reflection seismology to study the Earth's crust generally, and in particular to prospect for petroleum and natural gas deposits.
See also
Anti-reflective coating
Diffraction
Echo satellite
Huygens–Fresnel principle
List of reflected light sources
Negative refraction
Ocean surface wave
Reflection coefficient
Reflectivity
Refraction
Ripple tank
Signal reflection
Snell's law
Sun glitter
Two-ray ground-reflection model
References
External links
Acoustic reflection
Animations demonstrating optical reflection by QED
Simulation on Laws of Reflection of Sound By Amrita University
Physical phenomena
Geometrical optics
Physical optics
Acoustics
Sound | Reflection (physics) | Physics | 2,496 |
7,622,915 | https://en.wikipedia.org/wiki/Parry%E2%80%93Daniels%20map | In mathematics, the Parry–Daniels map is a function studied in the context of dynamical systems. Typical questions concern the existence of an invariant or ergodic measure for the map.
It is named after the English mathematician Bill Parry and the British statistician Henry Daniels, who independently studied the map in papers published in 1962.
Definition
Given an integer n ≥ 1, let Σ denote the n-dimensional simplex in R^(n+1), that is, the set of points (x_0, x_1, …, x_n) with every x_i ≥ 0 and x_0 + x_1 + ⋯ + x_n = 1.
Let π be a permutation such that
Then the Parry–Daniels map
is defined by
References
Dynamical systems | Parry–Daniels map | Physics,Mathematics | 113 |
576,635 | https://en.wikipedia.org/wiki/Port%20scanner | A port scanner is an application designed to probe a server or host for open ports. Such an application may be used by administrators to verify security policies of their networks and by attackers to identify network services running on a host and exploit vulnerabilities.
A port scan or portscan is a process that sends client requests to a range of server port addresses on a host, with the goal of finding an active port; this is not a nefarious process in and of itself. The majority of uses of a port scan are not attacks, but rather simple probes to determine services available on a remote machine.
To portsweep is to scan multiple hosts for a specific listening port. The latter is typically used to search for a specific service, for example, an SQL-based computer worm may portsweep looking for hosts listening on TCP port 1433.
TCP/IP basics
The design and operation of the Internet is based on the Internet Protocol Suite, commonly also called TCP/IP. In this system, network services are referenced using two components: a host address and a port number. There are 65535 distinct and usable port numbers, numbered 1 … 65535. (Port zero is not a usable port number.) Most services use one, or at most a limited range of, port numbers.
Some port scanners scan only the most common port numbers, or ports most commonly associated with vulnerable services, on a given host.
The result of a scan on a port is usually generalized into one of three categories:
Open or Accepted: The host sent a reply indicating that a service is listening on the port.
Closed or Denied or Not Listening: The host sent a reply indicating that connections will be denied to the port.
Filtered, Dropped or Blocked: There was no reply from the host.
Open ports present two vulnerabilities of which administrators must be wary:
Security and stability concerns associated with the program responsible for delivering the service - Open ports.
Security and stability concerns associated with the operating system that is running on the host - Open or Closed ports.
Filtered ports do not tend to present vulnerabilities.
Assumptions
All forms of port scanning rely on the assumption that the targeted host is compliant with RFC. Although this is the case most of the time, there is still a chance a host might send back strange packets or even generate false positives when the TCP/IP stack of the host is non-RFC-compliant or has been altered. This is especially true for less common scan techniques that are OS-dependent (FIN scanning, for example). The TCP/IP stack fingerprinting method also relies on these types of different network responses from a specific stimulus to guess the type of the operating system the host is running.
Types of scans
TCP scanning
The simplest port scanners use the operating system's network functions and are generally the next option to go to when SYN is not a feasible option (described next). Nmap calls this mode connect scan, named after the Unix connect() system call. If a port is open, the operating system completes the TCP three-way handshake, and the port scanner immediately closes the connection to avoid performing a Denial-of-service attack. Otherwise an error code is returned. This scan mode has the advantage that the user does not require special privileges. However, using the OS network functions prevents low-level control, so this scan type is less common. This method is "noisy", particularly if it is a "portsweep": the services can log the sender IP address and Intrusion detection systems can raise an alarm.
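A minimal sketch of a connect scan using only Python's standard socket module; the target address and port list are placeholders, and this is an illustration of the technique described above rather than a substitute for a tool such as Nmap. Only scan hosts you are authorized to probe:

```python
import socket

def connect_scan(host, ports, timeout=1.0):
    """Minimal TCP connect scan: the OS completes the three-way handshake,
    then the socket is closed immediately. Returns the ports that accepted."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Scan a few common ports on a host you are authorized to test
print(connect_scan("127.0.0.1", [22, 80, 443, 8080]))
```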
SYN scanning
SYN scan is another form of TCP scanning. Rather than using the operating system's network functions, the port scanner generates raw IP packets itself, and monitors for responses. This scan type is also known as "half-open scanning", because it never actually opens a full TCP connection. The port scanner generates a SYN packet. If the target port is open, it will respond with a SYN-ACK packet. The scanner host responds with an RST packet, closing the connection before the handshake is completed. If the port is closed but unfiltered, the target will instantly respond with an RST packet.
The use of raw networking has several advantages, giving the scanner full control of the packets sent and the timeout for responses, and allowing detailed reporting of the responses. There is debate over which scan is less intrusive on the target host. SYN scan has the advantage that the individual services never actually receive a connection. However, the RST during the handshake can cause problems for some network stacks, in particular simple devices like printers. There are no conclusive arguments either way.
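A sketch of the half-open technique, assuming the third-party Scapy library and root privileges for raw sockets; the target address is a placeholder (a TEST-NET-1 address) and the flag handling is deliberately simplified:

```python
from scapy.all import IP, TCP, sr1, send  # third-party package: pip install scapy

def syn_scan(host, port, timeout=1.0):
    """Classify one port as open/closed/filtered with a half-open (SYN) scan.
    Requires root/administrator privileges because raw packets are crafted."""
    reply = sr1(IP(dst=host) / TCP(dport=port, flags="S"),
                timeout=timeout, verbose=0)
    if reply is not None and reply.haslayer(TCP):
        tcp_flags = reply[TCP].flags
        if tcp_flags == 0x12:              # SYN+ACK: the port is open
            # Tear the half-open connection down with an RST instead of completing it
            send(IP(dst=host) / TCP(dport=port, flags="R"), verbose=0)
            return "open"
        if tcp_flags in (0x04, 0x14):      # RST (or RST+ACK): the port is closed
            return "closed"
    return "filtered"                      # no usable reply

print(syn_scan("192.0.2.10", 80))          # placeholder target address
```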
UDP scanning
UDP scanning is also possible, although there are technical challenges. UDP is a connectionless protocol so there is no equivalent to a TCP SYN packet. However, if a UDP packet is sent to a port that is not open, the system will respond with an ICMP port unreachable message. Most UDP port scanners use this scanning method, and use the absence of a response to infer that a port is open. However, if a port is blocked by a firewall, this method will falsely report that the port is open. If the port unreachable message is blocked, all ports will appear open. This method is also affected by ICMP rate limiting.
An alternative approach is to send application-specific UDP packets, hoping to generate an application layer response. For example, sending a DNS query to port 53 will result in a response, if a DNS server is present. This method is much more reliable at identifying open ports. However, it is limited to scanning ports for which an application specific probe packet is available. Some tools (e.g., Nmap, Unionscan) generally have probes for less than 20 UDP services, while some commercial tools have as many as 70. In some cases, a service may be listening on the port, but configured not to respond to the particular probe packet.
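As an illustration of the application-specific approach, the sketch below hand-builds a minimal DNS query and sends it to UDP port 53 using only the standard library (the target address is a placeholder). A reply suggests the port is open; silence leaves it "open|filtered" as described above; on many systems an ICMP port-unreachable surfaces as a connection error on a connected UDP socket:

```python
import socket

def dns_probe(host, qname="example.com", timeout=2.0):
    """Send a minimal DNS 'A' query to UDP port 53 and interpret the outcome."""
    # 12-byte DNS header: ID=0x1234, flags=0x0100 (recursion desired), one question
    header = b"\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
    # QNAME as length-prefixed labels, terminated by a zero byte, then QTYPE=A, QCLASS=IN
    name = b"".join(bytes([len(label)]) + label.encode() for label in qname.split("."))
    question = name + b"\x00" + b"\x00\x01" + b"\x00\x01"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.connect((host, 53))      # a "connected" UDP socket lets ICMP errors reach us on many OSes
        s.send(header + question)
        try:
            reply = s.recv(512)
            return "open ({} byte DNS reply)".format(len(reply))
        except socket.timeout:
            return "open|filtered (no reply)"
        except ConnectionRefusedError:
            return "closed (ICMP port unreachable)"

print(dns_probe("192.0.2.10"))     # placeholder target address
```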
ACK scanning
ACK scanning is one of the more unusual scan types, as it does not exactly determine whether the port is open or closed, but whether the port is filtered or unfiltered. This is especially good when attempting to probe for the existence of a firewall and its rulesets. Simple packet filtering will allow established connections (packets with the ACK bit set), whereas a more sophisticated stateful firewall might not.
Window scanning
Rarely used because of its outdated nature, window scanning is fairly untrustworthy in determining whether a port is opened or closed. It generates the same packet as an ACK scan, but checks whether the window field of the packet has been modified. When the packet reaches its destination, a design flaw attempts to create a window size for the packet if the port is open, flagging the window field of the packet with 1's before it returns to the sender. Using this scanning technique with systems that no longer support this implementation returns 0's for the window field, labeling open ports as closed.
FIN scanning
Since SYN scans are not surreptitious enough, firewalls generally scan for and block packets in the form of SYN packets. FIN packets can bypass firewalls without modification. Closed ports reply to a FIN packet with the appropriate RST packet, whereas open ports ignore the packet on hand. This is typical behavior due to the nature of TCP, and is in some ways an inescapable downfall.
Other scan types
Some more unusual scan types exist. These have various limitations and are not widely used. Nmap supports most of these.
X-mas and Null Scan - are similar to FIN scanning, but:
X-mas sends packets with FIN, URG and PUSH flags turned on like a Christmas tree
Null sends a packet with no TCP flags set
Protocol scan - determines what IP level protocols (TCP, UDP, GRE, etc.) are enabled.
Proxy scan - a proxy (SOCKS or HTTP) is used to perform the scan. The target will see the proxy's IP address as the source. This can also be done using some FTP servers.
Idle scan - Another method of scanning without revealing one's IP address, taking advantage of the predictable IP ID flaw.
CatSCAN - Checks ports for erroneous packets.
ICMP scan - determines if a host responds to ICMP requests, such as echo (ping), netmask, etc.
Port filtering by ISPs
Many Internet service providers restrict their customers' ability to perform port scans to destinations outside of their home networks. This is usually covered in the terms of service or acceptable use policy to which the customer must agree. Some ISPs implement packet filters or transparent proxies that prevent outgoing service requests to certain ports. For example, if an ISP provides a transparent HTTP proxy on port 80, port scans of any address will appear to have port 80 open, regardless of the target host's actual configuration.
Security
The information gathered by a port scan has many legitimate uses including network inventory and the verification of the security of a network. Port scanning can, however, also be used to compromise security. Many exploits rely upon port scans to find open ports and send specific data patterns in an attempt to trigger a condition known as a buffer overflow. Such behavior can compromise the security of a network and the computers therein, resulting in the loss or exposure of sensitive information and the ability to do work.
The threat level caused by a port scan can vary greatly according to the method used to scan, the kind of port scanned, its number, the value of the targeted host and the administrator who monitors the host. But a port scan is often viewed as a first step for an attack, and is therefore taken seriously because it can disclose much sensitive information about the host.
Despite this, the probability of a port scan alone followed by a real attack is small. The probability of an attack is much higher when the port scan is associated with a vulnerability scan.
Legal implications
Because of the inherently open and decentralized architecture of the Internet, lawmakers have struggled since its creation to define legal boundaries that permit effective prosecution of cybercriminals. Cases involving port scanning activities are an example of the difficulties encountered in judging violations. Although these cases are rare, most of the time the legal process involves proving that an intent to commit a break-in or unauthorized access existed, rather than just the performance of a port scan.
In June 2003, an Israeli, Avi Mizrahi, was accused by the Israeli authorities of the offense of attempting the unauthorized access of computer material. He had port scanned the Mossad website. He was acquitted of all charges on February 29, 2004. The judge ruled that these kinds of actions should not be discouraged when they are performed in a positive way.
A 17-year-old Finn was accused of attempted computer break-in by a major Finnish bank. On April 9, 2003, he was convicted of the charge by the Supreme Court of Finland and ordered to pay US$12,000 for the expense of the forensic analysis made by the bank. In 1998, he had port scanned the bank network in an attempt to access the closed network, but failed to do so.
In 2006, the UK Parliament passed an amendment to the Computer Misuse Act 1990 such that a person is guilty of an offence who "makes, adapts, supplies or offers to supply any article knowing that it is designed or adapted for use in the course of or in connection with an offence under section 1 or 3 [of the CMA]". Nevertheless, the scope of this amendment is blurred, and it has been widely criticized by security experts for that reason.
Germany, with the Strafgesetzbuch § 202a,b,c also has a similar law, and the Council of the European Union has issued a press release stating they plan to pass a similar one too, albeit more precise.
United States
Moulton v. VC3
In December 1999, Scott Moulton was arrested by the FBI and accused of attempted computer trespassing under Georgia's Computer Systems Protection Act and Computer Fraud and Abuse Act of America. At this time, his IT service company had an ongoing contract with Cherokee County of Georgia to maintain and upgrade the 911 center security. He performed several port scans on Cherokee County servers to check their security and eventually port scanned a web server monitored by another IT company, provoking a tiff which ended up in a tribunal. He was acquitted in 2000, with judge Thomas Thrash ruling in Moulton v. VC3 (N.D.Ga. 2000) that there was no damage impairing the integrity and availability of the network.
See also
Content Vectoring Protocol
List of TCP and UDP port numbers
Service scan
References
External links
Teo, Lawrence (December, 2000). Network Probes Explained: Understanding Port Scans and Ping Sweeps. Linux Journal, Retrieved September 5, 2009, from Linuxjournal.com
Computer security software
Computer security exploits
Internet Protocol based network software
Network analyzers
Port scanners | Port scanner | Technology,Engineering | 2,677 |
1,699,718 | https://en.wikipedia.org/wiki/Exon%20trapping | Exon trapping is a molecular biology technique to identify potential exons in a fragment of eukaryote DNA of unknown intron-exon structure. This is done to determine if the fragment is part of an expressed gene.
The genomic fragment is inserted into the intron of a 'splicing vector' consisting of a known exon - intron - exon sequence of DNA, and the vector is then inserted into a eukaryotic cell. If the fragment does not contain exons (i.e., consists solely of intron DNA), it will be spliced out together with the vector's original intron. On the other hand, if exons are contained, they will be part of the mature mRNA after transcription (with all intron material removed). The presence of 'trapped exons' can be detected by an increase in size of the mRNA, or through RT-PCR to amplify the DNA of interest.
The technique has largely been supplanted by the approach of sequencing cDNA generated from mRNA and then using bioinformatics tools such as NCBI's BLAST server to determine the source of the sequence, thereby identifying the appropriate exon-intron splice sites.
References
Gene expression | Exon trapping | Chemistry,Biology | 259 |
11,422,126 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20snR71 | In molecular biology, snoRNA snR71 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA snR71 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
snoRNA snR71 was initially discovered using a computational screen of the Saccharomyces cerevisiae genome.
References
External links
Link to snR71 at the FournierLab's snoRNAdb.
Small nuclear RNA | Small nucleolar RNA snR71 | Chemistry | 223 |
37,974,295 | https://en.wikipedia.org/wiki/Cygnus%20X%20%28star%20complex%29 | Cygnus-X is a massive star formation region located in the constellation of Cygnus at a distance from the Sun of 1.4 kiloparsecs (4,600 light years).
As it is located behind the Cygnus Rift and its light is heavily absorbed by the Milky Way's interstellar dust, it is better studied in other wavelengths of the electromagnetic spectrum that penetrate it such as the infrared.
Physical properties
As studies done with the help of the Spitzer Space Telescope have shown, Cygnus-X has a size of 200 parsecs and contains the largest number of massive protostars as well as the largest stellar association (Cygnus OB2, with up to 2,600 stars of spectral type OB and a mass of up to 10^5 solar masses) within a radius of 2 kiloparsecs of the Sun. It is also associated with one of the largest molecular clouds known, with a mass of 3 million solar masses. Its stellar population includes a large number of early-type stars as well as evolved massive stars such as luminous blue variable candidates, Wolf–Rayet stars, and supergiant stars of spectral types O and B.
Ongoing research has shown Cygnus X includes two stellar associations: Cygnus OB2 and Cygnus OB9 as well as an additional large number of early-type stars that include BD+40°4210, a blue supergiant star and luminous blue variable candidate that is one of the brightest stars of the association, as well as more supergiant stars of spectral types O and B. The same study shows that star formation has been taking place there during at least 10 million years, continuing to the present day.
Cygnus OB7 lies in front of it.
See also
Orion molecular cloud complex
Taurus Molecular Cloud
Rho Ophiuchi cloud complex
Perseus molecular cloud
References
What is the Cygnus-X Region?
Cygnus-X: The Inner Workings of a Nearby Star Factory (APOD: 2012 January 18)
New members of the massive stellar population in Cygnus
Emission nebulae
Cygnus (constellation)
Star-forming regions
Molecular clouds | Cygnus X (star complex) | Astronomy | 442 |
1,842,281 | https://en.wikipedia.org/wiki/Red%20River%20Bridge%20War | The Red River Bridge War was a boundary conflict between the U.S. states of Oklahoma and Texas over an existing toll bridge and a new free bridge crossing the Red River.
The Red River Bridge Company, a private firm owned by Benjamin Colbert, had been operating a toll bridge that carried U.S. Route 69 and U.S. Route 75 between Colbert, Oklahoma, and Denison, Texas. In 1931, Texas and Oklahoma jointly built a new, free span northwest of the existing toll bridge.
On July 10, 1931, the Red River Bridge Company obtained an injunction against the Texas Highway Commission (now Texas Department of Transportation), keeping it from opening the new bridge. The company said that the highway commission had promised in July 1930 to buy the old toll bridge for $60,000. In reaction to the injunction, the Governor of Texas, Ross S. Sterling, ordered that the new free bridge be barricaded at the Texas end.
On July 17, Oklahoma Governor "Alfalfa Bill" Murray ordered the new bridge open, by executive order. Murray issued this order on the grounds that the land on both sides of the river belonged to Oklahoma, per the Adams–Onís Treaty of 1819. Murray sent highway crews across the new bridge to destroy the barricades.
Governor Sterling sent Adjutant General William Warren Sterling and three Texas Rangers to the new bridge to defend the Texas Highway Commission workers enforcing the injunction, and rebuilt the barricade that night. The next day, Oklahoma crews under Governor Murray's order demolished the Oklahoma approach to the toll bridge, rendering that bridge impassable.
The Texas State Legislature called a special session on July 23 to pass a bill allowing the Red River Bridge Company to sue the state over the issue, partially in response to meetings in Sherman and Denison, Texas, demanding the free bridge be opened. The next day, Governor Murray declared martial law at the site, enforced by Oklahoma National Guardsmen, and personally appeared at the site, armed with a revolver, hours before a Muskogee, Oklahoma, court issued an injunction prohibiting him from blocking the northern toll bridge approach. Murray directed the guardsmen to allow anyone to cross either bridge.
Murray discovered on July 27 that the free bridge was in danger of being closed permanently. He expanded the martial-law zone across the river, stationing guardsmen on both free bridge approaches. The injunction against the bridge opening was dissolved and the martial law order rescinded on August 6.
News of the dispute made national and international headlines. Adolf Hitler may have believed that the events were evidence of in-fighting between the American states, weakening the union.
The free bridge that was the cause of the dispute was opened on Labor Day, September 7, 1931. It was replaced in 1995, though a portion of the bridge was saved as a historical attraction and relocated to the Colbert City park in Colbert, Oklahoma.
References
Further reading
Rusty Williams, The Red River Bridge War: A Texas-Oklahoma Border Battle (Texas A&M University Press, 2016).
External links
Red River Bridge Controversy – Handbook of Texas Online
Texas Day by Day – July 13, 1931
Toll Bridge coordinates:
Oklahoma Digital Maps: Digital Collections of Oklahoma and Indian Territory
Political history of Oklahoma
Battles and conflicts without fatalities
History of Texas
Internal territorial disputes of the United States
Civil wars in the United States
Bridges in Oklahoma
Bridges in Texas
1931 in Oklahoma
1931 in Texas
Former toll bridges in Oklahoma
Former toll bridges in Texas
Red River of the South
Borders of Oklahoma
Borders of Texas
Toll (fee)
Transport controversies | Red River Bridge War | Physics | 718 |
47,836,983 | https://en.wikipedia.org/wiki/Yuxiang | Yuxiang () is a seasoning mixture in Chinese cuisine, and also refers to the resulting sauce in which meat or vegetables are cooked. It is said to have originated in Sichuan cuisine, and has since spread to other regional Chinese cuisines.
Despite the term literally meaning "fish fragrance" in Chinese, yuxiang contains no seafood and is typically not added to seafood.
On top of the basic mixture, cooking yuxiang almost always includes the use of sugar, vinegar, doubanjiang, soy sauce, and pickled chili peppers.
Preparation
Proper preparation of the yuxiang seasoning includes finely minced pao la jiao (pickled chili), white scallion, ginger, and garlic. They are mixed in more-or-less equal portions, though some prefer to include more scallions than ginger and garlic. The mixture is then fried in oil until fragrant. Water, starch, sugar, and vinegar are then added to create a basic sauce.
Dishes
The sauce is used most often for dishes containing beef, pork, or chicken. It is sometimes used for vegetarian recipes. Barbara Tropp suggests in The Modern Art of Chinese Cooking that the characters can also be interpreted as meaning "Sichuan-Hunan" (渝湘) flavor. Dishes that use yuxiang as the main seasoning have the term affixed to their name. For instance:
Yúxiāngròusī (魚香肉絲): Pork strips stir-fried with yuxiang
Yúxiāngqiézi (魚香茄子): Braised eggplants with yuxiang
Yúxiāngniúnǎn (魚香牛腩): Beef brisket stewed with yuxiang
References
External links
Example of video recipe for Yuxiang sauce with shredded pork by Wang Gang
Example of video recipe for Yuxiang sauce with eggplant by Wang Gang
Example of video recipe for Chongqing-style Yuxiang sauce with shredded pork by Wang Gang
Sichuan cuisine
Food ingredients | Yuxiang | Technology | 405 |
483,877 | https://en.wikipedia.org/wiki/International%20Computer%20Music%20Conference | The International Computer Music Conference (ICMC) is a yearly international conference for computer music researchers and composers. It is the annual conference of the International Computer Music Association (ICMA).
History
In 1986, the Institute of Sonology was moved to the Royal Conservatory of The Hague, which hosted the International Computer Music Conference during the institute's inaugural year there.
Each year there is a specific theme. For example, in 2007, the theme was "Immersed Music" and immersive media. ICMC 2007 took place in Copenhagen. On August 28, there was an "Underwater/Water Concert" at the DGI-byen swimcenter, in the hundred-metre DGI-byen pool, as well as the various other pools of the Vandkulturhuset.
This "Immersed Music" theme of ICMC 2007 explored important issues in musical instrument classification and immersion.
The 40th ICMC (2014) was organised jointly with the 11th Sound and Music Computing Conference in Athens, Greece, 14–20 September 2014.
The 43rd ICMC (2017) took place from 16 to 20 October 2017 in Shanghai, China. The 44th ICMC (2018) took place from 5 to 10 August 2018 in Daegu, South Korea.
See also
List of electronic music festivals
References
External links
CMA history
ICMA – International Computer Music Association
ICMC 2008, 2008 Conference website, hosted by the Sonic Arts Research Centre, Queen's University Belfast
ICMC|SMC 2014, 2014 Conference website, hosted by the National and Kapodistrian University of Athens, the Institute for Research on Music & Acoustics and the Onassis Cultural Center.
ICMC2016, 12–16 September 2016 in Utrecht, the Netherlands
ICMC2018, 5–10 August 2018 in Daegu, Korea
Computer conferences
Computer music
Computer science conferences
Music conferences
Electronic music festivals in the United Kingdom
Electroacoustic music festivals
Music festivals established in 1974 | International Computer Music Conference | Technology | 384 |
18,229,656 | https://en.wikipedia.org/wiki/Microbial%20mat | A microbial mat is a multi-layered sheet or biofilm of microbial colonies, composed of mainly bacteria and/or archaea. Microbial mats grow at interfaces between different types of material, mostly on submerged or moist surfaces, but a few survive in deserts. A few are found as endosymbionts of animals.
Although only a few centimetres thick at most, microbial mats create a wide range of internal chemical environments, and hence generally consist of layers of microorganisms that can feed on or at least tolerate the dominant chemicals at their level and which are usually of closely related species. In moist conditions mats are usually held together by slimy substances secreted by the microorganisms. In many cases some of the bacteria form tangled webs of filaments which make the mat tougher. The best known physical forms are flat mats and stubby pillars called stromatolites, but there are also spherical forms.
Microbial mats are the earliest form of life on Earth for which there is good fossil evidence, from , and have been the most important members and maintainers of the planet's ecosystems. Originally they depended on hydrothermal vents for energy and chemical "food", but the development of photosynthesis allowed mats to proliferate outside of these environments by utilizing a more widely available energy source, sunlight. The final and most significant stage of this liberation was the development of oxygen-producing photosynthesis, since the main chemical inputs for this are carbon dioxide and water.
As a result, microbial mats began to produce the atmosphere we know today, in which free oxygen is a vital component. At around the same time they may also have been the birthplace of the more complex eukaryote type of cell, of which all multicellular organisms are composed. Microbial mats were abundant on the shallow seabed until the Cambrian substrate revolution, when animals living in shallow seas increased their burrowing capabilities and thus broke up the surfaces of mats and let oxygenated water into the deeper layers, poisoning the oxygen-intolerant microorganisms that lived there. Although this revolution drove mats off soft floors of shallow seas, they still flourish in many environments where burrowing is limited or impossible, including rocky seabeds and shores, and hyper-saline and brackish lagoons. They are found also on the floors of the deep oceans.
Because of microbial mats' ability to use almost anything as "food", there is considerable interest in industrial uses of mats, especially for water treatment and for cleaning up pollution.
Description
Microbial mats may also be referred to as algal mats and bacterial mats. They are a type of biofilm that is large enough to see with the naked eye and robust enough to survive moderate physical stresses. These colonies of bacteria form on surfaces at many types of interface, for example between water and the sediment or rock at the bottom, between air and rock or sediment, between soil and bed-rock, etc. Such interfaces form vertical chemical gradients, i.e. vertical variations in chemical composition, which make different levels suitable for different types of bacteria and thus divide microbial mats into layers, which may be sharply defined or may merge more gradually into each other. A variety of microbes are able to transcend the limits of diffusion by using "nanowires" to shuttle electrons from their metabolic reactions up to two centimetres deep in the sediment – for example, electrons can be transferred from reactions involving hydrogen sulfide deeper within the sediment to oxygen in the water, which acts as an electron acceptor.
The best-known types of microbial mat may be flat laminated mats, which form on approximately horizontal surfaces, and stromatolites, stubby pillars built as the microbes slowly move upwards to avoid being smothered by sediment deposited on them by water. However, there are also spherical mats, some on the outside of pellets of rock or other firm material and others inside spheres of sediment.
Structure
A microbial mat consists of several layers, each of which is dominated by specific types of microorganism, mainly bacteria. Although the composition of individual mats varies depending on the environment, as a general rule the by-products of each group of microorganisms serve as "food" for other groups. In effect each mat forms its own food chain, with one or a few groups at the top of the food chain as their by-products are not consumed by other groups. Different types of microorganism dominate different layers based on their comparative advantage for living in that layer. In other words, they live in positions where they can out-perform other groups rather than where they would absolutely be most comfortable — ecological relationships between different groups are a combination of competition and co-operation. Since the metabolic capabilities of bacteria (what they can "eat" and what conditions they can tolerate) generally depend on their phylogeny (i.e. the most closely related groups have the most similar metabolisms), the different layers of a mat are divided both by their different metabolic contributions to the community and by their phylogenetic relationships.
In a wet environment where sunlight is the main source of energy, the uppermost layers are generally dominated by aerobic photosynthesizing cyanobacteria (blue-green bacteria whose color is caused by their having chlorophyll), while the lowest layers are generally dominated by anaerobic sulfate-reducing bacteria. Sometimes there are intermediate (oxygenated only in the daytime) layers inhabited by facultative anaerobic bacteria. For example, in hypersaline ponds near Guerrero Negro (Mexico) various kinds of mats have been explored. There are some mats with a middle purple layer inhabited by photosynthesizing purple bacteria. Some other mats have a white layer inhabited by chemotrophic sulfur oxidizing bacteria and beneath them an olive layer inhabited by photosynthesizing green sulfur bacteria and heterotrophic bacteria. However, this layer structure is not fixed over the course of a day: some species of cyanobacteria migrate to deeper layers in the morning and return in the evening, to avoid intense sunlight and UV radiation at mid-day.
Microbial mats are generally held together and bound to their substrates by slimy extracellular polymeric substances which they secrete. In many cases some of the bacteria form filaments (threads), which tangle and thus increase the colonies' structural strength, especially if the filaments have sheaths (tough outer coverings).
This combination of slime and tangled threads attracts other microorganisms which become part of the mat community, for example protozoa, some of which feed on the mat-forming bacteria, and diatoms, which often seal the surfaces of submerged microbial mats with thin, parchment-like coverings.
Marine mats may grow to a few centimeters in thickness, of which only the top few millimeters are oxygenated.
Types of environment colonized
Underwater microbial mats have been described as layers that live by exploiting and to some extent modifying local chemical gradients, i.e. variations in the chemical composition. Thinner, less complex biofilms live in many sub-aerial environments, for example on rocks, on mineral particles such as sand, and within soil. They have to survive for long periods without liquid water, often in a dormant state. Microbial mats that live in tidal zones, such as those found in the Sippewissett salt marsh, often contain a large proportion of similar microorganisms that can survive for several hours without water.
Microbial mats and less complex types of biofilm are found at temperature ranges from –40 °C to +120 °C, because variations in pressure affect the temperatures at which water remains liquid.
They even appear as endosymbionts in some animals, for example in the hindguts of some echinoids.
Ecological and geological importance
Microbial mats use all of the types of metabolism and feeding strategy that have evolved on Earth—anoxygenic and oxygenic photosynthesis; anaerobic and aerobic chemotrophy (using chemicals rather than sunshine as a source of energy); organic and inorganic respiration and fermentation (i.e., converting food into energy with and without using oxygen in the process); autotrophy (producing food from inorganic compounds) and heterotrophy (producing food only from organic compounds, by some combination of predation and detritivory).
Most sedimentary rocks and ore deposits have grown by a reef-like build-up rather than by "falling" out of the water, and this build-up has been at least influenced and perhaps sometimes caused by the actions of microbes. Stromatolites, bioherms (domes or columns similar internally to stromatolites) and biostromes (distinct sheets of sediment) are among such microbe-influenced build-ups. Other types of microbial mat have created wrinkled "elephant skin" textures in marine sediments, although it was many years before these textures were recognized as trace fossils of mats. Microbial mats have increased the concentration of metal in many ore deposits, and without this it would not be feasible to mine them—examples include iron (both sulfide and oxide ores), uranium, copper, silver and gold deposits.
Role in the history of life
The earliest mats
Microbial mats are among the oldest clear signs of life, as microbially induced sedimentary structures (MISS) formed have been found in western Australia. At that early stage the mats' structure may already have been similar to that of modern mats that do not include photosynthesizing bacteria. It is even possible that non-photosynthesizing mats were present as early as . If so, their energy source would have been hydrothermal vents (high-pressure hot springs around submerged volcanoes), and the evolutionary split between bacteria and archea may also have occurred around this time.
The earliest mats may have been small, single-species biofilms of chemotrophs that relied on hydrothermal vents to supply both energy and chemical "food". Within a short time (by geological standards) the build-up of dead microorganisms would have created an ecological niche for scavenging heterotrophs, possibly methane-emitting and sulfate-reducing organisms that would have formed new layers in the mats and enriched their supply of biologically useful chemicals.
Photosynthesis
It is generally thought that photosynthesis, the biological generation of chemical energy from light, evolved shortly after (3 billion). However an isotope analysis suggests that oxygenic photosynthesis may have been widespread as early as . There are several different types of photosynthetic reaction, and analysis of bacterial DNA indicates that photosynthesis first arose in anoxygenic purple bacteria, while the oxygenic photosynthesis seen in cyanobacteria and much later in plants was the last to evolve.
The earliest photosynthesis may have been powered by infra-red light, using modified versions of pigments whose original function was to detect infra-red heat emissions from hydrothermal vents. The development of photosynthetic energy generation enabled the microorganisms first to colonize wider areas around vents and then to use sunlight as an energy source. The role of the hydrothermal vents was now limited to supplying reduced metals into the oceans as a whole rather than being the main supporters of life in specific locations. Heterotrophic scavengers would have accompanied the photosynthesizers in their migration out of the "hydrothermal ghetto".
The evolution of purple bacteria, which do not produce or use oxygen but can tolerate it, enabled mats to colonize areas that locally had relatively high concentrations of oxygen, which is toxic to organisms that are not adapted to it. Microbial mats could have been separated into oxidized and reduced layers.
Cyanobacteria and oxygen
The last major stage in the evolution of microbial mats was the appearance of cyanobacteria, photosynthesizers which both produce and use oxygen. This gave undersea mats their typical modern structure: an oxygen-rich top layer of cyanobacteria; a layer of photosynthesizing purple bacteria that could tolerate oxygen; and oxygen-free, H2S-dominated lower layers of heterotrophic scavengers, mainly methane-emitting and sulfate-reducing organisms.
It is estimated that the appearance of oxygenic photosynthesis increased biological productivity by a factor of between 100 and 1,000. All photosynthetic reactions require a reducing agent, but the significance of oxygenic photosynthesis is that it uses water as a reducing agent, and water is much more plentiful than the geologically produced reducing agents on which photosynthesis previously depended. The resulting increases in the populations of photosynthesizing bacteria in the top layers of microbial mats would have caused corresponding population increases among the chemotrophic and heterotrophic microorganisms that inhabited the lower layers and which fed respectively on the by-products of the photosynthesizers and on the corpses and / or living bodies of the other mat organisms. These increases would have made microbial mats the planet's dominant ecosystems. From this point onwards life itself would have produced significantly more of the resources it needed than did geochemical processes.
Oxygenic photosynthesis in microbial mats would also have increased the free oxygen content of the Earth's atmosphere, both directly by emitting oxygen and because the mats emitted molecular hydrogen (H2), some of which would have escaped from the Earth's atmosphere before it could re-combine with free oxygen to form more water. Microbial mats thus likely played a major role in the evolution of organisms which could first tolerate free oxygen and then use it as an energy source. Oxygen is toxic to organisms that are not adapted to it, but greatly increases the metabolic efficiency of oxygen-adapted organisms — for example anaerobic fermentation produces a net yield of two molecules of adenosine triphosphate, cells' internal "fuel", per molecule of glucose, while aerobic respiration produces a net yield of 36. The oxygenation of the atmosphere was a prerequisite for the evolution of the more complex eukaryote type of cell, from which all multicellular organisms are built.
Cyanobacteria have the most complete biochemical "toolkits" of all the mat-forming organisms: the photosynthesis mechanisms of both green bacteria and purple bacteria; oxygen production; and the Calvin cycle, which converts carbon dioxide and water into carbohydrates and sugars. It is likely that they acquired many of these sub-systems from existing mat organisms, by some combination of horizontal gene transfer and endosymbiosis followed by fusion. Whatever the causes, cyanobacteria are the most self-sufficient of the mat organisms and were well-adapted to strike out on their own both as floating mats and as the first of the phytoplankton, which forms the basis of most marine food chains.
Origin of eukaryotes
The time at which eukaryotes first appeared is still uncertain: there is reasonable evidence that fossils dated between and represent eukaryotes, but the presence of steranes in Australian shales may indicate that eukaryotes were present . There is still debate about the origins of eukaryotes, and many of the theories focus on the idea that a bacterium first became an endosymbiont of an anaerobic archean and then fused with it to become one organism. If such endosymbiosis was an important factor, microbial mats would have encouraged it. There are two known variations of this scenario:
The boundary between the oxygenated and oxygen-free zones of a mat would have moved up when photosynthesis shut down at night and back down when photosynthesis resumed after the next sunrise. Symbiosis between independent aerobic and anaerobic organisms would have enabled both to live comfortably in the zone that was subject to oxygen "tides", and subsequent endosymbiosis would have made such partnerships more mobile.
The initial partnership may have been between anaerobic archea that required molecular hydrogen (H2) and heterotrophic bacteria that produced it and could live both with and without oxygen.
Life on land
Microbial mats from ~ provide the first evidence of life in the terrestrial realm.
The earliest multicellular animals
The Ediacara biota are the earliest widely accepted evidence of multicellular animals. Most Ediacaran strata with the "elephant skin" texture characteristic of microbial mats contain fossils, and Ediacaran fossils are hardly ever found in beds that do not contain these microbial mats. Adolf Seilacher categorized the animals as: "mat encrusters", which were permanently attached to the mat; "mat scratchers", which grazed the surface of the mat without destroying it; "mat stickers", suspension feeders that were partially embedded in the mat; and "undermat miners", which burrowed underneath the mat and fed on decomposing mat material.
The Cambrian substrate revolution
In the Early Cambrian, however, organisms began to burrow vertically for protection or food, breaking down the microbial mats, and thus allowing water and oxygen to penetrate a considerable distance below the surface and kill the oxygen-intolerant microorganisms in the lower layers. As a result of this Cambrian substrate revolution, marine microbial mats are confined to environments in which burrowing is non-existent or negligible: very harsh environments, such as hyper-saline lagoons or brackish estuaries, which are uninhabitable for the burrowing organisms that broke up the mats; rocky "floors" which the burrowers cannot penetrate; the depths of the oceans, where burrowing activity today is at a similar level to that in the shallow coastal seas before the revolution.
Current status
Although the Cambrian substrate revolution opened up new niches for animals, it was not catastrophic for microbial mats; it did, however, greatly reduce their extent.
Use of microbial mats in paleontology
Most fossils preserve only the hard parts of organisms, e.g. shells. The rare cases where soft-bodied fossils are preserved (the remains of soft-bodied organisms and also of the soft parts of organisms for which only hard parts such as shells are usually found) are extremely valuable because they provide information about organisms that are hardly ever fossilized and much more information than is usually available about those for which only the hard parts are usually preserved. Microbial mats help to preserve soft-bodied fossils by:
Capturing corpses on the sticky surfaces of mats and thus preventing them from floating or drifting away.
Physically protecting them from being eaten by scavengers and broken up by burrowing animals, and protecting fossil-bearing sediments from erosion. For example, the speed of water current required to erode sediment bound by a mat is 20–30 times as great as the speed required to erode a bare sediment.
Preventing or reducing decay both by physically screening the remains from decay-causing bacteria and by creating chemical conditions that are hostile to decay-causing bacteria.
Preserving tracks and burrows by protecting them from erosion. Many trace fossils date from significantly earlier than the body fossils of animals that are thought to have been capable of making them and thus improve paleontologists' estimates of when animals with these capabilities first appeared.
Industrial uses
The ability of microbial mat communities to use a vast range of "foods" has recently led to interest in industrial uses. There have been trials of microbial mats for purifying water, both for human use and in fish farming, and studies of their potential for cleaning up oil spills. As a result of the growing commercial potential, there have been applications for and grants of patents relating to the growing, installation and use of microbial mats, mainly for cleaning up pollutants and waste products.
See also
Biological soil crust
Cambrian substrate revolution
Cyanobacteria
Ediacaran type preservation
Evolutionary history of life
Sippewissett Microbial Mat
Sewage fungus
Notes
References
Seckbach S (2010) Microbial Mats: Modern and Ancient Microorganisms in Stratified Systems. Springer.
External links
– outline of microbial mats and pictures of mats in various situations and at various magnifications.
Archean life
Cambrian life
Microbiology
Fossils
Phanerozoic
Proterozoic life
Evolutionary biology | Microbial mat | Chemistry,Biology | 4,167 |
29,025,999 | https://en.wikipedia.org/wiki/Terbequinil | Terbequinil (SR-25776) is a stimulant and nootropic drug which acts as a partial inverse agonist at benzodiazepine sites on the GABAA receptor. In human trials it was found to partially reverse the sedative and amnestic effects of the hypnotic drug triazolam with only slight effects when administered by itself.
See also
GABAA receptor negative allosteric modulator
GABAA receptor § Ligands
References
2-Quinolones
Ethers
Carboxamides
GABAA receptor negative allosteric modulators | Terbequinil | Chemistry | 125 |
838,719 | https://en.wikipedia.org/wiki/OLAC | OLAC, the Open Language Archives Community, is an initiative to create a unified means of searching online databases of language resources for linguistic research. The information about resources is stored in XML format for easy searching. OLAC was founded in 2000, and is hosted at the Linguistic Data Consortium webserver at the University of Pennsylvania.
OLAC advises on best practices in language archiving, and works to promote interoperation between language archives.
Metadata
The OLAC metadata set is based on the complete set of Dublin Core metadata terms DCMT, but the format allows for the use of extensions to express community-specific qualifiers. It is often contrasted to IMDI (ISLE Metadata Initiative).
Attributes
The OLAC metadata is based on five primary attributes: refine, code, scheme, lang, and langs, although the last attribute is only for completed metadata sets. Each attribute serves a different function and is applicable in a different section of the metadata.
Elements
There are currently 23 different elements that OLAC lists on its metadata page. Elements may be used more than once, and not every element is required in a metadata submission. Each element's entry on the official OLAC page includes the name of the element, its function, notes on its usage, and examples of its coding.
In addition, OLAC provides a list of metadata extensions to augment descriptions.
References
External links
Official website (and alternative server)
Search OLAC
Linguistics organizations
Organizations established in 2000 | OLAC | Technology | 293 |
52,452,333 | https://en.wikipedia.org/wiki/NGC%205927 | NGC 5927 is a globular cluster in the constellation Lupus. NGC 5927 has a diameter of about 12 arcminutes and an apparent magnitude of +8.86. Its Shapley–Sawyer Concentration Class is VIII, and it contains stars of magnitude 15 and dimmer.
The globular cluster was discovered by the astronomer James Dunlop in 1826 and was also observed in 1834 by John Herschel.
NGC 5927 is relatively metal-rich for a globular cluster, and may have multiple generations of stars.
References
External links
NGC 5927
Globular clusters
Lupus (constellation)
5927 | NGC 5927 | Astronomy | 127 |
46,586,008 | https://en.wikipedia.org/wiki/Weyl%27s%20tile%20argument | In philosophy, Weyl's tile argument, introduced by Hermann Weyl in 1949, is an argument against the notion that physical space is "discrete", as if composed of a number of finite sized units or tiles. The argument purports to show a distance function approximating Pythagoras' theorem on a discrete space cannot be defined and, since the Pythagorean theorem has been confirmed to be approximately true in nature, physical space is not discrete. Academic debate on the topic continues, with counterarguments proposed in the literature.
The argument
The tile argument appears in Weyl's 1949 book Philosophy of Mathematics and Natural Science, where he writes:
A demonstration of Weyl's argument proceeds by constructing a square tiling of the plane representing a discrete space. A discretized triangle, n units tall and n units long, can be constructed on the tiling. The hypotenuse of the resulting triangle will be n tiles long. However, by the Pythagorean theorem, a corresponding triangle in a continuous space—a triangle whose height and length are n—will have a hypotenuse measuring n√2 units long. To show that the former result does not converge to the latter for arbitrary values of n, one can examine the percent difference between the two results: (n√2 − n)/(n√2) = 1 − 1/√2 ≈ 29%. Since n cancels out, the two results never converge, even in the limit of large n. The argument can be constructed for more general triangles, but, in each case, the result is the same. Thus, a discrete space does not even approximate the Pythagorean theorem.
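As a quick illustration of the non-convergence just described, the following sketch (Python, added here for illustration and not part of the original argument) computes the relative difference between the tile-counted hypotenuse and the Euclidean one for increasingly fine tilings; it stays fixed at 1 − 1/√2 ≈ 0.29:

```python
import math

# Relative difference between the "tile-counting" hypotenuse (n tiles)
# and the Euclidean hypotenuse (n * sqrt(2)) of an n-by-n right triangle.
for n in (10, 100, 1_000, 1_000_000):
    discrete = n                     # tiles crossed along the diagonal
    continuous = n * math.sqrt(2)    # Pythagorean length
    print(n, round((continuous - discrete) / continuous, 6))
# prints ~0.292893 for every n: refining the tiling does not help
```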
Responses
In response, Kris McDaniel has argued that the Weyl tile argument depends on accepting a "size thesis" which posits that the distance between two points is given by the number of tiles between the two points. However, as McDaniel points out, the size thesis is not accepted for continuous spaces. Thus, we might have reason not to accept the size thesis for discrete spaces.
See also
Digital physics
Discrete calculus
Taxicab metric
Causal sets
Poisson point process
Natura non facit saltus
References
Philosophy of physics
Philosophy of mathematics
Discrete geometry
Spacetime | Weyl's tile argument | Physics,Mathematics | 437 |
3,628,901 | https://en.wikipedia.org/wiki/Vortex%20lift | Vortex lift is that portion of lift due to the action of leading edge vortices. It is generated by wings with highly sweptback, sharp, leading edges (beyond 50 degrees of sweep) or highly-swept wing-root extensions added to a wing of moderate sweep. It is sometimes known as non-linear lift due to its rapid increase with angle of attack and controlled separation lift, to distinguish it from conventional lift which occurs with attached flow.
How it works
Vortex lift works by capturing vortices generated from the sharply swept leading edge of the wing. The vortex, formed roughly parallel to the leading edge of the wing, is trapped by the airflow and remains fixed to the upper surface of the wing. As the air flows around the leading edge, it flows over the trapped vortex and is pulled in and down to generate the lift.
A straight, or moderate sweep, wing may experience, depending on its airfoil section, a leading-edge stall and loss of lift, as a result of flow separation at the leading edge and a non-lifting wake over the top of the wing. However, on a highly-swept wing leading-edge separation still occurs but instead creates a vortex sheet that rolls up above the wing producing spanwise flow beneath. Flow not entrained by the vortex passes over the top of the vortex and reattaches to the wing surface. The vortex generates a high negative pressure field on the top of the wing. Vortex lift increases with angle of attack (AOA) as seen on lift~AOA plots which show the vortex, or unattached flow, adding to the normal attached lift as an extra non-linear component of the overall lift. Vortex lift has a limiting AoA at which the vortex bursts or breaks down.
Applications
Four basic configurations which have used vortex lift are, in chronological order, the 60-degree delta wing; the ogive delta wing with its sharply-swept leading edge at the root; the moderately-swept wing with a leading-edge extension, which is known as a hybrid wing; and the sharp-edge forebody, or vortex-lift strake.
Wings which generate vortex lift have been used on delta-winged research aircraft such as the Convair XF-92A and Fairey Delta 2. Early delta wing fighters such as the F-102, the F-106, and contemporaries such as Dassault's deltas had cambered leading edges that were blunt and did not generate significant vortices. The Concorde supersonic airliner had sharp leading edges. Wings with vortex lift over the inboard section are the moderate-sweep wings with an easily identified LERX used on high-manoeuvrability combat aircraft, such as the Northrop F-5 and McDonnell Douglas F/A-18 Hornet. Vortex lift sharp forebody strakes are used on the General Dynamics F-16 Fighting Falcon.
Benefits and shortcomings
Vortex lift provides high lift with increasing AoA at landing speeds and in manoeuvring flight.
A high AoA needed to meet landing requirements has, in the past, restricted pilot visibility and led to design complications to accommodate a drooping nose, as in the case of the Fairey Delta 2 and Concorde.
For moderate swept wings the addition of a LERX reduces wave drag and improves turning performance and enables a far wider range of flying attitudes.
The use of vortex lift is restricted by vortex breakdown or bursting and an inherent instability in yaw. There is considerable drag, due both to the increased lift production and to the loss of the leading-edge suction that is part of normal attached flow around a leading edge.
Among animals
Animals such as hummingbirds, and bats that eat pollen and nectar, are able to hover. They produce vortex lift with the sharp leading edges of their wings and change their wing shapes and curvatures to create stability in the lift.
See also
Kármán vortex street
Aerodynamics
Crab claw sail
References
Aerodynamics
Vortices | Vortex lift | Chemistry,Mathematics,Engineering | 802 |
13,209,748 | https://en.wikipedia.org/wiki/Adder%20Technology | Adder Technology is a manufacturer of information technology hardware based in Cambridge, England, UK. It is the largest producer of keyboard, video, mouse (KVM) controllers in Europe.
History
The company began in 1984 as Adder Publishing and was rebranded as Adder Technology in 1986. It relocated to Bar Hill in 2012.
Overview
Adder develops hardware-based, remote-management devices sold under the brand 'Adder'. Products include KVM switches (analog and Cat5), KVM over IP, digital signage products, remote office/branch office solutions, and out-of-band management solutions.
Adder Technology has won Deloitte Touche Tohmatsu's "Fast 50" designation in the Deloitte Fast 500 awards for 8 consecutive years. The company has received a Queen's Award for Enterprise.
The company has US, UK, Germany, Netherlands, and Singapore offices and a global distribution network. Some 60% of its production is exported to Europe and the United States. It was founded in 1984 by Adrian Dickens.
References
Computer companies of the United Kingdom
Computer hardware companies
Computer peripheral companies
Companies based in Cambridge
Computer companies established in 1984 | Adder Technology | Technology | 236 |
29,358,961 | https://en.wikipedia.org/wiki/MyDLP | MyDLP is a data loss prevention solution originally available released as free and open source software. Supported data inspection channels include web, mail, instant messaging, file transfer to removable storage devices and printers. The MyDLP development project originally made its source code available under the terms of the GNU General Public License.
MyDLP was one of the first free software projects for data loss prevention, but was acquired by the Comodo Group in May 2014. Comodo has since begun marketing the Enterprise version through its Comodo Security Solutions subsidiary, while the free version has been removed from the website. The open source code has not been updated since early 2014.
Subprojects
As of October 2010, MyDLP included the following subprojects:
MyDLP Network: Network server of the project, which was used for high load network operations such as intercepting TCP connections and hosting MyDLP network services.
MyDLP Endpoint: Remote agent of the project, which ran on endpoint machines in order to inspect end user operations such as copying a file to an external device, printing a document and capturing screenshots.
MyDLP Web UI: Management interface for system administrators to configure MyDLP. It pushed relevant parts of system configuration to both MyDLP Network and MyDLP Endpoint.
Platforms and interfaces
MyDLP Network was mostly written in Erlang, because of its performance on concurrent network operations. Python was also used for a few exceptional cases. This subsystem could run on any platform that supported Erlang and Python.
MyDLP Endpoint was developed for Windows platforms, and it was written in C++ and C#.
MyDLP Web UI was written in PHP and Adobe Flex. It used MySQL in order to store user configurations.
Features
As of October 2010, MyDLP included widespread data loss prevention features such as text extraction from binary formats, an incident management queue, source code detection, and data identification methods for bank account numbers, credit card numbers and several national identification numbers. In addition, features such as data classification through statistical analysis of trained sentences and a Naive Bayes classifier integrated with a native-language processor were claimed to be included.
References
External links
Free security software
Data security
Free software programmed in Erlang
Free software programmed in Java (programming language)
Software using the GNU General Public License | MyDLP | Engineering | 474 |
901,260 | https://en.wikipedia.org/wiki/Close-packing%20of%20equal%20spheres | In geometry, close-packing of equal spheres is a dense arrangement of congruent spheres in an infinite, regular arrangement (or lattice). Carl Friedrich Gauss proved that the highest average density – that is, the greatest fraction of space occupied by spheres – that can be achieved by a lattice packing is
π/(3√2) ≈ 0.74048.
The same packing density can also be achieved by alternate stackings of the same close-packed planes of spheres, including structures that are aperiodic in the stacking direction. The Kepler conjecture states that this is the highest density that can be achieved by any arrangement of spheres, either regular or irregular. This conjecture was proven by T. C. Hales. Highest density is known only for 1, 2, 3, 8, and 24 dimensions.
Many crystal structures are based on a close-packing of a single kind of atom, or a close-packing of large ions with smaller ions filling the spaces between them. The cubic and hexagonal arrangements are very close to one another in energy, and it may be difficult to predict which form will be preferred from first principles.
FCC and HCP lattices
There are two simple regular lattices that achieve this highest average density. They are called face-centered cubic (FCC) (also called cubic close packed) and hexagonal close-packed (HCP), based on their symmetry. Both are based upon sheets of spheres arranged at the vertices of a triangular tiling; they differ in how the sheets are stacked upon one another. The FCC lattice is also known to mathematicians as that generated by the A3 root system.
Cannonball problem
The problem of close-packing of spheres was first mathematically analyzed by Thomas Harriot around 1587, after a question on piling cannonballs on ships was posed to him by Sir Walter Raleigh on their expedition to America.
Cannonballs were usually piled in a rectangular or triangular wooden frame, forming a three-sided or four-sided pyramid. Both arrangements produce a face-centered cubic lattice – with different orientation to the ground. Hexagonal close-packing would result in a six-sided pyramid with a hexagonal base.
The cannonball problem asks which flat square arrangements of cannonballs can be stacked into a square pyramid. Édouard Lucas formulated the problem as the Diophantine equation 1² + 2² + ... + N² = M², or equivalently N(N + 1)(2N + 1)/6 = M², and conjectured that the only solutions are N = 1, M = 1 and N = 24, M = 70. Here N is the number of layers in the pyramidal stacking arrangement and M is the number of cannonballs along an edge in the flat square arrangement.
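A brute-force check of Lucas' equation is straightforward; the sketch below (Python, illustrative only) accumulates the square pyramidal numbers and reports those that are also perfect squares. Within the searched range only (N, M) = (1, 1) and (24, 70) appear, consistent with the conjecture:

```python
import math

# Find all N <= 100000 for which 1^2 + 2^2 + ... + N^2 is a perfect square.
solutions = []
total = 0
for n in range(1, 100_001):
    total += n * n                  # running sum equals n(n+1)(2n+1)/6
    m = math.isqrt(total)
    if m * m == total:
        solutions.append((n, m))
print(solutions)                    # [(1, 1), (24, 70)]
```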
Positioning and spacing
In both the FCC and HCP arrangements each sphere has twelve neighbors. For every sphere there is one gap surrounded by six spheres (octahedral) and two smaller gaps surrounded by four spheres (tetrahedral). The distances to the centers of these gaps from the centers of the surrounding spheres are √(3/2) ≈ 1.22 for the tetrahedral and √2 ≈ 1.41 for the octahedral, when the sphere radius is 1.
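As a numerical check of the two gap distances quoted above (added here for illustration), one can place four unit-radius spheres at the vertices of a regular tetrahedron of edge 2 and six at the vertices of a regular octahedron of edge 2, and measure the distance from each cluster's centroid to a vertex:

```python
import math

# Vertices of a regular tetrahedron and octahedron with edge length 2
# (i.e. centers of touching spheres of radius 1).
tetra = [(0, 0, 0), (2, 0, 0), (1, math.sqrt(3), 0),
         (1, math.sqrt(3) / 3, 2 * math.sqrt(6) / 3)]
octa = [(1, 1, 0), (1, -1, 0), (-1, 1, 0), (-1, -1, 0),
        (0, 0, math.sqrt(2)), (0, 0, -math.sqrt(2))]

def gap_distance(vertices):
    centroid = [sum(c) / len(vertices) for c in zip(*vertices)]
    return math.dist(centroid, vertices[0])

print(round(gap_distance(tetra), 4))  # 1.2247 = sqrt(3/2), tetrahedral void
print(round(gap_distance(octa), 4))   # 1.4142 = sqrt(2), octahedral void
```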
Relative to a reference layer with positioning A, two more positionings B and C are possible. Every sequence of A, B, and C without immediate repetition of the same one is possible and gives an equally dense packing for spheres of a given radius.
The most regular ones are
FCC = ABC ABC ABC... (every third layer is the same)
HCP = AB AB AB AB... (every other layer is the same).
There is an uncountably infinite number of disordered arrangements of planes (e.g. ABCACBABABAC...) that are sometimes collectively referred to as "Barlow packings", after crystallographer William Barlow.
In close-packing, the center-to-center spacing of spheres in the xy plane is a simple honeycomb-like tessellation with a pitch (distance between sphere centers) of one sphere diameter. The distance between sphere centers, projected on the z (vertical) axis, is:
√(2/3) d = (√6/3) d ≈ 0.81650 d,
where d is the diameter of a sphere; this follows from the tetrahedral arrangement of close-packed spheres.
The coordination number of HCP and FCC is 12 and their atomic packing factors (APFs) are equal to the number mentioned above, 0.74.
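A short check of the 0.74 figure (a standard derivation, added here for illustration): the conventional FCC cubic unit cell contains four spheres of radius r and has edge length a = 2√2 r, so
APF = [4 × (4/3)πr³] / (2√2 r)³ = π/(3√2) ≈ 0.74048.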
Lattice generation
When forming any sphere-packing lattice, the first fact to notice is that whenever two spheres touch, a straight line may be drawn from the center of one sphere to the center of the other, passing through the point of contact. The distance between the centers along the shortest path, namely that straight line, will therefore be r1 + r2, where r1 is the radius of the first sphere and r2 is the radius of the second. In close packing all of the spheres share a common radius, r. Therefore, two centers would simply have a distance of 2r.
Simple HCP lattice
To form an A-B-A-B-... hexagonal close packing of spheres, the coordinate points of the lattice will be the spheres' centers. Suppose, the goal is to fill a box with spheres according to HCP. The box would be placed on the x-y-z coordinate space.
First form a row of spheres. The centers will all lie on a straight line. Their x-coordinates will vary by 2r, since the distance between the centers of touching spheres is 2r. The y-coordinate and z-coordinate will be the same. For simplicity, say that the balls are the first row and that their y- and z-coordinates are simply r, so that their surfaces rest on the zero-planes. Coordinates of the centers of the first row will look like (2r, r, r), (4r, r, r), (6r, r, r), (8r, r, r), ... .
Now, form the next row of spheres. Again, the centers will all lie on a straight line with x-coordinate differences of 2r, but there will be a shift of distance r in the x-direction so that the center of every sphere in this row aligns with the x-coordinate of where two spheres touch in the first row. This allows the spheres of the new row to slide in closer to the first row until all spheres in the new row are touching two spheres of the first row. Since the new spheres touch two spheres, their centers form an equilateral triangle with those two neighbors' centers. The side lengths are all 2r, so the height or y-coordinate difference between the rows is √3r. Thus, this row will have coordinates like this:
(r, r + √3r, r), (3r, r + √3r, r), (5r, r + √3r, r), (7r, r + √3r, r), ... .
The first sphere of this row only touches one sphere in the original row, but its location follows suit with the rest of the row.
The next row follows this pattern of shifting the x-coordinate by r and the y-coordinate by √3r. Add rows until reaching the x and y maximum borders of the box.
In an A-B-A-B-... stacking pattern, the odd numbered planes of spheres will have exactly the same coordinates save for a pitch difference in the z-coordinates and the even numbered planes of spheres will share the same x- and y-coordinates. Both types of planes are formed using the pattern mentioned above, but the starting place for the first row's first sphere will be different.
Using the plane described precisely above as plane #1, the A plane, place a sphere on top of this plane so that it lies touching three spheres in the A-plane. The three spheres are all already touching each other, forming an equilateral triangle, and since they all touch the new sphere, the four centers form a regular tetrahedron. All of the sides are equal to 2r because all of the sides are formed by two spheres touching. The height of this tetrahedron, which is the z-coordinate difference between the two "planes", is 2√6r/3 (about 1.633r). This, combined with offsets of r in the x-coordinate and √3r/3 in the y-coordinate, gives the centers of the first row in the B plane:
(3r, r + √3r/3, r + 2√6r/3), (5r, r + √3r/3, r + 2√6r/3), (7r, r + √3r/3, r + 2√6r/3), ... .
The second row's coordinates follow the pattern first described above and are:
(2r, r + 4√3r/3, r + 2√6r/3), (4r, r + 4√3r/3, r + 2√6r/3), (6r, r + 4√3r/3, r + 2√6r/3), ... .
The shift to the next plane, the second A plane, is again 2√6r/3 in the z-direction, together with shifts in x and y that bring the centers back to the x- and y-coordinates of the first A plane.
In general, the coordinates of sphere centers can be written as:
x = [2i + ((j + k) mod 2)] r,  y = √3 [j + (1/3)(k mod 2)] r,  z = (2√6/3) k r,
where i, j and k are indices starting at 0 for the x-, y- and z-coordinates.
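The indexing formula above translates directly into code. The following sketch (Python, illustrative only) generates HCP sphere centers and confirms that nearest-neighbour centers sit exactly one diameter (2r) apart:

```python
import math

def hcp_centers(r, ni, nj, nk):
    """Sphere centers of an A-B-A-B... (HCP) packing; i, j, k start at 0."""
    centers = []
    for k in range(nk):
        for j in range(nj):
            for i in range(ni):
                x = (2 * i + ((j + k) % 2)) * r
                y = math.sqrt(3) * (j + (1 / 3) * (k % 2)) * r
                z = (2 * math.sqrt(6) / 3) * k * r
                centers.append((x, y, z))
    return centers

points = hcp_centers(1.0, 4, 4, 4)
nearest = min(math.dist(points[0], p) for p in points[1:])
print(round(nearest, 6))  # 2.0, i.e. one sphere diameter
```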
Miller indices
Crystallographic features of HCP systems, such as vectors and atomic plane families, can be described using a four-value Miller index notation (hkil) in which the third index i denotes a degenerate but convenient component which is equal to −h − k. The h, i and k index directions are separated by 120°, and are thus not orthogonal; the l component is mutually perpendicular to the h, i and k index directions.
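Because the third index is redundant (i = −h − k), conversion from the usual three-index notation is mechanical; a minimal sketch (Python, illustrative only):

```python
def miller_bravais(h, k, l):
    """Convert a three-index Miller triple (hkl) to the four-index
    Miller-Bravais form (h k i l), where i = -(h + k)."""
    return (h, k, -(h + k), l)

print(miller_bravais(1, 0, 0))   # (1, 0, -1, 0)
print(miller_bravais(1, 1, 0))   # (1, 1, -2, 0)
```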
Filling the remaining space
The FCC and HCP packings are the densest known packings of equal spheres with the highest symmetry (smallest repeat units).
Denser sphere packings are known, but they involve unequal sphere packing.
A packing density of 1, filling space completely, requires non-spherical shapes, such as honeycombs.
Replacing each contact point between two spheres with an edge connecting the centers of the touching spheres produces tetrahedrons and octahedrons of equal edge lengths.
The FCC arrangement produces the tetrahedral-octahedral honeycomb.
The HCP arrangement produces the gyrated tetrahedral-octahedral honeycomb.
If, instead, every sphere is augmented with the points in space that are closer to it than to any other sphere, the duals of these honeycombs are produced: the rhombic dodecahedral honeycomb for FCC, and the trapezo-rhombic dodecahedral honeycomb for HCP.
Spherical bubbles appear in soapy water in a FCC or HCP arrangement when the water in the gaps between the bubbles drains out. This pattern also approaches the rhombic dodecahedral honeycomb or trapezo-rhombic dodecahedral honeycomb. However, such FCC or HCP foams of very small liquid content are unstable, as they do not satisfy Plateau's laws. The Kelvin foam and the Weaire–Phelan foam are more stable, having smaller interfacial energy in the limit of a very small liquid content.
There are two types of interstitial holes left by hcp and fcc conformations: tetrahedral and octahedral voids. Four spheres surround the tetrahedral hole, with three spheres being in one layer and one sphere from the next layer. Six spheres surround an octahedral void, with three spheres coming from one layer and three spheres coming from the next layer. Structures of many simple chemical compounds, for instance, are often described in terms of small atoms occupying tetrahedral or octahedral holes in close-packed systems that are formed from larger atoms.
Layered structures are formed by alternating empty and filled octahedral planes. Two octahedral layers usually allow for four structural arrangements that can be filled by either hcp or fcc packing systems. In filling tetrahedral holes, a complete filling leads to a face-centered cubic array. In unit cells, hole filling can sometimes lead to polyhedral arrays with a mix of hcp and fcc layering.
See also
Cubic crystal system
Hermite constant
Random close pack
Sphere packing
Cylinder sphere packing
Notes
External links
P. Krishna & D. Pandey, "Close-Packed Structures" International Union of Crystallography by University College Cardiff Press. Cardiff, Wales. PDF
Discrete geometry
Crystallography
Packing problems
Spheres | Close-packing of equal spheres | Physics,Chemistry,Materials_science,Mathematics,Engineering | 2,367 |
34,381,785 | https://en.wikipedia.org/wiki/Kepler-42 | Kepler-42, formerly known as KOI-961, is a red dwarf located in the constellation Cygnus and approximately 131 light years from the Sun. It has three known extrasolar planets, all of which are smaller than Earth in radius and orbit very close to the star.
Characteristics
Kepler-42's mass is estimated to be 0.13 times that of the Sun, and its radius 0.17 times that of the Sun, just 1.7 times that of the gas giant Jupiter. Due to its small radius and hence surface area, the luminosity of Kepler-42 is only 0.24% of that of the Sun. Its metallicity is one third of the Sun's. Kepler-42 has an appreciable proper motion of up to 431±8 mas/yr. Due to its small size and low temperature, the star's habitable zone is much closer to the star than Earth is to the Sun.
Planetary system
The planetary system comprising three transiting planets was discovered in February 2011 and confirmed on 10 January 2012, using the Kepler Space Telescope. These planets' radii range from approximately those of Mars to Venus. The Kepler-42 system is only the second known system containing planets of Earth's radius or smaller (the first was the Kepler-20 system). These planets' orbits are also compact, making the system (whose host star itself has a radius comparable to those of some hot Jupiters) resemble the moon systems of giant planets such as Jupiter or Saturn more than it does the Solar System. Despite these planets' small size and the star's being one of the faintest stars in the Kepler field with confirmed planets, the detection of these planets was possible due to the small size of the star, causing these planets to block a larger proportion of starlight during their transits.
Not all of the orbital parameters of the system are known. For example, as with all transiting planets that have not had their properties established by means of other methods such as the radial velocity method, the orbital eccentricity remains unknown.
Based on the orbits of the planets and the luminosity and effective temperature of the host star, the equilibrium temperatures of the planets can be calculated. Assuming an extremely high albedo of 0.9 and absence of greenhouse effect, the outer planet Kepler-42 d would have an equilibrium temperature of about , similar to Earth's . Estimates for the known planets are in the tables below:
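The calculation outlined above can be sketched as follows (Python). The albedo of 0.9 and the stellar radius of 0.17 solar radii come from this article; the stellar temperature, orbital distance and solar-radius conversion used here are illustrative assumptions, not values quoted in the article:

```python
def equilibrium_temperature(t_star, r_star, a, albedo):
    """Planetary equilibrium temperature with no greenhouse effect:
    T_eq = T_star * sqrt(R_star / (2 a)) * (1 - albedo) ** 0.25.
    R_star and a must be given in the same units."""
    return t_star * (r_star / (2 * a)) ** 0.5 * (1 - albedo) ** 0.25

R_SUN_AU = 0.00465  # approximate solar radius in astronomical units
# Assumed illustrative values: a ~3000 K star, planet at 0.015 AU, albedo 0.9.
print(round(equilibrium_temperature(3000, 0.17 * R_SUN_AU, 0.015, 0.9)))
# ~274 K with these assumed numbers
```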
Notes
References
M-type main-sequence stars
Cygnus (constellation)
Planetary systems with three confirmed planets
Planetary transit variables
961
J19285255+4437096
TIC objects | Kepler-42 | Astronomy | 545 |
41,578,185 | https://en.wikipedia.org/wiki/Broadly%20neutralizing%20HIV-1%20antibodies | Broadly neutralizing HIV-1 antibodies (bNAbs) are neutralizing antibodies which neutralize multiple HIV-1 viral strains. bNAbs are unique in that they target conserved epitopes of the virus, meaning the virus may mutate, but the targeted epitopes will still exist. In contrast, non-bNAbs are specific for individual viral strains with unique epitopes. The discovery of bNAbs has led to an important area of research, namely, discovery of a vaccine, not only limited to HIV, but also other rapidly mutating viruses like influenza.
Characteristics
The following table shows the characteristics of various HIV-1 bNAbs
In addition to targeting conserved epitopes, bNAbs are known to have long variable regions on their immunoglobulin (Ig) isotypes and subclasses. When compared to non-bNAbs, sequence variability from the germline immunoglobulin isotype is sevenfold. This implies that bNAbs develop from intense affinity maturation in the germinal centers, which explains the high sequence variability of the variable Ig domain. Indeed, HIV-1 patients who develop bNAbs have been shown to have high germinal center activity as exhibited by their comparatively higher levels of plasma CXCL13, which is a biomarker of germinal center activity.
Online databases like bNAber (now defunct) and LANL constantly report and update the discovery of new HIV bNAbs.
History of HIV bNAbs
In 1990, researchers identified the first HIV bNAb, far more powerful than any antibody seen before. They described the exact viral component, or epitope that triggered the antibody. Six amino acids at the tip of HIV's surface protein, gp120, were responsible. The first bNAb turned out to be clinically irrelevant, but in 1994 another team isolated a bNAb that worked on cells taken from patients. This antibody attached to a "conserved" portion of gp120 that outlasts many of its mutations, affecting 17/24 tested strains at low doses. Another bNAb was discovered that acted on protein gp41 across many strains. Antibodies require antigens to trigger them and these were not originally identified.
Over time more bNAbs were isolated, while single cell antibody cloning made it possible to produce large quantities of the antibodies for study. Low levels of bNAbs are now found in up to 25% of HIV patients. bNAbs evolve over years, accumulating some three times as many mutations as other antibodies.
By 2006, researchers had identified a few so-called "broadly neutralizing antibodies" (bNAbs) that worked on multiple HIV strains. They analyzed 1800 blood samples from HIV-infected people from Africa, South Asia and the English-speaking world. They individually probed 30,000 of one woman's antibody-producing B cells and isolated two that were able to stop more than 70% of 162 divergent HIV strains from establishing an infection. Since 2009, researchers have identified more than 50 HIV bNAbs.
In 2006, a Malawian man joined a study within weeks of becoming infected. Over a year, he repeatedly donated blood, which researchers used to create a timeline of changes in his virus' gp120, his antibody response and the ultimate emergence of a bNAb. Researchers want to direct this evolution in other subjects to achieve similar results. A screen of massive gp120 libraries led to one that strongly bound both an original antibody and the mature bNAb that evolved from it. Giving patients a modified gp120 that contains little more than the epitope that both antibodies target could act to "prime" the immune system, followed by a booster that contains trimer spikes in the most natural configuration possible. However, it is still under study whether bNAbs could prevent HIV infection.
In 2009, researchers isolated and characterized the first HIV bNAbs seen in a decade. The two broadest neutralizers were PGT151 and PGT152. They could block about two-thirds of a large panel of HIV strains. Unlike most other bNAbs, these antibodies do not bind to known epitopes, on Env or on Env's subunits (gp120 or gp41). Instead, they attach to parts of both. Gp120 and gp41 assemble as a trimer. The bNAbs binding site occurs only on the trimer structure, the form of Env that invades host cells.
Recent years have seen an increase in HIV-1 bNAb discovery.
See also
HIV vaccine development
UB-421
References
External links
bNAber, Database of Broadly Neutralizing HIV-1 Antibodies (bNAbs)
LANL Antibody Database
Molecular biology
Immunology | Broadly neutralizing HIV-1 antibodies | Chemistry,Biology | 978 |
28,915,919 | https://en.wikipedia.org/wiki/Spiroindolone | The spiroindolones are a class of compounds in which an indolone ring is substituted with another ring in a spiro arrangement.
Alkaloids in this class include horsfiline, rhynchophylline, gelsemine, carapanaubine, and maremycin E.
Spiroindolones are also an emerging class of antimalarial drugs whose mode of action is to inhibit protein synthesis in the target parasite. The most advanced compound in this class in terms of drug development is cipargamin (NITD609).
References
Antimalarial agents
Spiro compounds | Spiroindolone | Chemistry | 128 |
396,631 | https://en.wikipedia.org/wiki/Brian%20Josephson | Brian David Josephson (born 4 January 1940) is a Welsh physicist and was professor emeritus of physics at the University of Cambridge. Best known for his pioneering work on superconductivity and quantum tunnelling, he shared the 1973 Nobel Prize in Physics with Leo Esaki and Ivar Giaever for his discovery of the Josephson effect, made in 1962 when he was a 22 year-old PhD student at Cambridge.
Josephson has spent his academic career as a member of the Theory of Condensed Matter group at Cambridge's Cavendish Laboratory. He has been a fellow of Trinity College, Cambridge since 1962, and served as professor of physics from 1974 until 2007.
In the early 1970s, Josephson took up transcendental meditation and turned his attention to issues outside the boundaries of mainstream science. He set up the Mind–Matter Unification Project at Cavendish to explore the idea of intelligence in nature, the relationship between quantum mechanics and consciousness, and the synthesis of science and Eastern mysticism, broadly known as quantum mysticism. He has expressed support for topics such as parapsychology, water memory and cold fusion, which has made him a focus of criticism from fellow scientists.
Early life and career
Education
Josephson was born in Cardiff, Wales, on 4 January 1940 to Jewish parents, Mimi (née Weisbard, 1911–1998) and Abraham Josephson. He attended Cardiff High School, where he credits some of the school masters for having helped him, particularly the physics master, Emrys Jones, who introduced him to theoretical physics. In 1957, he went up to Cambridge, where he initially read mathematics at Trinity College, Cambridge. After completing Maths Part II in two years, and finding it somewhat sterile, he decided to switch to physics.
Josephson was known at Cambridge as a brilliant but shy student. Physicist John Waldram recalled overhearing Nicholas Kurti, an examiner from Oxford, discuss Josephson's exam results with David Shoenberg, reader in physics at Cambridge, and asking: "Who is this chap Josephson? He seems to be going through the theory like a knife through butter." While still an undergraduate, he published a paper on the Mössbauer effect, pointing out a crucial issue other researchers had overlooked. According to one eminent physicist speaking to Physics World, Josephson wrote several papers important enough to assure him a place in the history of physics even without his discovery of the Josephson effect.
He graduated in 1960 and became a research student in the university's Mond Laboratory on the old Cavendish site, where he was supervised by Brian Pippard. American physicist Philip Anderson, also a future Nobel Prize laureate, spent a year in Cambridge in 1961–1962, and recalled that having Josephson in a class was "a disconcerting experience for a lecturer, I can assure you, because everything had to be right or he would come up and explain it to me after class." It was during this period, as a PhD student in 1962, that he carried out the research that led to his discovery of the Josephson effect; the Cavendish Laboratory unveiled a plaque on the Mond Building dedicated to the discovery in November 2012. He was elected a fellow of Trinity College in 1962, and obtained his PhD in 1964 for a thesis entitled Non-linear conduction in superconductors.
Discovery of the Josephson effect
Josephson was 22 years old when he did the work on quantum tunnelling that won him the Nobel Prize. He discovered that a supercurrent could tunnel through a thin barrier, predicting, according to physicist Andrew Whitaker, that "at a junction of two superconductors, a current will flow even if there is no drop in voltage; that when there is a voltage drop, the current should oscillate at a frequency related to the drop in voltage; and that there is a dependence on any magnetic field." This became known as the Josephson effect and the junction as a Josephson junction.
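For reference, the standard textbook form of the two Josephson relations behind these predictions (added here for context, not quoted from Josephson's paper) is
I = I_c sin φ and dφ/dt = 2eV/ħ,
so a junction held at a constant voltage V carries an alternating current of frequency f = 2eV/h, approximately 483.6 MHz per microvolt of applied voltage.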
His calculations were published in Physics Letters (chosen by Pippard because it was a new journal) in a paper entitled "Possible new effects in superconductive tunnelling," received on 8 June 1962 and published on 1 July. They were confirmed experimentally by Philip Anderson and John Rowell of Bell Labs in Princeton; this appeared in their paper, "Probable Observation of the Josephson Superconducting Tunneling Effect," submitted to Physical Review Letters in January 1963.
Before Anderson and Rowell confirmed the calculations, the American physicist John Bardeen, who had shared the 1956 Nobel Prize in Physics (and who shared it again in 1972), objected to Josephson's work. He submitted an article to Physical Review Letters on 25 July 1962, arguing that "there can be no such superfluid flow." The disagreement led to a confrontation in September that year at Queen Mary College, London, at the Eighth International Conference on Low Temperature Physics. When Bardeen (then one of the most eminent physicists in the world) began speaking, Josephson (still a student) stood up and interrupted him. The men exchanged views, reportedly in a civil and soft-spoken manner.
Whitaker writes that the discovery of the Josephson effect led to "much important physics," including the invention of SQUIDs (superconducting quantum interference devices), which are used in geology to make highly sensitive measurements, as well as in medicine and computing. IBM used Josephson's work in 1980 to build a prototype of a computer that would be up to 100 times faster than the IBM 3033 mainframe.
Nobel Prize
Josephson was awarded several important prizes for his discovery, including the 1969 Research Corporation Award for outstanding contributions to science, and the Hughes Medal and Holweck Prize in 1972. In 1973 he won the Nobel Prize in Physics, sharing the $122,000 award with two other scientists who had also worked on quantum tunnelling. Josephson was awarded half the prize "for his theoretical predictions of the properties of a supercurrent through a tunnel barrier, in particular those phenomena which are generally known as the Josephson effects". The other half of the award was shared equally by Japanese physicist Leo Esaki of the Thomas Watson Research Center in Yorktown, New York, and Norwegian-American physicist Ivar Giaever of General Electric in Schenectady, New York.
Positions held
Josephson spent a postdoctoral year in the United States (1965–1966) as research assistant professor at the University of Illinois at Urbana–Champaign. After returning to Cambridge, he was made assistant director of research at the Cavendish Laboratory in 1967, where he remained a member of the Theory of Condensed Matter group, a theoretical physics group, for the rest of his career. He was elected a Fellow of the Royal Society (FRS) in 1970, and the same year was awarded a National Science Foundation fellowship by Cornell University, where he spent one year. In 1972 he became a reader in physics at Cambridge and in 1974 a full professor, a position he held until he retired in 2007.
A practitioner of Transcendental Meditation (TM) since the early seventies, Josephson became a visiting faculty member in 1975 of the Maharishi European Research University in the Netherlands, part of the TM movement. He also held visiting professorships at Wayne State University in 1983, the Indian Institute of Science, Bangalore in 1984, and the University of Missouri-Rolla in 1987.
Parapsychology
Early interest and Transcendental Meditation
Josephson became interested in philosophy of mind in the late sixties and, in particular, in the mind–body problem, and is one of the few scientists to argue that parapsychological phenomena (telepathy, psychokinesis and other paranormal themes) may be real. In 1971, he began practising Transcendental Meditation (TM), which had been taken up by several celebrities, including the Beatles.
Winning the Nobel Prize in 1973 gave him the freedom to work in less orthodox areas, and he became increasingly involved – including during science conferences, to the irritation of fellow scientists – in talking about meditation, telepathy and higher states of consciousness. In 1974, he angered scientists during a colloquium of molecular and cellular biologists in Versailles by inviting them to read the Bhagavad Gita (5th – 2nd century BCE) and the work of Maharishi Mahesh Yogi, the founder of the TM movement, and by arguing about special states of consciousness achieved through meditation. "Nothing forces us," one scientist shouted at him, "to listen to your wild speculations." Biophysicist Henri Atlan wrote that the session ended in uproar.
In May that year, Josephson addressed a symposium held to welcome the Maharishi to Cambridge. The following month, at the first Canadian conference on psychokinesis, he was one of 21 scientists who tested claims by Matthew Manning, a Cambridgeshire teenager who said he had psychokinetic abilities; Josephson apparently told a reporter that he believed Manning's powers were a new kind of energy. He later withdrew or corrected the statement.
Josephson said that Trinity College's tradition of interest in the paranormal meant that he did not dismiss these ideas out of hand. Several presidents of the Society for Psychical Research had been fellows of Trinity, and the Perrott-Warrick Fund, set up in Trinity in 1937 to fund parapsychology research, is still administered by the college. He continued to explore the idea that there is intelligence in nature, particularly after reading Fritjof Capra's The Tao of Physics (1975), and in 1979 took up a more advanced form of TM, known as the TM-Sidhi program. According to Anderson, the TM movement produced a poster showing Josephson levitating several inches above the floor. Josephson argued that meditation could lead to mystical and scientific insights, and that, as a result of it, he had come to believe in a creator.
Fundamental Fysiks Group
Josephson became involved in the mid-1970s with a group of physicists associated with the Lawrence Berkeley Laboratory at the University of California, Berkeley, who were investigating paranormal claims. They had organized themselves loosely into the Fundamental Fysiks Group, and had effectively become the Stanford Research Institute's (SRI) "house theorists," according to historian of science David Kaiser. Core members in the group were Elizabeth Rauscher, George Weissmann, John Clauser, Jack Sarfatti, Saul-Paul Sirag, Nick Herbert, Fred Alan Wolf, Fritjof Capra, Henry Stapp, Philippe Eberhard and Gary Zukav.
There was significant government interest at the time in quantum mechanics – the American government was financing research at SRI into telepathy – and physicists able to understand it found themselves in demand. The Fundamental Fysiks Group used ideas from quantum physics, particularly Bell's theorem and quantum entanglement, to explore issues such as action at a distance, clairvoyance, precognition, remote viewing and psychokinesis.
In 1976, Josephson travelled to California at the invitation of one of the Fundamental Fysiks Group members, Jack Sarfatti, who introduced him to others including laser physicists Russell Targ and Harold Puthoff, and quantum physicist Henry Stapp. The San Francisco Chronicle covered Josephson's visit.
Josephson co-organized a symposium on consciousness at Cambridge in 1978, publishing the proceedings as Consciousness and the Physical World (1980), with neuroscientist V. S. Ramachandran. A conference on "Science and Consciousness" followed a year later in Cordoba, Spain, attended by physicists and Jungian psychoanalysts, and addressed by Josephson, Fritjof Capra and David Bohm (1917–1992).
By 1996, he had set up the Mind–Matter Unification Project at the Cavendish Laboratory to explore intelligent processes in nature. In 2002, he told Physics World: "Future science will consider quantum mechanics as the phenomenology of particular kinds of organised complex system. Quantum entanglement would be one manifestation of such organisation, paranormal phenomena another."
Reception and views on the scientific community
Josephson delivered the Pollock Memorial Lecture in 2006, the Hermann Staudinger Lecture in 2009 and the Sir Nevill Mott Lecture in 2010.
Matthew Reisz wrote in Times Higher Education in 2010 that Josephson has long been one of physics' "more colourful figures." His support for unorthodox causes has attracted criticism from fellow scientists since the 1970s, including from Philip Anderson. Josephson regards the criticism as prejudice, and believes that it has served to deprive him of an academic support network.
He has repeatedly criticized "science by consensus," arguing that the scientific community is too quick to reject certain kinds of ideas. "Anything goes among the physics community – cosmic wormholes, time travel," he argues, "just so long as it keeps its distance from anything mystical or New Age-ish." Referring to this position as "pathological disbelief," he holds it responsible for the rejection by academic journals of papers on the paranormal. He has compared parapsychology to the theory of continental drift, proposed in 1912 by Alfred Wegener (1880–1930) to explain observations that were otherwise inexplicable, which was resisted and ridiculed until evidence led to its acceptance after Wegener's death.
Science writer Martin Gardner criticized Josephson in 1980 for complaining to The New York Review of Books, along with three other physicists, about an article by J. A. Wheeler that ridiculed parapsychology. Several physicists complained in 2001 when, in a Royal Mail booklet celebrating the Nobel Prize's centenary, Josephson wrote that Britain was at the forefront of research into telepathy. Physicist David Deutsch said the Royal Mail had "let itself be hoodwinked" into supporting nonsense, although another physicist, Robert Matthews, suggested that Deutsch was skating on thin ice given the latter's own work on parallel universes and time travel.
In 2004, Josephson criticized an experiment by the Committee for Skeptical Inquiry to test claims by Russian schoolgirl Natasha Demkina that she could see inside people's bodies using a special kind of vision. The experiment involved her being asked to match six people to their confirmed medical conditions (plus one with none); to pass the test she had to make five correct matches, but made only four. Josephson argued that this was statistically significant, and that the experiment had set her up to fail. One of the researchers, Richard Wiseman, professor of psychology at the University of Hertfordshire, responded by highlighting that the conditions of the experiment had been agreed to before it started, and the potential significance of her claims warranted a higher than normal bar. Keith Rennolis, professor of applied statistics at the University of Greenwich, supported Josephson's position, asserting that the experiment was "woefully inadequate" to determine any effect.
Josephson's reputation for promoting unorthodox causes was cemented by his support for the ideas of water memory and cold fusion, both of which are rejected by mainstream scientists. Water memory is purported to provide a possible explanation for homeopathy; it is dismissed by a majority of scientists as pseudoscience, although Josephson has expressed support for it since attending a conference at which French immunologist Jacques Benveniste first proposed it. Cold fusion is the hypothesis that nuclear reactions can occur at room temperature. When Martin Fleischmann, the British chemist who pioneered research into it, died in 2012, Josephson wrote a supportive obituary in the Guardian, and had published in Nature a letter complaining that its obituary had failed to give Fleischmann due credit. Antony Valentini of Imperial College London withdrew Josephson's invitation to a 2010 conference on the de Broglie-Bohm theory because of his work on the paranormal, although it was reinstated after complaints.
Josephson's defense of paranormal claims and of cold fusion have led him to being described as an exemplar of a sufferer of the hypothetical Nobel disease.
Awards
£1,000 New Scientist prize, 1969
Research Corporation Award for outstanding contributions to science, 1969
Elected a Fellow of the Royal Society (FRS) in 1970
Fritz London Memorial Prize, 1970
Guthrie Medal (Institute of Physics), 1972
Van der Pol medal, International Union of Radio Science, 1972
Elliott Cresson Medal (Franklin Institute), 1972
Hughes Medal, 1972
Holweck Prize (Institute of Physics and French Institute of Physics), 1972
Nobel Prize in Physics, 1973
Honorary doctorate, University of Wales, 1974
Faraday Medal (Institution of Electrical Engineers), 1982
Honorary doctorate, University of Exeter, 1983
Sir George Thomson (Institute of Measurement and Control), 1984
Selected works
(2012). "Biological Observer-Participation and Wheeler's 'Law without Law'," in Plamen L. Simeonov, Leslie S. Smith and Andrée C. Ehresmann (eds.), Integral Biomathics, Springer, pp. 244–252.
(2005). "Foreword," in Michael A. Thalbourne and Lance Storm (eds.), Parapsychology in the Twenty-First Century, McFarland, pp. 1–2.
(2003). "We Think That We Think Clearly, But That's Only Because We Don't Think Clearly," in Patrick Colm Hogan and Lalita Pandit (eds.), Rabindranath Tagore: Universality and Tradition, Fairleigh Dickinson University Press, pp. 107–115.
(2003). "String Theory, Universal Mind, and the Paranormal", arXiv, physics.gen-ph, 2 December 2003.
(2002). "Beyond quantum theory: A realist psycho-biological interpretation of reality’ revisited", Biosystems, 64(1–3), January, pp. 43–45.
(2000). "Positive bias to paranormal claims", Physics World, October.
(1999). "What is truth?, Physics World, February.
(1997). "Skeptics cornered", Physics World, September.
(1997). "What is Music a Language For?" in Paavo Pylkkänen, Pauli Pylkkö, and Antti Hautamäki (eds.), Brain, Mind and Physics, IOS Press, pp. 262–265.
(1996). "Consciously avoiding the X-factor", Physics World, December.
with Jessica Utts (1996). "Do you believe in psychic phenomena? Are they likely to be able to explain consciousness?", Times Higher Education, 8 April.
with Tethys Carpenter (1996). "What can Music tell us about the Nature of the Mind? A Platonic Model," in Stuart R. Hameroff, Alfred W. Kaszniak and Alwyn Scott (eds.), Toward a Science of Consciousness, MIT Press, pp. 691–694.
with Colm Wall and Anthony Clark (1995). "Light Barrier", New Scientist, 29 April.
(1994). "Awkward Eclipse", New Scientist, 17 December.
(1994). "BBC 'Heretic' series", Times Higher Education Supplement, 12 August.
with Beverly A. Rubik (1992). "The challenge of consciousness research", Frontier Perspectives, 3(1), pp. 15–19.
with Fotini Pallikari-Viras (1991). "Biological Utilization of Quantum Nonlocality", Foundations of Physics, 21(2), pp. 197–207 (also available here).
(1990). "The History of the Discovery of Weakly Coupled Superconductors," in John Roche (ed.), Physicists Look Back: Studies in the History of Physics, CRC Press, p. 375.
(1988). "Limits to the universality of quantum mechanics", Foundations of Physics, 18(12), December, pp. 1195–1204.
with M. Conrad and D. Home (1987). "Beyond Quantum Theory: A Realist Psycho-Biological Interpretation of Physical Reality," in Alwyn van der Merwe, Franco Selleri and Gino Tarozzi (eds.), Microphysical Reality and Quantum Formalism, Springer, 1987, p. 285ff.
with D.E. Broadbent (1981). "Perceptual Experiments and Language Theories", Philosophical Transactions of the Royal Society, 295(10772), October, pp. 375–385.
with H. M. Hauser (1981). "Multistage Acquisition of Intelligent Behaviour" , Kybernetes, 10(1).
with V. S. Ramachandran (eds.) (1980). Consciousness and the Physical World, Pergamon Press.
with Richard D. Mattuck, Evan Harris Walker and Olivier Costa de Beauregard (1980). "Parapsychology: An Exchange", New York Review of Books, 27, 26 June, pp. 48–51.
(1979). "Foreword," in Andrija Puharich (ed.), The Iceland Papers: Select Papers on Experimental and Theoretical Research on the Physics of Consciousness, Essentia Research Associates.
(1978). "A Theoretical Analysis of Higher States of Consciousness and Meditation", Current Topics in Cybernetics and Systems, pp. 3–4.
(1974). "The Artificial Intelligence/Psychology Approach to the Study of the Brain and Nervous System", Lecture Notes in Biomathematics, 4, pp. 370–375.
(1974). "Magnetic field dependence of the surface reactance of superconducting tin at 174 MHz", Journal of Physics F: Metal Physics, 4(5), May, p. 751.
(1973). "The Discovery of Tunnelling Supercurrents", Science, Nobel lecture, 12 December, pp. 157–164.
(1969). "Equation of state near the critical point", Journal of Physics C: Solid State Physics, 2(7), July.
with J. Lekner (1969). "Mobility of an Impurity in a Fermi Liquid", Physical Review Letters. 23(3), pp. 111–113.
(1967). "Inequality for the specific heat: II. Application to critical phenomena", Proceedings of the Physical Society, 92(2), October.
(1967). "Inequality for the specific heat: I. Derivation", Proceedings of the Physical Society, 92(2), October.
(1966). "Macroscopic Field Equations for Metals in Equilibrium", Physical Review, 152, December, pp. 211–217.
(1966). "Relation between the superfluid density and order parameter for superfluid He near Tc", Physics Letters, 21(6), 1 July, pp. 608–609.
(1965). "Supercurrents through Barriers", Advances in Physics, 14(56), pp. 419–451.
(1964). Non-linear conduction in superconductors, (PhD thesis), University of Cambridge, December.
(1964). "Coupled Superconductors", Review of Modern Physics, 36(1), pp. 216–220.
(1962). "The Relativistic Shift in the Mössbauer Effect and Coupled Superconductors", submitted for Trinity College fellowship.
(1962). "Possible new effects in superconductive tunnelling", Physics Letters, 1(7), 1 July, pp. 251–253.
(1960). "Temperature-dependent shift of gamma rays emitted by a solid", Physical Review Letters, 4, 1 April.
See also
Josephson voltage standard
Josephson vortex
Long Josephson junction
Pi Josephson junction
Phi Josephson junction
List of Jewish Nobel laureates
List of Nobel laureates in Physics
List of physicists
Scientific phenomena named after people
References
Further reading
Brian Josephson's home page, University of Cambridge.
Brian Josephson, academia.edu.
"bdj50: Conference in Cambridge to mark the 50th Anniversary of the Publication of Brian Josephson’s Seminal Work", Department of Physics, University of Cambridge.
Anderson, Philip. "How Josephson Discovered His Effect", Physics Today, November 1970. Anderson's account of Josephson's discovery; he taught the graduate course in solid-state/many-body theory in which Josephson was a student.
Barone, A. and Paterno, G. Physics and Applications of the Josephson Effect, Wiley, 1982.
Bertlmann, R. A. and Zeilinger, A. (eds.), Quantum (Un)speakables: From Bell to Quantum Information, Springer, 2002.
Buckel, Werner and Kleiner, Reinhold. Superconductivity: Fundamentals and Applications, VCH, 1991.
Jibu, Mari and Yasue, Kunio. Quantum Brain Dynamics and Consciousness: An Introduction, John Benjamins Publishing, 1995.
Josephson, Brian; Rubik, Beverly A.; Fontana, David; Lorimer, David. "Defining consciousness", Nature, 358(618), 20 August 1992.
Rosen, Joe. "Josephson, Brian David," Encyclopedia of Physics, Infobase Publishing, 2009, pp. 165–166.
Stapp, Henry. "Quantum Approaches to Consciousness," in Philip David Zelazo, Morris Moscovitch and Evan Thompson (eds.), The Cambridge Handbook of Consciousness, 2007.
Stenger, Victor J. The Unconscious Quantum: Metaphysics in Modern Physics and Cosmology, Prometheus Books, 1995.
External links
Brian Josephson on Nobelprize.org, including the Nobel Lecture, 12 December 1973: "The Discovery of Tunnelling Supercurrents".
1940 births
Nobel laureates in Physics
Welsh Nobel laureates
British Nobel laureates
British theoretical physicists
Alumni of Trinity College, Cambridge
British Jews
Fellows of Trinity College, Cambridge
Fellows of the Institute of Physics
Fellows of the Royal Society
Jewish physicists
Living people
British parapsychologists
Scientists from Cardiff
Quantum mind
Welsh Jews
Welsh physicists
Cold fusion
Missouri University of Science and Technology faculty
Quantum mysticism advocates
Psychonautics researchers | Brian Josephson | Physics,Chemistry | 5,402 |
5,736,656 | https://en.wikipedia.org/wiki/Space%20advertising | Space advertising is the practice of advertising in space. This is usually done with product placements during crewed space missions.
Space advertising falls into two categories: obtrusive and non-obtrusive.
Obtrusive space advertising is advertising in outer space that is visible to individuals on the Earth's surface without the aid of a telescope or other technological devices. Both international and national laws govern the practice of obtrusive space advertising due to concerns about space debris (objects in space that can cause harm) and the potential obstruction of astronomical views from the Earth's surface. Contemporary regulations and technological capabilities limit space advertising, yet it persists in popular culture in a variety of forms.
Non-obtrusive space advertising is the term for any other type of advertisement in space, such as logos on space suits, satellites, and rockets.
History
Since the Space Race and the dissolution of the Soviet Union, space-based advertising has been explored as a non-militarized use for space. Since then, several attempts at space advertising have occurred, such as Elon Musk’s SpaceX launch of a Tesla car into orbit.
One major advantage that space advertising has over other Earth-bound methods is the scale of its reach. Millions of people across multiple countries can be exposed to an advertisement orbiting Earth. However, relatively high start-up costs have prevented this from becoming a common mode of advertisement.
Attempts
In the past, attempts at orbital spaceflight have been discouraged due to the high cost (millions of USD per launch). Public space exploration authorities have also been reluctant to cater to advertisers. For example, NASA's restrictive policy on its employees' endorsing of products required astronauts to refer to M&M's as "candy-coated chocolates."
Successful attempts
Due to the high cost of orbital launches as well as associated maintenance costs, there have not been many successful advertising projects. For context, SpaceX's base fares for sending objects into space are highly costly, starting at $67 million.
Some successful attempts include the following:
Tokyo Broadcasting System (1990): The Tokyo Broadcasting System (TBS) paid approximately $11 million to the Russian space agency for the flight of journalist Toyohiro Akiyama to the Russian space station Mir. The launch vehicle displayed the Tokyo Broadcasting System logo.
Pepsi (1996): Pepsi paid approximately $5 million to have a cosmonaut float a replica of the company's soda can outside the Russian space station.
Tnuva (1997): Israeli milk company Tnuva filmed a commercial for their product on the former Russian space station Mir. The commercial aired in August 1997 and holds the Guinness World Record for the first advertisement shot in space.
Pizza Hut (2000): In 2000, Pizza Hut paid approximately $1 million to have the company logo featured on a Proton rocket that was being launched to the International Space Station by Russia. In 2001, Pizza Hut delivered a 6-inch salami pizza to the International Space Station.
Nissin Foods (2005) sent vacuum-sealed Cup Noodles to space that were eaten by cosmonaut Sergei Krikalev for a TV commercial.
Element 21 (2006): Russian cosmonaut Mikhail Tyurin hit a golf ball from the ISS porch as part of a commercial with Element 21.
Toshiba Space Chair Project (2009): Toshiba used helium balloons to bring four empty chairs to the edge of space and filmed a TV commercial for their Regza HD TVs.
Lowe's & Made in Space 3D Printer (2016): Sent a 3D printer to the International Space Station.
KFC (2017) launched the Zinger-1 mission, sending a KFC Zinger Sandwich to the edge of space. This mission was a test flight for World View Enterprises' satellite high-altitude balloons.
SpaceX (2018) sent a Tesla Roadster into orbit as the dummy payload for the first Falcon Heavy test flight.
Rocket Lab (2018) sent a reflective sphere, the Humanity Star, into orbit.
Vegemite (2019): A group of university students from the University of Technology Sydney launched two pieces of Vegemite toast on a stratospheric balloon from the Hunter Valley region, located north of Sydney.
Failed attempts
Although the number of attempts at space advertising is small, there have been several failed attempts to send advertising into space by companies and organizations around the world.
Some failed attempts include:
France's “Ring of Light” Project (1989): This project was intended as a tribute to the 100th anniversary of the building of the Eiffel Tower. It involved the launch of a ring of 100 reflectors that would link together, reflecting the sun's light to become visible for about 10 minutes out of every 90-minute orbital period. It was ultimately called off due to concern that it could interfere with space-related scientific research and widespread criticism from the general public.
The Znamya Project (1990s): A Russian space program that involved the launch of satellites designed to reflect and beam sunlight to polar regions on Earth.
Space Marketing Inc. (1993) proposed launching a billboard into space. This was ultimately blocked by House of Representatives members who passed legislation to prevent the issuing of launch licenses for the purpose of putting advertisements in space.
PepsiCo Billboard (2019): The Russian branch of PepsiCo Inc. partnered with Russian startup StartRocket for the attempted creation of an orbital billboard. There was a successful exploratory test of orbital advertisements; however, this attempt was ultimately stopped when the plan was denied by PepsiCo's U.S. Branch.
Challenges
Regulation
Different countries have varying advertising regulation levels. As advertisements that orbit the Earth, effectively operating across country borders, obtrusive space advertisements must necessarily grapple with these regulatory differences. For instance, the EU prohibits advertisers from airing tobacco or alcohol-related advertisements. Ireland also outlaws advertisements that undermine public authority. Regulatory differences may make it more challenging for obtrusive space advertisements to remain legal across multiple jurisdictions.
Beyond content-based regulations, consumers in countries like the United States have the right to opt out of receiving ads. It is unclear whether or not a consumer can effectively opt out of receiving space-based advertisements (e.g., by closing one's blinds).
Property rights are another legal concern. Due to the bright lighting of space-based ads, non-consenting property owners may raise legal challenges, arguing that the ads constitute a nuisance and violate their legally held rights.
Astronomical observations
The International Astronomical Union argues that artificial satellites built out of reflective material adversely impact astronomical observations. A paper that was presented to the United Nations stated that "scattered light from sunlit spacecraft and space debris, and radio noise from communications satellites and global positioning systems in space, reach the entire surface of the Earth". Obtrusive space advertisements that are comparable to the brightness of the moon have the potential to make the observation of faint, distant objects impossible from the Earth's surface.
Space debris
Space objects that have surpassed their functional use period and are not equipped with de-orbiting technology are considered space debris. This can lead to collisions with other space objects, which can contribute to a cascading increase in space debris known as the Kessler syndrome. Increasing amounts of space debris can make space exploration and utilization of Low Earth Orbit (LEO) more difficult.
Space advertisers could face penalties if the advertisements are considered to eventually become space debris. Because objects can remain in orbit for long periods of time, an advertisement may stay in orbit longer than the advertising entity exists. If approved, obtrusive space advertisers can expect to comply with end-of-life de-orbiting and anti-collision measures.
Regulations
While space advertising is a relatively new concept, it is subject to some international treaties and national policies, either specifically on space advertising or space commercial activities.
For obtrusive advertising
UN treaties
The Outer Space Treaty (1966) sets principles of international space law. It determines that all states should have the right to freely explore outer space. This treaty provides free access to space, so space advertising is not subject to global prohibition.
The Space Liability Convention (1972) rules that a state is fully liable for damages caused by space objects launched in its territory. Under this treaty, states are responsible for private launches for commercial purposes, including advertising.
The United States
51 U.S. Code 50911 prohibits the issuance of licenses and launches for activities involving obtrusive space advertising. This prohibition does not apply to other forms of advertising, such as displaying logos. Both launches with commercial licenses and launches with experimentation permits allow the display of logos.
Other nations
In November 2016, Japan legislated a licensing system for private-sector companies' launches. This act aims to stimulate Japanese commercial activities in space by supporting third-party liability insurance and channeling more liability onto launching companies to reassure customers who pay the launchers.
Russia prohibits launches that contaminate outer space and create unfavorable environmental changes. However, there is no explicit ban on space advertising, despite the light pollution and debris it potentially creates.
For non-obtrusive advertising
The United States
Public law 106-391 does not apply to non-obtrusive commercial space advertising, including commercial space transportation vehicles, space infrastructure payloads, space launch facilities, and launch support facilities.
NASA (National Aeronautics and Space Administration) does not permit use of the NASA insignia, logo, or other supporting graphics in advertisements. However, it is discussing loosening its commercial restriction policy as a governmental agency. It is considering selling its spacecraft's naming rights for financial gain. Loosening such restrictions could encourage more brands to conduct space advertising.
NASA supports the production of commercials or other marketing videos. In 2019, NASA opened the International Space Station (ISS) for space advertising and other short-duration commercial activities conducted by private companies' crews.
Other nations
No other country has explicit legislative regulations for non-obtrusive space advertising. The non-obtrusive advertising of the states’ own entities and private corporate entities is less problematic under national and international laws compared to obtrusive space advertising.
In popular culture
Advertising in outer space or space flight has been featured in several science fiction books, films, video games, and television series. Such depictions are usually presented as satire of commercialization.
Film
In the 2008 animated science fiction film WALL-E, the star-liner spacecraft Axiom features a wide variety of advertisements for Buy n Large products.
In the 2008 film Hancock, the logo of the fictitious All-Heart charity is painted on the Moon by the title character as a gesture of gratitude to Ray Embrey, who helped him throughout the film.
Literature
In Fredric Brown's 1945 short story, "Pi in the Sky," an inventor rearranges the apparent positions of the stars to form an advertising slogan.
In Robert A. Heinlein's 1951 novella The Man Who Sold the Moon, the protagonist raises funds for his lunar ambitions by publicly describing means of covering the visible lunar face in advertising and propaganda and then taking money not to do so.
In Arthur C. Clarke's 1956 set of linked stories Venture to the Moon, the story Watch this Space features a sodium cannon whose exit nozzle is modified by one of the parties (with, as the narrator notes, great financial inducement and reward) to paint the non-illuminated portion of the Moon visible from Earth with the logo of a soft drink company. As the sodium atoms enter the sunlight, they glow in contrast to the darker Moon surface below while the party escapes into space. While the story implies that this company may be Coca-Cola, there is sufficient ambiguity that it may also have been Pepsi or another unnamed corporation.
In Isaac Asimov's 1958 short story "Buy Jupiter", a group of extraterrestrials broker a deal with the governments of Earth to purchase the planet Jupiter for use as an advertisement platform for passing starships from their worlds.
In Franquin's 1961 comics album, Z comme Zorglub, Zorglub attempts to write an advertisement for Coca-Cola on the Moon.
A Red Dwarf novel features an advertising campaign whereby a ship is sent on a mission by The Coca-Cola Company to cause 128 stars to go supernova in order to visibly spell the words "Coke Adds Life!" across the sky on Earth. The message is intended to last five weeks and be visible even in daylight.
References
External links
Photos and Logotypes to be sent into space on TechCrunch
Advertising, space
Advertising by medium
Advertising | Space advertising | Astronomy | 2,603 |
38,484,030 | https://en.wikipedia.org/wiki/Multiplicative%20noise | In signal processing, the term multiplicative noise refers to an unwanted random signal that gets multiplied into some relevant signal during capture, transmission, or other processing.
An important example is the speckle noise commonly observed in radar imagery. Examples of multiplicative noise affecting digital photographs are proper shadows due to undulations on the surface of the imaged objects, shadows cast by complex objects like foliage and Venetian blinds, dark spots caused by dust in the lens or image sensor, and variations in the gain of individual elements of the image sensor array.
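To make the distinction concrete, the sketch below (an illustrative toy, not a model of any particular sensor) multiplies a clean signal by a random gain with mean one, in the spirit of speckle, and contrasts it with ordinary additive noise; the signal shape, noise levels, and the gamma-distributed gain are all arbitrary choices made only for demonstration.
```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth "clean" 1-D signal standing in for a row of image pixels.
x = np.linspace(0, 4 * np.pi, 500)
clean = 1.5 + np.sin(x)

# Additive noise: an independent random term is added to the signal.
additive = clean + rng.normal(0.0, 0.1, size=clean.shape)

# Multiplicative noise: the signal is multiplied by a random gain with
# mean 1 (in the spirit of speckle, shadowing, or per-pixel gain variation).
gain = rng.gamma(shape=10.0, scale=0.1, size=clean.shape)   # mean = 1
multiplicative = clean * gain

# The practical difference: the multiplicative disturbance scales with the
# signal, so bright regions end up noisier than dark ones.
for name, noisy in [("additive", additive), ("multiplicative", multiplicative)]:
    residual = noisy - clean
    print(f"{name}: residual std bright = {residual[clean > 2].std():.3f}, "
          f"dark = {residual[clean < 1].std():.3f}")
```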
References
Signal processing | Multiplicative noise | Technology,Engineering | 114 |
2,922,091 | https://en.wikipedia.org/wiki/Lambda%20Canis%20Majoris | Lambda Canis Majoris (λ Canis Majoris) is a solitary, blue-white hued star in the constellation Canis Major. Lambda CMa is visible to the naked eye with an apparent visual magnitude of +4.48. Based upon an annual parallax shift of 7.70 mas as seen from Earth, this star is located about 424 light years from the Sun. At that distance, the visual magnitude is diminished by an extinction of 0.14 due to interstellar dust.
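The quoted distance follows directly from the parallax via d [pc] = 1000 / p [mas]; a quick check of the arithmetic using the figures above:
```python
# Distance from annual parallax: d [parsec] = 1000 / p [milliarcseconds].
parallax_mas = 7.70
LY_PER_PARSEC = 3.26156                # light years per parsec

distance_pc = 1000.0 / parallax_mas
distance_ly = distance_pc * LY_PER_PARSEC

print(f"{distance_pc:.1f} pc ≈ {distance_ly:.0f} light years")
# about 130 pc, i.e. roughly 424 light years, matching the value above
```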
This is a B-type main-sequence star with a stellar classification of B4 V. The star is roughly 40 million years old, and is spinning with a projected rotational velocity of 102 km/s. It has about 5.7 times the mass of the Sun and is radiating 560 times the Sun's luminosity at an effective temperature of 16,300 K.
References
External links
B-type main-sequence stars
Canis Majoris, Lambda
Canis Major
Durchmusterung objects
045813
30788
2361 | Lambda Canis Majoris | Astronomy | 213 |
36,477,688 | https://en.wikipedia.org/wiki/C20H32N2O | The molecular formula C20H32N2O (molar mass: 316.48 g/mol, exact mass: 316.2515 u) may refer to:
JNJ-5207852
Methyldiazinol, or 3-azi-17α-methyl-DHT
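A quick check of the quoted molar mass, using rounded standard atomic weights (a back-of-the-envelope sketch, not a substitute for cheminformatics tooling):
```python
# Rounded standard atomic weights in g/mol.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

formula = {"C": 20, "H": 32, "N": 2, "O": 1}    # C20H32N2O

molar_mass = sum(ATOMIC_WEIGHT[element] * count for element, count in formula.items())
print(f"{molar_mass:.2f} g/mol")   # ≈ 316.49, consistent with the 316.48 quoted above
```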
Molecular formulas | C20H32N2O | Physics,Chemistry | 81 |
2,239,740 | https://en.wikipedia.org/wiki/Deltaretrovirus | Deltaretrovirus is a genus of the Retroviridae family. It consists of exogenous horizontally transmitted viruses found in several groups of mammals. The ICTV lists under this genus the Bovine leukemia virus and three species of primate T-lymphotropic virus.
The genus of viruses is known for its propensity to target immune cells and oncogenicity, evident in the names of the four named species. Infection is usually asymptomatic, but inflammation and cancer can develop over time.
Classification
Four species are recognized by the ICTV as of 2023:
Bovine leukemia virus
Primate T-lymphotropic virus 1
Primate T-lymphotropic virus 2
Primate T-lymphotropic virus 3
Two additional PTLVs are known but not recognized: HTLV-4 (South Cameroon, 2005) and STLV-5 (Mac B43 strain, highly divergent PTLV-1).
In addition, eight endogenous retroviruses identified as Deltaretrovirus are known as of 2019. Two of these were complete enough to show ORFs; the rest show only long terminal repeats.
Hosts
Known exogenous deltaretroviruses infect cattle and primates.
The two complete endogenous ones were found in bats and dolphins; the others in Solenodon, mongoose, and fossa. These endogenous examples fill in the large gap in the host range.
Clinical relevance
References
External links
Viralzone: Deltaretrovirus
Deltaretroviruses
Virus genera | Deltaretrovirus | Biology | 324 |
61,079,121 | https://en.wikipedia.org/wiki/Macropodid%20alphaherpesvirus%202 | Macropodid alphaherpesvirus 2 (MaHV-2) is a species of herpesvirus in the genus Simplexvirus. It was officially accepted as a valid species by the International Committee on Taxonomy of Viruses in 2004.
Hosts
Macropodid alphaherpesvirus 2 has been detected in two species of captive macropods: grey dorcopsis (Dorcopsis luctuosa) and quokkas (Setonix brachyurus).
See also
Macropodid alphaherpesvirus 1
References
Further reading
Alphaherpesvirinae | Macropodid alphaherpesvirus 2 | Biology | 112 |
41,116,218 | https://en.wikipedia.org/wiki/Microbial%20biogeography | Microbial biogeography is a subset of biogeography, a field that concerns the distribution of organisms across space and time. Although biogeography traditionally focused on plants and larger animals, recent studies have broadened this field to include distribution patterns of microorganisms. This extension of biogeography to smaller scales—known as "microbial biogeography"—is enabled by ongoing advances in genetic technologies.
The aim of microbial biogeography is to reveal where microorganisms live, at what abundance, and why. Microbial biogeography can therefore provide insight into the underlying mechanisms that generate and hinder biodiversity. Microbial biogeography also enables predictions of where certain organisms can survive and how they respond to changing environments, making it applicable to several other fields such as climate change research.
History
Schewiakoff (1893) theorized about the cosmopolitan habitat of free-living protozoans. In 1934, Lourens Baas Becking, based on his own research in California's salt lakes, as well as work by others on salt lakes worldwide, concluded that "everything is everywhere, but the environment selects". Baas Becking attributed the first half of this hypothesis to his colleague Martinus Beijerinck (1913).
Baas Becking's hypothesis of cosmopolitan microbial distribution would later be challenged by other works.
Microbial vs macro-organism biogeography
The biogeography of macro-organisms (i.e., plants and animals that can be seen with the naked eye) has been studied since the eighteenth century. For macro-organisms, biogeographical patterns (i.e., which organism assemblages appear in specific places and times) appear to arise from both past and current environments. For example, polar bears live in the Arctic but not the Antarctic, while the reverse is true for penguins; although both polar bears and penguins have adapted to cold climates over many generations (the result of past environments), the distance and warmer climates between the north and south poles prevent these species from spreading to the opposite hemisphere (the result of current environments). This demonstrates the biogeographical pattern known as "isolation with geographic distance" by which the limited ability of a species to physically disperse across space (rather than any selective genetic reasons) restricts the geographical range over which it can be found.
The biogeography of microorganisms (i.e., organisms that cannot be seen with the naked eye, such as fungi and bacteria) is an emerging field enabled by ongoing advancements in genetic technologies, in particular cheaper DNA sequencing with higher throughput that now allows analysis of global datasets on microbial biology at the molecular level. When scientists began studying microbial biogeography, they anticipated a lack of biogeographic patterns due to the high dispersibility and large population sizes of microbes, which were expected to ultimately render geographical distance irrelevant. Indeed, in microbial ecology the oft-repeated saying by Lourens Baas Becking that "everything is everywhere, but the environment selects" has come to mean that as long as the environment is ecologically appropriate, geological barriers are irrelevant. However, recent studies show clear evidence for biogeographical patterns in microbial life, which challenge this common interpretation: the existence of microbial biogeographic patterns disputes the idea that "everything is everywhere" while also supporting the idea that environmental selection includes geography as well as historical events that can leave lasting signatures on microbial communities.
Microbial biogeographic patterns are often similar to those of macro-organisms. Microbes generally follow well-known patterns such as the distance decay relationship, the abundance-range relationship, and Rapoport's rule. This is surprising given the many disparities between microorganisms and macro-organisms, in particular their size (micrometers vs. meters), time between generations (minutes vs. years), and dispersibility (global vs. local). However, important differences between the biogeographical patterns of microorganism and macro-organism do exist, and likely result from differences in their underlying biogeographic processes (e.g., drift, dispersal, selection, and mutation). For example, dispersal is an important biogeographical process for both microbes and larger organisms, but small microbes can disperse across much greater ranges and at much greater speeds by traveling through the atmosphere (for larger animals dispersal is much more constrained due to their size). As a result, many microbial species can be found in both northern and southern hemispheres, while larger animals are typically found only at one pole rather than both. Furthermore, microorganisms, such as bacteria, are affected by conditions at very small scales that may differ from the scales that are typically considered for macro-organisms. For example, soil bacterial diversity is shaped by the carbon input and connectivity in microscale aqueous habitats.
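The distance decay relationship mentioned above is usually quantified by comparing community similarity (for example, the Jaccard index of shared taxa) with the geographic distance separating samples. The sketch below uses entirely made-up communities and distances simply to show the kind of calculation involved.
```python
from itertools import combinations

# Hypothetical presence/absence data: taxa observed at four sampling sites,
# together with made-up pairwise distances in kilometres.
sites = {
    "A": {"t1", "t2", "t3", "t4"},
    "B": {"t1", "t2", "t3", "t5"},
    "C": {"t1", "t2", "t6", "t7"},
    "D": {"t1", "t8", "t9", "t10"},
}
distance_km = {("A", "B"): 10, ("A", "C"): 500, ("A", "D"): 5000,
               ("B", "C"): 490, ("B", "D"): 4990, ("C", "D"): 4500}

def jaccard(a, b):
    """Fraction of taxa shared between two communities."""
    return len(a & b) / len(a | b)

# A distance decay pattern appears as similarity falling off with distance.
for s1, s2 in combinations(sites, 2):
    d = distance_km[(s1, s2)]
    print(f"{s1}-{s2}: {d:>5} km  similarity = {jaccard(sites[s1], sites[s2]):.2f}")
```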
Distinct patterns
Reversed and non-monotonous latitudinal diversity gradients
Larger organisms tend to exhibit latitudinal gradients in species diversity, with greater biodiversity in the tropics and decreasing diversity toward more temperate and polar regions. In contrast, studies on indoor fungal communities and global topsoil microbiomes found microbial biodiversity to be significantly higher in temperate zones than in the tropics. Interestingly, different buildings exhibited the same indoor fungal composition in any given location, where similarity increased with proximity. Thus, despite human efforts to control indoor climates, outside environments appear to be the strongest determinant of indoor fungal composition. On the other hand, the strong biogeographical pattern of soil bacteria is typically attributed to changes in environmental factors such as soil pH. However, soil pH may be a biogeographical proxy that is affected by a soil's climatic water balance, which mediates carbon inputs and the connectivity of bacterial aqueous habitats.
Bipolar latitude distributions
Certain microbial populations exist in opposite hemispheres and at complementary latitudes. These 'bipolar' (or 'antitropical') distributions are much rarer in macro-organisms; although macro-organisms exhibit latitude gradients, 'isolation by geographic distance' prevents bipolar distributions (e.g., polar bears are not found at both poles). In contrast, a study on marine surface bacteria showed not only a latitude gradient, but also complementarity distributions with similar populations at both poles, suggesting no "isolation by geographic distance". This is likely due to differences in the underlying biogeographic process, dispersal, as microbes tend to disperse at high rates and far distances by traveling through the atmosphere.
Seasonal variations
Microbial diversity can exhibit striking seasonal patterns at a single geographical location. This is largely due to dormancy, a microbial feature not seen in larger animals that allows microbial community composition to fluctuate in relative abundance of persistent species (rather than actual species present). This is known as the "seed-bank hypothesis" and has implications for our understanding of ecological resilience and thresholds to change.
Applications
Directed panspermia
Panspermia suggests that life can be distributed throughout outer space via comets, asteroids, and meteoroids. Panspermia assumes that life can survive the harsh space environment, which features vacuum conditions, intense radiation, extreme temperatures, and a dearth of available nutrients. Many microorganisms are able to evade such stressors by forming spores or entering a state of low-metabolic dormancy. Studies in microbial biogeography have even shown that the ability of microbes to enter and successfully emerge from dormancy when their respective environmental conditions are favorable contributes to the high levels of microbial biodiversity observed in almost all ecosystems. Thus microbial biogeography can be applied to panspermia, as it predicts that microbes are able to protect themselves from the harsh space environment, emerge from dormancy when conditions are favorable, and take advantage of their dormancy capability to enhance biodiversity wherever they may land.
Directed panspermia is the deliberate transport of microorganisms to colonize another planet. If aiming to colonize an Earth-like environment, microbial biogeography can inform decisions on the biological payload of such a mission. In particular, microbes exhibit latitudinal ranges according to Rapoport's rule, which states that organisms living at lower latitudes (near the equator) are found within smaller latitude ranges than those living at higher latitudes (near the poles). Thus the ideal biological payload would include widespread, higher-latitude microorganisms that can tolerate a wider range of climates. This is not necessarily the obvious choice, as these widespread organisms are also rare in microbial communities and tend to be weaker competitors when faced with endemic organisms. Still, they can survive in a range of climates and thus would be ideal for inhabiting otherwise lifeless Earth-like planets with uncertain environmental conditions. Extremophiles, although tough enough to withstand the space environment, may not be ideal for directed panspermia as any given extremophile species requires a very specific climate to survive. However, if the target was closer to Earth, such as a planet or moon in our Solar System, it may be possible to select a specific extremophile species for the well-defined target environment.
See also
Microbiomes of the built environment
Microbial ecology
References
Biogeography
Microorganisms | Microbial biogeography | Biology | 1,909 |
3,484,419 | https://en.wikipedia.org/wiki/Alternating%20factorial | In mathematics, an alternating factorial is the absolute value of the alternating sum of the first n factorials of positive integers.
This is the same as their sum, with the odd-indexed factorials multiplied by −1 if n is even, and the even-indexed factorials multiplied by −1 if n is odd, resulting in an alternation of signs of the summands (or alternation of addition and subtraction operators, if preferred). To put it algebraically,
af(n) = n! − (n − 1)! + (n − 2)! − ⋯ ± 1!,
or with the recurrence relation
af(n) = n! − af(n − 1),
in which af(1) = 1.
The first few alternating factorials are
1, 1, 5, 19, 101, 619, 4421, 35899, 326981, 3301819, 36614981, 442386619, 5784634181, 81393657019
For example, the third alternating factorial is 1! – 2! + 3! = 5. The fourth alternating factorial is −1! + 2! − 3! + 4! = 19. Regardless of the parity of n, the last (nth) summand, n!, is given a positive sign, the (n – 1)th summand is given a negative sign, and the signs of the lower-indexed summands are alternated accordingly.
This pattern of alternation ensures the resulting sums are all positive integers. Changing the rule so that either the odd- or even-indexed summands are given negative signs (regardless of the parity of n) changes the signs of the resulting sums but not their absolute values.
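The recurrence af(n) = n! − af(n − 1) gives a direct way to compute the sequence; the short sketch below reproduces the values listed above.
```python
from math import factorial

def alternating_factorials(n_max):
    """Return af(1), af(2), ..., af(n_max) using af(n) = n! - af(n-1), af(1) = 1."""
    values = []
    af = 0
    for n in range(1, n_max + 1):
        af = factorial(n) - af
        values.append(af)
    return values

print(alternating_factorials(10))
# [1, 1, 5, 19, 101, 619, 4421, 35899, 326981, 3301819]
```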
It has been proved that there are only a finite number of alternating factorials that are also prime numbers, since 3612703 divides af(3612702) and therefore divides af(n) for all n ≥ 3612702. The primes are af(n) for
n = 3, 4, 5, 6, 7, 8, 10, 15, 19, 41, 59, 61, 105, 160, 661, ...
with several higher probable primes that have not been proven prime.
Notes
References
Yves Gallot, Is the number of primes finite?
Paul Jobling, Guy's problem B43: search for primes of form n!-(n-1)!+(n-2)!-(n-3)!+...+/-1!
Integer sequences
Factorial and binomial topics | Alternating factorial | Mathematics | 511 |
62,770,515 | https://en.wikipedia.org/wiki/List%20of%20engineering%20awards | This list of engineering awards is an index to articles about notable awards for achievements in engineering. It includes aerospace engineering, chemical engineering, civil engineering, electrical engineering, electronic engineering, structural engineering and systems science awards. It excludes computer-related awards, computer science awards, industrial design awards, mechanical engineering awards, motor vehicle awards, occupational health and safety awards and space technology awards, which are covered by separate lists.
The list is organized by the region and country of the organizations that sponsor the awards, but some awards are not limited to people from that country.
International
Africa
Americas
Asia
Europe
Oceania
See also
List of computer science awards
List of computer-related awards
List of mechanical engineering awards
List of motor vehicle awards
List of space technology awards
Lists of awards
Lists of science and technology awards
References
Awards
Engineering | List of engineering awards | Technology | 160 |
24,495,502 | https://en.wikipedia.org/wiki/Sega%20v.%20Accolade | Sega Enterprises Ltd. v. Accolade, Inc., 977 F.2d 1510 (9th Cir. 1992), is a case in which the United States Court of Appeals for the Ninth Circuit applied American intellectual property law to the reverse engineering of computer software. Stemming from the publishing of several Sega Genesis games by video game publisher Accolade, which had disassembled Genesis software in order to publish games without being licensed by Sega, the case involved several overlapping issues, including the scope of copyright, permissible uses for trademarks, and the scope of the fair use doctrine for computer code.
The case was filed in the U.S. District Court for the Northern District of California, which ruled in favor of Sega and issued an injunction against Accolade preventing them from publishing any more games for the Genesis and requiring them to recall all the existing Genesis games they had for sale. Accolade appealed the decision to the Ninth Circuit on the grounds that their reverse engineering of the Genesis was protected under fair use. The Ninth Circuit reversed the district court's order and ruled that Accolade's use of reverse engineering to publish Genesis titles was protected under fair use, and that its alleged violation of Sega trademarks was the fault of Sega. The case is frequently cited in matters involving reverse engineering and fair use under copyright law.
Background
In March 1984, Sega Enterprises Ltd. was purchased by its former CEO, David Rosen, along with a group of backers. Hayao Nakayama, one of these backers, was named the new CEO of Sega. Following the crash of the arcade industry, Nakayama decided to focus development efforts on the home console market. During this time, Sega became concerned about software and hardware piracy in Southeast Asia, and particularly in Taiwan. Taiwan was not a signatory of the Berne Convention on copyright, limiting Sega's legal options in that region. However, Taiwan did allow prosecution for trademark infringement. Though Sega had created security systems in their consoles to keep their software from being pirated and to keep unlicensed publishers out, much like its competitor Nintendo, counterfeiters had discovered ways to prevent the Sega trademark from appearing on their games, bypassing the trademark altogether.
After the release of the Sega Genesis in 1988, video game publisher Accolade began exploring options to release some of their PC game titles onto the console. At the time, however, Sega had a licensing deal in place for third-party developers that increased the costs to the developer. According to Accolade co-founder Alan Miller, "One pays them between $10 and $15 per cartridge on top of the real hardware manufacturing costs, so it about doubles the cost of goods to the independent publisher." In addition to this, Sega required that it would be the exclusive publisher of Accolade's games if Accolade were to be licensed, preventing Accolade from releasing its games to other systems. To get around licensing, Accolade chose to seek an alternative way to bring their games to the Genesis by purchasing a console in order to decompile the executable code of three Genesis games and use it to program their new cartridges in a way that would allow them to disable the security lockouts that prevented playing of unlicensed games. This was done successfully to bring Ishido: The Way of Stones to the Genesis in 1990. In doing so, Accolade had also copied Sega's copyrighted game code multiple times in order to reverse engineer the software of Sega's licensed Genesis games.
As a result of the piracy and unlicensed development issues, Sega incorporated a technical protection mechanism into a new edition of the Genesis released in 1990, referred to as the Genesis III. This new variation of the Genesis included code known as the Trademark Security System (TMSS), which, when a game cartridge was inserted into the console, would check for the presence of the string "SEGA" at a particular point in the memory contained in the cartridge. If and only if the string was present, the console would run the game, and would briefly display the message: "Produced by or under license from Sega Enterprises LTD." This system had a twofold effect: it added extra protection against unlicensed developers and software piracy, and it forced the Sega trademark to display when the game was powered up, making a lawsuit for trademark infringement possible if unlicensed software were to be developed. Accolade learned of this development at the Winter Consumer Electronics Show in January 1991, at which Sega showed the new Genesis III and demonstrated it screening and rejecting an Ishido game cartridge. With more games planned for the following year, Accolade successfully identified the TMSS code. They later added this code to the games HardBall!, Star Control, Mike Ditka Power Football, and Turrican.
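The check described above amounts to reading a few bytes at a fixed place in the cartridge ROM image and comparing them with the string "SEGA". The following is a purely illustrative toy sketch of that idea; the offset, sizes, and behaviour are hypothetical stand-ins and are not drawn from the console's actual firmware.
```python
HEADER_OFFSET = 0x100      # hypothetical location of the signature in the ROM image
SIGNATURE = b"SEGA"

def tmss_check(rom: bytes) -> bool:
    """Toy model of a trademark security check: run the game only if the
    expected signature appears at the expected place in the cartridge."""
    present = rom[HEADER_OFFSET:HEADER_OFFSET + len(SIGNATURE)] == SIGNATURE
    if present:
        # Displaying this message is what tied the check to trademark law:
        # any cartridge that passes necessarily shows the SEGA mark.
        print("Produced by or under license from Sega Enterprises LTD.")
    return present

# An unlicensed cartridge passes simply by embedding the signature bytes,
# which is essentially what Accolade's reverse engineering revealed.
licensed = bytes(0x100) + b"SEGA" + bytes(0x50)
blank = bytes(0x200)
print(tmss_check(licensed), tmss_check(blank))
```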
Lawsuit
On October 31, 1991, Sega filed suit against Accolade in the United States District Court for the Northern District of California, on charges of trademark infringement and unfair competition in violation of the Lanham Act. Copyright infringement, a violation of the Copyright Act of 1976, was added a month later to the list of charges. In response, Accolade filed a counterclaim for falsifying the source of its games by displaying the Sega trademark when the game was powered up. The case was heard by Judge Barbara A. Caulfield.
Sega argued that Accolade had infringed upon its copyrights because Accolade's games contained Sega's material. Accolade insisted that their use of Sega's material constituted fair use. However, Judge Caulfield did not accept this explanation since Accolade was a game manufacturer, their works were for financial gain, and because their works competed directly with Sega's licensed games, likely resulting in a sales decrease for Sega's games. Accolade's case was further hurt by a presentation by a Sega engineer named Takeshi Nagashima, who showed two Sega game cartridges that were able to run on the Genesis III without the trademark-displaying TMSS, and offered them to Accolade's defense team but would not reveal how that was possible. Ultimately, this would result in Accolade's defeat on April 3, 1992, when Judge Caulfield ruled in favor of Sega and issued an injunction prohibiting future sales by Accolade of Genesis-compatible games incorporating the Sega message or using the results of the reverse engineering. Almost a week later, Accolade was also required by the court to recall all of their Genesis-compatible games.
Appeal
The decision in the district court ruling had been very costly to Accolade. According to Accolade co-founder Alan Miller, "Just to fight the injunction, we had to pay at least half a million dollars in legal fees." On April 14, 1992, Accolade asked the district court to stay the preliminary injunction pending appeal, but when the court did not rule by April 21, Accolade appealed the injunction to the Ninth Circuit of the U.S. Court of Appeals. A stay was granted on the mandate to recall all of Accolade's Genesis games, but the injunction preventing further reverse engineering and development of Genesis software was maintained until August 28, when the Ninth Circuit ordered it dissolved pending the appeal review.
In support of the appeal, the Computer & Communications Industry Association submitted an amicus curiae brief claiming that the district court had made errors in concluding that Accolade had infringed upon Sega's copyright by reverse engineering its software, extending copyright protection to method of operation, and failing to consider whether Accolade's games were substantially similar to Sega's copyrighted material. Amicus briefs were also submitted by the American Committee for Interoperable Systems, the Computer and Business Equipment Manufacturers Association, and copyright law professor Dennis S. Karjala from Arizona State University.
In reviewing the case, the court considered several factors in its own analysis, examining trademark and copyright issues separately. As in the district court proceedings, Nagashima showed the court a game cartridge that ran on the Genesis that did not display the trademark logo. However, the court was not moved by this, deciding that Nagashima's cartridges showed what one could do with knowledge of the TMSS, which Accolade did not possess. According to the court, because knowledge of how to avoid displaying the trademark on the Genesis III was not information that was public to the industry, Sega's attempt to prove that the display of their trademark was not required for games to be played on the console was insufficient. Writing for the opinion of the court, Judge Stephen Reinhardt stated, "Sega knowingly risked two significant consequences: the false labeling of some competitors' products and the discouraging of other competitors from manufacturing Genesis-compatible games. Under the Lanham Act, the former conduct, at least, is clearly unlawful." The court then went on to cite Anti-Monopoly v. General Mills Fun Group, which states in reference to the Lanham Act, "The trademark is misused if it serves to limit competition in the manufacture and sales of a product. That is the special province of the limited monopolies provided pursuant to the patent laws." The judges in the case had decided that Sega had violated this provision of the act by utilizing its trademark to limit competition for software for its console.
To determine the status of Accolade's claim of fair use of Sega's copyrighted game code, the court reviewed four criteria of fair use: the nature of the copyrighted work, the amount of the copyrighted work used, the purpose of use, and the effects of use on the market for the work. Of note to the judges in reviewing Sega's copyright claim was the difference in size between the TMSS file and the sizes of Accolade's games. As noted by Judge Reinhardt in writing the opinion of the court, the TMSS file "contains approximately twenty to twenty-five bytes of data. Each of Accolade's games contains a total of 500,000 to 1,500,000 bytes. According to Accolade employees, the header file is the only portion of Sega's code that Accolade copied into its own game programs." This made the games overwhelmingly original content, and according to Judge Reinhardt, to the benefit of the public to be able to compete with Sega's licensed games, especially if the games were dissimilar as contended in the appeal. The court did not accept the argument that Accolade's games competed directly with Sega's, noting that there was no proof that any of Accolade's published games had diminished the market for any of Sega's games. Despite claims from Sega's attorneys that the company had invested much time and effort into developing the Genesis, and that Accolade was capitalizing on this time and energy, the court rejected these claims by noting that U.S. Supreme Court in Feist v. Rural Publications had unequivocally rejected the notion that copyright protection could be based on the "sweat of the brow," i.e., that a work was entitled to copyright because of the amount of effort it took to create it. The court also noted that the Sega code contained some functional elements that were not protected under the Copyright Act of 1976. On the matter of reverse engineering as a process, the court concluded that "where disassembly is the only way to gain access to the ideas and functional elements embodied in a copyrighted computer program and where there is a legitimate reason for seeking such access, disassembly is a fair use of the copyrighted work, as a matter of law."
On August 28, 1992, the Ninth Circuit reversed the district court's preliminary injunction and ruled that Accolade's decompilation of the Sega software constituted fair use. The court's written opinion followed on October 20 and noted that the use of the software was non-exploitative, despite being commercial, and that the trademark infringement, being required by the TMSS for a Genesis game to run on the system, was inadvertently triggered by a fair use act and the fault of Sega for causing false labeling. As a result of the verdict being overturned, the costs of the appeal were assessed to Sega. The injunction remained in force, however, because Sega petitioned the appeals court to rehear the case.
Settlement
On January 8, 1993, with Sega's petition for a rehearing still pending, the court took the unusual step of amending its October 20, 1992 opinion and lifted the injunction preventing Accolade from developing or selling Genesis software. This was followed by a formal denial of Sega's petition for a rehearing on January 26. As Accolade's counterclaim for false labeling under the Lanham Act was declined by the Ninth Circuit, this essentially left "each party as free to act as it was before the issuance of preliminary injunctive relief" while the district court considered the counterclaim. Sega and Accolade ultimately settled on April 30, 1993. As a part of this settlement, Accolade became an official licensee of Sega, and later developed and released Barkley Shut Up and Jam! while under license. The terms of the licensing, including whether or not any special arrangements or discounts were made to Accolade, were not released to the public. The financial terms of the settlement were also not disclosed, although both companies agreed to pay their own legal costs.
In an official statement, Sega of America chairman David Rosen expressed satisfaction with the settlement. According to Rosen, "This settlement is a satisfactory ending to what was a very complex set of issues. Not only are we pleased to settle this case amicably, we've also turned a corner in our association with Accolade and now look forward to a healthy and mutually beneficial relationship in the future." Accolade's Alan Miller expressed more excitement with the settlement and the opportunities it presented for the company, saying in his statement, "We are very pleased with the settlement, and we're excited about the new markets it opens to Accolade. Accolade currently experiences strong demand for its Sega Genesis products in North America and Europe. We will now be able to publish our products on the Sega Genesis and Game Gear systems throughout the world." Despite the settlement, however, Accolade had lost somewhere between $15 million and $25 million during the injunction period, according to Miller.
Impact
Sega v. Accolade has been an influential case in matters involving reverse engineering of software and copyright infringement, and has been cited in numerous cases since 1993. The case redefined how reverse engineering with unlicensed products is seen in legal issues involving copyright. The decision was also as influential because it was issued by the U.S. Court of Appeals for the Ninth Circuit, whose jurisdiction included all states in the western United States where the majority of U.S.-based software development occurred, including California and Washington. The case also helped establish guidelines for permissible reverse engineering; for example, American computer programmer Andrew Schulman cited the decision with approval in his 1994 book "Undocumented Dos," which explored and revealed undocumented functionality in Microsoft operating systems that he had uncovered using disassembly and reverse engineering. The process that Accolade undertook to reverse engineer the Sega code was perceived as fairly typical to the way other companies had been conducting reverse engineering, which made the court's decision even more influential. The Ninth Circuit's decision confirmed that the console's functional principles were not protected by copyright, and also established that reverse engineering can constitute "fair use" when no other means were available to access information about the console's functional principles. One such example of the precedent set by this case is Sony Computer Entertainment, Inc. v. Connectix Corporation, which was issued in 2000 by the Ninth Circuit, specifically cited Sega v. Accolade in deciding that reverse engineering the Sony PlayStation BIOS was protected by fair use and was non-exploitative.
The influences of the decision include Sega v. Accolade's effect on the criteria for fair use and the responsibilities of trademark holders in legal examinations. Although Accolade had copied entire Genesis games in order to identify the TMSS, the court gave little weight to the criterion on the amount of the copyrighted work being copied, in light of the fact that Accolade had done so in order to create their own compatible software. Likewise, the nature of the work was also given less weight, essentially establishing a two-factor approach to evaluating fair use based on the purpose of use and the impact on the market. It was also the first time that the Lanham Act was interpreted to mean that confusion resulting from the placement of one's trademark on another work by means of a security program is the fault of the original registrant of the trademark. Sega v. Accolade also served to help establish that the functional principles of computer software cannot be protected by copyright law. Rather, the only legal protection for such principles can be through holding a patent or by trade secret. This aspect of the decision has drawn criticism as well: critics note that although the functional principles are not protectable under copyright law, the TMSS code was protectable, and that by allowing reverse engineering of the TMSS as fair use, the decision encouraged the copying of legally protected programs.
See also
Vault Corp. v. Quaid Software Ltd.
Atari Games Corp. v. Nintendo of America Inc.
References
1992 in United States case law
Sega
United States copyright case law
United States Court of Appeals for the Ninth Circuit cases
Video game copyright case law
Reverse engineering
Fair use case law | Sega v. Accolade | Engineering | 3,630 |
11,576,886 | https://en.wikipedia.org/wiki/Adactylidium | Adactylidium is a genus of mites known for its unusual life cycle. An impregnated female mite feeds upon a single egg of a thrips, rapidly growing five to eight female offspring and one male in her body. The single male mite mates with all his sisters when they are still inside their mother. The new females, now impregnated, eat their way out of their mother's body so that they can emerge to find new thrips eggs, killing their mother in the process (though the mother may be only 4 days old at the time), starting the cycle again. The male emerges as well, but does not look for food or new mates, and dies after a few hours.
See also
Telescoping generations
References
Trombidiformes genera
Parasites of insects
Incestuous animals | Adactylidium | Biology | 173 |
31,430,439 | https://en.wikipedia.org/wiki/MedChemComm | MedChemComm (in full: Medicinal Chemistry Communications) is a peer-reviewed scientific journal publishing original (primary) research and review articles on all aspects of medicinal chemistry, including drug discovery, pharmacology and pharmaceutical chemistry. Until December 2019, it was published monthly by the Royal Society of Chemistry in partnership with the European Federation for Medicinal Chemistry, of which it was the official journal. Authors can elect to have accepted articles published as open access. According to the Journal Citation Reports, the journal has a 2014 impact factor of 2.495, ranking it 27th out of 59 journals in the category "Chemistry, Medicinal" and 163 out of 289 journals in the category "Biochemistry & Molecular Biology".
The editor-in-chief is Mike Waring (Newcastle University).
As of January 1, 2020, the journal is called RSC Medicinal Chemistry and continues to be published monthly under this new name.
Article types
MedChemComm publishes Research Articles (original scientific work, usually between 4-10 pages in length) and Reviews (critical analyses of specialist areas).
References
External links
Medicinal chemistry journals
Royal Society of Chemistry academic journals
Academic journals established in 2010
English-language journals
Hybrid open access journals
Monthly journals | MedChemComm | Chemistry | 247 |
5,340,853 | https://en.wikipedia.org/wiki/Triethyloxonium%20tetrafluoroborate | Triethyloxonium tetrafluoroborate is the organic oxonium compound with the formula . It is often called Meerwein's reagent or Meerwein's salt after its discoverer Hans Meerwein. Also well known and commercially available is the related trimethyloxonium tetrafluoroborate. The compounds are white solids that dissolve in polar organic solvents. They are strong alkylating agents. Aside from the salt, many related derivatives are available.
Synthesis and reactivity
Triethyloxonium tetrafluoroborate is prepared from boron trifluoride, diethyl ether, and epichlorohydrin:
where the Et stands for ethyl. The trimethyloxonium salt is available from dimethyl ether via an analogous route. These salts do not have long shelf-lives at room temperature. They degrade by hydrolysis:
[Et3O]BF4 + H2O → Et2O + EtOH + HBF4
The propensity of trialkyloxonium salts for alkyl-exchange can be advantageous. For example, trimethyloxonium tetrafluoroborate, which reacts sluggishly due to its low solubility in most compatible solvents, may be converted in situ to higher alkyl/more soluble oxoniums, thereby accelerating alkylation reactions.
This reagent is useful for esterification of carboxylic acids under conditions where acid-catalyzed reactions are infeasible:
RCO2H + [Et3O]BF4 → RCO2Et + Et2O + HBF4
Structure
The structure of triethyloxonium tetrafluoroborate has not been characterized by X-ray crystallography, but the structure of triethyloxonium hexafluorophosphate has been examined. The measurements confirm that the cation is pyramidal with C-O-C angles in the range 109.4°–115.5°. The average C–O distance is 1.49 Å.
Safety
Triethyloxonium tetrafluoroborate is a very strong alkylating agent, although the hazards are diminished because it is non-volatile. It releases strong acid upon contact with water. The properties of the methyl derivative are similar.
References
Reagents for organic chemistry
Tetrafluoroborates
Ethylating agents
Oxonium compounds | Triethyloxonium tetrafluoroborate | Chemistry | 478 |
451,268 | https://en.wikipedia.org/wiki/Passive%20attack | A passive attack on a cryptosystem is one in which the cryptanalyst cannot interact with any of the parties involved, attempting to break the system solely based upon observed data (i.e. the ciphertext). This can also include known plaintext attacks where both the plaintext and its corresponding ciphertext are known.
While active attackers can interact with the parties by sending data, a passive attacker is limited to intercepting communications (eavesdropping), and seeks to decrypt data by interpreting the transcripts of authentication sessions. Since passive attackers do not introduce data of their own, they can be difficult to detect.
While most classical ciphers are vulnerable to this form of attack, most modern ciphers are designed to prevent this type of attack above all others.
Attributes
Traffic analysis
Non-evasive eavesdropping and monitoring of transmissions
Difficult to detect because the data is unaffected
Emphasis on prevention (encryption) not detection
Sometimes referred to as "tapping"
The main types of passive attacks are traffic analysis and release of message contents.
During a traffic analysis attack, the eavesdropper analyzes the traffic, determines the location, identifies communicating hosts, and observes the frequency and length of exchanged messages. The attacker uses all this information to predict the nature of the communication. All incoming and outgoing traffic of the network is analyzed, but not altered.
For a release of message content, a telephonic conversation, an E-mail message or a transferred file may contain confidential data. A passive attack monitors the contents of the transmitted data.
Passive attacks are very difficult to detect because they do not involve any alteration of the data. When the messages are exchanged neither the sender nor the receiver is aware that a third party may capture the messages. This can be prevented by encryption of data.
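A toy illustration of why encryption addresses the release of message contents but not traffic analysis: even when payloads are hidden, an eavesdropper who merely records the transcript still sees how many messages were sent and how long each one is. The "cipher" below is a deliberately simplistic stand-in (XOR with a SHA-256 keystream), used only to make the point; it is not a real encryption scheme.
```python
import hashlib
from itertools import count

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy stream 'cipher' for illustration only: XOR with a SHA-256 keystream."""
    stream = b""
    for counter in count():
        if len(stream) >= len(plaintext):
            break
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

key = b"shared secret"
messages = [b"attack at dawn", b"ok", b"retreat immediately to the northern bridge"]
transcript = [toy_encrypt(key, m) for m in messages]

# The passive attacker cannot read the contents of the transcript...
print([c[:8].hex() for c in transcript])
# ...but the number, order, and lengths of messages are still exposed,
# which is exactly the information that traffic analysis exploits.
print([len(c) for c in transcript])    # [14, 2, 42]
```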
A recent study on the cybersecurity of wearable devices used passive attacks on different smartwatches to test whether they have significant vulnerabilities and whether they are the best targets during the pairing process.
See also
Cybersecurity
Known plaintext attack
Chosen plaintext attack
Chosen ciphertext attack
Adaptive chosen ciphertext attack
Topics in cryptography
References
Further reading
Cryptography and Network Security By William Stallings
Cryptographic attacks | Passive attack | Technology | 446 |
25,800,675 | https://en.wikipedia.org/wiki/Compatibility%20%28mechanics%29 | In continuum mechanics, a compatible deformation (or strain) tensor field in a body is that unique tensor field that is obtained when the body is subjected to a continuous, single-valued, displacement field. Compatibility is the study of the conditions under which such a displacement field can be guaranteed. Compatibility conditions are particular cases of integrability conditions and were first derived for linear elasticity by Barré de Saint-Venant in 1864 and proved rigorously by Beltrami in 1886.
In the continuum description of a solid body we imagine the body to be composed of a set of infinitesimal volumes or material points. Each volume is assumed to be connected to its neighbors without any gaps or overlaps. Certain mathematical conditions have to be satisfied to ensure that gaps/overlaps do not develop when a continuum body is deformed. A body that deforms without developing any gaps/overlaps is called a compatible body. Compatibility conditions are mathematical conditions that determine whether a particular deformation will leave a body in a compatible state.
In the context of infinitesimal strain theory, these conditions are equivalent to stating that the displacements in a body can be obtained by integrating the strains. Such an integration is possible if Saint-Venant's tensor (or incompatibility tensor) $\boldsymbol{R}(\boldsymbol{\varepsilon}) := \boldsymbol{\nabla}\times(\boldsymbol{\nabla}\times\boldsymbol{\varepsilon})$ vanishes in a simply-connected body, where $\boldsymbol{\varepsilon}$ is the infinitesimal strain tensor.
For finite deformations the compatibility condition takes the form $\boldsymbol{\nabla}\times\boldsymbol{F} = \boldsymbol{0}$, where $\boldsymbol{F}$ is the deformation gradient.
Compatibility conditions for infinitesimal strains
The compatibility conditions in linear elasticity are obtained by observing that there are six strain-displacement relations that are functions of only three unknown displacements. This suggests that the three displacements may be removed from the system of equations without loss of information. The resulting expressions in terms of only the strains provide constraints on the possible forms of a strain field.
2-dimensions
For two-dimensional, plane strain problems the strain-displacement relations are
Repeated differentiation of these relations, in order to eliminate the displacements $u_1$ and $u_2$, gives us the two-dimensional compatibility condition for strains: $\frac{\partial^2 \varepsilon_{11}}{\partial x_2^2} + \frac{\partial^2 \varepsilon_{22}}{\partial x_1^2} = 2\,\frac{\partial^2 \varepsilon_{12}}{\partial x_1 \partial x_2}$
The only displacement field that is allowed by a compatible plane strain field is a plane displacement field, i.e., .
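As a concrete check of this condition, the SymPy sketch below (the displacement field is an arbitrary smooth illustration, not one taken from the article) derives the plane strains from a single-valued displacement field and verifies that the compatibility residual vanishes identically.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical smooth, single-valued plane displacement field (u, v).
u = x**3 * y**2
v = x**2 * y**3

# Infinitesimal plane strains from the strain-displacement relations.
e_xx = sp.diff(u, x)
e_yy = sp.diff(v, y)
e_xy = sp.Rational(1, 2) * (sp.diff(u, y) + sp.diff(v, x))

# Two-dimensional compatibility condition:
#   d2(e_xx)/dy2 + d2(e_yy)/dx2 - 2 d2(e_xy)/dx dy = 0
residual = sp.diff(e_xx, y, 2) + sp.diff(e_yy, x, 2) - 2 * sp.diff(e_xy, x, y)
print(sp.simplify(residual))  # 0: strains derived from a displacement field are compatible
```

Any smooth single-valued displacement field passes this check; the condition only becomes a genuine restriction when one starts from a prescribed strain field and asks whether such displacements exist.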
3-dimensions
In three dimensions, in addition to two more equations of the form seen for two dimensions, there are
three more equations of the form
Therefore, there are $3^4 = 81$ partial differential equations; however, due to symmetry conditions, this number reduces to six distinct compatibility conditions. We can write these conditions in index notation as $e_{ikr}\,e_{jls}\,\varepsilon_{rs,kl} = 0$
where $e_{ijk}$ is the permutation symbol. In direct tensor notation the condition reads $\boldsymbol{\nabla}\times(\boldsymbol{\nabla}\times\boldsymbol{\varepsilon}) = \boldsymbol{0}$,
where the curl operator can be expressed in an orthonormal coordinate system as .
The second-order tensor $\boldsymbol{R} := \boldsymbol{\nabla}\times(\boldsymbol{\nabla}\times\boldsymbol{\varepsilon})$
is known as the incompatibility tensor, and is equivalent to the Saint-Venant compatibility tensor
Compatibility conditions for finite strains
For solids in which the deformations are not required to be small, the compatibility condition takes the form $\boldsymbol{\nabla}\times\boldsymbol{F} = \boldsymbol{0}$, where $\boldsymbol{F}$ is the deformation gradient. In terms of components with respect to a Cartesian coordinate system we can write this compatibility relation as $\frac{\partial F_{iJ}}{\partial X_K} = \frac{\partial F_{iK}}{\partial X_J}$
This condition is necessary if the deformation is to be continuous and derived from the mapping (see Finite strain theory). The same condition is also sufficient to ensure compatibility in a simply connected body.
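A symbolic check of this statement is straightforward. The SymPy sketch below (the deformation mapping is an arbitrary smooth illustration) builds the deformation gradient of a mapping and verifies that its referential curl vanishes, using one common index convention for the curl of a two-point tensor.

```python
import sympy as sp

X1, X2, X3 = sp.symbols("X1 X2 X3")
X = (X1, X2, X3)

# Hypothetical smooth deformation mapping x = chi(X) of a simply connected body.
chi = (X1 + sp.Rational(1, 10) * X2**2,
       X2 + sp.Rational(1, 5) * sp.sin(X3),
       X3 + sp.Rational(1, 20) * X1 * X2)

# Deformation gradient F_iJ = d(chi_i)/d(X_J).
F = sp.Matrix(3, 3, lambda i, J: sp.diff(chi[i], X[J]))

# Referential curl, (Curl F)_iJ = eps_JKL dF_iL/dX_K (one common convention).
curl_F = sp.Matrix(3, 3, lambda i, J: sum(
    sp.LeviCivita(J, K, L) * sp.diff(F[i, L], X[K])
    for K in range(3) for L in range(3)))

print(curl_F.applyfunc(sp.simplify))  # zero matrix: F derived from a mapping is compatible
```

The converse, that a curl-free field is the gradient of some mapping, is the part that requires the body to be simply connected, as discussed in the sufficiency arguments below.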
Compatibility condition for the right Cauchy-Green deformation tensor
The compatibility condition for the right Cauchy-Green deformation tensor can be expressed as the vanishing of the Riemann-Christoffel curvature associated with $\boldsymbol{C}$, i.e., $R^{\gamma}{}_{\alpha\beta\rho} = 0$, where $\Gamma^{\gamma}_{\alpha\beta}$ is the Christoffel symbol of the second kind computed from the components of $\boldsymbol{C}$ and $R^{\gamma}{}_{\alpha\beta\rho}$ represents the mixed components of the Riemann-Christoffel curvature tensor.
The general compatibility problem
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on simply connected bodies. More precisely, the problem may be stated in the following manner.
Consider the deformation of a body shown in Figure 1. If we express all vectors in terms of the reference coordinate system , the displacement of a point in the body is given by
Also
What conditions on a given second-order tensor field on a body are necessary and sufficient so that there exists a unique vector field that satisfies
Necessary conditions
For the necessary conditions we assume that the field exists and satisfies
. Then
Since changing the order of differentiation does not affect the result we have
Hence
From the well known identity for the curl of a tensor we get the necessary condition
Sufficient conditions
To prove that this condition is sufficient to guarantee existence of a compatible second-order tensor field, we start with the assumption that a field exists such that
. We will integrate this field to find the vector field along a line between points and (see Figure 2), i.e.,
If the vector field is to be single-valued then the value of the integral should be independent of the path taken to go from to .
From Stokes' theorem, the integral of a second order tensor along a closed path is given by
Using the assumption that the curl of is zero, we get
Hence the integral is path independent and the compatibility condition is sufficient to ensure a unique field, provided that the body is simply connected.
Compatibility of the deformation gradient
The compatibility condition for the deformation gradient is obtained directly from the above proof by observing that
Then the necessary and sufficient conditions for the existence of a compatible field over a simply connected body are
Compatibility of infinitesimal strains
The compatibility problem for small strains can be stated as follows.
Given a symmetric second order tensor field when is it possible to construct a vector field such that
Necessary conditions
Suppose that there exists such that the expression for holds. Now
where
Therefore, in index notation,
If is continuously differentiable we have . Hence,
In direct tensor notation
The above are necessary conditions. If is the infinitesimal rotation vector then . Hence the necessary condition may also be written as .
Sufficient conditions
Let us now assume that the condition is satisfied in a portion of a body. Is this condition sufficient to guarantee the existence of a continuous, single-valued displacement field ?
The first step in the process is to show that this condition implies that the infinitesimal rotation tensor is uniquely defined. To do that we integrate along the path to , i.e.,
Note that we need to know a reference to fix the rigid body rotation. The field is uniquely determined only if the contour integral along a closed contour between and is zero, i.e.,
But from Stokes' theorem for a simply-connected body and the necessary condition for compatibility
Therefore, the field is uniquely defined which implies that the infinitesimal rotation tensor is also uniquely defined, provided the body is simply connected.
In the next step of the process we will consider the uniqueness of the displacement field . As before we integrate the displacement gradient
From Stokes' theorem and using the relations we have
Hence the displacement field is also determined uniquely. Hence the compatibility conditions are sufficient to guarantee the existence of a unique displacement field in a simply-connected body.
Compatibility for Right Cauchy-Green Deformation field
The compatibility problem for the Right Cauchy-Green deformation field can be posed as follows.
Problem: Let be a positive definite symmetric tensor field defined on the reference configuration. Under what conditions on does there exist a deformed configuration marked by the position field such that
Necessary conditions
Suppose that a field exists that satisfies condition (1). In terms of components with respect to a rectangular Cartesian basis
From finite strain theory we know that . Hence we can write
For two symmetric second-order tensor field that are mapped one-to-one we also have the relation
From the relation between of and that , we have
Then, from the relation
we have
From finite strain theory we also have
Therefore,
and we have
Again, using the commutative nature of the order of differentiation, we have
or
After collecting terms we get
From the definition of we observe that it is invertible and hence cannot be zero. Therefore,
We can show these are the mixed components of the Riemann-Christoffel curvature tensor. Therefore, the necessary conditions for -compatibility are that the Riemann-Christoffel curvature of the deformation is zero.
Sufficient conditions
The proof of sufficiency is a bit more involved. We start with the assumption that
We have to show that there exist and such that
From a theorem by T.Y.Thomas we know that the system of equations
has unique solutions over simply connected domains if
The first of these is true from the defining of and the second is assumed. Hence the assumed condition gives us a unique that is continuous.
Next consider the system of equations
Since is and the body is simply connected there exists some solution to the above equations. We can show that the also satisfy the property that
We can also show that the relation
implies that
If we associate these quantities with tensor fields we can show that is invertible and the constructed tensor field satisfies the expression for .
See also
Saint-Venant's compatibility condition
Linear elasticity
Deformation (mechanics)
Infinitesimal strain theory
Finite strain theory
Tensor derivative (continuum mechanics)
Curvilinear coordinates
References
External links
Amit Acharya's notes on compatibility on iMechanica
Plasticity by J. Lubliner, sec. 1.2.4 p. 35
Continuum mechanics
Elasticity (physics) | Compatibility (mechanics) | Physics,Materials_science | 1,836 |
55,754,701 | https://en.wikipedia.org/wiki/Lee%20J.%20Carter | Lee Jin Carter (born June 2, 1987) is an American former politician who represented the 50th district in the Virginia House of Delegates from 2018 to 2022. A member of the Democratic Party, he defeated Jackson Miller, the Republican House Majority Whip, to win the seat. Born in North Carolina, Carter is an IT specialist and a former U.S. Marine. The first openly communist state delegate in the United States since 1929, Carter served on the Finance Committee and the Militia, Police and Public Safety Committee. In 2017, he was endorsed by the Democratic Socialists of America (DSA), of which he was then a member.
As a Marine, Carter went to Kuwait and the Mediterranean. His unit, the 22nd Marine Expeditionary Unit, was also one of the first to respond to the 2010 Haiti earthquake. In 2021, Carter ran for governor of Virginia in the 2021 election. He came in fifth of the five candidates in the Democratic primary with less than 3% of the vote, losing to Terry McAuliffe, and also lost the primary for renomination for his House seat.
Early life and military career
Carter was born June 2, 1987, in Elizabeth City, North Carolina. He was a member of the United States Marine Corps (USMC) from 2006 to 2011, having attended the USMC Staff Noncommissioned Officer Academy. During his time in the U.S. Marine Corps, Carter completed tours in the Middle East and the Mediterranean. His unit, the 22nd Marine Expeditionary Unit, was also one of the first to respond to the 2010 earthquake in Haiti.
Carter earned an associate of applied science degree from the Northern Virginia Community College in 2017. He worked as an IT specialist before running for office.
Political career
2017 campaign
Carter was inspired to run for office after receiving a shock while repairing a lighting system in the summer of 2015 and subsequently struggling to receive worker's compensation from Virginia while unable to work. Before choosing to run, he had long identified as "to the left of where the Democratic party [is]" but was further inspired by Bernie Sanders to explore democratic socialism.
Carter ran for the Virginia House of Delegates for the 50th district. He was endorsed by the Democratic Socialists of America (DSA), of which he has been a member since April 2017. His campaign mostly focused on issues such as single-payer healthcare and financial contributions to politicians. Jackson Miller, the incumbent Republican, distributed a mailer campaign that compared Carter to Communist rulers Vladimir Lenin, Joseph Stalin, and Mao Zedong, an act the Democratic Party of Virginia condemned as fearmongering. Miller called Carter an "anti-jobs candidate", and said his "ideas are so out of the mainstream, and so incredibly expensive". During the campaign, Carter claimed he had little support from the state's Democratic Party, saying their resources were "stretched thin" but that the DSA had "managed to knock on thousands of doors" on his behalf. On November 7, 2017, Carter won the race by nine percentage points. He was one of 15 DSA members elected in 2017.
2019 campaign
Carter ran for reelection in the 2019 election, defeating his primary opponent, Manassas city councilman Mark Wolfe, by 57.7% to 42.3% of the vote.
In the general election, Carter defeated Republican Ian Lovejoy, another Manassas city councilman, by 53.3% to 46.5% of the vote. Carter was endorsed by U.S. Senator Bernie Sanders, who campaigned with Carter in Manassas the day before the election.
2021 campaigns
On January 1, 2021, Carter announced his candidacy for the Democratic nomination for governor of Virginia. He also ran for renomination as delegate but had two challengers. Carter lost both the gubernatorial primary and the Democratic primary for delegate, the latter of which was won by attorney Michelle Lopes-Maldonado. After the losses, he announced his retirement from electoral politics and endorsed the Independent candidate Princess Blanding for governor.
Tenure
During Carter's remarks on a tax bill during the 2018 legislative session, fellow Democratic Delegate Mark Keam briefly displayed the hammer and sickle on a laptop behind Carter, an action for which he later apologized; Keam also apologized for violating Rule 57 in regard to the legislative body's decorum ("No member shall in debate use any language or gesture calculated to wound, offend, or insult another member"). Carter dismissed the affair as "clearly ... a joke, but ... in very poor taste and rooted in a lack of knowledge about the history of the political left."
Political positions
Capital punishment
Carter opposes the death penalty under all circumstances, and introduced a bill in the House of Delegates to abolish it in Virginia.
Criminal justice reform
Carter introduced legislation in the 2020 session that would prohibit Virginia prisons and jails from strip-searching minors before visitation. The bill passed unanimously in subcommittee.
Guns
Carter supports the right to keep and bear arms, and has opposed proposed assault weapons bans in Virginia as a "terrible idea". He opposes red flag laws, since he believes they result in right-wing extremists abusing the process to disarm their opposition, and has voted against prohibiting guns on the property of the Virginia State Capitol, the only Democrat to do so.
Healthcare
Carter introduced legislation in the 2020 session that would cap the monthly copay for insulin at $30. The bill passed and was signed into law at a $50 monthly copay cap.
Autism
Carter is autistic, and opposes public funding for applied behavior analysis in the treatment of autism, a controversial therapy when used to attempt to treat the condition. He has likened ABA to conversion therapy. Carter said in a statement that "There is zero difference between ABA and punishing deaf kids to make them read lips instead of signing. Which is what institutions used to do to them decades ago." Carter opposes Autism Speaks, and has called the organization a hate group.
Education
Carter was the only Democrat to vote against a bipartisan bill in 2021 to require schools to provide at least three specialized student support positions. The bill passed and was signed by Governor Ralph Northam.
Labor
Since taking office, Carter has been an outspoken advocate for workers' rights. In December 2018 he introduced House Bill 1806, which would overturn Virginia's 70-year-old right-to-work law. Of the bill, Carter said, "When workers form a union, everyone in the workplace benefits from higher wages and better conditions. ... Taft–Hartley was created specifically to allow some people to stand opposed to their coworkers' union while still reaping the rewards for free. It was intentionally designed to bankrupt unions, and I'm fighting to end it."
In late 2019, after Carter introduced or supported bills overturning restrictions on the ability of Virginia state employees to strike, he received a wave of death threats on social media, as critics mistook the exception of police officers from the bills for a case of their right to strike being removed. These threats were severe and credible enough that Carter spent the day at an undisclosed safe location on January 20, 2020, the day a gun rights rally was organized at the Virginia State Capitol. This coincided with the declaration of a state of emergency by Northam in response to potential violence at the rally.
In the 2020 session, Carter introduced legislation to address pay disparities for certain categories of workers. One bill would prevent employers from categorizing employees as "tipped employees" if state or federal regulations prohibit those employees from accepting tips. This bill targeted workers at Dulles International Airport and Reagan National Airport, who are classified as tipped employees and are thus ineligible to receive the minimum wage even though they are prohibited from receiving tips.
2020 presidential election
Carter endorsed Bernie Sanders for president in 2020, and co-chaired his Virginia campaign.
Personal life
Carter has been married and divorced three times. He has a daughter with his second wife. In October 2018, to get ahead of any potential attempts at "personal smears", Carter admitted making "homophobic, transphobic, sometimes sexist or racially insensitive" comments online as a teenager.
On July 2, 2021, Carter's longtime partner Violet Rae announced their engagement on Twitter.
While serving as a Virginia delegate, Carter also worked as a Lyft driver.
Carter is autistic, and has expressed positions belonging to the Autism Rights Movement.
On July 10, 2024, Carter began posting a series of introspective tweets about his gender, and his realization of being gender non-binary. Five days later, Carter announced he was stepping away from public life following allegations of sexual misconduct.
Electoral history
See also
List of Bernie Sanders 2020 presidential campaign endorsements
List of Democratic Socialists of America members who have held office in the United States
References
External links
1987 births
Living people
American anti-capitalists
American Unitarian Universalists
Candidates in the 2021 United States elections
Marine Corps University alumni
Members of the Democratic Socialists of America from Virginia
Members of the Virginia House of Delegates
Military personnel from North Carolina
North Carolina socialists
People from Elizabeth City, North Carolina
People from Manassas, Virginia
People in information technology
United States Marines
Virginia Democrats
Virginia independents
Virginia socialists
American politicians with disabilities
Autistic people
21st-century members of the Virginia General Assembly | Lee J. Carter | Technology | 1,887 |
34,974,169 | https://en.wikipedia.org/wiki/Oko | Oko (an old Russian word for "eye") is a Russian (previously Soviet) missile defence early warning programme consisting of satellites in Molniya and geosynchronous orbits. Oko satellites are used to identify launches of ballistic missiles by detecting their engines' exhaust plumes in infrared light, and complement other early warning facilities such as the Voronezh, Daryal and Dnepr radars. The information provided by these sensors can be used for the A-135 anti-ballistic missile system which defends Moscow. The satellites are run by the Russian Aerospace Forces, and previously the Russian Aerospace Defence Forces and Russian Space Forces. Since November 2015, it has been in the process of being replaced by the new EKS system.
History
Development of the Oko system began in the early 1970s under the design bureau headed by AI Savin, which became TsNII Kometa. The spacecraft element was designed by NPO Lavochkin. The first satellite was launched in 1972 but it was not until 1978 that the overall system became operational and 1982 before it was placed on combat duty. The system had a major malfunction in 1983 when it mistakenly identified sunlight on high altitude clouds as a missile attack. Stanislav Petrov, on duty at the new control centre in Serpukhov-15, Moscow Oblast, discounted the warning due to the newness of the system and the lack of corroboration from ground-based radar.
The vast majority of the satellites launched (86 out of 100 as of March 2012) have been the first generation US-K satellites, which operate in Molniya orbits. Seven first generation satellites were launched into geosynchronous orbits, called US-KS, starting in 1975. A decree of 3 September 1979 led to the creation of the second generation satellites, US-KMO, which had their first launch in 1991. In total, 101 satellites have been launched.
The US-K satellites were launched by Molniya-M launch vehicles with Blok 2BL upper stages from Plesetsk Cosmodrome. The US-KS and US-KMO satellites operate in geosynchronous orbits and were launched by Proton rockets with DM-2 upper stages from Baikonur.
The last US-KMO satellite (Kosmos 2479) was launched on 30 March 2012 and the last US-K satellite (Kosmos 2469) on 30 September 2010. They are due to be replaced by a new system called EKS.
Debris
The first generation Molniya-orbit Oko satellites launched between 1976 and 1983 were prone to disintegration, resulting in extensive space debris. They broke up because each carried an on-board explosive charge intended to destroy the satellite in the case of a malfunction. Control of the explosive charge was itself unreliable, and it would often detonate while the satellite was still under control, rendering it inoperative. The design was eventually changed, and the explosive charge in Kosmos 1481 was the last to explode early.
Facilities
The system has two dedicated control centres. The western centre is at Serpukhov-15, near Kurilovo outside Moscow, and the eastern centre is at Pivan-1 in the Russian Far East. The centre at Serpukhov-15 burned down in 2001, which caused the loss of contact with the satellites then in orbit.
See also
Defense Support Program
Space-Based Infrared System
EKS, the new system replacing the entire Oko program.
Notes
References
External links
Novosti Kosmonavtiki.ru: Kosmos 520 in the Lavochkin Museum
Novosti Kosmonavtiki.ru: telescopes
Novosti Kosmonavtiki.ru: infrared telescope
Novosti Kosmonavtiki.ru: antenna
Early warning systems
Missile defense
Reconnaissance satellites of Russia
Reconnaissance satellites of the Soviet Union
Early warning satellites
Military equipment introduced in the 1970s
Spacecraft that broke apart in space | Oko | Technology | 801 |
48,498,882 | https://en.wikipedia.org/wiki/Management%20interface | In computing, a management interface is a network interface dedicated to configuration and management operations. Management interfaces are typically connected to dedicated out of band management networks (either VPNs or physical networks), and non-management interfaces are not allowed to carry device or network management traffic. This greatly reduces the attack surface of the managed devices, as external attackers cannot access management functions directly, and thus improves network security.
In some cases, serial ports are used to access the command line interface directly, avoiding transport over a generic network stack completely, providing a further layer of isolation from network attacks.
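The isolation idea can be sketched in a few lines of Python. In this illustration the address, port and endpoint are assumptions made up for the example: a configuration service binds only to the address assigned to the dedicated management interface, so hosts that can reach the device only through its data-plane interfaces never see the management endpoint at all.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative assumptions: 10.255.0.5 is the address assigned to the dedicated
# management interface; the data-plane interfaces use other addresses.
MGMT_ADDR = "10.255.0.5"
MGMT_PORT = 8443


class ConfigHandler(BaseHTTPRequestHandler):
    """Stand-in for a device configuration/management endpoint."""

    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"device status: ok\n")


if __name__ == "__main__":
    # Binding to the management address (rather than 0.0.0.0) means the service
    # is simply unreachable via the non-management interfaces, which shrinks
    # the attack surface exposed to the general network.
    HTTPServer((MGMT_ADDR, MGMT_PORT), ConfigHandler).serve_forever()
```

In practice the same effect is achieved with the listen or bind address options of a device's management daemons, combined with routing or firewall rules that keep management traffic on the out-of-band network.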
See also
Management plane
Network architecture | Management interface | Technology,Engineering | 123 |
31,160,031 | https://en.wikipedia.org/wiki/Pleurotus%20purpureo-olivaceus | Pleurotus purpureo-olivaceus is a gilled fungus native to Australia and New Zealand. It is found on dead wood of Nothofagus trees. Although morphologically similar to some other Pleurotus fungi, it has been shown to be a distinct species incapable of cross-breeding and phylogenetically removed from other species of Pleurotus.
The caps of the fruit bodies are up to wide, and are dark violet to brown to olive to yellow-green, depending on light exposure. Stipes are lateral and white to yellow.
References
External links
Biological Species in Pleurotus: ISG XV: Pleurotus purpureo-olivaceus at University of Tennessee-Knoxville Mycology Lab
Pleurotaceae
Fungi described in 1964
Fungi native to Australia
Fungi of New Zealand
Carnivorous fungi
Edible fungi
Fungus species | Pleurotus purpureo-olivaceus | Biology | 178 |
63,993,718 | https://en.wikipedia.org/wiki/Kryptonia | Kryptonia is a bacterial phylum with candidate status. It is a member of the FCB group.
The phylum was first proposed in 2016 following the recovery of genomes from a large-scale effort to mine metagenomic and single-cell genomic datasets for novel bacterial diversity. Extensive analysis of 5.2 Tb of metagenomic data from around the world suggests members of Kryptonia are found exclusively in high-temperature pH-neutral geothermal springs, such as the Jinze pool (Yunnan Province, China), Dewar Creek Spring (British Columbia, Canada), and Great Boiling Spring (Nevada, USA). Due to primer mismatches, members of this phylum have been widely under-detected in 16S rRNA sequencing-based surveys of community composition.
Analysis of the first genomes recovered from this group (from four different genera) suggests that members of Kryptonia are heterotrophs with a putative capacity for iron respiration. They are inferred to be incapable of producing some key metabolic compounds on their own (e.g., biotin (also known as vitamin B7 or vitamin H) and certain amino acids), and thus may be metabolically dependent on other microbes in their environment, although the nature of such a relationship is unknown.
The name "Kryptonia" is derived from the Greek work "krupton", which means "hidden" or "secret". This is a nod to the phylum having hitherto eluded detection due to SSU rRNA primer biases.
Taxonomy
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI).
Class "Kryptonia"
Order "Kryptoniales"
Family "Kryptoniaceae"
Genus "Ca. Chrysopegocella" Eloe-Fadrosh et al. 2016 corrig. Oren et al. 2020 ["Ca. Chrysopegis" (sic)]
"Ca. Chrysopegocella kryptomonas" Eloe-Fadrosh et al. 2016 corrig. Oren et al. 2020
Genus "Ca. Kryptobacter" Eloe-Fadrosh et al. 2016
"Ca. Kryptobacter tengchongensis" Eloe-Fadrosh et al. 2016
Genus "Ca. Kryptonium" Eloe-Fadrosh et al. 2016
"Ca. Kryptonium thompsoni" Eloe-Fadrosh et al. 2016
Genus "Ca. Thermokryptus" Eloe-Fadrosh et al. 2016
"Ca. Thermokryptus mobilis" Eloe-Fadrosh et al. 2016
See also
List of bacteria genera
List of bacterial orders
References
Bacteria phyla
Bacteria by classification
Genomics | Kryptonia | Biology | 616 |
3,488,365 | https://en.wikipedia.org/wiki/Ninja%20rocks | Ninja rocks is a colloquial term for an improvised weapon or tool consisting of the extremely sharp porcelain or ceramic shards recovered from smashing or crushing the alumina insulator of a commercial spark plug. When thrown, ninja rocks are known to exploit the tensile stress present in the side windows on most cars in order to instantly shatter them, providing a quick and quiet alternative to other window-smashing methods and making ninja rocks ideal for emergencies or "smash-and-grab" auto burglaries, having been used in the latter function since at least 1995. They have no traditional association with the ninja or ninjutsu, only being named such due to their "silent but deadly" function in burglaries and a superficial resemblance to the shuriken stereotypically used as a throwing weapon by ninjas.
Functionality
Ninja rocks take advantage of the physical properties of tempered glass, disrupting surface compressive stress and causing the glass to shatter.
Tempered glass, which is used for the side windows of most vehicles, is manufactured with an extremely high surface compressive stress and high internal tensile stress. This gives it strength and durability against shocks and blunt impact. When the glass breaks (such as in a serious vehicular collision) the internal stresses present in the pane cause the entire pane to shatter into thousands of tiny pieces. This reduces the risk of laceration one might otherwise face when using 'normal' glass, and is an essential safety feature in vehicular design.
It is these physical stresses designed into tempered glass which make it vulnerable to ninja rocks. Made of shards of aluminium oxide ceramic, ninja rocks are very hard, and very sharp. When thrown at tempered glass, the ninja rocks' sharp, hard point focuses impact energy into an incredibly small area without blunting. This disrupts the glass surface compressive stress at the point of impact, subsequently releasing the internal potential energy within the stressed pane, shattering the glass.
To be effective, a ninja rock needs to be sufficiently sharp, impact the glass on that sharp point, and impact it with sufficient force. Thrown ninja rocks may often fail to shatter tempered glass if one of these conditions is not met. Ninja rocks are ineffective against windshields, as these are made of a laminated type of safety glass, and designed not to shatter.
Legal status
United States of America
California
In California, since 2003, ninja rocks have been explicitly listed as burglary tools, and their possession with intent to burglarize is a misdemeanor punishable by up to six months in county jail and/or a fine of up to $1000. Legal records do not use the phrase "ninja rocks", preferring more precise phrases such as "ceramic or porcelain spark plug chips or pieces".
Until 2003, "burglary tools" in California did not include devices to break glass. In late 2001, two important convictions including possession of ninja rocks were appealed. In People v. Gordon (2001) 90 Cal.App.4th 1409 (Review denied), Division 1 (San Diego) of the Fourth District Court of Appeal found that possession of ninja rocks was not punishable under section 466 of the penal code. That court applied the ejusdem generis rule of construction, deciding that ninja rocks were not enough alike the then-listed burglary tools. On the other hand, in In re Robert B. (2001) 93 Cal.App.4th 963, Division 3 (Orange County) contradicted this interpretation of section 466 and upheld the conviction. On February 13, 2002, the latter case was granted review by the California Supreme Court.
Two days later, the state assembly proposed in Assembly Bill 2015 to amend section 466 to include ninja rocks. The bill passed unanimously in both houses in August 2002.
Washington
One Washington trial court found that the ability of ninja rocks to quietly break tempered glass meant that their possession could be used to establish intent to commit burglary, even in a case where the ninja rocks were not actually thrown at any glass because the burglars had found an unlocked door. One defendant appealed his conviction to the Court of Appeals on the grounds that "the trial court erred by admitting an unusual burglary tool into evidence". The Court of Appeals denied this reasoning and upheld the conviction.
References
Legal
See also Assembly and Senate committee analyses of AB 2015, among other records.
Amended version of section 466 on FindLaw
San Diego Legal Updates: Burglary Tools
Motor vehicle theft
Ceramic materials
Crime in California | Ninja rocks | Engineering | 921 |
73,263,652 | https://en.wikipedia.org/wiki/Cyptotrama%20pauper | Cyptotrama pauper also known as Cyptotrama pauperum is a species of mushroom producing fungus in the family Agaricaceae.
Taxonomy
It was described in 1989 by the German mycologist Rolf Singer, who classified it as Cyptotrama pauperum; however, this is now regarded as an orthographic variant, and the species is now called Cyptotrama pauper.
Description
Cyptotrama pauper is a small red-pink mushroom with thin and fragile white or pale flesh.
Cap: 0.8-3.7 cm wide and convex to flat or with upturned edges. The surface is red but fades to pink with age; it is smooth or with finely appressed or fibrillose scales. It is not hygrophanous or viscid. Gills: Adnate, crowded and white. Stem: 1.6-3.8 cm long and 1–2.5 mm thick. The surface is pink and smooth but pale or white and silky towards the apex; the stem is of even width or tapers slightly towards the top. The interior is a hollow tube (tubulose) and the base has silky white mycelium present. Spores: Fusoid or oblong, hyaline, non-amyloid. 9.5-12 x 4.7-5.3 μm. Basidia: 37-40 x 8-8.5 μm. Four spored. Smell: Indistinct.
Habitat and distribution
The specimens studied by Singer were found growing on the fallen trunk of a Dicotyledonous tree in the tropical forest on the road between Manaus and Itacoatiara, Brazil.
Similar species
Singer states that the species is related to Cyptotrama hygrocyboides but is distinguished by the lack of pleurocystidia.
References
Agaricaceae
Fungi described in 1989
Fungi of South America
Taxa named by Rolf Singer
Fungus species | Cyptotrama pauper | Biology | 402 |
57,430,554 | https://en.wikipedia.org/wiki/Baiting%20crowd | A baiting crowd is a form of collective aggression: a situation in which a group of individuals, who may not even know each other, unify their aggression towards another individual or group.
Deindividuation
A typical situation is one in which a person is about to jump from a high building while a crowd stands below, and some people in the crowd begin to shout that the person should jump.
Leon Mann (born 12 December 1937) is an Australian psychologist. He is currently Director of the Research Leadership Program and coordinator of the University of Melbourne's Mentoring Program for Research Leaders.
In the late 20th century, Leon Mann became interested in the idea of deindividuation. More specifically, he wanted to investigate how the aggression of a group of people towards another individual or group could influence one's own behaviour. To investigate this, Mann analysed 166 cases of suicide or attempted suicide, in 21 of which a crowd was involved. He wanted to find out when collective aggression towards an individual who was about to commit suicide (e.g., encouraging them to jump off a tall building) would cause an outsider to join the baiting crowd in this process of deindividuation. In ten of these 21 cases several factors may have led to deindividuation where baiting occurred.
He found that when the crowd was small and it was daytime, people would usually not shout that the person should jump. However, when the crowd was large and it was late at night, people remained anonymous and would more readily shout that the person should jump from the building.
This is caused by deindividuation, a process whereby people lose their sense of socialized individual identity and often show unsocialized and even antisocial behavior. In this situation people often abandon their normal restraints, leading to an increase in impulsive and deviant behavior. One becomes less an individual and more part of the mass.
Deindividuation theory predicts that under the cover of anonymity, crowd members will be more aggressive than when identifiable. Emergent norm theory predicts that the most aggressive behavior will occur when an aggressive norm prevails in a crowd in which participants are identifiable to each other.
Influencing factors
In his article "The Baiting Crowd in Episodes of Threatened Suicide", Leon Mann classified several factors that influence the baiting crowd.
Four anonymity factors that will reinforce the baiting crowd
Crowd size
Leon Mann's research shows that a sense of anonymity increases sharply as a group grows, resulting in a reduced self-awareness. Moreover, in larger groups the chances increase of a person in the crowd encouraging the victim's act. This encouragement could form a model for the rest of the group to follow.
Cover of darkness
The degree of lighting appears to be a factor that also plays a part in some of the cases. Because little lighting strengthens anonymity, most incidents occur when it is dark outside.
Physical distance between victim and crowd
An increase in the physical distance between the observer and the victim increases the dehumanization of the observer in relation to the victim. Because the victim is too far away from the observer, the victim can no longer be recognized or heard, which can result in him being seen less as a person.
Duration of episode
When a crowd has to wait a long time before something happens, it can become bored or annoyed. A possible explanation is that the crowd gets the feeling that the situation is a big act, which increases frustration. They may wonder whether the person really wants to commit suicide or is just seeking publicity.
Furthermore, he identified three other factors which may influence the baiting crowd
Aversive temperature
Most cases of crowd baiting occur in the hottest months of the year; an increase in temperature causes an increase in frustration among the crowd. Those frustrations lead to less patience and a rise in deindividuation.
The city
In large cities baiting appears to be more common than in smaller cities or villages. This may be because a large city affords more anonymity. Furthermore, there are often many tall buildings that are easily accessible to the person.
Dehumanisation of the victim
A crowd can see the victim as a crazy person. They do not see someone with the need to commit suicide as a fellow human being, but view them as inferior. This feeling can be enhanced when a form of discrimination takes place.
Critique
Leon Mann states that through the presence of a crowd and the loss of individuality (deindividuation) crowd baiting is taking place. He supports this phenomenon by relying on his 21 single case studies using the archival research method. Although Mann discovered the phenomenon, he himself states that “Because of the sampling method, the small number of cases, and the nature of the data (journalistic accounts), no strong conclusions can be drawn, but leads for further research can be adduced”. Many more studies will be required to discover the different facets of crowd baiting such as “Who are the baiters and their individual circumstances and characteristics?” and “What are the conditions for crowd members to encourage the victim to jump?”
References
Aggression
Problem behavior
Symptoms and signs of mental disorders | Baiting crowd | Biology | 1,073 |
31,548,988 | https://en.wikipedia.org/wiki/LogMAR%20chart | A logMAR chart (Logarithm of the Minimum Angle of Resolution) is a chart consisting of rows of letters that is used by ophthalmologists, orthoptists, optometrists, and vision scientists to estimate visual acuity. The chart was developed at the National Vision Research Institute of Australia in 1976, and is designed to enable a more accurate estimate of acuity than do other charts (e.g., the Snellen chart). For this reason, the LogMAR chart is recommended, particularly in a research setting.
When using a LogMAR chart, visual acuity is scored with reference to the logarithm of the minimum angle of resolution, as the chart's name suggests. An observer who can resolve details as small as 1 minute of visual angle scores LogMAR 0, since the base-10 logarithm of 1 is 0; an observer who can resolve details as small as 2 minutes of visual angle (i.e., reduced acuity) scores LogMAR 0.3, since the base-10 logarithm of 2 is approximately 0.3; and so on.
Specific types of logMAR chart include the original Bailey-Lovie chart, as well as the ETDRS charts, developed for the Early Treatment Diabetic Retinopathy Study.
History
The chart was designed by Ian Bailey and Jan E. Lovie-Kitchin at the National Vision Research Institute of Australia. They described their motivation for designing the LogMAR chart as follows: "We have designed a series of near vision charts in which the typeface, size progression, size range, number of words per row and spacings were chosen in an endeavour to achieve a standardization of the test task."
Relation to the Snellen chart
The Snellen chart, which dates back to 1862, is also commonly used to estimate visual acuity. A Snellen score of 6/6 (20/20), indicating that an observer can resolve details as small as 1 minute of visual angle, corresponds to a LogMAR of 0 (since the base-10 logarithm of 1 is 0); a Snellen score of 6/12 (20/40), indicating an observer can resolve details as small as 2 minutes of visual angle, corresponds to a LogMAR of 0.3 (since the base-10 logarithm of 2 is approximately 0.3), and so on.
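Since the minimum angle of resolution in arcminutes is simply the reciprocal of the Snellen fraction, the conversion can be written in one line; the following Python sketch (function name and example values chosen for illustration) reproduces the correspondences above.

```python
import math

def snellen_to_logmar(test_distance: float, letter_distance: float) -> float:
    """Convert a Snellen fraction (e.g. 6/12) to a LogMAR value.

    The minimum angle of resolution in arcminutes is the reciprocal of the
    Snellen fraction, so LogMAR = log10(letter_distance / test_distance).
    """
    return math.log10(letter_distance / test_distance)

print(round(snellen_to_logmar(6, 6), 2))   # 6/6  (20/20) -> 0.0
print(round(snellen_to_logmar(6, 12), 2))  # 6/12 (20/40) -> 0.3
print(round(snellen_to_logmar(6, 60), 2))  # 6/60 (20/200) -> 1.0
```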
Recording visual acuity using the LogMAR chart
Each letter has a score value of 0.02 log units. Since there are 5 letters per line, the total score for a line on the LogMAR chart represents a change of 0.1 log units. The formula used in calculating the score is:
LogMAR VA = 0.1 + LogMAR value of the best line read − 0.02 × number of optotypes read
Given that each line has 5 optotypes, the equivalent formula is:
LogMAR VA = LogMAR value of the best line read + 0.02 × number of optotypes missed
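Assuming the standard layout of 5 letters per line and 0.02 log units per letter, the second form of the scoring rule translates directly into code; the function name and the worked example are illustrative.

```python
def logmar_score(best_line_logmar: float, optotypes_missed: int) -> float:
    """LogMAR visual acuity from letter-by-letter scoring.

    Each of the 5 letters on a line is worth 0.02 log units, so the score is
    the LogMAR value of the best line read plus 0.02 per letter missed.
    """
    return best_line_logmar + 0.02 * optotypes_missed


# Example: the best (smallest) line read has a LogMAR value of 0.1,
# and 2 of its 5 letters were misread.
print(round(logmar_score(0.1, 2), 2))  # 0.14
```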
Advantages of LogMAR over other charts
The LogMAR chart is designed to enable more accurate estimates of acuity as compared to other acuity charts (e.g., the Snellen chart). Each line of the LogMAR chart comprises the same number of test letters (effectively standardizing the test across letter size); the letter size change from one line to the next is a constant ratio, as is the spacing between lines (making the chart easy to use at nonstandard viewing distances). In ETDRS charts, Sloan letters are used (Sloan letters are perfectly square and approximately equally legible one from another), while the Bailey–Lovie chart used rectangular (5:4) letters based on the Transport typeface, as set out in British Standard 4274:1968.
Zero LogMAR indicates standard vision, positive values indicate poorer vision, and negative values indicate better-than-standard vision. This is less intuitive than other VA notations because LogMAR is actually a notation of vision loss.
Low vision and blindness definition with LogMAR
The World Health Organization established criteria for low vision using the LogMAR scale. Low vision is defined as a best-corrected visual acuity worse than 0.5 LogMAR but equal or better than 1.3 LogMAR in the better eye. Blindness is defined as a best-corrected visual acuity worse than 1.3 LogMAR.
References
External links
National Vision Research Institute of Australia
Logarithmic SLOAN Visual Acuity Test
Australian inventions
Charts
Diagnostic ophthalmology
Optotypes
1976 introductions | LogMAR chart | Mathematics | 945 |
48,573,622 | https://en.wikipedia.org/wiki/NGC%204388 | NGC 4388 is an active spiral galaxy in the equatorial constellation of Virgo. It was discovered on April 17, 1784 by Wilhelm Herschel. This galaxy is located at a distance of 57 million light years and is receding with a radial velocity of 2,524 km/s. It is one of the brightest galaxies of the Virgo Cluster due to its luminous nucleus. NGC 4388 is located 1.3° to the west of the cluster center, which translates to a projected distance of .
The NGC 4388 galaxy has been assigned a morphological class of SA(s)b, which indicates it is a spiral with no central bar (SA) or inner ring structure (s), and has moderately-wound spiral arms (b). It is inclined at an angle of 79° to the line of sight from the Earth and thus is being viewed from nearly edge-on. The major axis of the elliptical profile is aligned with a position angle of 92°.
The interstellar medium of the galaxy has recently undergone a stripping event due to ram pressure, causing star formation to steeply decline some Myr ago. The galaxy may have passed close to the cluster center around 200 Myr ago, which led to the loss of much of its neutral hydrogen from interaction with the inter-cluster medium.
This is a classic Type 2 Seyfert galaxy where the emission from the active galactic nucleus is being concealed by a torus of obscuring gas and dust. The supermassive black hole at the core has a mass of , which has a hot corona with a temperature energy of that is producing X-ray emission. There is a strong nuclear outflow to the north and south that extends out as far as from the core. These flows have a mean velocity of ·s−1.
NGC 4388 has a large extended emission-line region (EELR) with a length of around 35 kpc, stretching beyond the galaxy. This emission cloud is created when the supermassive black hole is active and its radiation ionizes gas from its galaxy. In the case of NGC 4388 it is suggested that the gas was first stripped from the galaxy, either via tidal interaction or via ram pressure, and later ionized. A small H II region, known as the GAFO Region, ionised by a star cluster about 3.3 million years old, is located within this region.
One supernova has been observed in NGC 4388: SN 2023fyq (type Ib-pec, mag. 19.5 at discovery).
Gallery
References
External links
Unbarred spiral galaxies
Seyfert galaxies
Virgo Cluster
4388
Virgo (constellation)
040581 | NGC 4388 | Astronomy | 548 |
22,981,287 | https://en.wikipedia.org/wiki/Macroscopic%20quantum%20self-trapping | In quantum mechanics, macroscopic quantum self-trapping is when two Bose–Einstein condensates weakly linked by an energy barrier which particles can tunnel through nevertheless end up with a higher average number of bosons on one side of the junction than the other. The junction of two Bose–Einstein condensates is largely analogous to a Josephson junction, which is made of two superconductors linked by a non-conducting barrier. However, superconducting Josephson junctions do not display macroscopic quantum self-trapping, and thus self-trapping is a distinguishing feature of Bose–Einstein condensate junctions. Self-trapping occurs when the self-interaction energy between the bosons is larger than a critical value called .
It was first described in 1997. It has been observed in Bose–Einsten condensates of exciton-polaritons, and predicted for a condensate of magnons.
While the tunneling of a particle through classically forbidden barriers can be described by the particle's wave function, this merely gives the probability of tunneling. Although various factors can increase or decrease the probability of tunneling, one can not be certain whether or not tunneling will occur.
When two condensates are placed in a double potential well and the phase and population differences are such that the system is in equilibrium, the population difference will remain fixed. A naïve conclusion is that there is no tunneling at all, and the bosons are truly "trapped" on one side of the junction. However, macroscopic quantum self-trapping does not rule out quantum tunneling — rather, only the possibility of observing tunneling is ruled out. In the event that a particle tunnels through the barrier, another particle tunnels in the opposite direction. Because the identity of individual particles is lost in that case, no tunneling can be observed, and the system is considered to remain at rest.
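The phenomenon is commonly illustrated with the two-mode (boson Josephson junction) model, often attributed to the 1997 analysis mentioned above, in which the fractional population imbalance z and relative phase phi obey dz/dt = -sqrt(1-z^2) sin(phi) and dphi/dt = Lambda z + z cos(phi)/sqrt(1-z^2), where Lambda is the scaled self-interaction energy. The Python sketch below (initial conditions and parameter values are illustrative, and the critical-coupling formula is the standard two-mode result rather than anything stated in this article) integrates these equations and shows that for weak interaction the imbalance oscillates through zero, while above the critical coupling it never changes sign: the population stays "trapped" on one side.

```python
import math

def rhs(z, phi, lam):
    # Two-mode (boson Josephson junction) equations for population imbalance z
    # and relative phase phi; lam is the scaled self-interaction energy.
    root = math.sqrt(max(1.0 - z * z, 1e-12))
    return -root * math.sin(phi), lam * z + z * math.cos(phi) / root

def min_imbalance(lam, z0=0.6, phi0=0.0, dt=0.001, steps=60000):
    """Integrate with classical Runge-Kutta and return the minimum of z(t)."""
    z, phi, zmin = z0, phi0, z0
    for _ in range(steps):
        k1 = rhs(z, phi, lam)
        k2 = rhs(z + 0.5 * dt * k1[0], phi + 0.5 * dt * k1[1], lam)
        k3 = rhs(z + 0.5 * dt * k2[0], phi + 0.5 * dt * k2[1], lam)
        k4 = rhs(z + dt * k3[0], phi + dt * k3[1], lam)
        z += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        phi += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        zmin = min(zmin, z)
    return zmin

z0, phi0 = 0.6, 0.0
# Standard two-mode critical coupling for these initial conditions.
lam_c = (1.0 + math.sqrt(1.0 - z0**2) * math.cos(phi0)) / (z0**2 / 2.0)
print("critical coupling ~", round(lam_c, 2))                     # 10.0
print("weak coupling,   min z:", round(min_imbalance(1.0), 2))    # negative: no trapping
print("strong coupling, min z:", round(min_imbalance(15.0), 2))   # positive: self-trapped
```

Above the critical coupling the relative phase runs rather than oscillates and the time-averaged imbalance is nonzero, even though individual particles continue to tunnel in both directions.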
See also
Double-well potential
Gross–Pitaevskii equation
References
Quantum mechanics
Bose–Einstein condensates | Macroscopic quantum self-trapping | Physics,Chemistry,Materials_science | 414 |
36,740,294 | https://en.wikipedia.org/wiki/Gamma%20Crateris | Gamma Crateris is a binary star system, divisible with a small amateur telescope, and located at the center of the southern constellation of Crater. It is visible to the naked eye with an apparent visual magnitude of 4.06. With an annual parallax shift of 39.62 mas as seen from Earth, this star is located 82.3 light years from the Sun. Based upon the motion of this system through space, it is a potential member of the Castor Moving Group.
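The quoted distance follows directly from the parallax, since the distance in parsecs is the reciprocal of the parallax in arcseconds; a quick check in Python (a conversion factor of about 3.2616 light years per parsec is assumed):

```python
parallax_mas = 39.62                 # annual parallax in milliarcseconds
distance_pc = 1000.0 / parallax_mas  # distance in parsecs = 1 / parallax in arcsec
distance_ly = distance_pc * 3.2616   # light years per parsec

print(round(distance_pc, 1), "pc")   # ~25.2 pc
print(round(distance_ly, 1), "ly")   # ~82.3 ly, matching the quoted distance
```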
The star was confirmed by Gabriel Cristian Neagu and Jan Ovidiu Tercu as a Delta Scuti (DSCT) type variable. The variability has an amplitude of 0.001 magnitudes and a main period of 0.03647 d (52.52 min). The variability was discovered during a data-mining activity carried out with the goal of developing students' investigative competences.
The primary, component A, is a white-hued A-type main sequence star of apparent visual magnitude 4.08 with a stellar classification of A9 V. The star has an estimated 1.81 times the mass of the Sun and around 1.3 times the Sun's radius. It is about 757 million years old and is spinning with a projected rotational velocity of 144 km/s. The primary is radiating 18.8 times the solar luminosity from its outer atmosphere at an effective temperature of 8,020 K. Based upon the detection of an infrared excess, the star may host an orbiting debris disk. However, this finding remains in doubt.
The companion, component B, is a magnitude 9.6 star with an estimated mass 75% that of the Sun. As of 2010, the companion was located at an angular separation of 4.98 arc seconds along a position angle of 93.1° relative to the primary. This is equivalent to a projected separation of 125.6 AU. This star may be the source of the X-ray emission detected coming from this system.
References
External links
A-type main-sequence stars
Binary stars
Crater (constellation)
Crateris, Gamma
Durchmusterung objects
Crateris, 15
099211
055705
4405 | Gamma Crateris | Astronomy | 443 |
46,211,250 | https://en.wikipedia.org/wiki/Human%E2%80%93lion%20conflict | Human–lion conflict refers to the pattern of problematic interactions between native people and lions. Conflict with humans is a major contributor of the decline in lion populations in Africa. Habitat loss and fragmentation due to conversion of land for agriculture has forced lions to live in closer proximity to human settlements. As a result, conflict is often characterized by lions preying upon livestock, known as livestock depredation. When depredation events take place, farmers suffer financial losses and lions face threats of retaliatory killing.
Causes of conflict
The main cause of conflict is habitat loss. 83% of the African lion's range has been reduced and what remains is increasingly fragmented. Lions, as large carnivores, rely on large connected expanses of land. The conversion of their habitat into agricultural land prevents them from dispersing and can limit the availability of natural prey. Lions are therefore roaming closer to farms than before and are at a higher risk of preying on livestock.
Ecological variables
There are many ecological variables that can affect likelihood of depredation. Factors such as farms' distance from water sources, protected areas, elevation and surrounding vegetative cover may all play a role. Some research has shown that depredation decreases with distance from protected areas. This could be because access to nearby conservation areas provides lions with a refuge when coming into contact with humans. Farms that are located close to water sources and at a low elevation may be especially vulnerable to conflict. The effect of vegetative cover remains unclear. Dense vegetative cover has been associated with a higher rate of depredation yet has also been shown to reduce depredation as it allows predators to hide from humans. Farm and livestock management can also affect chances of depredation. Corralling livestock at night as well as providing guards to monitor lion movements to prevent and deter predation can limit losses.
Financial losses
Depredation events lead to financial losses for farmers who rely on livestock as a source of income. In the North West province of South Africa, around US$375,797 was lost as a result of game and livestock losses caused by depredation. Lions are not the only predators involved; hyenas, leopards, and wild dogs are responsible for depredation events as well. However, lions typically attack cattle, which incur higher financial losses than sheep and goats (hunted by hyenas and leopards).
Compensation
In order to lessen these financial losses, some regions offer financial compensation to affected farmers. However, these programs are not always effective. Major criticisms concern the response times of programs, which are said to take too long or to never come at all. Additionally, farmers do not always receive sufficient financial restitution. For example, as of 2009, Botswana's state-funded compensation program only compensated farmers for 80% of the value they lost. It is common for farmers to not even report livestock losses, due in part to dissatisfaction with response time and amount. However, the Predator Compensation Fund (PCF) in Maasailand, Kenya has reduced retaliatory killings following depredation events by 73%, illustrating that, when done correctly, compensation programs can be effective. Farmers who have received compensation have also reported a lower likelihood of killing a suspected lion than those who have not received any. Regardless of their efficacy, compensation programs are reactionary rather than preventative: they only seek to mitigate farmers' losses after the event and do not address the underlying causes of conflict.
Retaliatory killing
Retaliatory killing is the hunting of a suspected predator after a depredation event. While a threat to all predators, lions are killed disproportionately to the number of losses they are responsible for, as opposed to hyenas and leopards. One reason lions are killed in retaliation more than other carnivores is their propensity to kill cattle rather than sheep and goats. Because cattle are of more financial value to farmers than sheep and goats, the desire to retaliate can be greater. Lions also hunt during the night, when most attacks take place, are easier to track, and are more likely to defend a carcass, all of which make them more vulnerable to being killed by humans.
Likelihood of retaliation
Social and economic differences also affect the motivation to hunt lions in reaction to livestock losses. People who have lost a higher proportion of their livestock to depredation (often those with smaller farms to begin with), as well as those owning livestock for the purpose of sale as opposed to traditional or subsistence reasons, have been found to report a higher willingness to retaliate. Both higher proportional losses and keeping livestock for sale increase the value of cattle and therefore increase the financial incentive to kill suspected predators. Likelihood to retaliate has also been shown to be influenced by social factors such as religion and culture.
Reducing conflict
Due to the complex social, economic, and ecological aspects of human-lion conflict, it is recommended that mitigation strategies be adaptable and situation-specific. Specific actions such as providing guards to monitor predators and protect cattle, corralling livestock at night, providing compensation and restoring natural prey densities may reduce conflict in some areas. Focusing on unique conservation programs that take into account factors such as culture and religion, type of livestock owned, and reason for owning livestock may also be helpful.
See also
Human–wildlife conflict
Lion lights
References
Wildlife conservation
Human–wildlife conflict
Felidae attacks
Lions and humans | Human–lion conflict | Biology | 1,096 |
11,471,245 | https://en.wikipedia.org/wiki/Cryptosporella%20umbrina | Cryptosporella umbrina is a plant pathogen.
References
External links
Index Fungorum
USDA ARS Fungal Database
Fungal plant pathogens and diseases
Gnomoniaceae
Fungi described in 1918
Fungus species | Cryptosporella umbrina | Biology | 42 |
11,174,725 | https://en.wikipedia.org/wiki/Volumetric%20concrete%20mixer | A volumetric concrete mixer (also known as volumetric mobile mixer) is a concrete mixer mounted on a truck or trailer that contains separate compartments for sand, stone, cement and water.
On arrival at the job site, the machine mixes the materials to produce the exact amount of concrete needed.
How It Works
Volumetric mixers batch, measure, mix and dispense all from one unit. Volumetric concrete mixers can produce exactly the amount of concrete needed when it is needed at any time. Some concrete suppliers offer general purpose concrete batched in a volumetric mixer as a practical alternative to ready-mix if quantities and schedules are not fully known, to eliminate waste and prevent premature stiffening of the mix.
The volumetric mixer varies in capacity up to 12 m³ and has a production rate of around 60 m³ per hour, depending on the mix design. Many volumetric concrete mixer manufacturers have improved the mixer's capacity and design and added features including color additives, multiple admixtures, fiber systems, and the ability to apply gunite or shotcrete.
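As a rough illustration of those throughput figures, the sketch below estimates how long a pour takes at a steady production rate; the 12 m³ capacity and 60 m³/hour rate are simply the approximate values quoted above, not the specification of any particular machine.

```python
# Illustrative only: real output depends on the mix design and the machine.

def pour_time_hours(volume_m3: float, rate_m3_per_hour: float = 60.0) -> float:
    """Hours needed to produce a given volume at a steady production rate."""
    return volume_m3 / rate_m3_per_hour

# Dispensing a full 12 m^3 load at ~60 m^3/hour takes roughly 12 minutes.
print(f"{pour_time_hours(12.0) * 60:.0f} minutes")
```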
The advantages of a volumetric mixer include:
Reduces waste and associated costs by providing exact quantities.
No risk of premature stiffening of concrete if delays are encountered.
Permits delivery of smaller quantities of concrete.
Night time works do not require the re-opening of a concrete batch plant.
Flexibility to alternate between multiple concrete mixes as required for the application.
Ability to carry out continuous concrete pours.
Sustainability
Less waste - exact amount of concrete is poured
Less water - volumetric concrete mixers use on average 8-10 gallons of water to clean out versus 200 gallons for a traditional barrel truck
Less emissions - the truck does not need to idle while waiting to pour concrete
History
In the mid-1960s, companies such as Cemen Tech, Reimer Mixers (manufactured under the name ProAll circa 2016), and Zimmerman began building their own versions of volumetric concrete mixers.
In 1999, equipment manufacturers created a trade association, Volumetric Mixer Manufacturers Bureau (VMMB). It had six charter members: Cemen Tech, Inc., Zimmerman Ind, Inc., ProAll Reimer, Bay-Lynx, Custom-Crete, and Elkin. Currently its members include (in alphabetical order): Bay-Lynx, Cemen Tech, Holcombe Mixers, ProAll Reimer Mixers, and Zimmerman Ind, Inc.
References
External links
National Ready Mixed Concrete Association Bureaus
Engineering vehicles | Volumetric concrete mixer | Engineering | 501 |
1,138,327 | https://en.wikipedia.org/wiki/John%20Cornforth | Sir John Warcup Cornforth Jr. (7 September 1917 – 8 December 2013) was an Australian–British chemist who won the Nobel Prize in Chemistry in 1975 for his work on the stereochemistry of enzyme-catalysed reactions, becoming the only Nobel laureate born in New South Wales.
Cornforth investigated enzymes that catalyse changes in organic compounds, the substrates, by taking the place of hydrogen atoms in a substrate's chains and rings. In his syntheses and descriptions of the structure of various terpenes, olefins, and steroids, Cornforth determined specifically which cluster of hydrogen atoms in a substrate were replaced by an enzyme to effect a given change in the substrate, allowing him to detail the biosynthesis of cholesterol. For this work, he won a share of the Nobel Prize in Chemistry in 1975, alongside co-recipient Vladimir Prelog, and was knighted in 1977.
Early life and family
Born in Sydney, Cornforth was the second of four children of the English-born, Oxford-educated schoolmaster John Warcup Cornforth and Hilda Eipper (1887–1969), a granddaughter of the pioneering missionary and Presbyterian minister Christopher Eipper. Before her marriage, Eipper had been a maternity nurse.
Cornforth was raised in Sydney as well as Armidale, in the north of New South Wales, where he undertook primary school education.
At about 10 years old, Cornforth noticed signs of deafness, which led to a diagnosis of otosclerosis, a disease of the middle ear that causes progressive hearing loss. This left him completely deaf by the age of 20, but it also fatefully steered his career away from law, his originally intended field of study, and towards chemistry. In an interview with Sir Harry Kroto for the Vega Science Trust, Cornforth explained: "I had to find something in which the loss of hearing would not be too severe a handicap...I chose chemistry...The most liberating thing was the realization that the literature wasn't entirely correct. It gave me quite a shock at first, and then a thrill. Because I can set this right! And always, and ever since, I've relied upon the primary literature exclusively."
Education
Cornforth was educated at Sydney Boys' High School, where he excelled academically, passed tests in English, mathematics, science, French, Greek, and Latin, and was inspired by his chemistry teacher, Leonard ("Len") Basser, to change his career directions from law to chemistry. Cornforth graduated as the dux of the class of 1933 at Sydney Boys' High School, at the age of 16.
In 1934, Cornforth matriculated at the University of Sydney, where he studied organic chemistry in the School of Chemistry and graduated with a Bachelor of Science with First-Class Honours and the University Medal in 1937. During his studies, his hearing became progressively worse, making it difficult to follow lectures. At the time, he could not use hearing aids because the sound became distorted, and he did not rely significantly on lip reading.
While studying at the University of Sydney, Cornforth met his future wife, fellow chemist and scientific collaborator, Rita Harradence. Harradence was a graduate of St George Girls High School and a distinguished academic achiever who had topped the state in Chemistry in the New South Wales Leaving Certificate Examination. Harradence graduated with a Bachelor of Science with First-Class Honours and the University Medal in Organic Chemistry in 1936, a year ahead of Cornforth. Harradence also graduated with an MSc in 1937, writing a master's thesis titled "Attempts to synthesise the pyridine analogue of vitamin B1".
In 1939, Cornforth and Harradence, independently of each other, each won one of two Science Research Scholarships (the 1851 Research Fellowship) from the Royal Commission for the Exhibition of 1851, tenable overseas for two years. At the University of Oxford, Harradence was a member of Somerville College while Cornforth was at St. Catherine's College and they worked with Sir Robert Robinson, with whom they collaborated for 14 years. During his time at Oxford, Cornforth found working for and with Robinson stimulating, and the two often deliberated to no end until one had a cogent case against the other's counterargument. In 1941, Cornforth and Harradence both graduated with a D.Phil. in Organic Chemistry. At the time, there were no institutions or facilities at which a PhD in chemistry could be done in Australia.
Career
After his arrival at Oxford and during World War II, Cornforth significantly influenced the work on penicillin, particularly in purifying and concentrating it. Penicillin is usually very unstable in its crude form; as a consequence of this, researchers at the time were building upon Howard Florey's work on the drug. In 1940, Cornforth and other chemists measured the yield of penicillin in arbitrary units to understand the conditions that favoured penicillin production and activity, and he contributed to the writing of The Chemistry of Penicillin.
In 1946, the Cornforths, who had by now married, left Oxford and joined the Medical Research Council (MRC), working at the National Institute for Medical Research (NIMR), where they continued on earlier work in synthesising sterols, including cholesterol. The Cornforths' collaboration with Robinson continued and flourished. In 1951, they completed, simultaneously with Robert Burns Woodward, the first total synthesis of the non-aromatic steroids. At the NIMR, Cornforth collaborated with numerous biological scientists, including George Popják, with whom he shared an interest in cholesterol. Together, they received the Davy Medal in 1968 in recognition of their distinguished joint work on the elucidation of the biosynthetic pathway to polyisoprenoids and steroids.
While working at the MRC, Cornforth was appointed a professor at the University of Warwick and was employed there from 1965 to 1971.
In 1975, Cornforth was awarded a share of the Nobel Prize in Chemistry, alongside Vladimir Prelog. In his acceptance speech, Cornforth said:
Also in 1975, he moved to the University of Sussex in Brighton as a Royal Society Research Professor. Cornforth remained there as a professor and was active in research until his death.
Personal life
In 1941, the year in which they graduated from the University of Oxford, Cornforth married Rita Harriet Harradence (b. 1915), with whom he had one son, John, and two daughters, Brenda and Philippa. Cornforth had met Harradence after she had broken a Claisen flask in their second year at the University of Sydney; Cornforth, with his expertise of glassblowing and the use of a blowpipe, mended the break. Rita Cornforth died on 6 November 2012, at home with her family around her, following a long illness.
On an important author or paper that was integral to his success, Cornforth stated that he was particularly impressed by the works of German chemist Hermann Emil Fischer.
Cornforth died in Sussex on 8 December 2013, at the age of 96. He was survived by his three children and four grandchildren. He was a sceptic and an atheist.
Honours and awards
Cornforth was named the Australian of the Year in 1975, jointly with Maj. Gen. Alan Stretton. In 1977, Cornforth was recognised by his alma mater, the University of Sydney, with the award of an honorary Doctor of Science. Cornforth's other awards and recognitions follow:
Davy Medal (1968; jointly with George Joseph Popják)
Elected a Fellow of the Royal Society (FRS) in 1953
Commander of the Order of the British Empire (CBE; 1972)
Nobel Prize in Chemistry (1975)
Royal Medal (1976)
Knight Bachelor (1977)
Corresponding Fellow of the Australian Academy of Science (1977)
Foreign member of the Royal Netherlands Academy of Arts and Sciences (since 1978)
Copley Medal (1982)
Companion of the Order of Australia (AC; 1991)
Centenary Medal (2001)
Cornforth's certificate of election for the Royal Society reads:
Popular culture
Cornforth was the focus of a skit on an episode of Comedy Inc. in which a fictional Who Wants to Be a Millionaire? contestant (played by Genevieve Morris) is asked, for the million-dollar question, "Which Australian scientist won the Nobel Prize for Chemistry in 1975?" The contestant gleefully claims to be Cornforth's second cousin (despite being nearly 50 years his junior) and knows he is the answer, confidently rattling off highly specific and esoteric facts about his life and achievements, while the host (a satirical portrayal of Eddie McGuire) stubbornly stalls her for dramatic effect, asking her for several minutes, to an absurd degree, whether she would like more time to think.
On September 7, 2017, Google celebrated his 100th birthday with a Google Doodle.
The Royal Australian Chemical Institute (RACI) honours Cornforth by naming its prize for the best PhD thesis in chemical science completed at an Australian university the Cornforth Medal.
References
External links
1917 births
2013 deaths
Academics of the University of Sussex
Academics of the University of Warwick
Alumni of St Catherine's College, Oxford
Australian chemists
Organic chemists
Australian atheists
Australian Knights Bachelor
Australian Nobel laureates
Australian of the Year Award winners
Australian people of English descent
Australian people of German descent
Australian Commanders of the Order of the British Empire
Companions of the Order of Australia
Fellows of the Australian Academy of Science
Foreign associates of the National Academy of Sciences
Australian fellows of the Royal Society
Members of the Royal Netherlands Academy of Arts and Sciences
Nobel laureates in Chemistry
People educated at Sydney Boys High School
Scientists from Sydney
People from Armidale
Scientists from Oxford
People from Sussex
People from Warwick
Recipients of the Copley Medal
Royal Medal winners
University of Sydney alumni
Australian deaf people
Australian emigrants to the United Kingdom
British scientists with disabilities | John Cornforth | Chemistry | 2,059 |
75,937,036 | https://en.wikipedia.org/wiki/Berkelium%28II%29%20oxide | Berkelium(II) oxide is a binary inorganic compound of berkelium and oxygen with the chemical formula BkO.
Physical properties
The compound is described as a brittle gray solid.
References
Oxides
Berkelium compounds
Oxygen compounds | Berkelium(II) oxide | Chemistry | 47 |
58,528,288 | https://en.wikipedia.org/wiki/Poincar%C3%A9%20Medal | The Henri Poincaré Medal (Médaille Henri Poincaré) is a mathematics award from the Institut de France, Académie des sciences, Fondation Henri Poincaré. The medal recognizes an eminent mathematician and is awarded only on exceptional occasions. It was established in 1914 and was discontinued in 1997 in favor of the Grande Médaille.
It should be distinguished from the Henri Poincaré Prize of the International Association of Mathematical Physics, and from various other medals featuring Poincaré's name and likeness.
Recipients
1954 Georges Valiron
1974 Pierre Deligne
1992 John G. Thompson
See also
List of things named after Henri Poincaré
List of mathematics awards
References
Medals
French awards
Mathematics awards
1914 establishments in France
Awards established in 1914
Awards disestablished in 1997
1997 disestablishments in France | Poincaré Medal | Technology | 165 |
17,719,366 | https://en.wikipedia.org/wiki/OGLE-TR-211b | OGLE-TR-211b is a transiting planet in the constellation Carina. Its radius is about 36% larger than Jupiter's and its mass about 3% greater, making it what is considered an "inflated hot Jupiter". The planet orbits its host star every 3.7 days, at about the same distance at which 51 Pegasi b orbits 51 Pegasi.
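A back-of-envelope calculation shows why the planet is described as "inflated": spreading roughly 1.03 Jupiter masses through a radius about 1.36 times Jupiter's gives a mean density well under half of Jupiter's. The sketch below is illustrative only; it uses just the two ratios quoted above together with Jupiter's reference mean density of about 1.33 g/cm³.

```python
# Mean density scales as M / R^3, so the two ratios alone fix the density ratio.
JUPITER_MEAN_DENSITY_G_CM3 = 1.33  # standard reference value

def density_relative_to_jupiter(mass_ratio: float, radius_ratio: float) -> float:
    """Mean density relative to Jupiter for given mass and radius ratios."""
    return mass_ratio / radius_ratio ** 3

ratio = density_relative_to_jupiter(1.03, 1.36)
print(f"~{ratio:.2f} of Jupiter's mean density, "
      f"or about {ratio * JUPITER_MEAN_DENSITY_G_CM3:.2f} g/cm^3")
# -> ~0.41 of Jupiter's mean density, or about 0.54 g/cm^3
```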
See also
OGLE-TR-182b
Optical Gravitational Lensing Experiment OGLE
References
External links
Hot Jupiters
Transiting exoplanets
Exoplanets discovered in 2007
Giant planets
Carina (constellation) | OGLE-TR-211b | Astronomy | 112 |
23,571,585 | https://en.wikipedia.org/wiki/C24H34O5 | The molecular formula C24H34O5 (molar mass: 402.52 g/mol) may refer to:
Bufagin, a toxic steroid obtained from toad's milk
Cortexolone 17α-propionate
Dehydrocholic acid
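As a quick arithmetic check of the quoted molar mass, the sketch below sums rounded standard atomic weights over the formula; the result agrees with the figure above to within rounding of the atomic-weight values used.

```python
# Rounded standard atomic weights; slightly different tabulated values shift
# the total only in the second decimal place.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}
FORMULA = {"C": 24, "H": 34, "O": 5}  # C24H34O5

molar_mass = sum(ATOMIC_WEIGHTS[element] * count for element, count in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # ~402.53 g/mol, vs. the 402.52 g/mol quoted above
```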
Molecular formulas | C24H34O5 | Physics,Chemistry | 73 |
24,321,121 | https://en.wikipedia.org/wiki/Geopiety | Geopiety is "the belief and worship of powers behind nature or the human environment". The term was coined by the American geographer John Kirtland Wright as a shorthand for geographical piety.
The term "geopiety" comes from a combination of the Greek root geo, for earth, and the Latin root "pietas". As Wright explained when coining the term, geopiety is meant to refer to "emotional piety aroused by awareness of terrestrial diversity of the kind of which geography is also a form of awareness".
One example of geopiety can be found in the works of American preacher Jonathan Edwards:
See also
Religion and geography
References
Bibliography
Human geography | Geopiety | Environmental_science | 136 |