id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
73,020,236 | https://en.wikipedia.org/wiki/The%20Clerk%27s%20House | The Clerk's House is an historic building in Shoreditch, England. Standing at 118½ Shoreditch High Street, it is a Grade II listed building dating to 1735. It is two storeys, plus an attic and a basement. Part of its interior, such as some wood panelling, dates to the 16th century.
Believed to have formerly been a watch house, from which somebody looked out for body snatchers in the adjacent St Leonard's churchyard, the ground floor is now a business, while the upper floors remain residential.
In 2024, The Clerk's House in London's Shoreditch district became an exhibition space for Emalin Gallery, showcasing contemporary work in a historic interior.
References
Shoreditch
Grade II listed houses in London
Houses completed in 1735 | The Clerk's House | [
"Engineering"
] | 161 | [
"Architecture stubs",
"Architecture"
] |
73,020,447 | https://en.wikipedia.org/wiki/Gert%20Bange | Gert Bange (born December 1, 1977, in Görlitz, East Germany) is a German structural biologist and biochemist. He is Professor of Biochemistry at the Department of Chemistry and Vice President for Research at Philipps-Universität Marburg.
Career
After graduating from high school in 1996 and doing his civil service in Halle/Saale, Bange studied biochemistry at Martin Luther University Halle/Saale from 1997 to 2002. In 2007, he received his PhD in biochemistry and worked until 2012 at the Biochemistry Center of the Ruprecht-Karls-University Heidelberg under Irmgard Sinning. He then moved to the LOEWE Center for Synthetic Microbiology (SYNMIKRO) at Philipps University Marburg as an independent junior research group leader. Since 2018, he has been W3 Professor of Biochemistry at the Department of Chemistry of Philipps University and was Deputy Executive Director of SYNMIKRO from 2019 to 2022. He has also been a Fellow at the Max Planck Institute for Terrestrial Microbiology in Marburg since 2021.
Research
Bange works in the fields of structural biology and biochemistry and is interested in the molecular deciphering of new biological mechanisms and their components. His research interests include the study of molecular machines, mechanisms of bacterial stress and environmental adaptation, and the interaction between microorganisms and their hosts. Bange serves on the editorial boards of the Journal of Biological Chemistry and the Journal of Bacteriology. He is on the board of the Initiative Biotechnologie und Nanotechnologie e.V. and was a member of the Senate of Philipps-Universität Marburg until 2022.
Honors and awards (selection)
2021 ERC Advanced Grant "KIWIsome"
2020 Prize for excellent PhD Supervisor of the Philipps-University Marburg
2018 Winner of the iGEM Competition (Overgrad, as Instructor of the Team)
2012 Fellowship of the Peter und Traudl Engelhorn Stiftung
References
External links
Gert Bange at SYNMIKRO
Publications from and about Gert Bange
1977 births
German biochemists
Academic staff of the University of Marburg
Living people
People from Görlitz
Structural biologists
Martin Luther University of Halle-Wittenberg alumni | Gert Bange | [
"Chemistry"
] | 453 | [
"Structural biologists",
"Structural biology"
] |
49,259,329 | https://en.wikipedia.org/wiki/Stock%20tank%20oil | Stock tank oil refers to the volume of oil after flashing to nominal atmospheric (or other stated) storage pressure and temperature (as opposed to reservoir conditions).
Background
Crude oil is a complex mixture of many individual compounds which determine the physical properties of the oil, such as its density (API gravity), viscosity, permeability, dew point, formation volume factor, etc. These properties also influence how the oil is classified and its value.
The quantity of oil originally in place in a reservoir is usually expressed as Stock Tank Oil Originally In Place (STOOIP), which is usually given in millions of stock tank barrels (MMstb).
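The STOOIP figure is commonly estimated volumetrically from the reservoir's bulk volume, porosity, water saturation, and formation volume factor. A minimal sketch, using the standard 7,758 bbl/acre-ft conversion constant; the reservoir values below are hypothetical, not taken from this article:

```python
# Illustrative volumetric STOOIP estimate (reservoir values are hypothetical).
# 7,758 barrels per acre-foot is the standard unit-conversion constant.

def stooip_stb(area_acres, thickness_ft, porosity, water_saturation, bo):
    """Stock Tank Oil Originally In Place, in stock tank barrels (stb).

    bo is the oil formation volume factor (reservoir bbl per stock tank bbl),
    which shrinks the reservoir volume to stock tank conditions.
    """
    return 7758 * area_acres * thickness_ft * porosity * (1 - water_saturation) / bo

# Example: 2,000 acres, 50 ft net pay, 20% porosity, 30% water saturation,
# formation volume factor Bo = 1.25 rb/stb.
oil_in_place = stooip_stb(2000, 50, 0.20, 0.30, 1.25)
print(f"STOOIP ≈ {oil_in_place / 1e6:.1f} MMstb")  # → STOOIP ≈ 86.9 MMstb
```

Dividing by Bo is what converts the in-situ (reservoir) volume to the stock tank volume discussed in this article.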
Processing
Crude oil from subterranean reservoirs is usually produced with associated gas, produced water and solids such as sand. The crude oil must be stabilised to prevent excessive gassing during storage and to remove water and solids. Stabilisation involves passing the wellhead fluids through one, two, or three stages of separation operating at successively lower pressure to flash off lighter hydrocarbons and to remove water and solids. For a three stage train the operating pressures of the high, intermediate and low pressure separators might typically be:
HP separator: 1200 psig
IP separator: 200 psig
LP separator: 50 psig
Stock tank: 2 psig
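The staged letdown above can be compared on an absolute-pressure basis. As a design rule of thumb (not stated in this article), engineers often aim for roughly comparable pressure ratios between stages to improve liquid recovery; a sketch using the example pressures:

```python
# Separator train from the example above; gauge pressures (psig) are
# converted to absolute (psia) by adding atmospheric pressure, 14.7 psi.
STAGES = [("HP separator", 1200), ("IP separator", 200),
          ("LP separator", 50), ("Stock tank", 2)]
ATM = 14.7

for (name_hi, p_hi), (name_lo, p_lo) in zip(STAGES, STAGES[1:]):
    ratio = (p_hi + ATM) / (p_lo + ATM)
    print(f"{name_hi} -> {name_lo}: pressure ratio {ratio:.2f}")
```

The ratios here come out between roughly 3 and 6 per stage; the intermediate pressures in a real design would be optimised against the fluid's flash behaviour.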
The vapor pressures and relative volatilities of the constituent compounds are:
Alternatively, the plant may be arranged with each stage operating at a successively higher temperature.
The product is dead crude, which is stored in stock tanks operating at just above atmospheric pressure. Offshore production also yields a dead crude, but it is often spiked with natural gas liquids (NGL) as a convenient way to transport those liquids. This is known as live crude. The NGL is flashed off and recovered in the onshore separation plant. The operating pressure and temperature of the separation plant may be specified so that the oil meets a required vapor pressure, such as the Reid vapor pressure.
Properties
Density and viscosity are important property inputs into correlations for pipe-flow calculations. Stock Tank Oil (STO) density (or API gravity) may also be used by regulatory bodies to classify oil and oil products. Other properties, such as molecular weight, saturates-aromatics-resins-asphaltenes (SARA) composition, refractive index, wax appearance temperature, asphaltene precipitation, and acid number, are also specified at stock tank conditions. Typically, stock tank conditions are 14.7 psia and 60 °F (101,325 Pa and 16 °C). Flow rates of oil are commonly specified at stock tank conditions, e.g. stock tank barrels of oil per day (STBOPD).
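API gravity follows directly from the oil's specific gravity at 60 °F via the standard definition °API = 141.5/SG − 131.5. A sketch; the sample density below is hypothetical:

```python
# API gravity from specific gravity at 60 °F (relative to water).
# Higher API = lighter oil; water itself is 10 °API by definition.

def api_gravity(specific_gravity_60f):
    return 141.5 / specific_gravity_60f - 131.5

sg = 0.85  # hypothetical stock tank oil specific gravity
print(f"{api_gravity(sg):.1f} °API")  # → 35.0 °API, a light crude
```

This is the conversion a regulator or pipeline operator would apply to a measured stock tank density before classifying the oil.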
See also
Oil in place
References
www.oilgasglossary.com/stock_tank_oil.html
Petroleum industry
Petroleum production | Stock tank oil | [
"Chemistry"
] | 554 | [
"Petroleum industry",
"Petroleum",
"Chemical process engineering",
"Petroleum stubs"
] |
49,259,432 | https://en.wikipedia.org/wiki/Emak | Emak is an Italian manufacturer and distributor of machines, components and accessories for gardening, agriculture, forestry and industrial applications. Emak's brands are: Efco, Oleo-Mac, Bertolini and Nibbi.
The product range comprises more than 250 models of chainsaws, brushcutters, lawnmowers, garden tractors, hedgetrimmers, rotary tillers, rotary cultivators, flail mowers, cutterbar mowers, transporters and similar products.
History
Emak was incorporated in 1992 by the merger of Oleo-Mac and Efco, two companies specialised in the production of gardening and forestry machines, both of which had been active since the 1970s in the province of Reggio Emilia (Italy). The combination of the manufacturing and managerial resources of the two companies soon allowed the new industrial complex to achieve a competitive position on the international market.
In 1996 Emak secured ISO 9001 certification, and since June 1998 it has been listed on the Milan Stock Exchange, forming an international group through the acquisition of nine companies.
In 2008, following the acquisition of Bertolini S.p.A., Emak extended its product range to include the agricultural and gardening machines distributed under the Bertolini and Nibbi brand names. In the same year, newer petrol-engined models replaced the old ones: gasoline-powered saws such as the Efco 137S, Efco 143 and MT 442 became more powerful and productive, and their weight was significantly reduced to allow longer working sessions. The company also released a new line of gasoline brushcutters: the Efco Stark 30 TR, Efco Stark 35 IC and Efco Stark 40 TWIN.
Between 2005 and 2006, the group expanded its international presence by purchasing a Polish distributor and creating a US subsidiary.
In 2016, a budget line of Efco chainsaws was released, the MTH 510 and MTH 560, made specifically for developing markets. They form a farm (domestic) series, with piston assemblies from the Chinese manufacturer Kamo and carburetors from Walbro. Built from mid-grade plastics, these saws are suitable only for domestic use, and are officially distributed in countries such as Moldova, Pakistan and Lebanon.
In 2017, new models were produced for more developed markets: the Efco 149, Efco 154, Efco MT 640 and MT 770. These are professional- and industrial-class saws designed for long engine life under prolonged loads in severe conditions, using Emak piston systems with Burn Right technology and Walbro 7-series carburetors with four-channel fuel delivery.
Sites
Emak has four production plants (in Italy) and seven branches in France, the UK, Spain, Poland, Ukraine, China and Brazil. These markets account for over half of the Group's total turnover, with more than 5,000 dealers served directly.
Brands and applications
Oleo-Mac and Efco are the brands for the company's gardening and forestry machines. For the general agricultural sector the reference brands are Bertolini and Nibbi, which are also distributed by agricultural machinery and tractor dealerships.
Product range
The product range includes lines for home users (garden maintenance, firewood cutting, general DIY jobs), and for professional users (maintenance of large open spaces in urban and rural locations, machines for clearing, pruning, cutting firewood, forestry work and tidying of overgrown areas and undergrowth).
Certifications
Emak has achieved certifications in the fundamental areas of corporate sustainability: Environment and Quality.
See also
List of Italian companies
References
External links
Official site
Engineering companies of Italy
Saws
Woodworking hand-held power tools
Logging
Tool manufacturing companies of Italy
Italian companies established in 1992
Manufacturing companies established in 1992
Italian brands
Companies based in the Province of Reggio Emilia
Emak Group S.p.A.
Industrial machine manufacturers
Technology companies of Italy
Companies listed on the Borsa Italiana | Emak | [
"Engineering"
] | 860 | [
"Industrial machine manufacturers",
"Industrial machinery"
] |
49,260,002 | https://en.wikipedia.org/wiki/I.T.%20%28film%29 | I.T. is a 2016 thriller film directed by John Moore and written by Dan Kay and William Wisher. It stars Pierce Brosnan, James Frecheville, Anna Friel, Stefanie Scott, and Michael Nyqvist and was produced by David T. Friendly and Beau St. Clair, who was Brosnan's producing partner at the production company Irish DreamTime before her death. The film was released in theaters and via video on demand in the United States on September 23, 2016, by RLJ Entertainment.
Plot
Mike Regan is a self-made aviation tycoon who lives in a state-of-the-art smart house full of modern technology with his wife Rose and 17-year-old daughter Kaitlyn. Mike's company is developing an app called "Omni Jet" which will increase business while the company raises much-needed financial capital with a stock offering. However, it requires U.S. Securities and Exchange Commission (SEC) approval.
At the company, Mike meets Ed Porter, a 28-year-old information technology (I.T.) consultant, and calls him to fix his home's Wi-Fi signal, which his daughter complains is slow. The password is revealed to be "ReganHouse1", which lacks password strength, an early indicator of Regan's lax security. Porter also upgrades the Global Positioning System (GPS) in Mike's car and claims that he worked at the National Security Agency (NSA) and took part in a military exercise in Kandahar.
Porter meets Kaitlyn and starts a relationship with her through social media, but Mike fires him after Kaitlyn invites Porter into the house; this ends his promising career at the company. Devastated, Porter begins to remotely access Mike's private data and his house as he covertly monitors them through the security cameras and devices all over the smart house. He also spies on Kaitlyn and secretly records her masturbating in the shower.
Porter sends fake emails to Mike's clients and the SEC, threatening the company's survival. He also takes full control of the house's technology, which leaves the family terrified. He uses a spoof email to send Rose fake mammogram results, saying that she tested positive for breast cancer. Rose is extremely distressed, but her test results were actually negative according to her attending physician. After Mike becomes aware that Porter has done this, he attacks Porter and threatens to kill him if he does not stay away from his family.
Porter then uploads the video of Kaitlyn masturbating online, immediately catching the attention of her schoolmates; she is mortified and blames her father for installing the technology in their house. Angered, Mike drives to see Porter, but he is being monitored by Porter, who mockingly telephones him through the car navigation system and sends him the video. Porter then remotely deactivates the car's brakes, causing Mike to hit a nearby stalled truck and destroy the car.
Mike requests help from Henrik, an I.T. expert, who says that Mike must destroy all the smart technology in the house and delete his emails, bank accounts and computer files. Henrik explains that Porter's real name is Richard Edward Portman and that his father committed suicide when Porter was six years old. He also reveals that Porter never worked for the NSA, as he claimed, and that the photograph of him with soldiers in Kandahar was fake. To allow Mike to obtain evidence from Porter's apartment, Henrik creates a diversion: he steals the phone of a coffee-shop waitress with whom Porter is obsessed and texts Porter, telling him to come to the coffee shop. While Porter is gone, Mike gets into his apartment and steals several thumb drives containing evidence, escaping just as Porter returns after realising it was a diversion. Porter realises the masked man he saw in his apartment was Mike, so he frames him for assault; the police arrest Mike as he tries to hand over the evidence on the thumb drives.
After being released, Mike returns home to find Kaitlyn and Rose cable-tied and gagged by Porter, who holds them at gunpoint. Shortly afterwards a struggle ensues; Porter shoots a window and Mike punches Porter, who hits his head and lies dying as Mike holds the gun to his chest, but Rose begs Mike not to kill him.
Some time later, the company's employees applaud Mike and his family for successfully developing the app and their house is restored.
Cast
Pierce Brosnan as Mike Regan
James Frecheville as Richard Edward Portman / Ed Porter
Anna Friel as Rose Regan
Stefanie Scott as Kaitlyn Regan
Michael Nyqvist as Henrik
Adam Fergus as Sullivan
Jason Barry as Patrick
Clare-Hope Ashitey as Joan
Eric Kofi-Abrefa as Detective Kayden
Brian Mulvey as George
Austin Swift as Lance
Production
The film was first announced in October 2013 as a revenge thriller with Pierce Brosnan headlining the project. It was set to be financed by Voltage Pictures and directed by Stefano Sollima. In August 2014, it was revealed that John Moore had replaced Sollima as the film's director and commissioned a complete rewrite of the script by William Wisher. Stefanie Scott was cast in April 2015 as the daughter of Brosnan's character, and it was announced that the film would start shooting that June in Ireland. A month later, it was reported that Anna Friel had joined the cast as the wife of Brosnan's character and that James Frecheville would play the young antagonist. Michael Nyqvist joined the project a few days later. It is the last film produced by Beau St. Clair before she died from cancer. Principal photography began on June 25, 2015, and concluded on July 29, 2015.
Release
I.T. was first released theatrically in Bulgaria and Romania on September 9, 2016. RLJ Entertainment released the film in theaters and via video on demand in the United States on September 23, 2016. The theatrical trailer for the film debuted in August 2016.
Reception
I.T. received negative reviews from critics. On the review aggregator website Rotten Tomatoes, the film has an approval rating of 9% based on 43 reviews and an average rating of 3.4 out of 10. On Metacritic, the film has a score of 27 out of 100 based on 9 reviews, indicating "generally unfavorable reviews". Joe Leydon of Variety criticized the cliched screenplay, but gave praise to Brosnan's performance and the direction. The Guardian gave the film two out of five stars, calling it 'blundering'. The New York Times found little to recommend, criticizing the screenplay and execution.
References
External links
2016 films
2016 thriller films
American thriller films
English-language Danish films
Danish thriller films
Films about computing
Films directed by John Moore
Films set in Washington, D.C.
Films shot in Ireland
English-language French films
French thriller films
Irish thriller films
Techno-thriller films
Voltage Pictures films
2010s English-language films
2010s American films
2010s French films
Films scored by Timothy Williams (composer)
Films with screenplays by William Wisher Jr.
English-language thriller films | I.T. (film) | [
"Technology"
] | 1,463 | [
"Works about computing",
"Films about computing"
] |
49,260,321 | https://en.wikipedia.org/wiki/Smart%20manufacturing | Smart manufacturing is a broad category of manufacturing that employs computer-integrated manufacturing, high levels of adaptability and rapid design changes, digital information technology, and more flexible technical workforce training. Other goals sometimes include fast changes in production levels based on demand, optimization of the supply chain, efficient production and recyclability. In this concept, as smart factory has interoperable systems, multi-scale dynamic modelling and simulation, intelligent automation, strong cyber security, and networked sensors.
The broad definition of smart manufacturing covers many different technologies. Some of the key technologies in the smart manufacturing movement include big data processing capabilities, industrial connectivity devices and services, and advanced robotics.
Big data processing
Smart manufacturing leverages big data analytics to optimize complex production processes and enhance supply chain management. Big data analytics refers to a method for gathering and understanding large data sets in terms of what are known as the three V's: velocity, variety and volume. Velocity describes the frequency of data acquisition, which can be concurrent with the application of previous data. Variety describes the different types of data that may be handled. Volume represents the amount of data. Big data analytics allows an enterprise to use smart manufacturing to predict demand and the need for design changes, rather than reacting to orders placed.
Some products have embedded sensors, which produce large amounts of data that can be used to understand consumer behavior and improve future versions of the product.
Advanced robotics
Advanced industrial robots, also known as smart machines, operate autonomously and can communicate directly with manufacturing systems. In some advanced manufacturing contexts, they can work with humans for co-assembly tasks. By evaluating sensory input and distinguishing between different product configurations, these machines are able to solve problems and make decisions independent of people. These robots are able to complete work beyond what they were initially programmed to do and have artificial intelligence that allows them to learn from experience. These machines have the flexibility to be reconfigured and re-purposed. This gives them the ability to respond rapidly to design changes and innovation, which is a competitive advantage over more traditional manufacturing processes. An area of concern surrounding advanced robotics is the safety and well-being of the human workers who interact with robotic systems. Traditionally, measures have been taken to segregate robots from the human workforce, but advances in robotic cognitive ability have opened up opportunities, such as cobots, for robots to work collaboratively with people.
Cloud computing allows large amounts of data storage or computational power to be rapidly applied to manufacturing, and allow a large amount of data on machine performance and output quality to be collected. This can improve machine configuration, predictive maintenance, and fault analysis. Better predictions can facilitate better strategies for ordering raw materials or scheduling production runs.
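The fault analysis described above often reduces to flagging sensor readings that deviate sharply from a machine's recent behaviour. A minimal sketch; the vibration data, window size and threshold below are invented for illustration:

```python
# Rolling-baseline anomaly check: flag readings that deviate more than
# k standard deviations from the mean of the preceding `window` readings.
from statistics import mean, stdev

def flag_anomalies(readings, window=5, k=3.0):
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and abs(readings[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# Synthetic vibration readings from one machine; the spike at index 7
# might indicate a bearing fault worth scheduling maintenance for.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 1.1, 4.8, 1.0, 1.0]
print(flag_anomalies(vibration))  # → [7]
```

A production system would feed such flags into maintenance scheduling rather than printing them, and would tune the window and threshold per machine.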
3D printing
As of 2019, 3D printing is mainly used in rapid prototyping, design iteration, and small-scale production. Improvements in speed, quality, and materials could make it useful in mass production and mass customization.
However, 3D printing has developed so much in recent years that it is no longer used only for prototyping. The 3D printing sector is moving beyond prototyping and is becoming increasingly widespread in supply chains. The industries where digital manufacturing with 3D printing is most visible are automotive, industrial and medical. In the auto industry, 3D printing is used not only for prototyping but also for the full production of final parts and products. 3D printing has also been used by suppliers and digital manufacturers coming together to help fight COVID-19.
3D printing allows companies to prototype more successfully, saving time and money, as significant volumes of parts can be produced in a short period. There is great potential for 3D printing to revolutionise supply chains, and hence more companies are adopting it. The main challenge 3D printing faces is changing people's mindset; moreover, some workers will need to learn a new set of skills to manage 3D printing technology.
Eliminating workplace inefficiencies and hazards
Smart manufacturing can also help identify workplace inefficiencies and assist in worker safety. Efficiency optimization is a major focus for adopters of "smart" systems, pursued through data research and intelligent learning automation. For instance, operators can be given personal access cards with built-in Wi-Fi and Bluetooth that connect to the machines and a cloud platform, showing in real time which operator is working on which machine. An intelligent, interconnected 'smart' system can then set a performance target, determine whether the target is attainable, and identify inefficiencies through failed or delayed performance targets. More generally, automation may reduce inefficiencies due to human error, and each generation of AI can eliminate inefficiencies left by its predecessors.
As robots take on more of the physical tasks of manufacturing, workers no longer need to be present and are exposed to fewer hazards.
Impact of Industry 4.0
Industry 4.0 is a project in the high-tech strategy of the German government that promotes the computerization of traditional industries such as manufacturing. The goal is the intelligent factory (Smart Factory) that is characterized by adaptability, resource efficiency, and ergonomics, as well as the integration of customers and business partners in business and value processes. Its technological foundation consists of cyber-physical systems and the Internet of Things.
This kind of "intelligent manufacturing" makes great use of:
Wireless connections, both during product assembly and for long-distance interaction with products in the field;
Latest-generation sensors, distributed along the supply chain and embedded in the products themselves (Internet of Things);
Processing of large amounts of data to control all phases of construction, distribution and use of a good.
The European roadmap "Factories of the Future" and the German "Industrie 4.0" illustrate several of the action lines to be undertaken and their related benefits. Some examples are:
Advanced manufacturing processes and rapid prototyping will make it possible for each customer to order a one-of-a-kind product without significant cost increase.
Collaborative Virtual Factory (VF) platforms will drastically reduce the cost and time associated with new product design and engineering of the production process, by exploiting complete simulation and virtual testing throughout the product lifecycle.
Advanced human-machine interaction (HMI) and augmented reality (AR) devices will help increase safety in production plants and reduce the physical demands on workers (whose average age is rising).
Machine learning will be fundamental to optimizing production processes, both reducing lead times and reducing energy consumption.
Cyber-physical systems and machine-to-machine (M2M) communication will allow real-time data from the shop floor to be gathered and shared, reducing downtime and idle time through highly effective predictive maintenance.
Statistics
The Ministry of Economy, Trade and Industry in South Korea announced on 10 March 2016 that it had aided the construction of smart factories in 1,240 small and medium enterprises, which it said resulted in an average 27.6% decrease in defective products, 7.1% faster production of prototypes, and 29.2% lower cost.
See also
Open manufacturing
Fourth Industrial Revolution
References
External links
CESMII - US National Institute on Smart Manufacturing
Factories of the Future
Agnieszka Radziwon, Arne Bilberg, Marcel Bogers, Erik Skov Madsen. The Smart Factory: Exploring Adaptive and Flexible Manufacturing Solutions – Proceedings of the 24th DAAAM International Symposium on Intelligent Manufacturing and Automation, 23–26 October 2013, Zadar, Croatia. – Elsevier, Procedia Engineering, ISSN 1877-7058, 69 (2014),
Agnieszka Radziwon, Marcel Bogers, Arne Bilberg. The Smart Factory: Exploring an Open Innovation Solution for Manufacturing Ecosystems Date Written: May 28, 2014. Available at SSRN, 11 Pages. Posted: 1 Oct 2014
GE launches 'microfactory' to co-create the future of manufacturing
Manufacturing | Smart manufacturing | [
"Engineering"
] | 1,603 | [
"Manufacturing",
"Mechanical engineering"
] |
49,260,907 | https://en.wikipedia.org/wiki/Diplodia%20allocellula | Diplodia allocellula is an endophytic fungus that might be a latent pathogen. It was found on Acacia karroo, a common tree in southern Africa.
References
Further reading
Jami, Fahimeh, et al. "Greater Botryosphaeriaceae diversity in healthy than associated diseased Acacia karroo tree tissues." Australasian Plant Pathology 42.4 (2013): 421–430.
Jami, Fahimeh, et al. "Botryosphaeriaceae species overlap on four unrelated, native South African hosts." Fungal Biology 118.2 (2014): 168–179.
External links
Botryosphaeriaceae
Fungi described in 2012
Fungi of Africa
Fungal plant pathogens and diseases
Fungus species | Diplodia allocellula | [
"Biology"
] | 157 | [
"Fungi",
"Fungus species"
] |
49,260,910 | https://en.wikipedia.org/wiki/Dothiorella%20dulcispinae | Dothiorella dulcispinae is an endophytic fungus that might be a latent plant pathogen. It was found on Acacia karroo, a common tree in southern Africa.
References
Further reading
Jami, Fahimeh, et al. "Greater Botryosphaeriaceae diversity in healthy than associated diseased Acacia karroo tree tissues." Australasian Plant Pathology 42.4 (2013): 421–430.
Pavlic-Zupanc, D., et al. "Molecular and morphological characterization of Dothiorella species associated with dieback of Ostrya carpinifolia in Slovenia and Italy, and a host and geographic range extension for D. parva."
External links
MycoBank
dulcispinae
Fungi described in 2012
Fungal plant pathogens and diseases
Fungus species | Dothiorella dulcispinae | [
"Biology"
] | 173 | [
"Fungi",
"Fungus species"
] |
49,260,912 | https://en.wikipedia.org/wiki/Dothiorella%20brevicollis | Dothiorella brevicollis is an endophytic fungus that might be a latent plant pathogen. It was found on Acacia karroo, a common tree in southern Africa.
References
Further reading
Jami, Fahimeh, et al. "Greater Botryosphaeriaceae diversity in healthy than associated diseased Acacia karroo tree tissues." Australasian Plant Pathology 42.4 (2013): 421–430.
Pavlic-Zupanc, D., et al. "Molecular and morphological characterization of Dothiorella species associated with dieback of Ostrya carpinifolia in Slovenia and Italy, and a host and geographic range extension for D. parva."
External links
MycoBank
brevicollis
Fungi described in 2012
Fungal plant pathogens and diseases
Fungus species | Dothiorella brevicollis | [
"Biology"
] | 170 | [
"Fungi",
"Fungus species"
] |
49,260,921 | https://en.wikipedia.org/wiki/Spencermartinsia%20pretoriensis | Spencermartinsia pretoriensis is an endophytic fungus that might be a latent pathogen. It was found on Acacia karroo, a common tree in southern Africa.
References
Further reading
Jami, Fahimeh, et al. "Greater Botryosphaeriaceae diversity in healthy than associated diseased Acacia karroo tree tissues." Australasian Plant Pathology 42.4 (2013): 421–430.
Pitt, Wayne M., Jose Ramon Úrbez-Torres, and Florent P. Trouillas. "Dothiorella vidmadera, a novel species from grapevines in Australia and notes on Spencermartinsia." Fungal Diversity 61.1 (2013): 209–219.
External links
MycoBank
Botryosphaeriales
Fungi described in 2012
Fungal plant pathogens and diseases
Fungus species | Spencermartinsia pretoriensis | [
"Biology"
] | 181 | [
"Fungi",
"Fungus species"
] |
49,260,926 | https://en.wikipedia.org/wiki/Tiarosporella%20urbis-rosarum | Tiarosporella urbis-rosarum is an endophytic fungus that might be a latent pathogen. It was found on Acacia karroo, a common tree in southern Africa.
References
Further reading
Jami, Fahimeh, et al. "Greater Botryosphaeriaceae diversity in healthy than associated diseased Acacia karroo tree tissues." Australasian Plant Pathology 42.4 (2013): 421–430.
Slippers, Bernard, et al. "Confronting the constraints of morphological taxonomy in the Botryosphaeriales." Persoonia - Molecular Phylogeny and Evolution of Fungi 33 (2014): 155.
External links
MycoBank
Leotiomycetidae
Fungi described in 2012
Fungal plant pathogens and diseases
Fungus species | Tiarosporella urbis-rosarum | [
"Biology"
] | 161 | [
"Fungi",
"Fungus species"
] |
49,261,053 | https://en.wikipedia.org/wiki/Sarcodon%20catalaunicus | Sarcodon catalaunicus is a species of tooth fungus in the family Bankeraceae. Found in Mediterranean Europe, it was described as new to science in 1937 by French mycologist René Maire. The type collection was found growing under Quercus ilex in Santa Coloma de Farners (Catalonia, Spain).
References
External links
Fungi described in 1937
Fungi of Europe
catalaunicus
Fungus species | Sarcodon catalaunicus | [
"Biology"
] | 84 | [
"Fungi",
"Fungus species"
] |
49,261,088 | https://en.wikipedia.org/wiki/Citrix%20Virtual%20Apps | Citrix Virtual Apps (formerly WinFrame, MetaFrame, Presentation Server and XenApp) is an application virtualization software produced by Citrix Systems that allows Windows applications to be accessed via individual devices from a shared server or cloud system.
Product overview
Citrix Virtual Apps is application virtualization software that delivers centrally-hosted Windows applications to local devices without the necessity of installing them. It is the flagship product for Citrix and was formerly known under the names WinFrame, MetaFrame, and Presentation Server.
Citrix Virtual Apps software uses FlexCast Management Architecture (FMA), a proprietary architecture for Citrix virtualization products. It delivers individual applications, as opposed to entire desktops, to devices. It is also used with Citrix Workspace to deliver apps as part of a complete virtual desktop environment.
With Citrix Virtual Apps, Windows applications can be used on devices that typically could not run them, including Macintosh computers, mobile devices, Google Chromebooks, and Linux computers. Conversely, it enables otherwise incompatible apps to run on Windows desktops.
Citrix Virtual Apps is accessed on all devices via Citrix Workspace App. The software can be delivered from on-premises data centers or public, private, or hybrid clouds.
History
The precursor to Virtual Apps was called WinFrame, a multi-user operating system based on Windows NT 3.51. Released in 1995, WinFrame was one of the first products distributed by Citrix. At this stage of the product development, Citrix Systems licensed the Windows NT 3.51 base operating system from Microsoft. The core development that Citrix delivered was the MultiWin engine. This allowed multiple users to logon and execute applications on a WinFrame server. Citrix was to later license the MultiWin technology to Microsoft, forming the basis of Microsoft's Terminal Services.
Repackaged versions of Windows 95, with Citrix WinFrame Client included, were also available from Citrix.
MetaFrame superseded WinFrame in 1998. The product was renamed several times: it became MetaFrame XP in 2002, MetaFrame XP Presentation Server in 2003, and then was rebranded as Presentation Server in 2005. Each of these products focused on remote access of applications and server-based computing.
In 2008, the product was renamed XenApp. The "Xen" was taken from the company's acquisition of XenSource in 2007.
Between 2010 and 2012, Citrix issued two updates of XenApp. XenApp 6 launched in 2010 and included a new central management console called AppCenter. In 2012, XenApp 6.5 was released and this update included a new feature called Instant App Access, which aimed to reduce application launch time.
In 2013, version 7.0 was released. This update combined XenDesktop and XenApp into one application called XenDesktop under the Flex Management Architecture (FMA). Prior to this, all versions of XenApp used the company's Independent Management Architecture (IMA). In 2014, version 7.5 was released as XenApp, separate from XenDesktop, but it was also built on FMA.
In 2018, XenApp was rebranded Citrix Virtual Apps.
More recently, Citrix has introduced a cloud-based solution known as Citrix DaaS, which it positions as a successor to its on-premise Citrix Virtual Apps and Desktops (CVAD) offering. However, it is still releasing new Virtual Apps and Desktops versions, to meet the needs of customers who prefer or require an on-premise solution.
References
External links
Citrix Systems
Cloud computing
Centralized computing
Remote desktop protocols
Remote desktop
Virtualization software | Citrix Virtual Apps | [
"Technology"
] | 743 | [
"Centralized computing",
"IT infrastructure",
"Computer systems"
] |
49,261,105 | https://en.wikipedia.org/wiki/Directed%20assembly%20of%20micro-%20and%20nano-structures | Directed assembly of micro- and nano-structures are methods of mass-producing micro to nano devices and materials. Directed assembly allows the accurate control of assembly of micro and nano particles to form even the most intricate and highly functional devices or materials.
Directed self-assembly
Directed self-assembly (DSA) is a type of directed assembly which utilizes block co-polymer morphology to create line, space, and hole patterns, facilitating more accurate control of the feature shapes. It then uses surface interactions as well as polymer thermodynamics to finalize the formation of the final pattern shapes. To control the surface interactions well enough to enable sub-10 nm resolution, a team from the Massachusetts Institute of Technology, the University of Chicago, and Argonne National Laboratory developed in 2017 a way to use a vapor-phase-deposited polymeric top layer on the block co-polymer film.
The DSA is not a standalone process; rather, it is integrated with traditional manufacturing processes in order to mass-produce micro and nano structures at lower cost. Directed self-assembly is mostly used in the semiconductor and hard drive industries. The semiconductor industry uses this assembly method to increase resolution (fitting in more gates), while the hard drive industry uses DSA to manufacture "bit patterned media" at specified storage densities.
Micro-structures
There are many applications of directed assembly at the micro-scale, from tissue engineering to polymer thin films. In tissue engineering, directed assembly has been able to replace the scaffolding approach to building tissues. This happens by controlling the position and organization of different cells, the "building blocks" of the tissue, into desired micro-structures. This eliminates the inability to reproduce the same tissue, which is a major issue in the scaffolding approach.
Nanostructures
Nanotechnology provides methods of organizing materials such as molecules, polymers, and building blocks to form precise nanostructures with many applications. An example related to the self-assembly of peptides into nanotubes is the single-walled carbon nanotube, which consists of a graphene sheet seamlessly wrapped into a cylinder; these can be produced by laser vaporization of graphite enriched with a transition metal.
Nanoimprint lithography is a popular method of fabricating nanometer-scale patterns. The patterns are made by mechanical deformation of an imprint resist (a monomer or polymer formulation) and subsequent processing: the resist is cured by heat or ultraviolet light, with the contact between resist and template controlled under conditions appropriate to the purpose. Nanoimprint lithography offers high resolution and throughput at low cost. Disadvantages include the time required for templating procedures, a lack of standard procedures (resulting in multiple fabrication methods), and limits on the patterns that can be formed.
With the goal of mitigating these disadvantages while applying nanotechnology to electronics, researchers at the National Science Foundation's Nanoscale Science and Engineering Center for High-Rate Nanomanufacturing (CHN) at Northeastern University, with partners UMass Lowell and the University of New Hampshire, have developed a directed assembly process for single-walled carbon nanotube (SWNT) networks to create a circuit template that can be transferred from one substrate to another.
Self-assembled monolayers on solid substrates
Self-assembled monolayers (SAMs) consist of a layer of organic molecules that forms naturally as an ordered lattice on the surface of a suitable substrate. The molecules in the lattice bond chemically to the substrate at one end (the head group), while the other end (the tail group) forms the exposed surface of the SAM.
Many types of SAMs can be formed. For example, thiols form SAMs on gold, silver, copper, and some compound semiconductors such as InP and GaAs. By changing the tail group of the molecules, different surface properties can be obtained; SAMs can therefore be used to render surfaces hydrophobic or hydrophilic, as well as to change the surface states of semiconductors. In self-assembly, the positioning of SAMs can define a chemical system precisely enough to locate target sites in a molecular-inorganic device. These characteristics make SAMs good candidates for molecular electronics: using SAMs to build electronic devices, and perhaps circuits, is an intriguing prospect because they could provide the basis for very high-density data storage and high-speed devices.
Acoustic methods
Directed assembly using acoustic methods manipulates acoustic waves to allow non-invasive assembly of micro- and nano-structures. For this reason, acoustics are widely used in the biomedical industry to manipulate droplets, cells, and other molecules.
Acoustic waves are generated by a piezoelectric transducer controlled by a pulse generator. These waves can then manipulate droplets of liquid and move them together to form a packed assembly. Moreover, the frequency and amplitude of the waves can be modified to achieve more accurate control of the behavior of a particular droplet or cell.
Optical methods
Directed assembly, or more specifically directed self-assembly, can produce high pattern resolution (~10 nm) with high efficiency and compatibility. However, when using DSA in high-volume manufacturing, one must be able to quantify the degree of order of the line/space patterns formed by DSA in order to reduce defects.
Conventional approaches to obtaining data for pattern-quality inspection, such as critical dimension scanning electron microscopy (CD-SEM), are time-consuming and labor-intensive. Optical scatterometer-based metrology, by contrast, is a non-invasive technique with very high throughput owing to its larger spot size. It collects more statistical data than SEM, and its data processing is automated, making it more feasible than traditional CD-SEM.
Magnetic methods
Magnetic field directed self-assembly (MFDSA) allows the manipulation of dispersion and subsequent assembly of magnetic nanoparticles. This is widely used in the development of advanced materials whereby inorganic nanoparticles (NPs) are dispersed in polymers in order to enhance the properties of the materials.
The magnetic field technique allows particles to be assembled in 3D by performing the assembly in a dilute suspension in which the solvent does not evaporate. It does not require a template, and the approach also improves the magnetic anisotropy along the chain direction.
Dielectrophoretic methods
Dielectrophoretic directed self-assembly utilizes an electric field that controls metal particles, such as gold nanorods, by inducing a dipole in the particles. By varying the polarity and strength of the electric field, the polarized particles are either attracted to positive regions or repelled from negative regions where the electric field has higher strength. This direct manipulation method transports the particles to position and orient them into a nano-structure on a receptor substrate.
References
Manufacturing
Microtechnology
Nanotechnology | Directed assembly of micro- and nano-structures | [
"Materials_science",
"Engineering"
] | 1,441 | [
"Microtechnology",
"Materials science",
"Manufacturing",
"Mechanical engineering",
"Nanotechnology"
] |
49,261,146 | https://en.wikipedia.org/wiki/Ciliate%2C%20dasycladacean%20and%20hexamita%20nuclear%20code | The ciliate, dasycladacean and Hexamita nuclear code (translation table 6) is a genetic code used by certain ciliate, dasycladacean and Hexamita species.
The ciliate macronuclear code has not been determined completely. The codon UAA is known to code for Gln only in the Oxytrichidae.
The code
AAs = FFLLSSSSYYQQCC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG
Starts = -----------------------------------M----------------------------
Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V).
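The five rows above can be zipped column-wise into a codon lookup table. The short Python sketch below (variable and function names are illustrative only, not part of any standard library) builds the table and translates a sequence; comparing the result against the standard code shows the deviation of table 6 — UAA and UAG are reassigned from stop to glutamine, leaving UGA as the only stop codon:

```python
AAS   = "FFLLSSSSYYQQCC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
BASE1 = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
BASE2 = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
BASE3 = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

# Column i of the table defines one codon: Base1[i]Base2[i]Base3[i] -> AAs[i].
TABLE6 = {b1 + b2 + b3: aa for b1, b2, b3, aa in zip(BASE1, BASE2, BASE3, AAS)}

def translate(seq, table=TABLE6):
    """Translate a DNA coding sequence codon by codon; '*' marks a stop."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = table[seq[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)
```

Under this table, `translate("ATGTAAGGG")` yields `"MQG"`, whereas the standard code would terminate at the TAA codon and yield only `"M"`.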
Differences from the standard code
Systematic range
Ciliata: Oxytricha and Stylonychia, Paramecium, Tetrahymena, Oxytrichidae and probably Glaucoma chattoni.
Dasycladaceae: Acetabularia, and Batophora.
Diplomonadida: Hexamita inflata, Diplomonadida ATCC50330, and ATCC50380.
See also
List of genetic codes
References
External links
Molecular genetics
Gene expression
Protein biosynthesis | Ciliate, dasycladacean and hexamita nuclear code | [
"Chemistry",
"Biology"
] | 646 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
49,262,559 | https://en.wikipedia.org/wiki/Dell%20PERC | A Dell PowerEdge RAID Controller, or Dell PERC, is a series of RAID, disk array controllers made by Dell for its PowerEdge server computers. The controllers support SAS and SATA hard disk drives (HDDs) and solid-state drives (SSDs).
PERC versions
Series 5 family
These are compatible with 9th and 10th Generation Dell PowerEdge servers.
PERC 5/E – external
PERC 5/I – internal – integrated or adapter
Series 6 family
These are compatible with 10th and 11th Generation Dell PowerEdge servers.
PERC S100 – software based
PERC 6/E – external – adapter
PERC 6/I – internal – modular or adapter
Series 7 family
These are compatible with 10th and 11th Generation Dell PowerEdge servers.
PERC S300 – software based
PERC H200 – internal – integrated/adapter or modular
PERC H700 – internal – integrated/adapter or modular
PERC H800 – external – adapter
Series 8 family
These are compatible with 12th Generation Dell PowerEdge servers.
PERC S110 – software based
PERC H310 – adapter or mini mono or mini blade
PERC H710 – internal – adapter or mini mono or mini blade
PERC H710p – internal – adapter or mini mono or mini blade
PERC H810 – external
Series 9 family
These are compatible with 13th Generation Dell PowerEdge servers.
PERC S130 – software based
PERC H330 – internal – Adapter – Tower Servers, Mini-Mono- Rack Servers; no battery backup unit (BBU)
PERC H730 – internal – Adapter – Tower Servers and Secondary Controllers, Mini-Mono- Rack Servers
PERC H730p – internal
PERC H830 – external
Note: All PERC 9 series cards support RAID 6 except for the PERC H330.
Series 10 family
These are compatible with 14th and 15th Generation Dell PowerEdge Servers.
PERC H840
PERC H345
PERC H740p
PERC H745
PERC H745p MX
Series 11 family
These are compatible with 16th Generation Dell PowerEdge Servers.
PERC H750
PERC H750 ADAPTER SAS
PERC H755 ADAPTER
PERC H755 FRONT SAS
PERC H755N FRONT NVME
PERC H755 MX ADAPTER
Series 12 family
PERC H965I ADAPTER
PERC H965I FRONT
PERC H965I MX
PERC H965E ADAPTER
See also
Dell DRAC
Intel Rapid Storage Technology
List of Dell PowerEdge Servers
References
External links
Dell products
Computer storage devices | Dell PERC | [
"Technology"
] | 545 | [
"Computer storage devices",
"Recording devices"
] |
49,263,113 | https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20180 | Zinc finger protein 180 is a protein that is encoded in humans by the ZNF180 gene.
Function
Zinc finger proteins have been shown to interact with nucleic acids and to have diverse functions. The zinc finger domain is a conserved amino acid sequence motif containing two specifically positioned cysteines and two histidines that are involved in coordinating zinc. Kruppel-related proteins form one family of zinc finger proteins. See MIM 604749 for additional information on zinc finger proteins.
References
External links
Proteins | Zinc finger protein 180 | [
"Chemistry"
] | 105 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
49,263,763 | https://en.wikipedia.org/wiki/Infrastructure%20as%20code | Infrastructure as code (IaC) is the process of managing and provisioning computer data center resources through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
The IT infrastructure managed by this process comprises physical equipment such as bare-metal servers, as well as virtual machines and associated configuration resources.
The definitions may be in a version control system, rather than maintaining the code through manual processes.
The code in the definition files may use either scripts or declarative definitions, but IaC more often employs declarative approaches.
Overview
IaC grew as a response to the difficulty posed by utility computing and second-generation web frameworks. In 2006, the launch of Amazon Web Services’ Elastic Compute Cloud and the 1.0 version of Ruby on Rails just months before created widespread scaling difficulties in the enterprise that were previously experienced only at large, multi-national companies. With new tools emerging to handle this ever-growing field, the idea of IaC was born. The thought of modeling infrastructure with code, and then having the ability to design, implement, and deploy application infrastructure with known software best practices appealed to both software developers and IT infrastructure administrators. The ability to treat infrastructure as code and use the same tools as any other software project would allow developers to rapidly deploy applications.
Advantages
The value of IaC can be broken down into three measurable categories: cost, speed, and risk. Cost reduction aims at helping not only the enterprise financially, but also in terms of people and effort, meaning that by removing the manual component, people are able to refocus their efforts on other enterprise tasks. Infrastructure automation enables speed through faster execution when configuring your infrastructure and aims at providing visibility to help other teams across the enterprise work quickly and more efficiently. Automation removes the risk associated with human error, like manual misconfiguration; removing this can decrease downtime and increase reliability. These outcomes and attributes help the enterprise move towards implementing a culture of DevOps, the combined working of development and operations.
Types of approaches
There are generally two approaches to IaC: declarative (functional) vs. imperative (procedural). The difference between the declarative and the imperative approach is essentially 'what' versus 'how'. The declarative approach focuses on what the eventual target configuration should be; the imperative focuses on how the infrastructure is to be changed to meet this. The declarative approach defines the desired state and the system executes what needs to happen to achieve that desired state. Imperative defines specific commands that need to be executed in the appropriate order to end with the desired conclusion.
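The contrast can be sketched in a few lines of Python; the function and variable names below are invented for illustration and do not correspond to any real IaC tool:

```python
# Imperative style: an ordered list of commands mutates the infrastructure
# step by step; the operator specifies *how* to reach the goal.
def imperative_provision(state):
    state["servers"] = state.get("servers", 0) + 2  # "create two servers"
    state["load_balancer"] = True                   # "then attach a load balancer"
    return state

# Declarative style: only the desired end state is written down; a generic
# reconciliation engine works out *what* changes are needed to reach it.
DESIRED = {"servers": 2, "load_balancer": True}

def reconcile(state, desired):
    changes = {k: v for k, v in desired.items() if state.get(k) != v}
    state.update(changes)  # apply only the computed difference
    return state, changes
```

Run twice, the declarative `reconcile` is idempotent (the second call computes an empty diff), whereas naively re-running the imperative script would create two more servers.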
Methods
Infrastructure as code allows servers and their configurations to be managed using code. There are two ways to deliver these configurations to servers: the 'push' and 'pull' methods. In the 'push' method, the system controlling the configuration sends instructions directly to the server. In the 'pull' method, the server retrieves its own instructions from the controlling system.
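A toy sketch of the two delivery models, with all names and configuration contents invented for illustration:

```python
# Central store of per-server desired configuration (contents are made up).
CONTROLLER = {"web-01": {"nginx": "1.24"}, "db-01": {"postgres": "15"}}

def push_config(controller, servers):
    """Push model: the controlling system sends each server its configuration."""
    for name, server in servers.items():
        server["config"] = controller[name]

def pull_config(controller, name):
    """Pull model: a server fetches its own configuration from the controller."""
    return {"config": controller[name]}
```

Either way, the server ends up holding the configuration recorded centrally; the difference is only which side initiates the transfer.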
Tools
There are many tools that fulfill infrastructure automation capabilities and use IaC. Broadly speaking, any framework or tool that performs changes or configures infrastructure declaratively or imperatively based on a programmatic approach can be considered IaC. Traditionally, server (lifecycle) automation and configuration management tools were used to accomplish IaC. Now enterprises are also using continuous configuration automation tools or stand-alone IaC frameworks, such as Microsoft’s PowerShell DSC or AWS CloudFormation.
Continuous configuration automation
All continuous configuration automation (CCA) tools can be thought of as an extension of traditional IaC frameworks. They leverage IaC to change, configure, and automate infrastructure, and they also provide visibility, efficiency and flexibility in how infrastructure is managed. These additional attributes provide enterprise-level security and compliance.
Community content
Community content is a key determinant of the quality of an open source CCA tool. As Gartner states, the value of CCA tools is "as dependent on user-community-contributed content and support as it is on the commercial maturity and performance of the automation tooling". Established vendors such as Puppet and Chef have created their own communities. Chef has Chef Community Repository and Puppet has PuppetForge. Other vendors rely on adjacent communities and leverage other IaC frameworks such as PowerShell DSC. New vendors are emerging that are not content-driven, but model-driven with the intelligence in the product to deliver content. These visual, object-oriented systems work well for developers, but they are especially useful to production-oriented DevOps and operations constituents that value models versus scripting for content. As the field continues to develop and change, the community-based content will become ever more important to how IaC tools are used, unless they are model-driven and object-oriented.
Notable CCA tools include:
Other tools include AWS CloudFormation, cdist, StackStorm, Juju, and Step CI.
Relationships
Relationship to DevOps
IaC can be a key attribute of enabling best practices in DevOps: developers become more involved in defining configuration, and Ops teams get involved earlier in the development process. Tools that utilize IaC bring visibility to the state and configuration of servers and ultimately provide that visibility to users within the enterprise, aiming to bring teams together to maximize their efforts. Automation in general aims to remove the confusion and error-prone aspects of manual processes and make them more efficient and productive, allowing better software and applications to be created with flexibility, less downtime, and overall cost-effectiveness for the company. IaC is intended to reduce the complexity that undermines the efficiency of manual configuration. Automation and collaboration are considered central points in DevOps; infrastructure automation tools are often included as components of a DevOps toolchain.
Relationship to security
The 2020 Cloud Threat Report released by Unit 42 (the threat intelligence unit of cybersecurity provider Palo Alto Networks) identified around 200,000 potential vulnerabilities in infrastructure as code templates.
See also
Docker
IT infrastructure
Infrastructure as a service
Orchestration
Continuous configuration automation
Landing zone (software)
References
Agile software development
Software development process
Configuration management
Systems engineering
Orchestration software
Cloud computing | Infrastructure as code | [
"Engineering"
] | 1,264 | [
"Systems engineering",
"Configuration management"
] |
49,263,973 | https://en.wikipedia.org/wiki/Digital%20manufacturing | Digital manufacturing is an integrated approach to manufacturing that is centered around a computer system. The transition to digital manufacturing has become more popular with the rise in the quantity and quality of computer systems in manufacturing plants. As more automated tools have become used in manufacturing plants it has become necessary to model, simulate, and analyze all of the machines, tooling, and input materials in order to optimize the manufacturing process. Overall, digital manufacturing can be seen sharing the same goals as computer-integrated manufacturing (CIM), flexible manufacturing, lean manufacturing, and design for manufacturability (DFM). The main difference is that digital manufacturing was evolved for use in the computerized world.
As part of Manufacturing USA, Congress and the U.S. Department of Defense established MxD (Manufacturing x Digital), the nation's digital manufacturing institute, to speed adoption of these digital tools.
Three dimensional modeling
Manufacturing engineers use 3D modeling software to design the tools and machinery necessary for their intended applications. The software allows them to design the factory floor layout and the production flow. This technique lets engineers analyze the current manufacturing processes and allows them to search for ways to increase efficiency in production before production even begins.
Simulation
Simulation can be used to model and test a system's behavior. Simulation also provides engineers with a tool for inexpensive, fast, and secure analysis to test how changes in a system can affect the performance of that system.
These models can be classified into the following:
Static - System of equations at a point in time
Dynamic - System of equations that incorporate time as a variable
Continuous - Dynamic model where time passes linearly
Discrete - Dynamic model where time is separated into chunks
Deterministic - Models where a unique solution is generated per a given input
Stochastic - Models where a solution is generated utilizing probabilistic parameters
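The deterministic/stochastic distinction in the list above can be illustrated with a toy production-line model in Python; the cycle-time figures are invented for the example:

```python
import random

def deterministic_runtime(parts, cycle_time_s=30.0):
    """Deterministic model: a unique solution is generated for a given input."""
    return parts * cycle_time_s  # total machining time in seconds

def stochastic_runtime(parts, cycle_time_s=30.0, jitter_s=2.0, seed=None):
    """Stochastic model: cycle times vary randomly around the nominal value,
    so repeated runs differ unless the random seed is fixed."""
    rng = random.Random(seed)
    return sum(rng.gauss(cycle_time_s, jitter_s) for _ in range(parts))
```

A static model would evaluate such an equation at a single point in time, while a dynamic discrete-event simulation would advance time in chunks as parts move between stations.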
Applications of simulation can be assigned to:
Product design (e.g. virtual reality)
Process design (e.g. assisting in the design of manufacturing processes)
Enterprise resource planning
Analysis
Digital manufacturing systems often incorporate optimization capabilities to reduce time, cost, and improve the efficiency of most processes. These systems improve optimization of floor schedules, production planning, and decision making. The system analyzes feedback from production, such as deviations or problems in the manufacturing system, and generates solutions for handling them.
In addition, many technologies analyze data from simulations in order to calculate a design that is optimal before it is even built.
Debate continues on the impact of such systems on the manufacturing workforce. Econometric models have found that each newly installed robot displaces 1.6 manufacturing workers on average. Those models also have forecasted that by 2030 as many as 20 million additional manufacturing jobs worldwide could be displaced due to robotization.
However, other research has found evidence, not of job losses, but of a skills gap. Digital manufacturing is creating hundreds of new data-centric manufacturing jobs — roles like “collaborative robotics technician” and “predictive maintenance systems specialist" — but not enough available workers with the skills and training necessary to fill them.
Tooling and processes
There are many different tooling processes that digital manufacturing utilizes. However, every digital manufacturing process involves the use of computer numerical controlled (CNC) machines. This technology is crucial in digital manufacturing, as it not only enables mass production and flexibility but also provides a link between a CAD model and production. The two primary categories of CNC tooling are additive and subtractive. Major strides in additive manufacturing have come about recently and are at the forefront of digital manufacturing. These processes allow machines to address every element of a part no matter the complexity of its shape.
Examples of additive tooling and processes
Stereolithography - In this process, solid parts are formed by solidifying layers of a photopolymer with ultraviolet light. There is a wide range of acrylics and epoxies that are used in this process.
Ink-Jet Processing - Although the most widely used ink-jet process is printing on paper, many ink-jet processes are applied in engineering. This process involves a printhead depositing layers of liquid material onto a filler powder in the shape of the desired object. After the powder is saturated, a fresh layer of powder is added, and the cycle continues until the object is built. Another, less known, material-drop deposition process uses a build material and a support material to produce a 3D model: the build material is a thermoplastic, the support material is wax, and the wax is melted away after the layered model is printed. A similar technique uses droplet-based manufacturing (DBM) to build thermoplastic models without support, using 5-axis drop positioning.
Laser sintering and fusion - This process utilizes heat produced by infrared lasers to bond a powdered material together to form a solid shape.
Solid ground curing - A layer of liquid photopolymer is spread over a platform. An optical mask is generated and laid over the polymer. A UV lamp cures the resin that is not blocked by the mask. Any remaining liquid is removed and the voids are filled with wax. Liquid resin is spread over the layer that was just produced and the process is repeated. When the part is finished, the wax can be melted out of the voids.
Laminated-Object Manufacturing - A sheet material is laid on a platform and a laser cuts the desired contour. The platform is lowered by one sheet thickness and a new sheet is laid with a layer of thermal adhesive between the two sheets. A heated roller presses the sheets together and activates the adhesive. The laser cuts the contours of this layer and the process is repeated. When the part is finished, the leftover sheet material around the perimeter of the part must be removed. The final part is coated with sealant.
Fused filament fabrication - FFF is the most commonly used form of 3D printing. Thermoplastic material is heated just beyond solidification and extruded onto a platform in the desired shape. The platform is lowered, and the next layer is extruded onto the previous layer. The process is repeated until the part is complete.
Examples of subtractive tooling and processes
Water Jet Cutting - A water jet cutter is a CNC tool that uses a high pressure stream of water, often mixed with an abrasive material, to cut shapes or patterns out of many types of materials.
Milling - A CNC mill uses a rotational cutting tool to remove material from a piece of stock. Milling can be performed on most metals, many plastics, and all types of wood.
Lathe - A CNC lathe removes material by rotating the work-piece while a stationary cutting tool is brought into contact with the material.
Laser cutting - A Laser cutter is a CNC tool that uses a focused laser beam to cut and engrave sheet material. Cutting can be done on plastics, woods and on higher power machines, metal. Recently, affordable laser cutters have become popular with hobbyists.
Benefits
Optimization of a part's manufacturing process. This can be done by modifying and/or creating procedures within a virtual, controlled environment. By doing this, the use of new robotic or automated systems can be tested in the manufacturing procedure before being physically implemented.
Digital manufacturing allows for the whole manufacturing process to be created virtually before it is implemented physically. This enables designers to see the results of their process before investing time and money into creating the physical plant.
The effects caused by changing the machines or tooling processes can be seen in real-time. This allows for analysis information to be taken for any individual part at any desired point during the manufacturing process.
Types
On demand
Additive Manufacturing - Additive manufacturing is the "process of joining materials to make objects from 3D model data, usually layer upon layer." Digital additive manufacturing is highly automated, which means fewer man-hours and less machine utilization, and therefore reduced cost. By incorporating model data from digitized open sources, products can be produced quickly, efficiently, and cheaply.
Rapid Manufacturing - Much like Additive manufacturing, Rapid manufacturing uses digital models to rapidly produce a product that can be complicated in shape and heterogeneous in material composition. Rapid manufacturing utilizes not only the digital information process, but also the digital physical process. Digital information governs the physical process of adding material layer by layer until the product is complete. Both the information and physical processes are necessary for rapid manufacturing to be flexible in design, cheap, and efficient.
Cloud-based design and manufacturing
Cloud-Based Design (CBD) refers to a model that incorporates social network sites, cloud computing, and other web technologies to aid in cloud design services. This type of system must be cloud computing-based, be accessible from mobile devices, and be able to manage complex information. Autodesk Fusion 360 is an example of CBD.
Cloud-Based Manufacturing (CBM) refers to a model that utilizes access to open information from various resources to develop reconfigurable production lines that improve efficiency, reduce costs, and improve response to customer needs. A number of online manufacturing platforms enable users to upload their 3D files for design-for-manufacturing (DFM) analysis and manufacture.
See also
Injection moulding
Rapid prototyping
References | Digital manufacturing | [
"Technology"
] | 1,854 | [
"Industrial computing",
"Digital manufacturing"
] |
49,264,945 | https://en.wikipedia.org/wiki/List%20of%20iPad%20models | This article provides an overview of the various iPad models that are or have been marketed by Apple.
Comparison of models
iPad
Supported
Unsupported (64-bit)
Unsupported (32-bit)
iPad Mini
Supported
Unsupported
iPad Air
Supported
Unsupported
iPad Pro
Supported
Unsupported
iPad systems-on-chips
Operating system support
See also
List of iPhone models
Notes
References
Apple Inc. lists
IOS
IPad
iPad
iPadOS | List of iPad models | [
"Technology"
] | 94 | [
"Computing-related lists",
"Lists of mobile computers",
"Apple Inc. lists"
] |
49,265,651 | https://en.wikipedia.org/wiki/Sarcodon%20excentricus | Sarcodon excentricus is a species of tooth fungus in the family Bankeraceae. The fungus was originally described in 1951 by William Chambers Coker and Alma Holland Beers. The type collection was made by Lexemuel Ray Hesler in Cades Cove, Tennessee in 1937. Coker and Beers did not include a description of the fungus written in Latin—a requirement of the nomenclatural code at the time—and so their new species was not validly published. Richard Baird published S. excentricus validly in 1985.
References
External links
Fungi described in 1985
Fungi of the United States
excentricus
Fungi without expected TNC conservation status
Fungus species | Sarcodon excentricus | [
"Biology"
] | 136 | [
"Fungi",
"Fungus species"
] |
49,265,835 | https://en.wikipedia.org/wiki/Gas%20vesicle | Gas vesicles, also known as gas vacuoles, are nanocompartments in certain prokaryotic organisms, which help in buoyancy. Gas vesicles are composed entirely of protein; no lipids or carbohydrates have been detected.
Function
Gas vesicles occur primarily in aquatic organisms, as they are used to modulate the cell's buoyancy and modify the cell's position in the water column so it can be optimally located for photosynthesis or move to locations with more or less oxygen. Organisms that can float to the air–liquid interface outcompete other aerobes that cannot rise in the water column, by using up the oxygen in the top layer.
In addition, gas vesicles can be used to maintain optimum salinity by positioning the organism in specific locations in a stratified body of water to prevent osmotic shock. High concentrations of solute will cause water to be drawn out of the cell by osmosis, causing cell lysis. The ability to synthesize gas vesicles is one of many strategies that allow halophilic organisms to tolerate environments with high salt content.
Evolution
Gas vesicles are likely one of the earliest mechanisms of motility among microscopic organisms, as they are the most widespread form of motility conserved in the genomes of prokaryotes, some of which evolved about 3 billion years ago.
Modes of active motility such as flagellar movement require a mechanism that converts chemical energy into mechanical energy, and thus are much more complex and would have evolved later. The functions of gas vesicles are also largely conserved among species, although the mode of regulation may differ, suggesting the importance of gas vesicles as a form of motility. In certain organisms, such as the enterobacterium Serratia sp., flagella-based motility and gas vesicle production are regulated oppositely by a single RNA-binding protein, RsmA, suggesting alternate modes of environmental adaptation that would have developed into different taxa through regulation of the balance between motility and flotation.
Although there is evidence suggesting the early evolution of gas vesicles, plasmid transfer serves as an alternate explanation of the widespread and conserved nature of the organelle. Cleavage of a plasmid in Halobacterium halobium resulted in the loss of the ability to biosynthesize gas vesicles, indicating the possibility of horizontal gene transfer, which could result in a transfer of the ability to produce gas vesicles among different strains of bacteria.
Structure
Gas vesicles are generally lemon-shaped or cylindrical, hollow tubes of protein with conical caps on both ends. The vesicles vary most in their diameter. Larger vesicles can hold more air and use less protein, making them the most economical in terms of resource use; however, the larger a vesicle is, the structurally weaker it is under pressure, and the less pressure is required before the vesicle collapses. Organisms have evolved to be the most efficient with protein use and to use the largest maximum vesicle diameter that will withstand the pressure to which the organism could be exposed. In order for natural selection to have acted on gas vesicles, the vesicles' diameter must be controlled by genetics.
Although genes encoding gas vesicles are found in many species of haloarchaea, only a few species produce them. The first haloarchaeal gas vesicle gene, GvpA, was cloned from Halobacterium sp. NRC-1. Fourteen genes are involved in forming gas vesicles in haloarchaea.
The first gas vesicle gene, GvpA, was identified in Calothrix. There are at least two proteins that compose a cyanobacterium's gas vesicle: GvpA and GvpC. GvpA forms ribs and much of the mass (up to 90%) of the main structure. GvpA is strongly hydrophobic and may be one of the most hydrophobic proteins known. GvpC is hydrophilic and helps to stabilize the structure by periodic inclusions into the GvpA ribs. GvpC can be washed out of the vesicle, with a consequent decrease in the vesicle's strength. The thickness of the vesicle's wall may range from 1.8 to 2.8 nm. The ribbed structure of the vesicle is evident on both inner and outer surfaces, with a spacing of 4–5 nm between ribs. Vesicles may be 100–1400 nm long and 45–120 nm in diameter.
Within a species gas vesicle sizes are relatively uniform with a standard deviation of ±4%.
Growth
It appears that gas vesicles begin their existence as small biconical (two cones joined at their flat bases) structures which enlarge to their specific diameter, then grow and expand in length. It is unknown exactly what controls the diameter, but it may be a molecule that interferes with GvpA, or the shape of GvpA may change.
Regulation
Formation of gas vesicles are regulated by two Gvp proteins: GvpD, which represses the expression of GvpA and GvpC proteins, and GvpE, which induces expression. Extracellular environmental factors also affect vesicle formation, either by regulating Gvp protein production or by directly disturbing the vesicle structure.
Light intensity
Light intensity has been found to affect gas vesicle production and maintenance differently in different bacteria and archaea. For Anabaena flos-aquae, higher light intensities lead to vesicle collapse from an increase in turgor pressure and greater accumulation of photosynthetic products. In cyanobacteria, vesicle production decreases at high light intensity due to exposure of the bacterial surface to UV radiation, which can damage the bacterial genome.
Carbohydrates
Accumulation of glucose, maltose, or sucrose in Haloferax mediterranei and Haloferax volcanii was found to inhibit the expression of GvpA proteins and, therefore, to decrease gas vesicle production. However, this only occurred in the cells' early exponential growth phase. Vesicle formation could also be induced by decreasing extracellular glucose concentrations.
Oxygen
A lack of oxygen was found to negatively affect gas vesicle formation in halophilic archaea. Halobacterium salinarum produces little or no vesicles under anaerobic conditions due to reduced synthesis of mRNA transcripts encoding Gvp proteins. H. mediterranei and H. volcanii do not produce any vesicles under anoxic conditions due to a decrease in synthesized transcripts encoding GvpA and truncated transcripts expressing GvpD.
pH
Increased extracellular pH levels have been found to increase vesicle formation in Microcystis species. Under increased pH, levels of gvpA and gvpC transcripts increase, allowing more exposure to ribosomes for expression and leading to upregulation of Gvp proteins. This may be attributed to greater transcription of these genes, decreased decay of the synthesized transcripts, or higher stability of the mRNA.
Ultrasonic irradiation
Ultrasonic irradiation, at certain frequencies, was found to collapse gas vesicles in cyanobacteria Spirulina platensis, preventing them from blooming.
Quorum sensing
In the enterobacterium Serratia sp. strain ATCC39006, gas vesicles are produced only when there is a sufficient concentration of a signalling molecule, N-acyl homoserine lactone. In this case, the quorum sensing molecule N-acyl homoserine lactone acts as a morphogen initiating organelle development. This is advantageous to the organism, as resources for gas vesicle production are utilized only when there is oxygen limitation caused by an increase in bacterial population.
Role in vaccine development
The gas vesicle gene gvpC from Halobacterium sp. is used as a delivery system in vaccine studies.
Several characteristics of the protein encoded by the gas vesicle gene gvpC allow it to be used as carrier and adjuvant for antigens: it is stable, resistant to biological degradation, tolerates relatively high temperatures (up to 50 °C), and non-pathogenic to humans. Several antigens from various human pathogens have been recombined into the gvpC gene to create subunit vaccines with long-lasting immunologic responses.
Different genomic segments encoding several Chlamydia trachomatis proteins, including MOMP, OmcB, and PompD, are joined to the gvpC gene of Halobacteria. In vitro assessments of cells show expression of the Chlamydia genes on cell surfaces through imaging techniques, and show characteristic immunologic responses such as TLR activity and pro-inflammatory cytokine production. The gas vesicle gene can thus be exploited as a delivery vehicle to generate a potential vaccine for Chlamydia. Limitations of this method include the need to minimize damage to the GvpC protein itself while including as much of the vaccine target gene as possible in the gvpC gene segment.
A similar experiment uses the same gas vesicle gene and the Salmonella enterica pathogen's secreted inositol phosphate effector proteins SopB4 and SopB5 to generate a potential vaccine vector. Immunized mice secreted the pro-inflammatory cytokines IFN-γ, IL-2, and IL-9, and the antibody IgG was also detected. After an infection challenge, no or significantly fewer bacteria were found in harvested organs such as the spleen and the liver. Potential vaccines using gas vesicles as an antigen display can be given via the mucosal route as an alternative administration pathway, increasing their accessibility to more people and eliciting a wider range of immune responses within the body.
Role as contrast agents and reporter genes
Gas vesicles have several physical properties that make them visible on various medical imaging modalities. The ability of gas vesicles to scatter light has been used for decades for estimating their concentration and measuring their collapse pressure. The optical contrast of gas vesicles also enables them to serve as contrast agents in optical coherence tomography, with applications in ophthalmology. The difference in acoustic impedance between the gas in their cores and the surrounding fluid gives gas vesicles robust acoustic contrast. Moreover, the ability of some gas vesicle shells to buckle generates harmonic ultrasound echoes that improve the contrast-to-tissue ratio. Finally, gas vesicles can be used as contrast agents for magnetic resonance imaging (MRI), relying on the difference between the magnetic susceptibility of air and water. The ability to non-invasively collapse gas vesicles using pressure waves provides a mechanism for erasing their signal and improving their contrast. Subtracting the images before and after acoustic collapse can eliminate background signals, enhancing the detection of gas vesicles.
Heterologous expression of gas vesicles in bacterial and mammalian cells enabled their use as the first family of acoustic reporter genes. While fluorescent reporter genes like green fluorescent protein (GFP) had widespread use in biology, their in vivo applications are limited by the penetration depth of light in tissue, typically a few mm. Luminescence can be detected deeper within the tissue, but have a low spatial resolution. Acoustic reporter genes provide sub-millimeter spatial resolution and a penetration depth of several centimeters, enabling the in vivo study of biological processes deep within the tissue.
References
Bacteria
Prokaryotic cell anatomy
Vesicles | Gas vesicle | [
"Biology"
] | 2,409 | [
"Microorganisms",
"Prokaryotes",
"Bacteria"
] |
49,265,860 | https://en.wikipedia.org/wiki/Sarcodon%20scabripes | Sarcodon scabripes is a species of fungus in the family Bankeraceae found in Asia, Europe, and North America. It was originally described in 1897 as Hydnum scabripes by Charles Horton Peck. Howard James Banker transferred it to the genus Sarcodon in 1906. The fungus makes fruit bodies with a drab gray to flesh-colored cap, and flesh that is white. In addition to the United States, where it was first documented, S. scabripes has been reported from Japan and the Sverdlovsk Oblast region of Russia.
References
External links
Fungi described in 1896
Fungi of Asia
Fungi of Europe
Fungi of the United States
scabripes
Taxa named by Charles Horton Peck
Fungi without expected TNC conservation status
Fungus species | Sarcodon scabripes | [
"Biology"
] | 157 | [
"Fungi",
"Fungus species"
] |
49,266,292 | https://en.wikipedia.org/wiki/Boletopsis%20atrata | Boletopsis atrata is a species of hydnoid fungus in the family Bankeraceae. It was described as new to science in 1982 by Norwegian mycologist Leif Ryvarden. It has a disjunct distribution, found in temperate forests of East Asia and Eastern North America, where it fruits at the base of hardwood trees and stumps, especially oak (Quercus) and chestnuts (Castanea).
References
External links
Fungi described in 1982
Fungi of Asia
Fungi of North America
Thelephorales
Taxa named by Leif Ryvarden
Fungus species | Boletopsis atrata | [
"Biology"
] | 120 | [
"Fungi",
"Fungus species"
] |
49,266,956 | https://en.wikipedia.org/wiki/Yin%20T.%20Hsieh | Yin T. Hsieh (Traditional Chinese: 謝英鐸; 14 April 1929 – 24 February 2018) was a Taiwanese scientist and agronomist based in the Dominican Republic.
Contributions
Hsieh is considered the "Father of Dominican Rice" for his work on the development of several rice varieties and technologies.
In Taiwan, Doctor Hsieh worked on the development of several rice varieties, including Kaohsiung 22, Kaohsiung 24, Kaohsiung 25, Kaohsiung 27, Kaohsiung 53, Kaohsiung 64, Kaohsiung 136, and Kaohsiung 137.
Hsieh arrived in the Dominican Republic on December 29, 1965. He focused his efforts on the genetic improvement of Dominican rice, helping to increase the country's rice yields. Due to these and other efforts, the country now has a strong and well-developed rice production system.
References
External links
El Nuevo Diario. Expert calls to invests in agricultural sciences (in Spanish)
UNAPEC celebrates World Environment day (in Spanish)
Periódico Hoy. Big farmers learn from small farmers (in Spanish)
Provincias Dominicanas. Distinguished visitors of the XX century, Bonao, Dominican Republic (in Spanish).
Diario Libre. Taiwanese embassy to show documentaries (in Spanish).
ElCaribe.com. Dominican Republic has been self-sufficient in rice production for six years (in Spanish).
Perspectiva Ciudadana. The Father of Dominican Rice (in Spanish).
Periódico Hoy. On Rice history (in Spanish).
Listín Diario. Fire destroys Dr. Hsieh's 22 years of efforts (in Spanish).
1929 births
2018 deaths
Plant breeding
Agronomists
Agriculture in the Dominican Republic
National Taiwan University alumni
Dominican Republic scientists
Texas A&M University alumni
Recipients of the Order of Christopher Columbus | Yin T. Hsieh | [
"Chemistry"
] | 382 | [
"Plant breeding",
"Molecular biology"
] |
49,268,590 | https://en.wikipedia.org/wiki/Buildings%20Department | The Buildings Department (BD) is a department of the Hong Kong Government responsible for building codes, building safety, and inspection. It was founded in 1993 and is now subordinate to the Development Bureau.
History
The Buildings Department succeeded the former Buildings Ordinance Office (BOO). The BOO first existed under the former Public Works Department. From 1982 to 1986 it existed under the Building Development Department, and from 1986 to 1993 under the Buildings and Lands Department.
In March 2021, the BD was criticized because some cases of misconnected sewage pipes had been unresolved by the department for more than a decade, resulting in waste flowing into the ocean.
References
External links
Building codes
Hong Kong government departments and agencies | Buildings Department | [
"Engineering"
] | 140 | [
"Building engineering",
"Building codes"
] |
49,269,078 | https://en.wikipedia.org/wiki/Fellow%20of%20the%20Institution%20of%20Mechanical%20Engineers | Fellowship of the Institution of Mechanical Engineers (FIMechE) is an award and fellowship granted to individuals that the Institution of Mechanical Engineers judges to be a "professional engineer working in a senior role with significant autonomy and responsibility." It is the highest level of membership and demonstrates experience, commitment and contribution to engineering.
Fellowship
Fellows are entitled to use the post-nominal letters FIMechE. Examples of fellows include Colin P. Smith, Barry Thornton, William Pillar, Laurence Williams and Michael Alcock. See the category Fellows of the Institution of Mechanical Engineers for more examples.
References
Institution of Mechanical Engineers
Academic awards
Institution of Mechanical Engineers | Fellow of the Institution of Mechanical Engineers | [
"Engineering"
] | 128 | [
"Institution of Mechanical Engineers",
"Mechanical engineering organizations"
] |
49,269,127 | https://en.wikipedia.org/wiki/Isoxicam | Isoxicam is a nonsteroidal anti-inflammatory drug (NSAID) that was used to reduce inflammation and, as an analgesic, to relieve pain in certain conditions. The drug was introduced in 1983 by the Warner-Lambert Company. Isoxicam is a chemical analog of piroxicam (Feldene), which has a pyridine ring in place of isoxicam's isoxazole ring. In 1985 isoxicam was withdrawn from the French market due to adverse effects, namely toxic epidermal necrolysis resulting in death. Although these serious side effects were observed only in France, the drug was withdrawn worldwide.
References
Dermatoxins
Isoxazoles
Nonsteroidal anti-inflammatory drugs
Drugs developed by Pfizer
Sultams
Withdrawn drugs | Isoxicam | [
"Chemistry"
] | 161 | [
"Drug safety",
"Withdrawn drugs"
] |
49,270,083 | https://en.wikipedia.org/wiki/Graph%20edit%20distance | In mathematics and computer science, graph edit distance (GED) is a measure of similarity (or dissimilarity) between two graphs.
The concept of graph edit distance was first formalized mathematically by Alberto Sanfeliu and King-Sun Fu in 1983.
A major application of graph edit distance is in inexact graph matching, such
as error-tolerant pattern recognition in machine learning.
The graph edit distance between two graphs is related to the
string edit distance between strings.
With the interpretation of strings as connected, directed acyclic graphs of
maximum degree one, classical definitions
of edit distance such as Levenshtein distance,
Hamming distance
and Jaro–Winkler distance may be interpreted as graph edit distances
between suitably constrained graphs. Likewise, graph edit distance is
also a generalization of tree edit distance between
rooted trees.
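To make the correspondence concrete, the Levenshtein distance admits the classical dynamic-programming computation; under the interpretation above, it equals the unit-cost graph edit distance between the two corresponding path graphs. A minimal sketch (the function name is illustrative, not from any cited implementation):

```python
def levenshtein(s: str, t: str) -> int:
    """Classical string edit distance via dynamic programming.

    Viewing each string as a labeled path graph of maximum degree one,
    this equals the graph edit distance under unit costs for
    substitution, insertion, and deletion.
    """
    prev = list(range(len(t) + 1))  # distances from "" to each prefix of t
    for i, a in enumerate(s, 1):
        cur = [i]  # deleting the first i characters of s reaches ""
        for j, b in enumerate(t, 1):
            cur.append(min(
                prev[j] + 1,             # delete a
                cur[j - 1] + 1,          # insert b
                prev[j - 1] + (a != b),  # substitute (free if equal)
            ))
        prev = cur
    return prev[-1]
```

For example, `levenshtein("kitten", "sitting")` is 3, matching the three elementary operations needed.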
Formal definitions and properties
The mathematical definition of graph edit distance is dependent upon the definitions of
the graphs over which it is defined, i.e. whether and how the vertices and edges of the
graph are labeled and whether the edges are directed.
Generally, given a set of graph edit operations (also known as elementary graph operations), the graph edit distance between two graphs $g_1$ and $g_2$, written as $\mathrm{GED}(g_1, g_2)$, can be defined as

$$\mathrm{GED}(g_1, g_2) = \min_{(e_1, \dots, e_k) \in \mathcal{P}(g_1, g_2)} \sum_{i=1}^{k} c(e_i),$$

where $\mathcal{P}(g_1, g_2)$ denotes the set of edit paths transforming $g_1$ into (a graph isomorphic to) $g_2$, and $c(e) \ge 0$ is the cost of each graph edit operation $e$.
The set of elementary graph edit operators typically includes:
vertex insertion to introduce a single new labeled vertex to a graph.
vertex deletion to remove a single (often disconnected) vertex from a graph.
vertex substitution to change the label (or color) of a given vertex.
edge insertion to introduce a new colored edge between a pair of vertices.
edge deletion to remove a single edge between a pair of vertices.
edge substitution to change the label (or color) of a given edge.
Additional, but less common operators, include operations such as edge splitting that introduces a new vertex into an edge (also creating a new edge), and edge contraction that eliminates vertices of degree two between edges (of the same color). Although such complex edit operators can be defined in terms of more elementary transformations, their use allows finer parameterization of the cost function when the operator is cheaper than the sum of its constituents.
A deep analysis of the elementary graph edit operators has been presented in the literature, and methods have been proposed to automatically deduce these elementary graph edit operators; some algorithms also learn the edit costs online.
Applications
Graph edit distance finds applications in handwriting recognition, fingerprint recognition and cheminformatics.
Algorithms and complexity
Exact algorithms for computing the graph edit distance between a pair of graphs typically transform the problem into one of finding the minimum cost edit path between the two graphs.
The computation of the optimal edit path is cast as a pathfinding search or shortest path problem, often implemented as an A* search algorithm.
In addition to exact algorithms, a number of efficient approximation algorithms are also known, most of them with cubic computational time. Moreover, there is an algorithm that deduces an approximation of the GED in linear time.
Despite the above algorithms sometimes working well in practice, in general the problem of computing graph edit distance is NP-hard (for a proof that's available online, see Section 2 of Zeng et al.), and is even hard to approximate (formally, it is APX-hard).
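As an illustration of the edit-path minimization (not of the A*-based algorithms referenced above), the unit-cost GED between two tiny unlabeled undirected graphs can be computed by exhaustive search over injective vertex mappings. The sketch below assumes unit costs and that an optimal path maps every vertex of the smaller graph into the larger one; the function name and interface are hypothetical:

```python
from itertools import permutations

def tiny_ged(n1, edges1, n2, edges2):
    """Unit-cost graph edit distance between two small unlabeled
    undirected graphs, given as vertex counts and edge lists.

    Tries every injective mapping of the smaller graph's vertices into
    the larger graph's vertices; cost = vertex insertions plus edge
    deletions/insertions. Exponential time -- toy sizes only.
    """
    if n1 > n2:  # GED is symmetric; make graph 1 the smaller one
        n1, edges1, n2, edges2 = n2, edges2, n1, edges1
    target = {frozenset(e) for e in edges2}
    best = float("inf")
    for mapping in permutations(range(n2), n1):
        mapped = {frozenset((mapping[a], mapping[b])) for a, b in edges1}
        cost = (n2 - n1) + len(mapped ^ target)  # symmetric difference of edge sets
        best = min(best, cost)
    return best
```

For instance, a triangle and a three-vertex path differ by one edge deletion, so their unit-cost GED is 1.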
References
Graph theory
Graph algorithms
Computational problems in graph theory
Distance | Graph edit distance | [
"Physics",
"Mathematics"
] | 697 | [
"Computational problems in graph theory",
"Discrete mathematics",
"Distance",
"Physical quantities",
"Quantity",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Size",
"Combinatorics",
"Space",
"Mathematical relations",
"Spacetime",
"Wikipedia categories named after... |
49,270,112 | https://en.wikipedia.org/wiki/Remote%20guarding | Remote guarding is a proactive security system combining CCTV video cameras, video analytics, alarms, monitoring centers and security agents. Potential threats are first spotted by cameras and analyzed in real-time by software algorithms based on predefined criteria. Once an event has been identified by the software, a security officer located in a remote center is then alerted to analyze the threat and take appropriate action immediately to prevent or minimize damage from occurring. These actions can include a verbal warning to the perpetrator via two-way radio from the center or reporting the event to local law enforcement.
Remote guarding adds a layer of human verification to automated security systems that results in reducing false alarms to almost 0% by having trained agents review each alert before taking action. This service along with changes to how the alarm industry handles alarms, such as second call verification, continue to reduce the false alarm burden on local law enforcement.
Compliance with Bureaus & Law Enforcement Agencies
Remote guarding is considered by some as the next innovation in traditional man-guarding, in which licensed security officers are placed on site to observe and report any suspicious activity. The private security industry falls under the jurisdiction of the Department of Consumer Affairs (DCA), Bureau of Security and Investigative Services (BSIS); this authority is derived from Division 3 (commencing with Section 7580 through Section 7588.5), Chapter 11.5, the Private Security Services Act, of the Business and Professions Code (B&P).
Because remote guarding is relatively new to the security industry, organizations and agencies are still working on bringing standards to remote guarding services in respect to previous traditional security mediums.
In October 2016, Underwriters Laboratories (UL) issued the first certificate of compliance for remote guarding through the use of command and control, in accordance with the requirements outlined in UL 827B and UL 827, to Elite Interactive Solutions, LLC of Los Angeles, California.
Adaptation of Remote Guarding
Traditional security guard companies, as well as a new breed of security companies that focus specifically on Remote Guarding, have emerged to help secure facilities that include property types such as automotive dealerships, multi-family residential housing, office buildings, office parks, warehouses, distribution centers, and many other locations where security is set up to monitor and protect property and people.
However, the companies focused exclusively on remote guarding have approached the traditional security industry differently, forgoing man-guarding entirely and designing their systems and hiring and training their agents solely to provide remote guarding services.
While not in direct competition with remote guarding, Robert H. Perry and Associates reports a trend of contract security companies starting electronic security divisions or teaming with companies that specialize in electronic security. According to the report, customers are replacing security officers with electronic security, or enhancing security coverage by using security officers together with electronic security devices.
References
Alarms
Security technology | Remote guarding | [
"Technology"
] | 578 | [
"Warning systems",
"Alarms"
] |
49,270,505 | https://en.wikipedia.org/wiki/Hafnian | In mathematics, the hafnian is a scalar function of a symmetric matrix that generalizes the permanent.
The hafnian was named by Eduardo R. Caianiello "to mark the fruitful period of stay in Copenhagen (Hafnia in Latin)."
Definition
The hafnian of a symmetric $2n \times 2n$ matrix $A = (A_{i,j})$ is defined as

$$\operatorname{haf}(A) = \sum_{\rho \in P^2_{2n}} \prod_{\{i, j\} \in \rho} A_{i,j},$$

where $P^2_{2n}$ is the set of all partitions of the set $\{1, 2, \dots, 2n\}$ into subsets of size $2$.
This definition is similar to that of the Pfaffian, but differs in that the signatures of the permutations are not taken into account. Thus the relationship of the hafnian to the Pfaffian is the same as relationship of the permanent to the determinant.
Basic properties
The hafnian may also be defined as

$$\operatorname{haf}(A) = \frac{1}{n! \, 2^n} \sum_{\sigma \in S_{2n}} \prod_{i=1}^{n} A_{\sigma(2i-1), \sigma(2i)},$$

where $S_{2n}$ is the symmetric group on $\{1, 2, \dots, 2n\}$. The two definitions are equivalent because if $\sigma \in S_{2n}$, then $\{\{\sigma(2i-1), \sigma(2i)\} : i = 1, \dots, n\}$ is a partition of $\{1, \dots, 2n\}$ into subsets of size 2, and as $\sigma$ ranges over $S_{2n}$, each such partition is counted exactly $n! \, 2^n$ times. Note that this argument relies on the symmetry of $A$, without which the original definition is not well-defined.
The hafnian of an adjacency matrix of a graph is the number of perfect matchings (also known as 1-factors) in the graph. This is because a partition of $\{1, \dots, 2n\}$ into subsets of size 2 can also be thought of as a perfect matching in the complete graph $K_{2n}$.
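The partition-based definition can be evaluated directly by recursion: pair the first unmatched index with each remaining index and multiply by the hafnian of the submatrix on the rest. On an adjacency matrix this counts perfect matchings, per the observation above. A sketch (it runs in O((2n−1)!!) time, so small matrices only):

```python
def hafnian(A):
    """Hafnian of a symmetric matrix by summing over all partitions
    of the indices into pairs (zero for odd dimension)."""
    if len(A) % 2:
        return 0
    def rec(idx):
        if not idx:
            return 1
        first, rest = idx[0], idx[1:]
        # pair `first` with each remaining index j and recurse on the rest
        return sum(A[first][j] * rec(rest[:k] + rest[k + 1:])
                   for k, j in enumerate(rest))
    return rec(tuple(range(len(A))))
```

On the adjacency matrix of the complete graph K4 this returns 3, the number of perfect matchings of K4.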
The hafnian can also be thought of as a generalization of the permanent, since the permanent can be expressed as

$$\operatorname{per}(A) = \operatorname{haf} \begin{pmatrix} 0 & A \\ A^{\mathsf{T}} & 0 \end{pmatrix}.$$
Just as the hafnian counts the number of perfect matchings in a graph given its adjacency matrix, the permanent counts the number of matchings in a bipartite graph given its biadjacency matrix.
The hafnian is also related to moments of multivariate Gaussian distributions.
By Wick's probability theorem, the hafnian of a real symmetric $2n \times 2n$ matrix $A$ may be expressed as

$$\operatorname{haf}(A) = \operatorname{E} \left[ \prod_{i=1}^{2n} X_i \right], \qquad (X_1, \dots, X_{2n}) \sim \mathcal{N}(0, A + \lambda I),$$

where $\lambda$ is any number large enough to make $A + \lambda I$ positive semi-definite. Note that the hafnian does not depend on the diagonal entries of the matrix, and the expectation on the right-hand side does not depend on $\lambda$.
Generating function
Let be an arbitrary complex symmetric matrix composed of four blocks , , and . Let be a set of independent variables, and let be an antidiagonal block matrix composed of entries (each one is presented twice, one time per nonzero block). Let denote the identity matrix. Then the following identity holds:
where the right-hand side involves hafnians of matrices , whose blocks , , and are built from the blocks , , and respectively in the way introduced in MacMahon's Master theorem. In particular, is a matrix built by replacing each entry in the matrix with a block filled with ; the same scheme is applied to , and . The sum runs over all -tuples of non-negative integers, and it is assumed that .
The identity can be proved by means of multivariate Gaussian integrals and Wick's probability theorem.
The expression in the left-hand side, , is in fact a multivariate generating function for a series of hafnians, and the right-hand side constitutes its multivariable Taylor expansion in the vicinity of the point As a consequence of the given relation, the hafnian of a symmetric matrix can be represented as the following mixed derivative of the order :
The hafnian generating function identity written above can be considered as a hafnian generalization of MacMahon's Master theorem, which introduces the generating function for matrix permanents and has the following form in terms of the introduced notation:
Note that MacMahon's Master theorem comes as a simple corollary from the hafnian generating function identity due to the relation .
Non-negativity
If is a Hermitian positive semi-definite matrix and is a complex symmetric matrix, then
where denotes the complex conjugate of .
A simple way to see this when is positive semi-definite is to observe that, by Wick's probability theorem, when is a complex normal random vector with mean , covariance matrix and relation matrix .
This result is a generalization of the fact that the permanent of a Hermitian positive semi-definite matrix is non-negative. This corresponds to the special case using the relation .
Loop hafnian
The loop hafnian of an $n \times n$ symmetric matrix $A$ is defined as

$$\operatorname{lhaf}(A) = \sum_{M \in \mathrm{SPM}(n)} \prod_{\{i, j\} \in M} A_{i,j},$$

where $\mathrm{SPM}(n)$ is the set of all perfect matchings of the complete graph on $n$ vertices with loops, i.e., the set of all ways to partition the set $\{1, \dots, n\}$ into pairs or singletons (treating a singleton $i$ as the pair $\{i, i\}$). Thus the loop hafnian depends on the diagonal entries of the matrix, unlike the hafnian. Furthermore, the loop hafnian can be non-zero when $n$ is odd.
The loop hafnian can be used to count the total number of matchings in a graph (perfect or non-perfect), also known as its Hosoya index. Specifically, if one takes the adjacency matrix of a graph and sets the diagonal elements to 1, then the loop hafnian of the resulting matrix is equal to the total number of matchings in the graph.
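The same recursion used for the hafnian extends to the loop hafnian by adding a singleton branch that contributes the diagonal entry; with a graph's adjacency matrix whose diagonal is set to 1, it yields the Hosoya index as described above. A sketch:

```python
def loop_hafnian(A):
    """Loop hafnian: sum over partitions of the indices into pairs
    and singletons; a singleton i contributes A[i][i]."""
    def rec(idx):
        if not idx:
            return 1
        first, rest = idx[0], idx[1:]
        total = A[first][first] * rec(rest)  # leave `first` as a singleton
        total += sum(A[first][j] * rec(rest[:k] + rest[k + 1:])
                     for k, j in enumerate(rest))
        return total
    return rec(tuple(range(len(A))))
```

For the three-vertex path graph with diagonal entries set to 1, this returns 3: the empty matching plus the two single-edge matchings.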
The loop hafnian can also be thought of as incorporating a mean into the interpretation of the hafnian as a multivariate Gaussian moment. Specifically, by Wick's probability theorem again, the loop hafnian of a real symmetric $n \times n$ matrix $A$ can be expressed as

$$\operatorname{lhaf}(A) = \operatorname{E} \left[ \prod_{i=1}^{n} X_i \right], \qquad (X_1, \dots, X_n) \sim \mathcal{N}(\mu, A + \lambda I), \quad \mu_i = A_{i,i},$$

where $\lambda$ is any number large enough to make $A + \lambda I$ positive semi-definite.
Computation
Computing the hafnian of a (0,1)-matrix is #P-complete, because the permanent of a (0,1)-matrix, which is itself #P-complete to compute, equals the hafnian of a (0,1)-matrix of twice the size.
The hafnian of a $2n \times 2n$ matrix can be computed in $O(n^3 2^n)$ time.
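A simple way to compute small hafnians exactly is memoized recursion over vertex subsets: the smallest unmatched vertex is paired with every other unmatched vertex, and subproblems are cached by a bitmask. The sketch below is illustrative only; it is still exponential-time and is not the specialized algorithm referenced in the literature, but caching already avoids re-expanding subsets shared by different matching orders:

```python
from functools import lru_cache

def hafnian(A):
    """Hafnian via memoized recursion over vertex subsets (bitmask).

    The smallest remaining vertex is matched with each other remaining
    vertex; lru_cache stores results per subset of unmatched vertices."""
    n = len(A)

    @lru_cache(maxsize=None)
    def rec(mask):
        if mask == 0:
            return 1
        i = (mask & -mask).bit_length() - 1  # index of smallest set bit
        rest = mask & ~(1 << i)
        total = 0
        j_mask = rest
        while j_mask:
            j = (j_mask & -j_mask).bit_length() - 1
            j_mask &= j_mask - 1
            total += A[i][j] * rec(rest & ~(1 << j))
        return total

    return rec((1 << n) - 1)

# Perfect matchings of the 6-cycle: exactly 2 (alternating edge sets).
C6 = [[0] * 6 for _ in range(6)]
for v in range(6):
    C6[v][(v + 1) % 6] = C6[(v + 1) % 6][v] = 1
print(hafnian(C6))  # 2
```

Applied to a (0,1) adjacency matrix, this counts perfect matchings, consistent with the graph-theoretic interpretation of the hafnian.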
If the entries of a matrix are non-negative, then its hafnian can be approximated to within an exponential factor in polynomial time.
See also
Permanent
Pfaffian
Boson sampling
References
Algebraic graph theory
Matching (graph theory)
Combinatorics | Hafnian | ["Mathematics"] | 1,226 | ["Discrete mathematics", "Graph theory", "Combinatorics", "Mathematical relations", "Matching (graph theory)", "Algebra", "Algebraic graph theory"] |
49,271,581 | https://en.wikipedia.org/wiki/Discord | Discord is an instant messaging and VoIP social platform which allows communication through voice calls, video calls, text messaging, and media. Communication can be private or take place in virtual communities called "servers". A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links. Discord runs on Windows, macOS, Android, iOS, iPadOS, Linux, and in web browsers. The service has about 150 million monthly active users and 19 million weekly active servers. It is primarily used by gamers, although the share of users interested in other topics is growing. Discord was the 30th most visited website in the world, with 22.98% of its traffic coming from the United States. In March 2022, Discord employed 600 people globally.
History
The concept of Discord came from Jason Citron, who had founded OpenFeint, a social gaming platform for mobile games, and Stanislav Vishnevskiy, who had founded Guildwork, another social gaming platform. Citron sold OpenFeint to GREE in 2011 and used the proceeds to found Hammer & Chisel, a game development studio, in 2012. Their first product was Fates Forever, released in 2014, which Citron anticipated to be the first multiplayer online battle arena (MOBA) game on mobile platforms, but it did not become commercially successful.
According to Citron, during the development process he noticed how difficult it was for his team to work out tactics in games like Final Fantasy XIV and League of Legends using available voice over IP (VoIP) software. This led to the development of a chat service with a focus on user friendliness and minimal impact on performance. The name Discord was chosen because it "sounds cool and has to do with talking", was easy to say, spell, and remember, and was available for trademark and website registration. In addition, "Discord in the gaming community" was the problem they wished to solve.
To develop Discord, Hammer & Chisel gained additional funding from YouWeb's 9+ incubator, which had also funded the startup of Hammer & Chisel, and from Benchmark Capital and Tencent.
Discord was publicly released in May 2015 under the domain name discordapp.com. According to Citron, they made no specific moves to target any specific audience, but some gaming-related subreddits quickly began to replace their IRC links with Discord links. Discord became widely used by esports and LAN tournament gamers. The company benefited from relationships with Twitch streamers and subreddit communities for Diablo and World of Warcraft.
In January 2016, Discord raised an additional $20 million in funding, including an investment from WarnerMedia (then TimeWarner). WarnerMedia was acquired by AT&T in 2018, and WarnerMedia Investment Group was shut down in 2019, selling its equity.
Microsoft announced in April 2018 that it would provide Discord support for Xbox Live users, allowing them to link their Discord and Xbox Live accounts so that they can connect with their Xbox Live friends list through Discord.
In December 2018, the company announced it had raised $150 million in funding at a $2 billion valuation. The round was led by Greenoaks Capital with participation from Firstmark, Tencent, IVP, Index Ventures and Technology Opportunity Partners.
Starting in June 2020, Discord announced it was shifting focus away from video gaming specifically toward a more all-purpose communication and chat client, revealing its new slogan "Your place to talk" along with a revised website. Other planned changes included reducing the number of gaming in-jokes used within the client, improving the user onboarding experience, and increasing server capacity and reliability. The company announced it had received an additional $100 million in investments to help with these changes.
In March 2021, Discord announced it had hired its first finance chief, former head of finance for Pinterest Tomasz Marcinkowski. An inside source called this one of the first steps for the company towards a potential initial public offering, though co-founder and chief executive officer Jason Citron had stated earlier in the month that he was not thinking about taking the company public. Discord doubled its monthly user base to about 140 million in 2020. The same month, Bloomberg News and The Wall Street Journal reported that several companies were looking to purchase Discord, with Microsoft named as the likely lead buyer. However, Discord ended talks with Microsoft, opting to stay independent, and instead launched another round of investment in April 2021. Among those investing in the company was Sony Interactive Entertainment, which stated that it intended to integrate a portion of Discord's services into the PlayStation Network by 2022.
In May 2021, Discord rebranded its game controller-shaped logo "Clyde" in celebration of its sixth anniversary. The company also changed the color palette of its branding and user interfaces, making it much more saturated, to be more "bold and playful". They also changed its slogan from "your place to talk", to "imagine a place", believing that it would be easier to attach to additional taglines; these changes were met with backlash and criticism from Discord users.
In July 2021, Discord acquired Sentropy, an internet moderation company.
Ahead of a funding round in August 2021, Discord reported that its 2020 revenue had tripled from the prior year. According to Citron, the company's increased valuation was due to the shift away from "broadcast wide-open social media communication services to more small, intimate places", as well as increased usage during the COVID-19 pandemic. Discord captured users who were leaving Facebook and other platforms due to privacy concerns. Citron stated that the company was still in talks with several potential buyers, including all major gaming console manufacturers. The company secured additional investment in September 2021.
In September 2021, Google sent cease and desist notices to the developers of two of the most popular music bots used on Discord, Groovy and Rythm, which were used on an estimated 36 million servers in total. These bots allowed users to request and play songs in a voice channel, taking the songs from YouTube ad-free. Two weeks later, Discord partnered with YouTube to test a "Watch Together" feature, which allows Discord users to watch YouTube videos together.
Citron posted mockup images of Discord around the proposed Web3 principles with integrated cryptocurrency and non-fungible token support in November 2021, leading to criticism from its user base. Citron later stated that "We [...] want to clarify we have no plans to ship it at this time."
The CNIL fined Discord €800,000 in November 2022 for being in violation of the European Union's General Data Protection Regulation (GDPR). The violations found by CNIL were that the application would continue to run in the background after it was closed and would not disconnect the user from a voice chat, as well as allowing users to create passwords that only consist of six characters.
In early 2023, Discord was used to publish classified United States documents in one of the most significant intelligence leaks in recent history. The documents, distributed on a Minecraft Discord server as photos, detailed the state of the Russo-Ukrainian War, surveillance of allied and adversarial nations, and indicated cracks in alliances with nations aligned with the United States.
In August 2023, Discord cut 4% of its staff, laying off 40 employees as part of a restructuring effort. On December 5, 2023, Discord revamped their mobile app for iOS and Android devices. They added new features such as dark mode for OLED screens, voice messages, and new icons.
After a five-fold increase in employees between 2020 and 2024, the company laid off 17% of its staff, or 170 employees, in January 2024.
On April Fools' Day 2024, Discord accidentally broke the record for the most-viewed YouTube video in 24 hours, as the Discord client played its announcement video on loop within the app itself. However, more than 1.3 billion views were removed two days later after YouTube corrected the view count, and the Discord Loot Boxes video ultimately set no record.
Features
Discord is centered around managing communities. Communication tools such as voice and video calls, persistent chat rooms, and integrations with other gamer-focused services along with the general ability to send direct messages and create personal groups are present.
Servers
Discord communities are organized into discrete collections of channels called servers. Although they are referred to as servers on the front end, they are called "guilds" in the developer documentation to distinguish them from actual servers. Users can create servers for free, manage their public visibility, and create voice channels, text channels, and categories to sort the channels into. Most servers have a limit of 250,000 members, but this limit can be raised if the server owner contacts Discord.
Users can also create roles and assign them to server members. Roles can, among other things, determine which channels users have access to, change users' colors, and designate a server's moderation team. Previously, the largest known Discord server was Snowsgiving 2021, an official Discord-run server for the 2021 winter holiday season, which reached 1 million members. In 2023, the server for Midjourney reached over 15 million members, making it the largest server on Discord.
Starting in October 2017, Discord has allowed game developers and publishers to verify their servers. Verified servers, like verified accounts on social media sites, have badges to mark them as official communities. A verified server is moderated by its developer's or publisher's own moderation team. Verification was later extended in February 2018 to include esports teams and musical artists. By the end of 2017, about 450 servers were verified. In 2023, Discord paused its verification program while it performed maintenance; the program has not since been reopened.
Channel types
Channels may be used either for voice chat and streaming or for instant messaging and file sharing, or both.
Discord launched Stage Channels in May 2021, a feature similar to Clubhouse which allows for live, moderated channels, for audio talks, discussions, and other uses, which can further be potentially gated to only invited or ticketed users. Initially, users could search for open Stage Channels relevant to their interests through a Stage Discovery tool, which was discontinued in October 2021.
In August 2021, Discord launched Threads, which are temporary text channels that can be set to automatically disappear. This is meant to help foster more communication within servers.
Forum Channels, which allow for longer, separate conversations, were introduced to the platform in September 2022. These channels bring an Internet forum experience to Discord.
Discord launched Media Channels in June 2023. Media Channels are restricted to videos and images only.
User profiles
Users register for Discord with an email address and must create a username. Until mid-2023, to allow multiple users to share the same username, each user was assigned a four-digit number called a "discriminator" (colloquially a "Discord tag"), prefixed with "#", which was added to the end of their username. Users who subscribed to Discord Nitro had the ability to change this tag to any four-digit number. This system was ultimately replaced with a handle-based system in May 2023, removing the discriminator from usernames and mandating a change of username. Users were given priority in selecting their new usernames based on how early they had registered for Discord, their Nitro status, and ownership of partner and verified servers. Users criticized the impersonation risk that could arise if their previous username were claimed by another user.
In June 2021, Discord added a feature that allows the user to add an about me section to their profile, as well as a custom colored banner at the top of their profile. Subscribers to Discord Nitro have the added ability to upload static or animated images as their banners instead of solid colors.
Video calls and streaming
Video calling and screen sharing were added in October 2017, allowing users to create private video calls with up to 10 users, later increased to 50 due to the increased popularity of video calling during the COVID-19 pandemic.
In August 2019, this was expanded with live streaming channels in servers. A user can share their entire screen, or a specific application, and others in that channel can choose to watch the stream. While these features somewhat mimic the livestreaming capabilities of platforms like Twitch, the company does not plan to compete with these services, as these features were made for small groups.
Digital distribution
In August 2018, Discord launched a games storefront beta, allowing users to purchase a curated set of games through the service. This included a "First on Discord" featured set of games whose developers attested to Discord's help in getting them launched, giving these games 90 days of exclusivity on the Discord marketplace. Discord Nitro subscribers also gained access to a rotating set of games as part of their subscription, with the price of Nitro rising from $4.99 to $9.99 a month. A cheaper service called 'Nitro Classic' was also released, with the same perks as Nitro but without free games.
Following the launch of the Epic Games Store, which challenged Valve's Steam storefront by only taking a 12% cut of game revenue, Discord announced in December 2018 that it would reduce its own revenue cut to 10%.
To further support developers, starting in March 2019 Discord gave the ability for developers and publishers that ran their own servers to offer their games through a dedicated store channel on their server, with Discord managing the payment processing and distribution. This can be used, for example, to give select users access to alpha- and beta-builds of a game in progress as an early access alternative.
Also in March 2019, Discord removed the digital storefront, instead choosing to focus on the Nitro subscription and having direct sales be done through developer's own servers. In September 2019, Discord announced that it was ending its free game service in October 2019 as they found too few people were playing the games offered.
Developer tools and bots
In December 2016, the company introduced its GameBridge API, which allows game developers to directly integrate with Discord within games.
In December 2017, Discord added a software development kit that allows developers to integrate their games with the service, called "rich presence". This integration is commonly used to allow players to join each other's games through Discord or to display information about a player's game progression in their Discord profile.
Bots are community-made tools to automate tasks. When installed by server owners, they may aid in moderation, host mini games, and perform a myriad of other automated tasks. There are around 430,000 bots active in an estimated 30% of all servers. Discord provides official bot APIs which allow custom elements such as dropdowns and buttons. In spring 2022, Discord released an official "app directory" where server owners can add bots to their servers from within Discord. The Verge described bots as an "important part of Discord".
Unofficial extensions
Although Discord disallows modifications, many unofficial extensions have been created. BetterDiscord, for example, is an open-source desktop modification that allows various plugins to be installed. These plugins augment existing functionality or add features that are not offered by Discord. One plugin, for example, allows its users to apply custom skins for free; another plugin allows increasing the volume of a voice-call participant beyond the default. BetterDiscord has generally been well-received, though PC Gamer has said it is prone to crashes and bugs. According to BetterDiscord's developers, users of the modification are not at risk of being sanctioned by Discord so long as they do not use additional modifications that violate Discord's terms of service.
Infrastructure
Discord is a persistent group chat software, based on an eventually consistent database architecture. Discord was originally built on MongoDB. The infrastructure was migrated to Apache Cassandra when the platform reached a billion messages, then later migrated to ScyllaDB when it reached a trillion messages.
The desktop, web, and iOS apps use React, using React Native on iOS/iPadOS. The Android app was originally written natively, but now shares code with the iOS app. The desktop client is built on the Electron software framework using web technologies, which allows it to be multi-platform and operate as an installed application on personal computers.
The software is supported by Google Cloud Platform's infrastructure in more than thirty data centres located in thirteen regions to keep latency with clients low.
In July 2020, Discord added noise suppression into its mobile app using the Krisp audio-filtering technology.
Discord's backend is written mostly in Elixir and Python, as well as Rust, Go, and C++.
Monetization
While the software itself comes at no cost, the developers investigated ways to monetize it, with potential options including paid customization options such as emoji or stickers.
In January 2017, the first paid subscription and features were released with "Discord Nitro Classic" (originally released as "Discord Nitro"). For a monthly subscription fee of $4.99, users can get an animated avatar, use custom and/or animated emojis across all servers (non-Nitro users can only use custom emoji on the server they were added to), an increased maximum file size on file uploads (from 8 MB to 50 MB), the ability to screen share in higher resolutions, the ability to choose their own discriminator (from #0001 to #9999) and a unique profile badge.
In October 2018, "Discord Nitro" was renamed "Discord Nitro Classic" with the introduction of the new "Discord Nitro", which cost $9.99 and included access to free games through the Discord game store. Monthly subscribers of Discord Nitro Classic at the time of the introduction of the Discord games store were gifted with Discord Nitro, lasting until January 1, 2020, and yearly subscribers of Discord Nitro Classic were gifted with Discord Nitro until January 1, 2021.
In October 2019, Discord ended their free game service with Nitro.
In June 2019, Discord introduced Server Boosts, a way to benefit specific servers by purchasing a "boost" for it, with enough boosts granting various benefits for the users in that particular server. Each boost is a subscription costing $4.99 a month. For example, if a server maintains 2 boosts, it unlocks perks such as a higher maximum audio quality in voice channels and the ability to use an animated server icon. Users with Discord Nitro or Discord Nitro Classic have a 30% discount on server boost costs, with Nitro subscribers specifically also getting 2 free server boosts.
Discord began testing digital stickers on its platform in October 2020 for users in Canada. Most stickers cost between $1.50 and $2.25 and are part of Discord's monetization strategy. Discord Nitro subscribers received a free "What's Up Wumpus" sticker pack focused on Discord's mascot, Wumpus. In May 2023, Discord made most stickers free to all users.
In October 2022, the "Discord Nitro Classic" subscription tier was replaced by a $2.99 "Discord Nitro Basic", which features a subset of features from the $9.99 "Nitro" tier.
Discord added Avatar Decorations and Profile Themes in October 2023. Users can purchase animated decorations for their profiles from Discord's Shop.
Another way Discord makes money is through a 10% commission, charged as a distribution fee, on all games sold through game developers' verified servers.
Reception
By January 2016, Hammer & Chisel reported Discord had been used by 3 million people, with growth of 1 million per month, reaching 11 million users in July that year. By December 2016, the company reported it had 25 million users worldwide. By the end of 2017, the service had drawn nearly 90 million users, with roughly 1.5 million new users each week. With the service's third anniversary, Discord stated that it had 130 million unique registered users. The company observed that while the bulk of its servers are used for gaming-related purposes, a small number have been created by users for non-gaming activities, like stock trading, fantasy football, and other shared interest groups.
In May 2016, one year after the software's release, Tom Marks, writing for PC Gamer, described Discord as the best VoIP service available. Lifehacker has praised Discord's interface, ease of use, and platform compatibility.
In 2021, Discord had at least 350 million registered users across its web and mobile platforms, and was used by 56 million people every month, sending a total of 25 billion messages per month. By June 2020, the company had reported 100 million active users each month. The service has since grown to over 227 million monthly active users.
Criticisms and controversies
Cyberbullying and moderation
Discord has had problems with hostile behavior and abuse within chats, with some communities of chat servers being "raided" (a large number of users joining a server) by other communities. This includes flooding chats with controversial topics related to race, religion, politics, and pornography. Discord has stated that it has plans to implement changes that would "rid the platform of the issue".
Discord has a Trust and Safety department that responds to user reports. However, because Discord is centered around private communities, it is difficult to research the department's effectiveness. A study published in New Media & Society criticized Discord's offloading of server search functions to unmoderated third-party apps, saying that this makes it easier for hateful communities to find new audiences.
In January 2018, The Daily Beast reported that it found several Discord servers that were specifically engaged in distributing revenge porn and facilitating real-world harassment of the victims of these images and videos. Such actions are against Discord's terms of service and Discord shut down servers and banned users identified from these servers.
Data privacy
In September 2024, the Federal Trade Commission released a report summarizing responses from nine companies, including Discord, to orders issued by the agency under Section 6(b) of the Federal Trade Commission Act of 1914 concerning the collection and use of user and non-user data, including data on children and teenagers. The report found that the companies' data practices left individuals vulnerable to identity theft, stalking, unlawful discrimination, emotional distress and mental health issues, social stigma, and reputational harm.
Use by extremist users and groups
Discord gained popularity with the alt-right due to the pseudonymity and privacy offered by its service. Analyst Keegan Hankes from the Southern Poverty Law Center stated: "It's pretty unavoidable to be a leader in this [alt-right] movement without participating in Discord." Citron stated that servers found to be engaged in illegal activities or violations of the terms of service would be shut down, but would not disclose any examples.
Following the violent events that occurred during the Unite the Right rally in Charlottesville, Virginia, on August 12, 2017, it was found that Discord had been used to plan and organize the white nationalist rally. This included participation by Richard Spencer and Andrew Anglin, high-level figures in the movement. Discord responded by closing servers that supported the alt-right and far-right, and banning users who had participated. Discord's executives condemned "white supremacy" and "neo-Nazism", and said that these groups "are not welcome on Discord". Discord has worked with the Southern Poverty Law Center to identify hateful groups using Discord and ban those groups from the service. Since then, several neo-Nazi and alt-right servers have been shut down by Discord, including those operated by neo-Nazi terrorist group Atomwaffen Division, Nordic Resistance Movement, Iron March, and European Domas.
In March 2019, the media collective Unicorn Riot published the contents of a Discord server used by several members of the white nationalist group Identity Evropa who were also members of the United States Armed Forces. Unicorn Riot has since published member lists and contents of several dozen servers connected to alt-right, white supremacist, and other such movements.
In January 2021, two days after the U.S. Capitol attack, Discord deleted the pro-Donald Trump server The Donald, "due to its overt connection to an online forum used to incite violence, plan an armed insurrection in the United States, and spread harmful misinformation related to 2020 U.S. election fraud", while stating that there was no evidence the server was used to organize the attack on the Capitol building. The server had been used by former members of the r/The_Donald subreddit, which Reddit had deleted several months earlier.
In January 2022, the British anti-disinformation organization Logically reported that Holocaust denial, neo-Nazism and other forms of hate speech were flourishing on the Discord and Telegram groups of the German website Disclose.tv.
In May 2022, Payton S. Gendron was named as the suspect in a race-driven mass shooting in Buffalo, New York, that killed ten people. It was reported that Gendron used a private Discord server as a diary for weeks as he prepared for the attack. Approximately 30 minutes before the shooting, several users were invited by Gendron to view the server and read the messages. The messages were later published on 4chan. Discord told the press that the server was deleted by moderators shortly after the shooting. The New York state attorney general's office announced an investigation of Discord among other online services in the wake of the shooting to determine if they had taken enough steps to prevent such content from being broadcast on their services, with which Discord said they would comply.
Child grooming and safety
CNN has reported that Discord has had problems with sexual exploitation of children and young teenagers on its platform.
In July 2018, Discord updated its terms of service to ban drawn pornography with underage subjects. Some Discord users subsequently criticized the moderation staff for selectively allowing "cub" content, or underage pornographic furry artwork, under the same guidelines. The staff held that "cub porn" was separate from lolicon and shotacon, being "allowable as long as it is tagged properly". After numerous complaints from the community, Discord amended its community guidelines in February 2019 to include "non-humanoid animals and mythological creatures as long as they appear to be underage" in its list of disallowed categories, in addition to announcing periodic transparency reports to better communicate with users.
In June 2023, NBC News reported that they had identified 35 cases of adults being charged with "kidnapping, grooming, or sexual assault" that allegedly involved the platform. They additionally discovered 165 cases of prosecution for the sharing of child sexual exploitation material on the platform.
In March 2024, a joint investigation by The Washington Post, Wired, Der Spiegel and Recorder outlined the extensive child grooming, sexual abuse (including sextortion) and murder conducted by a group known as 764 on Discord. The investigation linked 764 and its associated groups and servers to cases in Germany, United States and Romania, going as far back as April 2021. Discord's representative stated that the service filed hundreds of reports, in addition to removing over 34,000 accounts associated with the group.
Bans
On January 27, 2021, Discord banned the r/WallStreetBets server during the GameStop short squeeze, citing "hateful and discriminatory content", which users found contentious. One day later, Discord allowed another server to be created and began assisting with moderation on it.
Censorship
In September 2024, according to the Russian newspaper Kommersant, the Russian regulator Roskomnadzor was planning to block the platform. Roskomnadzor demanded that the platform remove 947 posts containing illegal content and imposed a fine of 3.5 million roubles (US$37,493). On 8 October 2024, Russia officially blocked Discord.
Several hours after Russia's block, Turkey blocked Discord following a decision by the Ankara 1st Criminal Court of Peace.
Discord is blocked by the Great Firewall in China. Chinese police have tracked down and interrogated people who make sensitive comments on the platform.
See also
Comparison of VoIP software
Comparison of cross-platform instant messaging clients
List of freeware
Notes
References
Further reading
Morris, Tee (May 19, 2020). Discord For Dummies. Wiley. .
External links
2015 software
Android (operating system) software
Freeware
Internet properties established in 2015
IOS software
Instant messaging clients for Linux
MacOS instant messaging clients
Windows instant messaging clients
Proprietary cross-platform software
Voice over IP clients for Linux
VoIP software
Web conferencing
Video game culture
Instant messaging | Discord | ["Technology"] | 6,092 | ["Instant messaging"] |
49,271,595 | https://en.wikipedia.org/wiki/Turboatom | Ukrainian Energy Machines Joint Stock Company "Turboatom", commonly known as just Turboatom (), is a state enterprise responsible for power engineering in Ukraine. The company specializes in the production and maintenance of steam and other turbines for thermal power stations; nuclear power plants and cogeneration plants; hydraulic turbines for hydroelectric power stations and pumped storage power plants; gas turbines and combined cycle turbines for thermal power plants; and other power equipment.
Turboatom is among the top ten turbine construction companies in the world, alongside other major companies such as General Electric, Siemens, Alstom, JSC Power Machines, Andritz Hydro, and Voith.
History
1929–1940 Foundation and commissioning
On 1 May 1932, the first stage of the turbogenerator plant was commissioned, with a design capacity corresponding to the production of 1.5 million kW of steam turbines per year. Four workshops were put into service: the blade workshop, the tool workshop, the workshop of miscellaneous parts, and the workshop of discs and diaphragms.
Achievements in 1932 included:
First hydroelectric generator for Dzoraget Hydroelectric Power Station
Diesel-generator set for Turkestan–Siberia Railway
Renovation of 10 regional power plants and 12 turbines with a total capacity of 134 MW
Production of turbine blades that previously were imported from abroad
By the end of 1932, development of the first design of a turbogenerator with a capacity of 50 MW was completed. In early 1933, three more workshops were commissioned: large machining, winding and assembly. At the end of the year, production of 24 types of blades was mastered and turbines were refurbished for Kashira Power Plant and some other power plants.
1940–1950 World War II and postwar years
During World War II, the plant suspended turbine production and began manufacture of defense products, such as mortars, and repaired tanks. The work at the enterprise was suspended only three days before the beginning of the occupation of Kharkiv on 21 October 1941. Meanwhile, an evacuation process was in full swing. During the evacuation, "Turboatom" was divided into several parts. On 23 August 1943, Kharkiv was freed and rehabilitation of the enterprise began.
A team of turbine constructors restored the native plant and continued to manufacture products while also participating in the restoration of municipal services. In late 1944, the plant already had 374 (out of 700) items of machine tools and other production equipment installed in accordance with the plant rehabilitation project. In a year, 250 machine tools were installed and commissioned. The number of workers and employees of the plant rapidly increased from 876 to 1664 persons in a year.
Achievements in 1944 included:
Four turbines with a total capacity of 68 thousand kW restored and equipped for the city of Kharkiv
Two turbines with a capacity of 22 thousand kilowatt for Kyiv
Erection of turbines for the Sevastopol, Kaluga, and Shterov Power Stations with a total capacity of 28,200 kW
Development of nuclear turbine construction
Turboatom began serial production of MK-30 turbines with a capacity of 30 MW for mining and chemical centers with experimental reactors. The first Kharkiv MK-30 turbine began operations in December 1958 in Tomsk.
In the early 1970s, manufacture of turbines for nuclear power plants with a capacity of 500 MW was approved. This allowed a sharp reduction in capital expenditure on the construction of power plants. Turbines of this type are installed at:
Leningrad Nuclear Power Plant (the world's largest)
Kursk Nuclear Power Plant
Smolensk Nuclear Power Plant
In the 1970s, Turboatom was identified as a leading enterprise in designing and manufacturing powerful steam turbines for nuclear power plants.
In 1982–1985 the company mastered manufacture of steam turbines with a capacity of 1000 MW for the following NPPs:
South Ukraine Nuclear Power Plant
Kalinin Nuclear Power Plant
Zaporizhzhia Nuclear Power Plant
Balakovo Nuclear Power Plant
Rostov Nuclear Power Plant
Kozloduy Nuclear Power Plant (Bulgaria)
Developments since 1991
In 1998, Turboatom backed out of a plan to help Russia construct a NPP in Iran, following pressure from US president Bill Clinton. The multi-million dollar deal would have provided 25 to 30 percent of the factory's total output over five years, and the backing-out was thus regarded as a severe blow to the regional economy by 2000.
In 2012 while visiting the site, then president Viktor Yanukovych called the company "Energoatom" instead of Turboatom three times during his visit. This is an example of a yanukism.
American utility company Westinghouse Electric signed a memorandum of understanding (MOU) with Turboatom in 2017 in order to assist in the upgrading of Ukraine's 13 VVER-1000 reactors. Another MOU followed in 2018 with Japanese power and engineering company Toshiba Energy Systems & Solutions Corporation.
At the end of July 2018, the company's specialists produced CNT-1 high-pressure cylinders for the Armenian Nuclear Power Plant.
In 2019 Energoatom and Turboatom signed a five-year contract to modernise condensers and turbines at a number of Ukrainian nuclear power plants.
Privatization
In 2017, the National Reforms Council floated the idea that Turboatom could be privatized. At that time, the state held a controlling stake of roughly 75 percent in the company, with Ukrainian Prime Minister Volodymyr Groysman arguing that the state should continue to control a stake of at least 51 percent after privatization.
A governmental committee approved the transfer of the state's 75 percent stake into private hands in May 2018. However, the government terminated any privatization process of the company in 2019 by removing Turboatom from a list of companies cleared for privatization.
See also
Energoatom
References
External links
Government-owned companies of Ukraine
Engine manufacturers of Ukraine
Nuclear technology companies
Ministry of Heavy and Transport Machine-Building (Soviet Union)
Water turbine manufacturers
Engine manufacturers of the Soviet Union | Turboatom | [
"Engineering"
] | 1,261 | [
"Nuclear technology companies",
"Engineering companies"
] |
53,129,859 | https://en.wikipedia.org/wiki/Live%20single-cell%20imaging | In systems biology, live single-cell imaging is a live-cell imaging technique that combines traditional live-cell imaging and time-lapse microscopy techniques with automated cell tracking and feature extraction, drawing many techniques from high-content screening. It is used to study signalling dynamics and behaviour in populations of individual living cells. Live single-cell studies can reveal key behaviours that would otherwise be masked in population averaging experiments such as western blots.
In a live single-cell imaging experiment a fluorescent reporter is introduced into a cell line to measure the levels, localisation or activity of a signalling molecule. Subsequently, a population of cells is imaged over time with careful atmospheric control to maintain viability and reduce stress upon the cells. Automated cell tracking is then performed upon these time-series images, following which filtering and quality control may be performed. Analysis of features describing the fluorescent reporter over time can then lead to modelling and the generation of biological conclusions, from which further experimentation can be guided.
History
The field of live single-cell imaging began with work demonstrating that green fluorescent protein (GFP), found in the jellyfish Aequorea victoria, could be expressed in living organisms. This discovery allowed researchers to study the localisation and levels of proteins in living single cells, for example the activity of kinases and calcium levels through the use of FRET reporters, as well as numerous other phenotypes.
Generally, these early studies focused on the localisation and behaviour of these fluorescently labelled proteins at the subcellular level over short periods of time. However, this changed with pioneering studies looking at the tumour suppressor p53 and the stress- and inflammation-related protein NF-κB, revealing their levels and localisation, respectively, to oscillate over periods of several hours. Live single-cell approaches were also applied around this time to understand signalling in single-cell organisms including bacteria, where live studies allowed the dynamics of competence to be modelled, and yeast, revealing the mechanism underpinning coherent cell cycle entry.
Experimental work flow
Fluorescent reporters
In any live single-cell study, the first step is to introduce a reporter for the protein or molecule of interest into a suitable cell line. Much of the growth in the field has come from improved gene-editing tools such as CRISPR, which have led to the development of a wide variety of fluorescent reporters.
Fluorescent tagging uses a gene encoding a fluorescent protein that is inserted into the coding frame of the protein to be tagged. Texture and intensity features can be extracted from images of the tagged protein.
Molecules can also be tagged in vitro and introduced into the cell by electroporation. This enables the use of smaller and more photostable fluorophores but requires additional washing steps.
By engineering expression of a FRET reporter such that the donor and acceptor fluorophores are only in close proximity when an upstream signalling molecule is either active or inactive, the donor-to-acceptor fluorescence intensity ratio can be used as a measure of signalling activity. For example, in key early work applying FRET reporters to live single-cell studies, reporters of Rho GTPase activity were engineered.
Nuclear translocation reporters use engineered nuclear import and nuclear export signals, which can be inhibited by signalling molecules, to record signalling activity via the ratio of nuclear reporter to cytoplasmic reporter.
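As a concrete sketch of the translocation read-out, the nuclear-to-cytoplasmic intensity ratio can be computed from a segmented image. The following minimal NumPy example is illustrative only (the function name and the toy image are assumptions, not drawn from any published pipeline); it assumes boolean masks for the nucleus and the whole cell:

```python
import numpy as np

def nuc_cyto_ratio(image, nuclear_mask, cell_mask):
    """Mean nuclear / mean cytoplasmic reporter intensity for one cell.

    image is a 2-D intensity array; the masks are boolean arrays of the
    same shape. The cytoplasm is the cell area outside the nucleus.
    """
    cyto_mask = cell_mask & ~nuclear_mask
    return image[nuclear_mask].mean() / image[cyto_mask].mean()

# Toy 4x4 "cell" with a bright nucleus in the centre
img = np.array([[1., 1., 1., 1.],
                [1., 4., 4., 1.],
                [1., 4., 4., 1.],
                [1., 1., 1., 1.]])
cell = np.ones((4, 4), dtype=bool)
nuc = np.zeros((4, 4), dtype=bool)
nuc[1:3, 1:3] = True
print(nuc_cyto_ratio(img, nuc, cell))  # 4.0
```

In a real experiment the masks would come from the segmentation step described below under live-cell tracking, and the ratio would be recorded per cell per frame to build a signalling time series.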
Live imaging
Live-cell imaging of fluorescently labelled cells must then be performed. This requires simultaneous incubation of cells in stress-free conditions whilst imaging is being performed. Several factors must be taken into account when choosing imaging conditions, such as phototoxicity, photobleaching, tracking ease, the rate of change of signalling activity, and the signal-to-noise ratio. These all relate to imaging frequency and illumination intensity.
Phototoxicity can result from being exposed to large amounts of light over long periods of time. Cells will become stressed, which can lead to apoptosis. High frequency and intensity imaging can cause the fluorophore signal to decrease through photobleaching. Higher frequency imaging generally makes automated cell tracking easier. Imaging frequencies should be able to capture necessary changes to signalling activity. Low intensity imaging or poor reporters may prevent low levels of signalling activity within the cell from being detected.
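The trade-off between imaging frequency and photobleaching can be illustrated with a toy model (purely a sketch; the per-frame bleaching fraction is an arbitrary assumption, not a measured value) in which each exposure bleaches a fixed fraction of the remaining fluorophores:

```python
# Toy photobleaching model: each exposure removes a fixed fraction of the
# remaining fluorescence, so total exposures set the end-of-experiment signal.
def remaining_signal(n_frames, bleach_per_frame=0.01):
    """Fraction of initial fluorescence left after n_frames exposures."""
    return (1.0 - bleach_per_frame) ** n_frames

# Imaging every 5 min vs every 30 s over a 10 h experiment:
print(remaining_signal(120))   # ~0.30
print(remaining_signal(1200))  # ~6e-06
```

Under these assumed numbers, the higher-frequency protocol would bleach the reporter almost completely, which is why imaging frequency must be balanced against tracking ease and the timescale of the signalling dynamics being measured.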
Live-cell tracking
Following live-cell imaging, automated tracking software is then employed to extract time-series data from videos of cells. Live-cell tracking is generally split into two steps: image segmentation of cells or their nuclei, and cell/nuclei tracking based on these segments. Many challenges still exist at this stage of a live single-cell imaging study. However, recent progress has been highlighted by the field's first objective comparison of single-cell tracking techniques.
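A deliberately naive baseline for the tracking step (real tracking software is far more sophisticated; the function name and distance threshold here are illustrative assumptions) is greedy nearest-neighbour linking of segmented cell centroids between consecutive frames:

```python
import numpy as np

def link_nearest(prev_centroids, next_centroids, max_dist=20.0):
    """Greedy nearest-neighbour linking of cell centroids between frames.

    Returns a list of (prev_index, next_index) pairs; cells with no
    neighbour within max_dist pixels are left unmatched.
    """
    links, used = [], set()
    for i, p in enumerate(prev_centroids):
        dists = [float(np.hypot(*(p - q))) for q in next_centroids]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist and j not in used:
            links.append((i, j))
            used.add(j)
    return links

# Two cells that swap list order between frames are still linked correctly:
prev_c = np.array([[10.0, 10.0], [50.0, 50.0]])
next_c = np.array([[52.0, 49.0], [11.0, 12.0]])
print(link_nearest(prev_c, next_c))  # [(0, 1), (1, 0)]
```

Such simple linking fails on cell division, segmentation errors and dense populations, which is part of why higher imaging frequencies make automated tracking easier and why deep-learning-based approaches have gained ground.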
Quantitative phase imaging (QPI) is particularly useful for live-cell tracking. As QPI is label-free, it does not induce phototoxicity, nor does it suffer from the photobleaching associated with fluorescence imaging. QPI offers a significantly higher contrast than conventional phase imaging techniques, such as phase-contrast microscopy. The higher contrast facilitates more robust cell segmentation and tracking than achievable with conventional phase imaging.
New techniques that combine traditional image segmentation with deep learning to segment cells are also becoming more widely used.
Data analysis
In the final stage of a live single-cell imaging study, modelling and analysis of time series data extracted from tracked cells is performed. Pedigree tree profiles can be constructed to reveal heterogeneity in individual cell response and downstream signalling. Refining and compressing data from video-based single-cell tracking can provide relevant inputs for big data analysis, contributing to the identification of biomarkers for enhanced diagnosis and prognosis. A large overlap between analysis of single-cell live data, and modelling of biological systems using ordinary differential equations exists. Results from this key data analysis step will drive further experimentation, for example by perturbing aspects of the system being studied and then comparing signalling dynamics with those of the control population.
Applications
By analysing the signalling dynamics of single cells across entire populations, live single-cell studies are now letting us understand how these dynamics affect key cellular decision-making processes. For example, live single-cell studies of the growth factor-activated kinase ERK revealed it to possess digital all-or-nothing activation. Moreover, this all-or-nothing activation was pulsatile, and the frequency of pulses in turn determined whether mammalian cells would commit to cell cycle entry or not. In another key example, live single-cell studies of CDK2 activity in mammalian cells demonstrated that a bifurcation in CDK2 activity following mitosis determined whether cells would continue to proliferate or enter a state of quiescence; this has since been shown, using live single-cell methods, to be caused by stochastic DNA damage inducing upregulation of p21, which inhibits CDK2 activity. Moving forward, live single-cell studies will likely incorporate multiple reporters into single cell lines to allow complex decision-making processes to be understood; however, challenges remain in scaling up live single-cell studies.
References
Cell imaging | Live single-cell imaging | [
"Chemistry",
"Biology"
] | 1,437 | [
"Cell imaging",
"Microscopy"
] |
53,131,761 | https://en.wikipedia.org/wiki/WeedTuber | A weedtuber (a portmanteau of the words weed and YouTube) is a vlogger (an online video content creator) who deals with issues surrounding cannabis. Since the cannabis legalizations of the 2010s, the producers/hosts sometimes consume cannabis on camera.
Popular weedtubers may have 300,000 or more channel subscribers. One (Joel Hradecky) has over one million as of early 2017. MassRoots listed 10 channels with over 100,000 subscribers in mid 2016. The term "weedtuber" began to appear on Google Trends in early 2015.
A sponsor is reported to be willing to pay a channel with over 100,000 subscribers between $300 and $1000 for mentioning their product.
Legality
Some weedtube channels were produced where cannabis was illegal but tolerated at the time, such as that of Vancouver, British Columbia's Stephen Payne, aka "Marijuana Man".
See also
Mukbang, content creators who eat for a video audience
References
Cannabis media
Lifestyle YouTubers
Internet terminology | WeedTuber | [
"Technology"
] | 202 | [
"Computing terminology",
"Internet terminology"
] |
53,133,381 | https://en.wikipedia.org/wiki/Crumpling | In geometry and topology, crumpling is the process whereby a sheet of paper or other two-dimensional manifold undergoes disordered deformation to yield a three-dimensional structure comprising a random network of ridges and facets with variable density. The geometry of crumpled structures is the subject of some interest to the mathematical community within the discipline of topology. Crumpled paper balls have been studied and found to exhibit surprisingly complex structures with compressive strength resulting from frictional interactions at locally flat facets between folds. The unusually high compressive strength of crumpled structures relative to their density is of interest in the disciplines of materials science and mechanical engineering.
Significance
The packing of a sheet by crumpling is a complex phenomenon that depends on material parameters and the packing protocol. Thus the crumpling behaviour of foil, paper and poly-membranes differs significantly and can be interpreted on the basis of material foldability. The high compressive strength exhibited by dense crumple formed cellulose paper is of interest towards impact dissipation applications and has been proposed as an approach to utilising waste paper.
From a practical standpoint, crumpled balls of paper are commonly used as toys for domestic cats.
References
Topology
Manifolds
Deformation (mechanics)
Structural analysis
Materials science
Mechanical engineering | Crumpling | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 250 | [
"Structural engineering",
"Geometry stubs",
"Applied and interdisciplinary physics",
"Deformation (mechanics)",
"Aerospace engineering",
"Structural analysis",
"Materials science",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Mechanical engineering",
"nan",
"Manifolds... |
53,134,359 | https://en.wikipedia.org/wiki/Polymorphic%20toxins | Polymorphic toxins (PTs) are multi-domain proteins primarily involved in competition between bacteria but also involved in pathogenesis when injected in eukaryotic cells. They are found in all major bacterial clades.
Bacteria live in complex multispecies communities such as biofilms and human-associated microbiotas. The dynamics and structure of these communities are greatly influenced by interbacterial competition through the secretion of toxic effectors. Bacteria have evolved several systems to outcompete their neighbors by poisoning them through a contact-dependent killing (including effectors of type V and VI secretion systems) or the release of soluble toxins (including colicins) in the environment.
Definition
Polymorphic toxins are bacterial exotoxins which share common features regarding their domain architecture.
Each family of PTs is defined by a conserved N-terminal region associated with diverse C-terminal (CT) toxic domains, which can be found in several other PT families. The fact that toxic domains are shared between several families of PTs is a hallmark of this category of toxins. A pool of more than 150 distinct toxic domains have been predicted by an in silico study. The most frequent toxic activities found among PTs are RNases, DNases, peptidases and protein-modifying activities.
PTs are involved in killing or inhibiting the growth of bacterial competitors lacking the adequate immunity protein. Indeed, in PT systems, a gene encoding a protective immunity protein is always located immediately downstream of the toxin gene. The immunity protein is present in the cytoplasm to protect the toxin-producing cell both from auto-intoxication and from toxin produced by other strains.
Polymorphic toxin families
The most studied PT families encompass colicins, toxic effectors of type V secretion systems, some toxic effectors of type VI secretion systems and MafB toxins.
Colicins
Contact-Dependent Growth Inhibition (CDI) systems: CdiA toxins
Rhs toxins
"Extended" VgrG toxins
"Extended" Hcp toxins
MafB toxins
See also
Exotoxin
References
Toxins | Polymorphic toxins | [
"Environmental_science"
] | 429 | [
"Toxins",
"Toxicology"
] |
53,134,676 | https://en.wikipedia.org/wiki/NGC%20413 | NGC 413 is a spiral galaxy of type SB(r)c located in the constellation Cetus. It was discovered in 1886 by Francis Leavenworth. It was described by Dreyer as "extremely faint, pretty small, very little extended."
Image gallery
References
External links
0413
Astronomical objects discovered in 1886
Cetus
Barred spiral galaxies
004347 | NGC 413 | [
"Astronomy"
] | 73 | [
"Cetus",
"Constellations"
] |
53,134,973 | https://en.wikipedia.org/wiki/WebGPU | WebGPU is a JavaScript API provided by a web browser that enables webpage scripts to efficiently utilize a device's graphics processing unit (GPU). This is achieved with the underlying Vulkan, Metal, or Direct3D 12 system APIs. On relevant devices, WebGPU is intended to supersede the older WebGL standard.
Google Chrome enabled initial WebGPU support in April 2023. Safari and Firefox have not yet enabled theirs. The W3C standard is thus in the working draft phase.
Technology
WebGPU enables 3D graphics within an HTML canvas. It also has robust support for general-purpose GPU computations.
WebGPU uses its own shading language called WGSL, which was initially designed to be trivially translatable to SPIR-V until developer complaints redirected it towards a more traditional shading-language design. Its syntax is similar to Rust's. Tint is a Google-made compiler for WGSL; Naga is a similar project developed for the needs of wgpu-rs.
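WGSL's Rust-like attribute and type syntax can be seen in a minimal shader pair (an illustrative sketch, not taken from the specification's examples) that positions a hard-coded triangle and colours it red:

```wgsl
@vertex
fn vs_main(@builtin(vertex_index) i : u32) -> @builtin(position) vec4<f32> {
    // Hard-coded clip-space triangle; no vertex buffer needed.
    var pos = array<vec2<f32>, 3>(
        vec2<f32>(0.0, 0.5),
        vec2<f32>(-0.5, -0.5),
        vec2<f32>(0.5, -0.5)
    );
    return vec4<f32>(pos[i], 0.0, 1.0);
}

@fragment
fn fs_main() -> @location(0) vec4<f32> {
    return vec4<f32>(1.0, 0.0, 0.0, 1.0); // opaque red
}
```

A script would pass this source to a WebGPU device as a shader module and reference `vs_main` and `fs_main` as the entry points of a render pipeline.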
Implementations
Both Google Chrome and Firefox support WebGPU with SPIR-V, with work ongoing for the WGSL front-end. Firefox and Deno use the Rust wgpu library. Safari follows upstream specifications of both WebGPU and WGSL.
Chrome version 113 enabled initial WebGPU support on Windows devices with Direct3D 12, ChromeOS devices with Vulkan, and macOS with Metal. This support for Android was enabled in version 121.
History
On June 8, 2016, Google showed "Explicit web graphics API" presentation to the WebGL working group (during the bi-annual face to face meeting). The presentation explored the basic ideas and principles of building a new API to eventually replace WebGL, aka "WebGL Next".
On January 24, 2017, Khronos hosted an IP-free meeting dedicated to discussion of "WebGL Next" ideas, colocated with the WebGL working group meeting in Vancouver. Google's team presented the NXT prototype, implementing a new API that could run in Chromium with OpenGL, or standalone with OpenGL and Metal. NXT borrowed concepts from the Vulkan, Direct3D 12, and Metal native APIs. Apple and Mozilla representatives also showed their prototypes, built on Safari and Servo respectively, both of which closely replicated the Metal API.
On February 7, 2017, Apple's WebKit team proposed the creation of the W3C community group to design the API. At the same time they announced a technical proof of concept and proposal under the name "WebGPU", based on concepts in Apple's Metal. The WebGPU name was later adopted by the community group as a working name for the future standard rather than just Apple's initial proposal. The initial proposal has been renamed to "WebMetal" to avoid further confusion.
The W3C "GPU for the Web" Community Group was launched on February 16, 2017. At this time, all of Apple, Google, and Mozilla had experiments in the area, but only Apple's proposal was officially submitted to the "gpuweb-proposals" repository.
Shortly after, on March 21, 2017, Mozilla submitted a proposal for WebGL Next within Khronos repository, based on the Vulkan design.
On June 1, 2018, citing "resolution on most-high level issues" in the cross-browser standardization effort, Google's Chrome team announced intent to implement the future WebGPU standard.
References
External links
3D graphics APIs
Cross-platform software
Graphics standards
Web development | WebGPU | [
"Technology",
"Engineering"
] | 751 | [
"Software engineering",
"Computer standards",
"Graphics standards",
"Web development"
] |
53,135,263 | https://en.wikipedia.org/wiki/Surveillance%20capitalism | Surveillance capitalism is a concept in political economics which denotes the widespread collection and commodification of personal data by corporations. This phenomenon is distinct from government surveillance, although the two can be mutually reinforcing. The concept of surveillance capitalism, as described by Shoshana Zuboff, is driven by a profit-making incentive, and arose as advertising companies, led by Google's AdWords, saw the possibilities of using personal data to target consumers more precisely.
Increased data collection may have various benefits for individuals and society, such as self-optimization (the quantified self), societal optimizations (e.g., by smart cities) and optimized services (including various web applications). However, as capitalism focuses on expanding the proportion of social life that is open to data collection and data processing, this can have significant implications for vulnerability and control of society, as well as for privacy.
The economic pressures of capitalism are driving the intensification of online connection and monitoring, with spaces of social life opening up to saturation by corporate actors, directed at making profits and/or regulating behavior. Therefore, personal data points increased in value after the possibilities of targeted advertising were known. As a result, the increasing price of data has limited access to the purchase of personal data points to the richest in society.
Background
Shoshana Zuboff writes that "analysing massive data sets began as a way to reduce uncertainty by discovering the probabilities of future patterns in the behavior of people and systems". In 2014, Vincent Mosco referred to the marketing of information about customers and subscribers to advertisers as surveillance capitalism and made note of the surveillance state alongside it. Christian Fuchs found that the surveillance state fuses with surveillance capitalism.
Similarly, Zuboff observes that the issue is further complicated by highly invisible collaborative arrangements with state security apparatuses. According to Trebor Scholz, companies recruit people as informants for this type of capitalism. Zuboff contrasts the mass production of industrial capitalism with surveillance capitalism, where the former was interdependent with its populations, who were its consumers and employees, and the latter preys on dependent populations, who are neither its consumers nor its employees and largely ignorant of its procedures.
Their research shows that the capitalist addition to the analysis of massive amounts of data has taken its original purpose in an unexpected direction. Surveillance has been changing power structures in the information economy, potentially shifting the balance of power further from nation-states and towards large corporations employing the surveillance capitalist logic.
Zuboff notes that surveillance capitalism extends beyond the conventional institutional terrain of the private firm, accumulating not only surveillance assets and capital but also rights, and operating without meaningful mechanisms of consent. In other words, analysing massive data sets was at some point not only executed by the state apparatuses but also companies. Zuboff claims that both Google and Facebook have invented surveillance capitalism and translated it into "a new logic of accumulation".
This mutation resulted in both companies collecting very large numbers of data points about their users, with the core purpose of making a profit. By selling these data points to external users (particularly advertisers), it has become an economic mechanism. The combination of the analysis of massive data sets and the use of these data sets as a market mechanism has shaped the concept of surveillance capitalism. Surveillance capitalism has been heralded as the successor to neoliberalism.
Oliver Stone, creator of the film Snowden, pointed to the location-based game Pokémon Go as the "latest sign of the emerging phenomenon and demonstration of surveillance capitalism". Stone criticized that the location of its users was used not only for game purposes, but also to retrieve more information about its players. By tracking users' locations, the game collected far more information than just users' names and locations: "it can access the contents of your USB storage, your accounts, photographs, network connections, and phone activities, and can even activate your phone, when it is in standby mode". This data can then be analysed and commodified by companies such as Google (which significantly invested in the game's development) to improve the effectiveness of targeted advertisement.
Another aspect of surveillance capitalism is its influence on political campaigning. Personal data retrieved by data miners can enable various companies (most notoriously Cambridge Analytica) to improve the targeting of political advertising, a step beyond the commercial aims of previous surveillance capitalist operations. In this way, it is possible that political parties will be able to produce far more targeted political advertising to maximise its impact on voters. However, Cory Doctorow writes that the misuse of these data sets "will lead us towards totalitarianism". This may resemble a corporatocracy, and Joseph Turow writes that "the centrality of corporate power is a direct reality at the very heart of the digital age".
Theory
Shoshana Zuboff
The terminology "surveillance capitalism" was popularized by Harvard Professor Shoshana Zuboff. In Zuboff's theory, surveillance capitalism is a novel market form and a specific logic of capitalist accumulation. In her 2014 essay A Digital Declaration: Big Data as Surveillance Capitalism, she characterized it as a "radically disembedded and extractive variant of information capitalism" based on the commodification of "reality" and its transformation into behavioral data for analysis and sales.
In a subsequent article in 2015, Zuboff analyzed the societal implications of this mutation of capitalism. She distinguished between "surveillance assets", "surveillance capital", and "surveillance capitalism" and their dependence on a global architecture of computer mediation that she calls "Big Other", a distributed and largely uncontested new expression of power that constitutes hidden mechanisms of extraction, commodification, and control that threatens core values such as freedom, democracy, and privacy.
According to Zuboff, surveillance capitalism was pioneered by Google and later Facebook, just as mass-production and managerial capitalism were pioneered by Ford and General Motors a century earlier, and has now become the dominant form of information capitalism. Zuboff emphasizes that behavioral changes enabled by artificial intelligence have become aligned with the financial goals of American internet companies such as Google, Facebook, and Amazon.
In her Oxford University lecture published in 2016, Zuboff identified the mechanisms and practices of surveillance capitalism, including the production of "prediction products" for sale in new "behavioral futures markets." She introduced the concept "dispossession by surveillance", arguing that it challenges the psychological and political bases of self-determination by concentrating rights in the surveillance regime. This is described as a "coup from above."
Key features
Zuboff's book The Age of Surveillance Capitalism is a detailed examination of the unprecedented power of surveillance capitalism and the quest by powerful corporations to predict and control human behavior. Zuboff identifies four key features in the logic of surveillance capitalism and explicitly follows the four key features identified by Google's chief economist, Hal Varian:
The drive toward more and more data extraction and analysis.
The development of new contractual forms using computer-monitoring and automation.
The desire to personalize and customize the services offered to users of digital platforms.
The use of the technological infrastructure to carry out continual experiments on its users and consumers.
Analysis
Zuboff compares demanding privacy from surveillance capitalists or lobbying for an end to commercial surveillance on the Internet to asking Henry Ford to make each Model T by hand and states that such demands are existential threats that violate the basic mechanisms of the entity's survival.
Zuboff warns that principles of self-determination might be forfeited due to "ignorance, learned helplessness, inattention, inconvenience, habituation, or drift" and states that "we tend to rely on mental models, vocabularies, and tools distilled from past catastrophes," referring to the twentieth century's totalitarian nightmares or the monopolistic predations of Gilded Age capitalism, with countermeasures that have been developed to fight those earlier threats not being sufficient or even appropriate to meet the novel challenges.
She also poses the question: "will we be the masters of information, or will we be its slaves?" and states that "if the digital future is to be our home, then it is we who must make it so".
In her book, Zuboff discusses the differences between industrial capitalism and surveillance capitalism. Zuboff writes that as industrial capitalism exploited nature, surveillance capitalism exploits human nature.
John Bellamy Foster and Robert W. McChesney
The term "surveillance capitalism" has also been used by political economists John Bellamy Foster and Robert W. McChesney, although with a different meaning. In an article published in Monthly Review in 2014, they apply it to describe the manifestation of the "insatiable need for data" of financialization, which they explain is "the long-term growth speculation on financial assets relative to GDP" introduced in the United States by industry and government in the 1980s that evolved out of the military-industrial complex and the advertising industry.
Response
Numerous organizations have been struggling for free speech and privacy rights under the new surveillance capitalism, and various national governments have enacted privacy laws. It is also conceivable that new capabilities and uses for mass surveillance require structural changes towards a new system to create accountability and prevent misuse. Government attention towards the dangers of surveillance capitalism especially increased after the exposure of the Facebook–Cambridge Analytica data scandal in early 2018. In response to the misuse of mass surveillance, multiple states have taken preventive measures. The European Union, for example, has reacted to these events and tightened its rules and regulations on the misuse of big data. Surveillance capitalism has become much harder under these rules, known as the General Data Protection Regulation. However, implementing preventive measures against the misuse of mass surveillance is hard for many countries, as it requires structural change of the system.
Bruce Sterling's 2014 lecture at the Strelka Institute, "The epic struggle of the internet of things", explained how consumer products could become surveillance objects that track people's everyday lives. In his talk, Sterling highlighted the alliances between multinational corporations that develop Internet of Things-based surveillance systems, which feed surveillance capitalism.
In 2015, Tega Brain and Surya Mattu's satirical artwork Unfit Bits encouraged users to subvert the fitness data collected by Fitbits. They suggested ways to fake datasets by attaching the device to, for example, a metronome or a bicycle wheel. In 2018, Brain created a project with Sam Lavigne called New Organs, which collects people's stories of being monitored online and offline.
The 2019 documentary film The Great Hack tells the story of how the company Cambridge Analytica used Facebook to manipulate the 2016 U.S. presidential election. Extensive profiling of users and news feeds ordered by black-box algorithms were presented as the main sources of the problem, which is also mentioned in Zuboff's book. The use of personal data to categorize individuals and potentially influence them politically highlights how individuals can become voiceless in the face of data misuse, and underlines the crucial role surveillance capitalism can play in social injustice, as it can affect all aspects of life.
See also
Decomputing
References
Further reading
External links
Shoshana Zuboff Keynote: Reality is the Next Big Thing, YouTube, Elevate Festival, 2014
Big Other: Surveillance Capitalism and the Prospects of an Information Civilization, Shoshana Zuboff
Capitalism's New Clothes, Evgeny Morozov, The Baffler (4 February 2019)
2014 neologisms
Big data
Capitalism
History of the Internet
Market research
Mass surveillance
Surveillance
Tracking | Surveillance capitalism | ["Technology"] | 2,380 | ["Tracking", "Data", "Wireless locating", "Big data"] |
53,137,072 | https://en.wikipedia.org/wiki/Marjorie%20G.%20Horning | Marjorie Janice Groothuis Horning (August 23, 1917 – June 11, 2020) was an American biochemist and pharmacologist. She was considered to be a pioneer of chromatography for her work in developing new techniques and applying them to the study of drug metabolism. She demonstrated that drugs and their metabolites can be transferred from a pregnant woman to her developing child, and later through breast milk, from a mother to a baby. Horning's work made possible the prevention of birth defects, as doctors began to warn of the dangers of drugs, alcohol, and smoking during pregnancy.
Early life and education
Marjorie Janice Groothuis was born in August 1917 in Detroit, Michigan, to Nina Jane Potter and Herman Groothuis. She studied at Goucher College in Baltimore, Maryland, graduating with a Bachelor of Arts in 1938. She then attended the University of Michigan, graduating with a Master of Science in 1940 and a Doctor of Philosophy in 1943.
She worked as a research assistant in the pediatrics department of the University of Michigan Hospital until 1945.
Career
Horning moved with her husband to the University of Pennsylvania in Philadelphia, Pennsylvania, in 1945, working there until 1951.
In 1950, her husband, Evan Horning, was appointed Chief of the Laboratory of the Chemistry of Natural Products of the National Institutes of Health (NIH) in Bethesda, Maryland. In 1951, Marjorie obtained a position as a research chemist at the National Heart Institute at NIH. She remained there until 1961.
In 1961, the couple moved to Baylor College of Medicine in Houston, Texas. Marjorie became an associate research professor at the Lipid Research Center at Baylor. She became a full professor of biochemistry at the Institute for Lipid Research at Baylor College in 1969.
In 1973, Evan and Marjorie Horning of Baylor College of Medicine first reported atmospheric pressure chemical ionization (APCI), using a 63Ni foil and corona discharge source. In 1974, they reported coupling APCI with direct liquid introduction (DLI) at atmospheric pressure, with effluent introduced directly from a liquid chromatography (LC) column into the 63Ni foil/corona discharge source.
In 1981, she became an adjunct professor of biochemistry and biophysics at the University of Houston, held concurrently with her position at Baylor.
She worked on the editorial boards of Drug Metabolism and Disposition, Analytical Chemistry, Toxicology and Applied Pharmacology, the Journal of Chromatography, Trends in Pharmacological Sciences and Biopharmaceutics and Drug Disposition.
In 1984, Horning became the first woman president of the American Society for Pharmacology and Experimental Therapeutics (ASPET). She had previously served as secretary-treasurer from 1981 to 1982. She was a member of the American Association for the Advancement of Science.
Research
Horning published more than 200 scientific articles about biochemistry, pharmacology, and analytical chemistry.
Marjorie and Evan Horning were pioneers in the field of analytical biochemistry, in the application of gas chromatography, mass spectrometry, and gas and liquid-mass spectrometric analysis. They developed revolutionary techniques to study the metabolism of drugs and track breakdown products of drugs as they transform and travel throughout the body. Marjorie helped to develop new methods of chromatographic analysis for the study of drug metabolism, including procedures for metabolic profiling and for the study of adrenocortical steroids.
Horning investigated the metabolism of drugs and their metabolites in humans, with particular attention to prenatal transmission between a pregnant woman and an embryo or fetus. Her work showed that drugs and their degradation products travel between mother and child and can affect the unborn child. Prior to her research, it had been believed that the placenta acted as a barrier. Her work resulted in changes in medical practice and the prevention of drug-related birth defects. As a result of her work, doctors in the 1980s began to warn women about the risks of taking medications, drinking alcohol, and smoking during pregnancy. Horning also determined that drugs and their metabolites can be passed from mother to child through breast milk.
She was a long-term member of the Society of Toxicology and worked with the National Toxicology Program, established in 1978 to identify toxic chemicals. Over 48,000 chemicals were used in the United States at the time, many in food additives, medicinal products, or household products.
Awards and honors
Frank H. Field and Joe L. Franklin Award in Mass Spectrometry, American Chemical Society, shared with Evan C. Horning, 1990
Tswett Chromatography Medal, International Symposium on Advances in Chromatography 1987
National Honorary Member of Iota Sigma Pi (National Honor Society for Women in Chemistry), 1985
Alumnae Athena award, University of Michigan, 1980
Garvan-Olin Medal, American Chemical Society, 1977
Honorary doctorate, Goucher College, 1977
Warner Lambert award, American Association of Clinical Chemists, 1976
Personal life
While a student at the University of Michigan, she met her husband-to-be, Evan C. Horning (1916-1993), a chemist and teacher. They married on September 26, 1942.
Following her retirement in 1987, Marjorie Horning found more time to pursue her passion for art. She became an elected Trustee of the Museum of Fine Arts, Houston in 1988, and later a Life Trustee in 2000.
The Hornings traveled widely for scientific conferences and collected art on many of these trips. Along with their friends Virginia and Ira Jackson, they provided early leadership of the Prints and Drawings Department at the MFAH, and Marjorie and Evan later donated their entire collection of over 300 Old Master and Modern prints and drawings to the Museum. Asian art was another focus of their collecting, especially important Japanese woodblock prints. During the late 1950s and early 1960s, the Hornings had yearly residences in Denmark, Sweden, and Finland, and collected decorative arts that later transformed the Museum's Scandinavian design collection. The Hornings also established endowments to support these core collecting interests.
References
1917 births
2020 deaths
Scientists from Detroit
Goucher College alumni
University of Michigan alumni
American women biochemists
Recipients of the Garvan–Olin Medal
20th-century American women scientists
20th-century American chemists
University of Pennsylvania faculty
Baylor College of Medicine faculty
American women centenarians
American women academics
21st-century American women
Mass spectrometrists | Marjorie G. Horning | ["Physics", "Chemistry"] | 1,307 | ["Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists"] |
53,137,377 | https://en.wikipedia.org/wiki/Sound%20suppression%20system | Sites for launching large rockets are often equipped with a sound suppression system to absorb or deflect acoustic energy generated during a rocket launch. As engine exhaust gases exceed the speed of sound, they collide with the ambient air and create shockwaves, with noise levels approaching 200 dB. This energy can be reflected by the launch platform and pad surfaces, and could potentially damage the launch vehicle, payload, and crew. For instance, the maximum admissible overall sound power level (OASPL) for payload integrity is approximately 145 dB. Sound is dissipated by huge volumes of water distributed across the launch pad and launch platform during liftoff.
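As a rough check on these figures (an illustration, not a calculation from the cited sources), decibel levels are logarithmic in acoustic power, so bringing the environment from the roughly 200 dB generated at liftoff down to the approximately 145 dB payload limit corresponds to suppressing sound power by more than five orders of magnitude:

```latex
% Power ratio implied by a drop from L_1 = 200 dB to L_2 = 145 dB
\[
\frac{P_1}{P_2} = 10^{(L_1 - L_2)/10} = 10^{(200 - 145)/10} \approx 3.2 \times 10^{5}
\]
```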
Water-based acoustic suppression systems are common on launch pads. They aid in reducing acoustic energy by injecting large quantities of water below the launch pad into the exhaust plume and in the area above the pad. Flame deflectors or flame trenches are designed to channel rocket exhaust away from the launch pad but also redirect acoustic energy away.
Soviet Union/Russia
The launch pad built by the Soviet Union beginning in 1978 at the Baikonur Cosmodrome for launching the Energiya rocket included an elaborate sound suppression system which delivered a peak flow of per second fed by three ground level reservoirs totaling .
NASA
Space Shuttle program
Data from the launch of STS-1 revealed that an overpressure wave created by the shuttle's three SSME (now designated RS-25) liquid-fueled rocket engines and its four-segment solid rocket boosters contributed to the loss of sixteen thermal protection tiles and damage to an additional 148, prompting modifications to the Sound Suppression Water System (SSWS) installed at both launch pads of Kennedy Space Center's Launch Complex 39.
The resulting gravity-fed system, used through the remainder of the program, began releasing water from a water tower at the launch site 6.6 seconds before main engine start, through diameter pipes connected to the mobile launch platform. Water flowed out of six towers known as "rainbirds" onto the launch platform and flame trench below, emptying the system in 41 seconds with a peak flow of , reducing acoustic energy levels to approximately 142 dB.
The massive white clouds that billowed around the shuttle at each launch were not smoke, but wet steam generated as the rocket exhaust boiled away huge quantities of water.
Antares
Launch pad 0 at the Mid-Atlantic Regional Spaceport at NASA's Wallops Flight Facility in Virginia is equipped with a water tower above the ground, among the tallest in the world. Engine exhaust exits through a ring of water jets in the launch platform, directly beneath the engine nozzles. The system is capable of delivering per second. Additional storage tanks totaling may be added for static fire tests. Water not vaporized is held in a retention basin, where it is tested before release.
Space Launch System
Following the retirement of the Space Shuttle program, pad B at Launch Complex 39 was upgraded for launches of the Space Launch System (SLS). SLS features an additional RS-25 liquid-fueled rocket engine, along with an additional segment in each of its solid rocket boosters, compared with the Space Shuttle, prompting upgrades to the system that created the Ignition Over-Pressure/Sound Suppression Water System (IOP/SS).
The control system was upgraded, including replacement of nearly of copper cables with of fiber optic cable. Capacity was upgraded to with a peak flow rate of per minute. The upgraded system was tested in December 2018 with .
Japan Aerospace Exploration Agency (JAXA)
JAXA "seeks to achieve the world's quietest launch" from their Noshiro Rocket Testing Center in Akita with the installation of a sound suppression water system as well as sound absorbing walls. The H3 Scaled Acoustic Reduction Experiment completed in 2017 provided additional data about the noise generated during liftoff.
References
Acoustics
Rocketry
Rocket launch technologies | Sound suppression system | ["Physics", "Engineering"] | 761 | ["Rocketry", "Classical mechanics", "Acoustics", "Aerospace engineering"] |
53,138,964 | https://en.wikipedia.org/wiki/Gamma-Ray%20Imaging%20Spectrometer | The Gamma-Ray Imaging Spectrometer (GRIS) was a gamma-ray spectrometer instrument on a balloon-borne airborne observatory. It used germanium detectors to achieve high resolution spectroscopy. GRIS was operated from 1988 to 1995 by NASA's Goddard Space Flight Center, which called it "arguably one of the most successful gamma-ray balloon programs in history".
History
GRIS followed earlier gamma ray spectroscopy work by Bell Labs/Sandia National Laboratories and co-investigators Marvin Leventhal and Bonnard Teegarden, including the LEGS spectrometer. GRIS was selected for a balloon program after the removal of a high-resolution gamma-ray spectrometer from the payload of what would become the Compton Gamma Ray Observatory.
GRIS first flew in May 1988 from Alice Springs, Australia. During its first several flights, the instrument definitively measured the gamma ray lines from Supernova 1987A, including that of 56Co, and the positron annihilation line from the Galactic Center at 511 keV, elucidating the nature of these emissions. These measurements resulted in two letters in Nature and three in The Astrophysical Journal, and earned the John Lindsay Memorial Award for Science from Goddard Space Flight Center.
GRIS was flown a total of nine times between 1988 and 1995, with a total flight time of 223 hours. In a configuration that included a wide-field collimator and blocking crystal mechanism, GRIS measured the diffuse galactic and cosmic gamma-ray spectra, yielding insight into the production of 26Al in the galaxy. During its final two flights from Alice Springs, GRIS carried the PoRTIA instrument, which yielded measurements of the CdZnTe detector background for use in future instrument design.
Following the program, the Goddard team proposed transferring the instrument to the University of Maryland to be refurbished for the Long Duration Balloon program, which would entail reconfiguring the instrument for wide field-of-view studies of diffuse emissions. The program would involve graduate and undergraduate student researchers, and would address observation regimes inaccessible to the INTEGRAL mission.
Specifications
The GRIS instrument was flown with a helium-filled balloon to a typical altitude of .
The GRIS instrument carried seven n-type germanium detectors with a range of sensitivity between 20 and 8000 keV and a combined energy resolution of 1.8 keV at an energy of 500 keV. Each detector was in diameter by deep (among the largest in the world at the time), for a total detector area of and a total detector volume of . The liquid nitrogen-cooled detectors were shielded on all sides by of NaI active anticoincidence shielding for rejection of background events. The instrument had a three-sigma narrow-line sensitivity of 1.7 × 10−4 photons per square centimeter per second at 500 keV over 12 hours, and a field of view (FWHM) of 17 degrees at 500 keV.
The experimental payload had a weight of , and used 350 watts of power. It relied on a momentum wheel for azimuth control and a magnetic pointing reference, with a star tracker and Sun sensor for verification. The instrument delivered telemetry at a rate of 55.2 kbps.
Team
The principal investigator of the GRIS project was Jack Tueller, with co-investigators Scott Barthelmy, Lyle Bartlett, Neil Gehrels, Marvin Leventhal, Juan Naya, Ann Parsons, and Bonnard Teegarden.
References
External links
GRIS scientific results
Summary of GRIS flights from 1988 to 1995
Description of the GRIS mission by co-investigator Neil Gehrels
Gamma-ray telescopes
Spectrometers
Balloon-borne telescopes
Goddard Space Flight Center | Gamma-Ray Imaging Spectrometer | ["Physics", "Chemistry"] | 752 | ["Spectrometers", "Spectroscopy", "Spectrum (physical sciences)"] |
47,486,255 | https://en.wikipedia.org/wiki/Northern%20Provinces | The Northern Provinces of South Africa is a biogeographical area used in the World Geographical Scheme for Recording Plant Distributions (WGSRPD). It is part of the WGSRPD region 27 Southern Africa. The area has the code "TVL". It includes the South African provinces of Gauteng, Mpumalanga, Limpopo (Northern Province) and North West, together making up an area slightly larger than the former Transvaal Province.
See also
Cape Provinces
References
Bibliography
Biogeography | Northern Provinces | ["Biology"] | 105 | ["Biogeography"] |
47,486,581 | https://en.wikipedia.org/wiki/PT%20Puppis | PT Puppis (PT Pup) is a star in the constellation Puppis. Anamarija Stankov confirmed this star as a Beta Cephei variable. Analysis of its spectrum, allowing for extinction, gives a mass 7.94 times that of the Sun, a surface temperature of 19,400 K, and a luminosity 6,405 times that of the Sun.
The star was discovered to be variable by Janet Rountree Lesh and P. R. Wesselius in 1979. It was given its variable star designation in 1981.
References
Puppis
Puppis, PT
Beta Cephei variables
2928
037036
061068
BD-19 1967
B-type bright giants | PT Puppis | ["Astronomy"] | 137 | ["Puppis", "Constellations"] |
47,487,167 | https://en.wikipedia.org/wiki/Optimism | Optimism is an attitude reflecting a belief or hope that the outcome of some specific endeavor, or outcomes in general, will be positive, favorable, and desirable. A common idiom used to illustrate optimism versus pessimism is a glass filled with water to the halfway point: an optimist is said to see the glass as half full, while a pessimist sees the glass as half empty.
The term derives from the Latin , meaning "best". To be optimistic, in the typical sense of the word, is to expect the best possible outcome from any given situation. This is usually referred to in psychology as dispositional optimism. It reflects a belief that future conditions will work out for the best. As a trait, it fosters resilience in the face of stress.
Theories of optimism include dispositional models and models of explanatory style. Methods to measure optimism have been developed within both of these theoretical approaches, such as various forms of the Life Orientation Test for the original dispositional definition of optimism and the Attributional Style Questionnaire designed to test optimism in terms of explanatory style.
Variation in optimism between people is somewhat heritable and reflects biological trait systems to some degree. A person's optimism is also influenced by environmental factors, including family environment, and may be learnable. Optimism may also be related to health.
Psychological optimism
Dispositional optimism
Researchers operationalize the term "optimism" differently depending on their research. As with any trait characteristic, there are several ways to evaluate optimism, such as the Life Orientation Test (LOT), an eight-item scale developed in 1985 by Michael Scheier and Charles Carver.
Dispositional optimism and pessimism are typically assessed by asking people whether they expect future outcomes to be beneficial or negative (see below). The LOT returns separate optimism and pessimism scores for each individual; these two scores correlate around r = 0.5. Optimistic scores on this scale predict better outcomes in relationships, higher social status, and reduced loss of well-being following adversity. Health-preserving behaviors are associated with optimism, while health-damaging behaviors are associated with pessimism.
Some have argued that optimism is the opposite end of a single dimension with pessimism, with any distinction between them reflecting factors such as social desirability. Confirmatory modelling, however, supports a two-dimensional model, and the two dimensions are largely independent. Genetic modelling confirms this independence, showing that pessimism and optimism are inherited as independent traits, with the typical correlation between them emerging as a result of a general well-being factor and family environment influences. Patients with high dispositional optimism appear to have stronger immune systems, since optimism buffers against psychological stressors. Optimists appear to live longer.
Explanatory style
Explanatory style is distinct from dispositional theories of optimism. While related to life-orientation measures of optimism, theory suggests that dispositional optimism and pessimism are reflections of the ways people explain events, i.e., that attributions cause these dispositions. An optimist would view defeat as temporary, as something that does not apply to other cases, and as something that is not their fault. Measures of attributional style distinguish three dimensions among explanations for events: Whether these explanations draw on internal versus external causes; whether the causes are viewed as stable versus unstable; and whether explanations apply globally versus being situationally specific. In addition, the measures distinguish attributions for positive and negative events.
Optimistic people attribute internal, stable, and global explanations to good things. Pessimistic explanations attribute these traits of stability, globality, and internality to negative events, such as relationship difficulties. Models of optimistic and pessimistic attributions show that attributions themselves are a cognitive style – individuals who tend to focus on global explanations do so for all types of events, and the styles correlate with one another. In addition, individuals vary in how optimistic their attributions are for good events and in how pessimistic their attributions are for bad events. Still, these two traits of optimism and pessimism are uncorrelated.
There is much debate about the relationship between explanatory style and optimism. Some researchers argue that optimism is simply the lay-term for what researchers know as explanatory style. More commonly, it is found that explanatory style is distinct from dispositional optimism, so the two should not be used interchangeably as they are marginally correlated at best. More research is required to "bridge" or further differentiate these concepts.
Origins
As with all psychological traits, differences in both dispositional optimism and pessimism and in attributional style are heritable. Both optimism and pessimism are strongly influenced by environmental factors, including the family environment. Optimism may be indirectly inherited as a reflection of underlying heritable traits such as intelligence, temperament, and alcoholism. Evidence from twin studies shows that the inherited component of dispositional optimism is about 25 percent, making this trait a stable personality dimension and a predictor of life outcomes. Its genetic origin interacts with environmental influences and other risks to determine vulnerability to depression across the lifespan. Many theories assume optimism can be learned, and research supports a modest role of the family environment in raising (or lowering) optimism and lowering (or raising) neuroticism and pessimism.
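Heritability estimates of the kind quoted above are commonly obtained from twin studies via Falconer's formula, which doubles the difference between the monozygotic and dizygotic twin correlations; the example correlations below are illustrative only, not values from the cited studies:

```latex
% Falconer's estimate of heritability from twin correlations
\[
h^2 = 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}})
\]
% e.g. r_MZ = 0.40 and r_DZ = 0.275 would give h^2 = 0.25,
% consistent with the roughly 25 percent inherited component cited above.
```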
Work utilising brain imaging and biochemistry suggests that at a biological trait level, optimism and pessimism reflect brain systems specialised for the tasks of processing and incorporating beliefs regarding good and bad information respectively.
Assessment
Life Orientation Test
The Life Orientation Test (LOT) was designed by Scheier and Carver (1985) to assess dispositional optimism – expecting positive or negative outcomes. It is one of the more popular tests of optimism and pessimism. It was often used in early studies examining these dispositions' effects in health-related domains. Scheier and Carver's initial research, which surveyed college students, found that optimistic participants were less likely to show an increase in symptoms like dizziness, muscle soreness, fatigue, blurred vision, and other physical complaints than pessimistic respondents.
There are eight items and four filler items in the test. Four are positive items (e.g. "In uncertain times, I usually expect the best") and four are negative items e.g. "If something can go wrong for me, it will." The LOT has been revised twice—once by the original creators (LOT-R) and also by Chang, Maydeu-Olivares, and D'Zurilla as the Extended Life Orientation Test (ELOT). The Revised Life Orientation Test (LOT-R) consists of six items, each scored on a five-point scale from "Strongly disagree" to "Strongly agree" and four filler items. Half of the coded items are phrased optimistically, the other half in a pessimistic way. Compared with its previous iteration, LOT-R offers good internal consistency over time despite item overlaps, making the correlation between the LOT and LOT-R extremely high.
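The scoring procedure for such a scale can be sketched in a few lines. This is an illustrative sketch, not the published instrument: the 0–4 response coding and the positions of the reverse-scored (pessimistically worded) items are assumptions made for the example, and the filler items are taken to be already removed.

```python
def score_lot_r(responses, reversed_items=(2, 4, 5)):
    """Total a six-item LOT-R-style optimism scale.

    responses: six integers from 0 ("Strongly disagree") to 4
    ("Strongly agree"), with the filler items already removed.
    Pessimistically worded items (positions given by reversed_items,
    0-indexed; hypothetical here) are reverse-scored so that a higher
    total always indicates greater optimism.
    """
    if len(responses) != 6 or any(not 0 <= r <= 4 for r in responses):
        raise ValueError("expected six responses on a 0-4 scale")
    return sum(4 - r if i in reversed_items else r
               for i, r in enumerate(responses))

# A respondent endorsing the optimistic items and rejecting the
# pessimistic ones scores near the maximum of 24:
print(score_lot_r([4, 3, 0, 4, 1, 0]))  # -> 22
```

Reverse-scoring is what keeps the optimistically and pessimistically phrased halves of the scale pointing in the same direction before the items are summed.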
Attributional Style Questionnaire
The Attributional Style Questionnaire (ASQ) is based on the explanatory style model of optimism. Subjects read a list of six positive and negative events (e.g. "you have been looking for a job unsuccessfully for some time"), and are asked to record a possible cause for the event. They then rate whether this is internal or external, stable or changeable, and global or local to the event. There are several modified versions of the ASQ including the Expanded Attributional Style Questionnaire (EASQ), the Content Analysis of Verbatim Explanations (CAVE), and the ASQ designed for testing the optimism of children.
Associations with health
Optimism and health are correlated moderately. Optimism explains between 5–10% of the variation in the likelihood of developing some health conditions (correlation coefficients between .20 and .30), notably including cardiovascular disease, stroke, and depression.
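The variance figure follows from squaring the correlation coefficient, since the variance shared between two variables is $r^2$:

```latex
% Variance explained as the squared correlation coefficient
\[
r = 0.20 \;\Rightarrow\; r^2 = 0.04, \qquad r = 0.30 \;\Rightarrow\; r^2 = 0.09
\]
% i.e. on the order of the 5--10\% of variation quoted above.
```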
The relationship between optimism and health has also been studied with regard to physical symptoms, coping strategies, and negative effects for those suffering from rheumatoid arthritis, asthma, and fibromyalgia. Among individuals with these diseases, optimists are not more likely than pessimists to report pain alleviation due to coping strategies, despite differences in psychological well-being between the two groups. A meta-analysis confirmed the assumption that optimism is related to psychological well-being: "Put simply, optimists emerge from difficult circumstances with less distress than do pessimists." Furthermore, the correlation appears to be attributable to coping style: "That is, optimists seem intent on facing problems head-on, taking active and constructive steps to solve their problems; pessimists are more likely to abandon their effort to attain their goals."
Optimists may respond better to stress: pessimists have shown higher levels of cortisol (the "stress hormone") and trouble regulating cortisol in response to stressors. Another study by Scheier examined the recovery process for a number of patients who had undergone surgery. The study showed that optimism was a strong predictor of the rate of recovery. Optimists achieved faster results in "behavioral milestones" such as sitting in bed, walking around, etc. They also were rated by staff as having a more favorable physical recovery. At a six-month follow-up, optimists were quicker to resume normal activities.
Optimism and well-being
A number of studies have been done on optimism and psychological well-being. One 30-year study undertaken by Lee et al. (2019) assessed the overall optimism and longevity of cohorts of men from the Veterans Affairs Normative Aging Study and women from the Nurses' Health Study. The study found a positive correlation between higher levels of optimism and exceptional longevity, defined as a lifespan exceeding 85 years.
Another study conducted by Aspinwall and Taylor (1990) assessed incoming freshmen on a range of personality factors such as optimism, self-esteem, locus of self-control, etc. Freshmen who scored high on optimism before entering college had lower levels of psychological distress than their more pessimistic peers while controlling for the other personality factors. Over time, the more optimistic students were less stressed, less lonely, and less depressed than their pessimistic counterparts. This study suggests a strong link between optimism and psychological well-being.
Low optimism may help explain the association between caregivers' anger and reduced sense of vitality.
A meta-analysis of optimism supported findings that optimism is positively correlated with life satisfaction, happiness, and psychological and physical well-being, and negatively correlated with depression and anxiety.
Seeking to explain the correlation, researchers find that optimists choose healthier lifestyles. For example, optimists smoke less, are more physically active, consume more fruit, vegetables, and whole-grain bread, and are more moderate in alcohol consumption.
Translating association into modifiability
Research has demonstrated that optimists are less likely to have certain diseases or develop certain diseases over time. Research has not been able to demonstrate the ability to change an individual's level of optimism through psychological interventions, and thereby perhaps alter the course of disease or likelihood for development of disease.
An article by Mayo Clinic argues that steps to change self-talk from negative to positive may shift individuals from a negative to a more positive/optimistic outlook. Strategies claimed to be of value include surrounding oneself with positive people, identifying areas of change, practicing positive self-talk, being open to humor, and following a healthy lifestyle.
There is also the notion of "learned optimism" in positive psychology, which holds that joy is a talent that can be cultivated and can be achieved through specific actions such as challenging negative self-talk or overcoming "learned helplessness". However, criticism against positive psychology argues that it places too much importance on "upbeat thinking, while shunting challenging and difficult experiences to the side"—threatening to become toxic positivity.
A study involving twins found that optimism is largely inherited at birth. Along with the recognition that childhood experiences determine an individual's outlook, such studies demonstrate that the genetic basis of optimism reinforces the recognized difficulty of changing or manipulating the direction of an adult's disposition from pessimist to optimist.
Philosophical optimism
One of the earliest forms of philosophical optimism was Socrates' theory of moral intellectualism, which formed part of his model of enlightenment through the process of self-improvement. According to the philosopher, it is possible to live a virtuous life by attaining moral perfection through philosophical self-examination. He maintained that knowledge of moral truth is necessary and sufficient for leading a good life. In his philosophical investigations, Socrates followed a model that did not merely focus on the intellect or reason but a balanced practice that also considered emotion as an important contributor to the richness of human experience.
Distinct from a disposition to believe that things will work out, there is a philosophical idea that, perhaps in ways that may not be fully comprehended, the present moment is in an optimum state. This view that all of nature—past, present, and future—operates by laws of optimization along the lines of Hamilton's principle in the realm of physics is countered by views such as idealism, realism, and philosophical pessimism. Philosophers often link the concept of optimism with the name of Gottfried Wilhelm Leibniz, who held that we live in the best of all possible worlds. The concept was also reflected in an aspect of Voltaire's early philosophy, one based on Isaac Newton's view that described a divinely ordered human condition. This philosophy would also later emerge in Alexander Pope's Essay on Man.
Leibniz proposed that it was not in God's power to create a perfect world, but that he created the best among possible worlds. In one of his writings, he responded to Blaise Pascal's philosophy of awe and desperation in the face of the infinite by claiming that infinity should be celebrated. While Pascal advocated for making man's rational aspirations more humble, Leibniz was optimistic about the capacity of human reason to extend itself further.
This idea was mocked by Voltaire in his satirical novel Candide as baseless optimism of the sort exemplified by the beliefs of one of its characters, Dr. Pangloss, which are the opposite of his fellow traveller Martin's pessimism and emphasis on free will. The optimistic position is also called Panglossianism, which became a term for excessive, even stupendous, optimism. The phrase "panglossian pessimism" has been used to describe the pessimistic position that, since this is the best of all possible worlds, it is impossible for anything to get any better. Conversely, philosophical pessimism might be associated with an optimistic long-term view because it implies that no change for the worse is possible. Voltaire found it difficult to reconcile Leibniz's optimism with human suffering as demonstrated by the earthquake that devastated Lisbon in 1755 and the atrocities committed by pre-revolutionary France against its people.
Optimalism
As defined by Nicholas Rescher, philosophical optimalism holds that this universe exists because it is better than the alternatives. While this philosophy does not exclude the possibility of a deity, it also does not require one, and is compatible with atheism. Rescher explained that the concept can stand on its own feet, arguing that there is no necessity to see the realization of optimalism as divinely instituted, since it is in principle a naturalistic theory.
Psychological optimalism, as defined by the positive psychologist Tal Ben-Shahar, means willingness to accept failure while remaining confident that success will follow, a positive attitude he contrasts with negative perfectionism. Perfectionism can be defined as a persistent compulsive drive toward unattainable goals and valuation based solely on accomplishment. Perfectionists reject the realities and constraints of human ability. They cannot accept failures, delaying ambitious and productive behavior for fear of failing again. This neuroticism can even lead to clinical depression and low productivity. As an alternative to negative perfectionism, Ben-Shahar suggests the adoption of optimalism. Optimalism allows for failure in pursuit of a goal, and expects that while the trend of activity is towards the positive, it is not necessary always to succeed while striving towards goals. This basis in reality prevents the optimalist from being overwhelmed in the face of failure.
Optimalists accept failures and learn from them, encouraging further pursuit of achievement. Ben-Shahar believes that optimalists and perfectionists show distinct motives. Optimalists tend to have more intrinsic, inward desires, with a motivation to learn, while perfectionists are highly motivated by a need to prove themselves worthy consistently.
Two additional facets of optimalism have been described: product optimalism and process optimalism. The former is described as an outlook that seeks to realize the best possible result, while the latter seeks maximization of the chances of achieving the best possible result.
Some sources also distinguish the concept from optimism since it does not focus on how things are going well but on whether things are going as well as possible.
See also
References
Further reading
Literary terminology
New Thought beliefs
Motivation
Personality
Philosophical theories
Philosophy of life | Optimism | [
"Biology"
] | 3,548 | [
"Behavior",
"Motivation",
"Ethology",
"Personality",
"Human behavior"
] |
47,487,168 | https://en.wikipedia.org/wiki/Meyer%20Desert%20Formation%20biota | The Meyer Desert Formation biota is a fossilized biota (flora and fauna) found in the Dominion Range in the Transantarctic Mountains in Antarctica, alongside the Beardmore Glacier.
Since about 15 million years ago (Ma), Antarctica has been mostly covered with ice.
Fossil Nothofagus leaves in the Meyer Desert Formation of the Sirius Group show that intermittent warm periods allowed Nothofagus shrubs to cling to the Dominion Range as late as 3–4 Ma (mid-late Pliocene). After that the Pleistocene glaciation covered the whole continent with ice and destroyed all major plant life on it.
Species reported by Ashworth and Cantrill from about 3 million years ago include:
Animals:
Pisidium species (very small or minute freshwater clams, Sphaeriidae)
A lymnaeid gastropod (air-breathing freshwater snails)
2 species of curculionid beetles (weevils)
A cyclorrhaphid fly (Diptera)
A tooth of an unknown species of freshwater fish
Plants:
Nothofagus beardmorensis (Fagales)
Ranunculus or similar achenes (Ranunculaceae?)
Mosses (apparently 5 types)
Pollen, mostly Nothofagus
Coniferous bisaccate pollen grains, perhaps Podocarpidites
Pollen of the pollen genus Tricolpites
Flowering cushion plants
A seed of Hippuris (mare's tails: Plantaginaceae)
A seed of Cyperaceae (sedges)
3 or more types of liverworts
References
External links
Fossil weevils (Coleoptera: Curculionidae) from latitude 85 S Antarctica
Cenozoic terrestrial palynological assemblages in the glacial erratics from the Grove Mountains, east Antarctica
Neogene vegetation of the Meyer Desert Formation (Sirius Group), Transantarctic Mountains, Antarctica, by Allan C. Ashworth and David J. Cantrill
A Forest Grows in Antarctica
H.M. Li and Z.K. Zhou (2007) Fossil nothofagaceous leaves from the Eocene of western Antarctica and their bearing on the origin, dispersal and systematics of Nothofagus. Science in China. 50(10): 1525-1535.
New grounds for reassessing palaeoclimate of the Sirius Group, Antarctica, by G. J. Retallack, E. S. Krull and J. G. Bockheim (abstract)
Cenozoic Antarctica
Paleontology in Antarctica
Cenozoic paleobiotas
Transantarctic Mountains
Paleogene plants
Neogene plants
Pliocene extinctions | Meyer Desert Formation biota | [
"Biology"
] | 556 | [
"Cenozoic paleobiotas",
"Prehistoric biotas"
] |
47,487,717 | https://en.wikipedia.org/wiki/Keystone%20Nano | Keystone Nano, founded in 2005, is an American-based company based in Pennsylvania, that creates nanoscale products to diagnose and treat human disease and improve the quality of life.
Patents
Keystone Nano Inc. and its team have been granted the following patents:
US Patent (8,747,891) - Awarded to The Penn State Research Foundation and Keystone Nano's Chief Medical Officer Mark Kester, this patent describes the process of loading Ceramide nano-scale liposomes with anti-cancer compounds to create a combination of therapies that benefits from the therapeutic activity of both Ceramide and the anti-cancer compound. This process improves the delivery of both compounds by targeting tumors and extending the time of biological activity.
FDA Approval
In January 2017, the FDA approved the investigational new drug application, NanoLiposome, to assess the product as a form of treatment for solid tumors. Phase 1 trials will take place at the University of Maryland, University of Virginia, and the Medical University of South Carolina.
Compound
Keystone was approved in 2017 to begin clinical trials to assess ceramide nanoliposome for possible use in treating cancer.
References
External links
American companies established in 2005
Companies based in Centre County, Pennsylvania
Nanotechnology companies
2005 establishments in Pennsylvania
Technology companies established in 2005
Technology companies of the United States | Keystone Nano | [
"Materials_science"
] | 268 | [
"Nanotechnology",
"Nanotechnology companies"
] |
47,488,074 | https://en.wikipedia.org/wiki/Penicillium%20rugulosum | Penicillium rugulosum is an anamorph species of fungus in the genus Penicillium which produces inulinase, luteoskyrin and (+) rugulosin.
Further reading
References
rugulosum
Fungi described in 1910
Taxa named by Charles Thom
Fungus species | Penicillium rugulosum | [
"Biology"
] | 61 | [
"Fungi",
"Fungus species"
] |
47,489,128 | https://en.wikipedia.org/wiki/Dade%20Moeller | Dade Moeller (February 27, 1927 – September 26, 2011) was an internationally known expert in radiation safety and environmental protection.
Life
Dade William Moeller, Ph.D., CHP, P.E. was born in 1927 in Grant, Florida, a fishing community located on the intracoastal waterway near the Atlantic Ocean. His father was Robert A. Moeller and his mother was Victoria Moeller, and he had four brothers, Charles E. Moeller, Robert L. Moeller, John A. Moeller, and Ken L. Moeller. In 1949 he married Betty Jean Radford 'Jeanie' of Decatur, Georgia. Moeller died at home from complications due to malignant lymphoma on September 26, 2011.
Military service and education
He passed the V-12 Navy College Training Program, and joined the U.S. Navy in 1944. Moeller attended Georgia Tech and graduated magna cum laude with a Bachelor of Science degree in civil engineering in 1947 and a Master of Science degree in environmental engineering in 1948. After graduating, Moeller became a commissioned officer in the U.S. Public Health Service, with assignments that included Oak Ridge National Laboratory, Los Alamos National Laboratory, and the Headquarters office in Washington, D.C.
In 1957 with sponsorship from the U.S. Public Health Service, Moeller earned the Doctor of Philosophy degree in nuclear engineering from North Carolina State University. He taught radiation protection courses at the U.S. Public Health Service's Radiological Health Training Center in Cincinnati, Ohio.
In 1959, Moeller joined the Health Physics Society and became a certified health physicist and a certified environmental engineer.
In 1961, he became the officer in charge at the Northeastern Radiological Health Laboratory in Winchester, Massachusetts, and studied radioactive fallout from atomic weapons testing and monitored children's thyroids for the uptake of radioactive iodine.
In 1966 Moeller retired from the U.S. Public Health Service.
Harvard School of Public Health
Moeller held tenure for 26 years and served as:
Professor of Engineering in Environmental Health
Associate Director of the Kresge Center for Environmental Health
Associate Director of the Harvard-National Institute of Environmental Health Sciences Center for Environmental Health
Chairman of the Department of Environmental Health Sciences
Associate Dean for Continuing Education
Taught in the Center for Continuing Professional Education
Memberships
American Association for the Advancement of Science
American Industrial Hygiene Association
American Nuclear Society
American Public Health Association
Health Physics Society
Awards and honors
Health Physics Society, Fellow, 1968
National Academy of Engineering, Fellow, 1978
American Public Health Association, Fellow, 1988
American Nuclear Society, Fellow, 1988
U.S. Nuclear Regulatory Commission, Meritorious Achievement Award, 1988
National Council on Radiological Protection and Measurements, Distinguished Emeritus Member, 1997
Georgia Institute of Technology, Engineering Hall of Fame, 1999
NC State University, Distinguished Engineering Alumni Award, 2001
Health Physics Society, Robley D. Evans Commemorative Medal, 2003
William McAdams Outstanding Service Award, American Academy of Health Physics, 2005
Professor Emeritus Award of Merit, Harvard University School of Public Health, 2006
Patents
Method and apparatus for reduction of radon decay product exposure.
Radon decay product removal unit as adapted for use with a lamp.
Select publications
Thesis
Radionuclides in Reactor Cooling Water-Identification, Source and Control. (1957).
References
1927 births
2011 deaths
United States Navy officers
United States Public Health Service Commissioned Corps officers
United States Navy personnel of World War II
Georgia Tech alumni
People from Grant, Florida
North Carolina State University alumni
Health physicists
American civil engineers
Environmental engineers
Oak Ridge National Laboratory people
Harvard T.H. Chan School of Public Health faculty
Los Alamos National Laboratory personnel
Health Physics Society
American nuclear engineers
Scientists from Florida
21st-century American engineers
20th-century American engineers
Deaths from lymphoma in the United States
Deaths from cancer in North Carolina | Dade Moeller | [
"Chemistry",
"Engineering"
] | 766 | [
"Environmental engineers",
"Environmental engineering"
] |
47,489,333 | https://en.wikipedia.org/wiki/Levenspiel%20plot | A Levenspiel plot is a plot used in chemical reaction engineering to determine the required volume of a chemical reactor given experimental data on the chemical reaction taking place in it. It is named after the late chemical engineering professor Octave Levenspiel.
Derivation
For a continuous stirred-tank reactor (CSTR), the following relationship applies:

$V = \frac{F_{A0} X_A}{-r_A}$

where:
$V$ is the reactor volume
$F_{A0}$ is the molar flow rate per unit time of the entering reactant A
$X_A$ is the conversion of reactant A
$-r_A$ is the rate of disappearance of reactant A per unit volume per unit time

For a plug flow reactor (PFR), the following relationship applies:

$V = F_{A0} \int_0^{X_A} \frac{dX_A}{-r_A}$

If $\frac{F_{A0}}{-r_A}$ is plotted as a function of $X_A$, the required volume to achieve a specific conversion can be determined given an entering molar flow rate.

The volume of a CSTR necessary to achieve a certain conversion at a given flow rate is equal to the area of the rectangle with height equal to $\frac{F_{A0}}{-r_A}$ (evaluated at the exit conversion) and width equal to $X_A$.

The volume of a PFR necessary to achieve a certain conversion at a given flow rate is equal to the area under the curve of $\frac{F_{A0}}{-r_A}$ plotted against $X_A$.
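These two geometric constructions can be illustrated numerically. The sketch below assumes a hypothetical first-order rate law and made-up parameter values; it is not tied to any particular reaction:

```python
import math

# Illustrative sketch (hypothetical first-order rate law, made-up numbers):
# -r_A = k * C_A0 * (1 - X_A), with k = 0.5 1/s, C_A0 = 1.0 mol/L, F_A0 = 2.0 mol/s.
k, C_A0, F_A0 = 0.5, 1.0, 2.0
X_target = 0.8

def inv_rate(X):
    """Levenspiel ordinate F_A0 / (-r_A) at conversion X (units: litres)."""
    return F_A0 / (k * C_A0 * (1.0 - X))

# CSTR volume: rectangle of height F_A0/(-r_A) at the exit conversion.
V_cstr = inv_rate(X_target) * X_target

# PFR volume: area under the curve, here via the trapezoidal rule.
n = 1000
xs = [X_target * i / n for i in range(n + 1)]
V_pfr = sum((inv_rate(xs[i]) + inv_rate(xs[i + 1])) / 2 * (xs[i + 1] - xs[i])
            for i in range(n))

print(round(V_cstr, 2))  # 16.0
print(round(V_pfr, 2))   # 6.44 (analytic value: F_A0/(k*C_A0) * ln(1/(1-0.8)))
```

Because the rectangle circumscribes the curve whenever the rate slows with conversion, the CSTR volume exceeds the PFR volume for the same target conversion, which is exactly the comparison a Levenspiel plot makes visible.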
References
Chemical reaction engineering
Plots (graphics) | Levenspiel plot | [
"Chemistry",
"Engineering"
] | 227 | [
"Chemical engineering",
"Chemical reaction engineering"
] |
47,489,445 | https://en.wikipedia.org/wiki/BisQue%20%28Bioimage%20Analysis%20and%20Management%20Platform%29 | BisQue is a free, open source web-based platform for the exchange and exploration of large, complex datasets. It is being developed at the Vision Research Lab at the University of California, Santa Barbara. BisQue specifically supports large scale, multi-dimensional multimodal-images and image analysis. Metadata is stored as arbitrarily nested and linked tag/value pairs, allowing for domain-specific data organization. Image analysis modules can be added to perform complex analysis tasks on compute clusters. Analysis results are stored within the database for further querying and processing. The data and analysis provenance is maintained for reproducibility of results. BisQue can be easily deployed in cloud computing environments or on computer clusters for scalability. BisQue has been integrated into the NSF Cyberinfrastructure project CyVerse. The user interacts with BisQue via any modern web browser.
History
Project BisQue originally started in 2004 as part of the US National Science Foundation (NSF) supported Center for Bio-Image Informatics at UCSB, to facilitate integration of database and image analysis methods, specifically in the context of microscopy images. Given the diversity of imaging equipment and image formats, there was an urgent need to access multiple formats in a uniform way. More importantly, there was also a need for maintaining the analysis provenance for reproducing image analysis results. Very early on, it was realized that BisQue had to go schema-less to support the needs of diverse biological experiments—each experiment and its analysis results are unique and slightly different. Further, from the beginning, BisQue focused on using the web browser as the standard interface. These posed unique database and visualization challenges while dealing with large scale multimodal data, and in the process BisQue has developed a unique and novel framework for visualizing very large images (100k x 100k pixels, for example), and currently supports over 250 different image file formats. Within the browser, users can now visualize 2D, 3D, 4D and 5D images, and export them to many other standardized formats. Over the years the BisQue team has closely worked with the iPlant Cyberinfrastructure (now the CyVerse), supporting the image database management needs of the plant biology community.
Going beyond bioimaging applications, BisQue has been used in analyzing underwater images and video and in medical imaging applications. The current BisQue interface now supports the latest DICOM standard. BisQue has integrated over 100 different image features in its feature service and the next release will include support for deep learning methods and feature classification services.
Features
BisQue provides an online resource for management and analysis of 5D biological images. In addition to image collection management, the system facilitates common workflows typical of biological imaging: imaging, experimental annotation, repeated analysis and presentation of images and results.
Ingestion of images and metadata
Image and metadata ingestion is the first step in using BisQue. The ingestion can either happen through a web browser-based interface, or via the BisQue API. To date, BisQue supports over 240 different image formats from generic JPEG to specialized microscopy image formats such as Zeiss CZI, Imaris Ims, and Nikon ND2. Images can be arbitrarily large and are automatically pyramided after ingestion. This guarantees a fluent user experience when panning and zooming in the image viewer. In addition to the image data itself, BisQue also captures all metadata of an image (e.g., camera settings, geo coordinates, etc.) and attaches them to the image as tags.
Annotation with textual and graphical metadata
Images and metadata are organized with tags (name–value pairs) associated with an image. BisQue allows an arbitrary number of tags per resource and arbitrary nesting between tags, similar to XML documents. This provides a very flexible way of managing information, tailored to the needs of the underlying imaging project. For efficiency and reliability, the tags and values are stored in an indexed tag/value table in the underlying SQL database.
Graphical annotations can be stored in addition to tags. They include simple objects such as points, lines, and circles, and more complex objects such as region outlines. Each of these graphical objects is stored and indexed in the underlying database as well. In addition to be searchable, these graphical annotation are also rendered in BisQue's image viewer as overlays on top of the viewed image.
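As a concrete illustration of the nested name/value scheme, the snippet below builds and queries a small metadata tree with Python's ElementTree. The tag names, values and element layout are made up for illustration and mirror the scheme described above rather than reproducing BisQue's exact XML schema:

```python
import xml.etree.ElementTree as ET

# BisQue-style nested tag/value metadata (illustrative names and values).
image = ET.Element("image", name="seedling_01.czi")
experiment = ET.SubElement(image, "tag", name="experiment", value="root-growth")
ET.SubElement(experiment, "tag", name="temperature", value="22C")  # tags nest arbitrarily
ET.SubElement(experiment, "tag", name="genotype", value="wild-type")

# Locate a nested tag by name, much as a tag query would:
match = image.find(".//tag[@name='genotype']")
print(match.get("value"))  # wild-type
```

Because the annotations form an arbitrarily nested tree, the same lookup works no matter how deeply a tag is buried, which is what allows each imaging project to organize its metadata differently.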
Organization and search
Users typically locate images of interest by browsing through collections or by searching with specific queries. For the former, BisQue provides a web-based tag organizer that enables rapid filtering and grouping of large image collections by tag names and values. For the latter, BisQue offers a RESTful tag query interface to find images with specific tag values. Both of these search capabilities are converted into SQL queries over the tag/value table behind the scenes.
Besides tag-centric image organization, BisQue also provides a traditional folder view that allows users to browse images by navigating folders similar to a file manager.
Parallel analysis modules
BisQue allows users to write analysis modules in the programming language of their choice (e.g., Matlab, Python, C++) by using language-specific APIs. Modules typically read in images and metadata and generate new images or additional metadata as output. These results are stored back into the system in the form of tags, graphical objects and/or images. Images or metadata are never over-written, in order to preserve the complete provenance information.
Tested modules can then be registered in the BisQue system for execution. BisQue supports different execution modes, depending on the available infrastructure. For simple modules, BisQue can execute them on a single node. For high-performance needs, BisQue can leverage the HTCondor high-throughput computing software framework for coarse-grained distributed parallelization. In the latter case, BisQue can automatically parallelize analysis over large image datasets and then collect the results in a single BisQue metadata document.
Visualization and sharing
Metadata in BisQue can take many forms: text, objects of interest, user annotations or another web-based file (e.g. associated publication in PDF). Textual and graphical markup viewing and editing is available in the web 5D image viewer. The viewer is used for image and object browsing, ground-truth acquisition and statistical summaries of biological objects. Additionally, it allows for various visualization options such as channel mapping, image enhancement, projections and rotations. The most recent image viewer is able to present volumetric imagery in 3D without browser plug-ins by utilizing modern browsers' WebGL capabilities.
Biological image sharing has often been difficult due to proprietary formats. In BisQue, sharing images, metadata and analysis results can be performed through the web. The system contains an export facility that allows conversions of image formats, application of a variety of image-processing operations and export of textual or graphical annotations as XML, CSV or to Google Docs.
RESTful interface
All services and modules are accessible via standard web access methods (HTTP). This permits a wide variety of tools, from web browsers to custom analysis applications, to interact with BisQue. Most BisQue services are implemented using the RESTful design pattern architecture that exposes resources through URIs. Resources are manipulated by using the common HTTP methods. Among the many benefits attributed to the RESTful pattern are scalability through web caches and the use of client-side state and processing resources. BisQue services exchange data in XML format.
For easy integration with existing software, BisQue also provides an API that covers all aspects of resource ingestion, search, analysis, and manipulation. It is currently available for Python and Matlab.
Use cases
Marine science
BisQue has been used to manage and analyze 23.3 hours (884GB) of high definition video from dives in Bering Sea submarine canyons to evaluate the density of fishes, structure-forming corals and sponges and to document and describe fishing damage. Non-overlapping frames were extracted from each video transect at a constant frequency of 1 frame per 30s. An image processing algorithm developed in Matlab was used to detect laser dots projected onto the seafloor as a scale reference. BisQue's module system allows to wrap this Matlab code into an analysis module that can be parallelized across a compute cluster. In addition, each frame was manually annotated with objects of interest (e.g., fishes, sponges, substrates) and these annotations and other image metadata (e.g., pixel resolution, GPS location) was stored in BisQue's flexible metadata store. The annotations were then used to compute the average density of species and co-habitation behavior in different regions of the canyons, resulting in new insights into this ecosystem.
Plant biology
The BisQue platform is part of the iPlant Cyberinfrastructure (now the CyVerse) to analyze plant-related images in the context of phenotype analysis. BisQue was integrated with iPlant’s authentication, cloud storage, and high-performance grid computing infrastructure and configured with sample data and algorithms designed to assay phenotypes such as directional root-tip growth or comparisons of seed size differences.
License
As of version 0.5.5, BisQue is released under a modified BSD license that requires proper and visible attribution of the BisQue project if the whole or parts of BisQue are used for either research or commercial purposes.
References
External links
BisQue Homepage
Big data
Data analysis software
Cluster computing | BisQue (Bioimage Analysis and Management Platform) | [
"Technology"
] | 1,986 | [
"Data",
"Big data"
] |
47,489,534 | https://en.wikipedia.org/wiki/Targeted%20covalent%20inhibitors | Targeted covalent inhibitors (TCIs) or Targeted covalent drugs are rationally designed inhibitors that bind and then bond to their target proteins. These inhibitors possess a bond-forming functional group of low chemical reactivity that, following binding to the target protein, is positioned to react rapidly with a proximate nucleophilic residue at the target site to form a bond.
Historical impact of covalent drugs
Over the last 100 years covalent drugs have made a major impact on human health and have been highly successful drugs for the pharmaceutical industry. These inhibitors react with their target proteins to form a covalent complex in which the protein has lost its function. The majority of these successful drugs, which include penicillin, omeprazole, clopidogrel, and aspirin were discovered through serendipity in phenotypic screens.
However, key changes in screening approaches, along with safety concerns, have made pharma reluctant to pursue covalent inhibitors in a systematic way (Liebler & Guengerich, 2005). Recently, there has been considerable attention to using rational drug design to create highly selective covalent inhibitors called targeted covalent inhibitors. The first published example of a targeted covalent drug was for the EGFR kinase. But this has now broadened to other kinases and other protein families. Aside from small molecules, covalent probes are also being derived from peptides or proteins. By incorporation of a reactive group into a binding peptide or protein via posttranslational chemical modification or as an unnatural amino acid, a target protein can be conjugated specifically via proximity-induced reaction.
Advantages of covalent drugs
Potency
Covalent bonding can lead to potencies and ligand efficiencies that are either exceptionally high or, for irreversible covalent interactions, even essentially infinite. Covalent bonding thus allows high potency to be routinely achieved in compounds of low molecular mass, along with all the beneficial pharmaceutical properties that are associated with small size.
Selectivity
Covalent inhibitors can be designed to target a nucleophile that is unique or rare across a protein family, thereby ensuring that covalent bond formation cannot occur with most other family members. This approach can lead to high selectivity against closely related proteins because, although the inhibitor might bind transiently to the active sites of such proteins, it will not covalently label them if they lack the targeted nucleophilic residue in the appropriate position.
Pharmacodynamics
The restoration of pharmacological activity after covalent irreversible inhibition requires re-synthesis of the protein target. This has important and potentially advantageous consequences for drug pharmacodynamics in which the level and frequency of dosing relates to the extent and duration of the resulting pharmacological effect.
Built-in-biomarker
Covalent inhibitors can be used to assess target engagement which can sometimes be used pre-clinically and clinically to assess the relationship between dose of drug and efficacy or toxicity. This approach was used for covalent Btk inhibitors pre-clinically and clinically to understand the relationship between dose administered and efficacy in animal models of arthritis and target occupancy in a clinical study of healthy volunteers.
Design of covalent drugs
The design of covalent drugs requires careful optimization of both the non-covalent binding affinity (which is reflected in Ki) and the reactivity of the electrophilic warhead (which is reflected in k2).
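The interplay between these two parameters can be sketched with standard two-step inhibition kinetics, in which reversible binding (characterized by Ki) is followed by covalent bond formation (characterized by k2), giving an observed inactivation rate k_obs = k2·[I]/(Ki + [I]). The numbers below are illustrative assumptions, not data for any real inhibitor:

```python
# Two-step covalent inhibition kinetics: reversible binding E + I <-> E.I
# (dissociation constant Ki), then covalent bond formation E.I -> E-I (rate k2).
def k_obs(conc_i, ki, k2):
    """Observed inactivation rate at inhibitor concentration conc_i."""
    return k2 * conc_i / (ki + conc_i)

ki = 1.0e-6   # M, non-covalent binding affinity (assumed value)
k2 = 0.01     # 1/s, electrophilic warhead reactivity (assumed value)

for conc in (1e-7, 1e-6, 1e-5):
    print(f"[I] = {conc:.0e} M -> k_obs = {k_obs(conc, ki, k2):.2e} 1/s")
```

k_obs saturates at k2 at high inhibitor concentration, while at low concentration potency is governed by the ratio k2/Ki, which is why both the non-covalent affinity and the warhead reactivity must be optimized together.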
The initial design of TCIs involves three key steps. First, bioinformatics analysis is used to identify a nucleophilic amino acid (for example, cysteine) that is either inside or near to a functionally relevant binding site on a drug target, but is rare in that protein family. Next, a reversible inhibitor is identified for which the binding mode is known. Finally, structure-based computational methods are used to guide the design of modified ligands that have electrophilic functionality, and are positioned to react specifically with the nucleophilic amino acid in the target protein.
Targeted covalent photoisomerizable ligands (photoswitches) have been developed to remotely and reversibly control the activity of receptor proteins with light. They have been used as molecular prostheses to restore visual input in the retina and auditory input in the cochlea via glutamate receptors. Ligand conjugation is targeted to specific lysine residues via an affinity labeling mechanism.
Toxicity risks associated with covalent modification of proteins
There has been a reluctance for modern drug discovery programs to consider covalent inhibitors due to toxicity concerns. An important contributor has been the drug toxicities of several high-profile drugs believed to be caused by metabolic activation of reversible drugs. For example, high-dose acetaminophen can lead to the formation of the reactive metabolite N-acetyl-p-benzoquinone imine. Also, covalent inhibitors such as beta-lactam antibiotics, which contain weak electrophiles, can lead to idiosyncratic toxicities (IDTs) in some patients. It has been noted that many approved covalent inhibitors have been used safely for decades with no observed idiosyncratic toxicity, and that IDTs are not limited to drugs with a covalent mechanism of action. A recent analysis has noted that the risk of idiosyncratic toxicities may be mitigated through lower doses of administered drug: doses of less than 10 mg per day rarely lead to IDTs irrespective of the drug mechanism.
TCIs in clinical development
Despite the apparent lack of attention towards covalent inhibitor drug discovery by most pharmaceutical companies, there are several examples of covalent drugs that have been approved or are progressing to late-stage clinical development.
KRAS and lung, colorectal cancer
AMG 510 by Amgen is a KRAS p.G12C covalent inhibitor that has recently finished Phase I clinical trial. The drug elicited partial responses in half of evaluable patients with KRAS G12C-mutant non–small cell lung cancer, and led to stable disease in most evaluable patients with colorectal (or appendix) cancer.
EGFR and lung cancer
The second-generation EGFR inhibitors Afatinib and Mobocertinib have been approved for the treatment of EGFR-driven lung cancer, and Dacomitinib is in late-stage clinical testing. Third-generation EGFR inhibitors target the tumor-specific mutant EGFR while remaining selective against wild-type EGFR, and are expected to provide a wider therapeutic index.
ErbB family and breast cancer
The pan-ErbB inhibitor Neratinib was approved in the US in 2017 and in the EU in 2018 for the extended adjuvant treatment of adult patients with early-stage HER2-overexpressed/amplified breast cancer after trastuzumab-based therapy.
Btk and leukemia
Ibrutinib, a covalent inhibitor of Bruton's tyrosine kinase, has been approved for the treatment of chronic lymphocytic leukemia, waldenstrom’s macroglobulinemia and mantle cell lymphoma.
SARS-CoV-2 protease and COVID-19
Paxlovid is a covalent inhibitor of the 3CLpro (Mpro) enzyme. It is in Phase III trials for the early treatment of SARS-CoV-2 infected patients who have not progressed to severe COVID-19 disease, and who do not immediately require hospitalisation.
References
External links
Covalent drugs go from fringe field to fashionable endeavor, Chemical & Engineering News, November 9, 2020
Medicinal chemistry
Enzyme inhibitors
Covalent inhibitors | Targeted covalent inhibitors | [
"Chemistry",
"Biology"
] | 1,580 | [
"Biochemistry",
"nan",
"Medicinal chemistry"
] |
47,489,726 | https://en.wikipedia.org/wiki/Transient%20photocurrent | Transient photocurrent (TPC) is a measurement technique, typically employed in the physics of thin film semiconductors. TPC allows to study the time-dependent (on a microsecond time scale) extraction of charges generated by photovoltaic effect in semiconductor devices, such as solar cells.
A semiconductor is sandwiched between two extracting electrodes. When it is excited with a short pulse of light (as short as 100 femtoseconds), the photogenerated charges are extracted at the electrodes, resulting in a current, which is detected by an oscilloscope as a voltage across a resistor. Since the excitation pulse is square, there are two ways to measure TPC: "light on" and "light off". In a "light on" measurement, the signal is recorded as soon as the excitation pulse is switched on, making it possible to observe the build-up of charges on the electrode after the start of excitation. "Light off" measurements show how the charges decay after the pulse is switched off.
In contrast to transient photovoltage, TPC measurements are conducted under short-circuit conditions and yield information about extractable charges, charge recombination and the density of states. Quite often, TPC measurements help to build a "drift-diffusion" model that reflects trapping and detrapping of the photogenerated charges and the quality of contact between different layers.
TPC allows different measurement parameters to be varied, such as the intensity or length of the light pulse, the applied voltage, etc.
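The extractable charge mentioned above is obtained by integrating the current transient over time. A minimal sketch of that step, using a synthetic single-exponential "light off" decay rather than real oscilloscope data (the trace parameters are illustrative assumptions):

```python
import math

# Illustrative sketch: estimate the extracted charge from a "light off"
# TPC decay by integrating the current transient over time.
def extracted_charge(times_s, currents_a):
    """Trapezoidal integral of current over time -> charge in coulombs."""
    q = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        q += 0.5 * (currents_a[i] + currents_a[i - 1]) * dt
    return q

# Synthetic decay: I(t) = I0 * exp(-t / tau), I0 = 1 mA, tau = 2 microseconds
I0, tau = 1e-3, 2e-6
ts = [i * 1e-8 for i in range(2001)]            # 0 .. 20 us in 10 ns steps
Is = [I0 * math.exp(-t / tau) for t in ts]

q = extracted_charge(ts, Is)
# Analytic value is I0 * tau * (1 - exp(-10)) ~= 2e-9 C
print(f"extracted charge ~ {q:.3e} C")
```

With real data, the same integral applied to traces taken at different pulse intensities or bias voltages maps out how the extractable charge depends on those parameters.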
See also
Time resolved microwave conductivity
Photoconductance decay
References
External links
OPV characterization techniques
Transient photocurrent in amorphous semiconductors
Semiconductor technology | Transient photocurrent | [
"Materials_science"
] | 470 | [
"Semiconductor technology",
"Microtechnology"
] |
47,489,952 | https://en.wikipedia.org/wiki/Biclique-free%20graph | In graph theory, a branch of mathematics, a t-biclique-free graph is a graph that has no K_{t,t} subgraph (a complete bipartite graph with t vertices on each side). A family of graphs is biclique-free if there exists a number t such that the graphs in the family are all t-biclique-free. The biclique-free graph families form one of the most general types of sparse graph family. They arise in incidence problems in discrete geometry, and have also been used in parameterized complexity.
Properties
Sparsity
According to the Kővári–Sós–Turán theorem, every n-vertex t-biclique-free graph has O(n^(2 − 1/t)) edges, significantly fewer than a dense graph would have. Conversely, if a graph family is defined by forbidden subgraphs or is closed under the operation of taking subgraphs, and does not include dense graphs of arbitrarily large size, it must be t-biclique-free for some t, for otherwise it would include large dense complete bipartite graphs.
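To make the definition concrete, here is a hypothetical brute-force check for a K_{t,t} subgraph (the helper name is ours, not from the literature; the search is exponential in t, so it is only usable on tiny graphs):

```python
from itertools import combinations

def has_biclique(adj, t):
    """adj: dict vertex -> set of neighbours. True iff a K_{t,t} subgraph exists."""
    verts = list(adj)
    for left in combinations(verts, t):
        common = set(verts)
        for v in left:            # keep only vertices adjacent to every left vertex
            common &= adj[v]
        common -= set(left)       # the two sides of the biclique must be disjoint
        if len(common) >= t:
            return True
    return False

# A 6-cycle contains no 4-cycle, hence no K_{2,2}; K_{3,3} obviously contains K_{2,2}
c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
k33 = {i: ({3, 4, 5} if i < 3 else {0, 1, 2}) for i in range(6)}
print(has_biclique(c6, 2), has_biclique(k33, 2))   # -> False True
```

The 6-cycle also illustrates the sparsity bound: it has 6 edges, well under the 6^(3/2) ≈ 14.7 allowed by the theorem for n = 6, t = 2.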
As a lower bound, it has been conjectured that every maximal t-biclique-free bipartite graph (one to which no more edges can be added without creating a t-biclique) has a number of edges bounded below in terms of the numbers of vertices on each side of its bipartition.
Relation to other types of sparse graph family
A graph with degeneracy d is necessarily (d + 1)-biclique-free. Additionally, any nowhere dense family of graphs is biclique-free. More generally, if there exists an n-vertex graph that is not a 1-shallow minor of any graph in the family, then the family must be n-biclique-free, because all n-vertex graphs are 1-shallow minors of K_{n,n}.
In this way, the biclique-free graph families unify two of the most general classes of sparse graphs.
Applications
Discrete geometry
In discrete geometry, many types of incidence graph are necessarily biclique-free. As a simple example, the graph of incidences between a finite set of points and lines in the Euclidean plane necessarily has no K_{2,2} subgraph, because two distinct points lie on at most one common line.
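A small sketch of this fact: build the incidence structure of a 3×3 grid of points and the lines they determine, then verify that no pair of points lies on two common lines (the no-K_{2,2} property). The grid and helper names are illustrative choices, not from any source:

```python
from math import gcd
from itertools import combinations

points = [(x, y) for x in range(3) for y in range(3)]   # a 3x3 grid of points

def line_through(p, q):
    """Canonical integer triple (a, b, c) for the line a*x + b*y + c = 0."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    g = gcd(gcd(abs(a), abs(b)), abs(c))
    a, b, c = a // g, b // g, c // g
    if a < 0 or (a == 0 and b < 0):   # fix the overall sign for uniqueness
        a, b, c = -a, -b, -c
    return (a, b, c)

lines = {line_through(p, q) for p, q in combinations(points, 2)}

def incident(pt, ln):
    a, b, c = ln
    return a * pt[0] + b * pt[1] + c == 0

# No K_{2,2}: every pair of distinct points shares exactly one line
max_shared = max(
    sum(1 for ln in lines if incident(p, ln) and incident(q, ln))
    for p, q in combinations(points, 2)
)
print(len(lines), "lines; max lines shared by a point pair:", max_shared)
```

The 3×3 grid determines 20 distinct lines (3 rows, 3 columns, 2 long diagonals and 12 two-point lines), and the maximum number of lines shared by any point pair is 1, i.e. the incidence graph is K_{2,2}-free.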
Parameterized complexity
Biclique-free graphs have been used in parameterized complexity to develop algorithms that are efficient for sparse graphs with suitably small input parameter values. In particular, finding a dominating set of size k, on t-biclique-free graphs, is fixed-parameter tractable when parameterized by the combination of k and t, even though there is strong evidence that this is not possible using k alone as a parameter. Similar results are true for many variations of the dominating set problem. It is also possible to test whether one dominating set of size at most k can be converted to another one by a chain of vertex insertions and deletions, preserving the dominating property, with the same parameterization.
References
Extremal graph theory
Graph families | Biclique-free graph | [
"Mathematics"
] | 574 | [
"Mathematical relations",
"Graph theory",
"Extremal graph theory"
] |
47,490,358 | https://en.wikipedia.org/wiki/Solar%20phenomena | Solar phenomena are natural phenomena which occur within the atmosphere of the Sun. They take many forms, including solar wind, radio wave flux, solar flares, coronal mass ejections, coronal heating and sunspots.
These phenomena are believed to be generated by a helical dynamo, located near the center of the Sun's mass, which generates strong magnetic fields, as well as a chaotic dynamo, located near the surface, which generates smaller magnetic field fluctuations. All solar fluctuations together are referred to as solar variation, producing space weather within the Sun's gravitational field.
Solar activity and related events have been recorded since the eighth century BCE. Throughout history, observation technology and methodology advanced, and in the 20th century, interest in astrophysics surged and many solar telescopes were constructed. The 1931 invention of the coronagraph allowed the corona to be studied in full daylight.
Sun
The Sun is a star located at the center of the Solar System. It is almost perfectly spherical and consists of hot plasma and magnetic fields. It has a diameter of about 1.39 million kilometres, around 109 times that of Earth, and its mass (1.989×10^30 kilograms, approximately 330,000 times that of Earth) accounts for some 99.86% of the total mass of the Solar System. Chemically, about three quarters of the Sun's mass consists of hydrogen, while the rest is mostly helium. The remaining 1.69% (equal to 5,600 times the mass of Earth) consists of heavier elements, including oxygen, carbon, neon and iron.
The Sun formed about 4.567 billion years ago from the gravitational collapse of a region within a large molecular cloud. Most of the matter gathered in the center, while the rest flattened into an orbiting disk that became the balance of the Solar System. The central mass became increasingly hot and dense, eventually initiating thermonuclear fusion in its core.
The Sun is a G-type main-sequence star (G2V) based on spectral class, and it is informally designated as a yellow dwarf because its visible radiation is most intense in the yellow-green portion of the spectrum. It is actually white, but from the Earth's surface it appears yellow because of atmospheric scattering of blue light. In the spectral class label, G2 indicates its surface temperature of approximately 5770 K (the IAU later adopted a nominal value of 5772 K), and V indicates that the Sun, like most stars, is a main-sequence star, and thus generates its energy by fusing hydrogen into helium. In its core, the Sun fuses about 620 million metric tons of hydrogen each second.
The Earth's mean distance from the Sun is approximately 150 million kilometres (1 astronomical unit), though the distance varies as the Earth moves from perihelion in January to aphelion in July. At this average distance, light travels from the Sun to Earth in about 8 minutes, 19 seconds. The energy of this sunlight supports almost all life on Earth by photosynthesis, and drives Earth's climate and weather. As recently as the 19th century, scientists had little knowledge of the Sun's physical composition and source of energy. This understanding is still developing; a number of present-day anomalies in the Sun's behavior remain unexplained.
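The quoted travel time follows directly from the mean distance and the speed of light; a quick check:

```python
AU_M = 1.495978707e11   # 1 astronomical unit in metres (IAU 2012 definition)
C_M_S = 299_792_458     # speed of light in m/s (exact, by definition)

t = AU_M / C_M_S        # ~499 seconds
minutes, seconds = divmod(t, 60)
print(f"light travel time: {int(minutes)} min {seconds:.0f} s")   # -> 8 min 19 s
```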
Solar cycle
Many solar phenomena change periodically over an average interval of about 11 years. This solar cycle affects solar irradiation and influences space weather, terrestrial weather, and climate.
The solar cycle also modulates the flux of short-wavelength solar radiation, from ultraviolet to X-ray and influences the frequency of solar flares, coronal mass ejections and other solar eruptive phenomena.
Types
Coronal mass ejections
A coronal mass ejection (CME) is a massive burst of solar wind and magnetic fields rising above the solar corona. Near solar maxima, the Sun produces about three CMEs every day, whereas solar minima feature about one every five days. CMEs, along with solar flares of other origin, can disrupt radio transmissions and damage satellites and electrical transmission line facilities, resulting in potentially massive and long-lasting power outages.
Coronal mass ejections often appear with other forms of solar activity, most notably solar flares, but no causal relationship has been established. Most weak flares do not have CMEs; however, most powerful ones do. Most ejections originate from active regions on the Sun's surface, such as sunspot groupings associated with frequent flares. Other forms of solar activity frequently associated with coronal mass ejections are eruptive prominences, coronal dimming, coronal waves and Moreton waves, also called solar tsunami.
Magnetic reconnection is responsible for CMEs and solar flares. Magnetic reconnection is the name given to the rearrangement of magnetic field lines when two oppositely directed magnetic fields are brought together. This rearrangement is accompanied by a sudden release of the energy stored in the original oppositely directed fields.
When a CME impacts the Earth's magnetosphere, it temporarily deforms the Earth's magnetic field, changing the direction of compass needles and inducing large electrical ground currents in Earth itself; this is called a geomagnetic storm, and it is a global phenomenon. CME impacts can induce magnetic reconnection in Earth's magnetotail (the midnight side of the magnetosphere); this launches protons and electrons downward toward Earth's atmosphere, where they form the aurora.
Flares
A solar flare is a sudden flash of brightness observed over the Sun's surface or the solar limb, which is interpreted as an energy release of up to 6×10^25 joules (about a sixth of the Sun's total energy output each second, or 160 billion megatons of TNT equivalent, over 25,000 times more energy than released by the impact of Comet Shoemaker–Levy 9 with Jupiter). It may be followed by a coronal mass ejection. The flare ejects clouds of electrons, ions and atoms through the corona into space. These clouds typically reach Earth a day or two after the event. Similar phenomena in other stars are known as stellar flares.
Solar flares strongly influence space weather near the Earth. They can produce streams of highly energetic particles in the solar wind, known as a solar proton event. These particles can impact the Earth's magnetosphere in the form of a geomagnetic storm and present radiation hazards to spacecraft and astronauts.
Solar proton events
A solar proton event (SPE), or "proton storm", occurs when particles (mostly protons) emitted by the Sun become accelerated either close to the Sun during a flare or in interplanetary space by CME shocks. The events can include other nuclei such as helium ions and HZE ions. These particles cause multiple effects. They can penetrate the Earth's magnetic field and cause ionization in the ionosphere. The effect is similar to auroral events, except that protons rather than electrons are involved. Energetic protons are a significant radiation hazard to spacecraft and astronauts. Energetic protons can reach Earth within 30 minutes of a major flare's peak.
Prominences
A prominence is a large, bright, gaseous feature extending outward from the Sun's surface, often in the shape of a loop. Prominences are anchored to the Sun's surface in the photosphere and extend outwards into the corona. While the corona consists of high temperature plasma, which does not emit much visible light, prominences contain much cooler plasma, similar in composition to that of the chromosphere.
Prominence plasma is typically a hundred times cooler and denser than coronal plasma.
A prominence forms over timescales of about an earthly day and may persist for weeks or months. Some prominences break apart and form CMEs.
A typical prominence extends over many thousands of kilometers; the largest on record was estimated at over 800,000 kilometres long, roughly a solar radius.
When a prominence is viewed against the Sun instead of space, it appears darker than the background. This formation is called a solar filament. It is possible for a projection to be both a filament and a prominence. Some prominences are so powerful that they eject matter at speeds ranging from 600 km/s to more than 1000 km/s. Other prominences form huge loops or arching columns of glowing gases over sunspots that can reach heights of hundreds of thousands of kilometers.
Sunspots
Sunspots are relatively dark areas on the Sun's radiating 'surface' (photosphere) where intense magnetic activity inhibits convection and cools the photosphere. Faculae are slightly brighter areas that form around sunspot groups as the flow of energy to the photosphere is re-established, and both the normal flow and the sunspot-blocked energy elevate the radiating 'surface' temperature. Scientists began speculating on possible relationships between sunspots and solar luminosity in the 17th century. Luminosity decreases caused by sunspots (generally < −0.3%) are correlated with increases (generally < +0.05%) caused both by faculae associated with active regions and by the magnetically active 'bright network'.
The net effect during periods of enhanced solar magnetic activity is increased radiant solar output because faculae are larger and persist longer than sunspots. Conversely, periods of lower solar magnetic activity and fewer sunspots (such as the Maunder Minimum) may correlate with times of lower irradiance.
Sunspot activity has been measured using the Wolf number for about 300 years. This index (also known as the Zürich number) uses both the number of sunspots and the number of sunspot groups to compensate for measurement variations. A 2003 study found that sunspots had been more frequent since the 1940s than in the previous 1150 years.
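The Wolf (Zürich) index itself is a simple weighted count, R = k(10g + s), where g is the number of sunspot groups, s the number of individual spots, and k an observer-dependent scale factor that compensates for equipment and seeing conditions:

```python
def wolf_number(groups, spots, k=1.0):
    """Relative (Wolf/Zurich) sunspot number R = k * (10*g + s)."""
    return k * (10 * groups + spots)

# e.g. an observer counting 3 groups containing 11 spots in total:
print(wolf_number(3, 11))          # -> 41.0
print(wolf_number(3, 11, k=0.6))   # the same count with an observer correction
```

Weighting groups ten times more heavily than individual spots is what makes the index robust to the difficulty of resolving small spots with modest instruments.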
Sunspots usually appear as pairs with opposite magnetic polarity. Detailed observations reveal patterns, in yearly minima and maxima and in relative location. As each cycle proceeds, the latitude of spots declines, from 30 to 45° to around 7° after the solar maximum. This latitudinal change follows Spörer's law.
For a sunspot to be visible to the human eye it must be about 50,000 km in diameter, covering about 700 millionths of the visible area. Over recent cycles, approximately 100 sunspots or compact sunspot groups are visible from Earth.
Sunspots expand and contract as they move about and can travel at a few hundred meters per second when they first appear.
Wind
The solar wind is a stream of plasma released from the Sun's upper atmosphere. It consists of mostly electrons and protons with energies usually between 1.5 and 10 keV. The stream of particles varies in density, temperature and speed over time and over solar longitude. These particles can escape the Sun's gravity because of their high energy.
The solar wind is divided into the slow solar wind and the fast solar wind. The slow solar wind has a velocity of about 400 km/s, a temperature of about 2×10^6 K and a composition that is a close match to the corona's. The fast solar wind has a typical velocity of 750 km/s, a temperature of 8×10^5 K and a composition that nearly matches the photosphere's. The slow solar wind is twice as dense and more variable in intensity than the fast solar wind. The slow wind has a more complex structure, with turbulent regions and large-scale organization.
Both the fast and slow solar winds can be interrupted by large, fast-moving bursts of plasma called interplanetary CMEs, or ICMEs. They cause shock waves in the thin plasma of the heliosphere, generating electromagnetic waves and accelerating particles (mostly protons and electrons) to form showers of ionizing radiation that precede the CME.
Effects
Space weather
Space weather is the environmental condition within the Solar System, including the solar wind. It is studied especially surrounding the Earth, including conditions from the magnetosphere to the ionosphere and thermosphere. Space weather is distinct from terrestrial weather of the troposphere and stratosphere. The term was not used until the 1990s. Prior to that time, such phenomena were considered to be part of physics or aeronomy.
Solar storms
Solar storms are caused by disturbances on the Sun, most often coronal clouds associated with solar flare CMEs emanating from active sunspot regions, or less often from coronal holes. The Sun can produce intense geomagnetic and proton storms capable of causing power outages, disruption or communications blackouts (including GPS systems) and temporary/permanent disabling of satellites and other spaceborne technology. Solar storms may be hazardous to high-latitude, high-altitude aviation and to human spaceflight. Geomagnetic storms cause aurorae.
The most significant known solar storm occurred in September 1859 and is known as the Carrington event.
Aurora
An aurora is a natural light display in the sky, especially in the high latitude (Arctic and Antarctic) regions, in the form of a large circle around the pole. It is caused by the collision of solar wind and charged magnetospheric particles with the high altitude atmosphere (thermosphere).
Most auroras occur in a band known as the auroral zone, which is typically 3° to 6° wide in latitude and observed at 10° to 20° from the geomagnetic poles at all longitudes, but often most vividly around the spring and autumn equinoxes. The charged particles and solar wind are directed into the atmosphere by the Earth's magnetosphere. A geomagnetic storm expands the auroral zone to lower latitudes.
Auroras are associated with the solar wind. The Earth's magnetic field traps its particles, many of which travel toward the poles where they are accelerated toward Earth. Collisions between these ions and the atmosphere release energy in the form of auroras appearing in large circles around the poles. Auroras are more frequent and brighter during the solar cycle's intense phase when CMEs increase the intensity of the solar wind.
Geomagnetic storm
A geomagnetic storm is a temporary disturbance of the Earth's magnetosphere caused by a solar wind shock wave and/or cloud of magnetic field that interacts with the Earth's magnetic field. The increase in solar wind pressure compresses the magnetosphere and the solar wind's magnetic field interacts with the Earth's magnetic field to transfer increased energy into the magnetosphere. Both interactions increase plasma movement through the magnetosphere (driven by increased electric fields) and increase the electric current in the magnetosphere and ionosphere.
The disturbance in the interplanetary medium that drives a storm may be due to a CME or a high speed stream (co-rotating interaction region or CIR) of the solar wind originating from a region of weak magnetic field on the solar surface. The frequency of geomagnetic storms increases and decreases with the sunspot cycle. CME driven storms are more common during the solar maximum of the solar cycle, while CIR-driven storms are more common during the solar minimum.
Several space weather phenomena are associated with geomagnetic storms. These include Solar Energetic Particle (SEP) events, geomagnetically induced currents (GIC), ionospheric disturbances that cause radio and radar scintillation, disruption of compass navigation and auroral displays at much lower latitudes than normal. A 1989 geomagnetic storm energized ground induced currents that disrupted electric power distribution throughout most of the province of Quebec and caused aurorae as far south as Texas.
Sudden ionospheric disturbance
A sudden ionospheric disturbance (SID) is an abnormally high ionization/plasma density in the D region of the ionosphere caused by a solar flare. The SID results in a sudden increase in radio-wave absorption that is most severe in the upper medium frequency (MF) and lower high frequency (HF) ranges, and as a result, often interrupts or interferes with telecommunications systems.
Geomagnetically induced currents
Geomagnetically induced currents are a manifestation at ground level of space weather, which affect the normal operation of long electrical conductor systems. During space weather events, electric currents in the magnetosphere and ionosphere experience large variations, which manifest also in the Earth's magnetic field. These variations induce currents (GIC) in earthly conductors. Electric transmission grids and buried pipelines are common examples of such conductor systems. GIC can cause problems such as increased corrosion of pipeline steel and damaged high-voltage power transformers.
Carbon-14
The production of carbon-14 (radiocarbon: 14C) is related to solar activity. Carbon-14 is produced in the upper atmosphere when cosmic-ray neutrons are captured by atmospheric nitrogen (14N), transforming it into an unusual isotope of carbon with a mass number of 14 rather than the more common 12. Because galactic cosmic rays are partially excluded from the Solar System by the outward sweep of magnetic fields in the solar wind, increased solar activity reduces 14C production.
Atmospheric 14C concentration is lower during solar maxima and higher during solar minima. By measuring the captured 14C in wood and counting tree rings, production of radiocarbon relative to recent wood can be measured and dated. A reconstruction of the past 10,000 years shows that the 14C production was much higher during the mid-Holocene 7,000 years ago and decreased until 1,000 years ago. In addition to variations in solar activity, long-term trends in carbon-14 production are influenced by changes in the Earth's geomagnetic field and by changes in carbon cycling within the biosphere (particularly those associated with changes in the extent of vegetation between ice ages).
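The dating step rests on ordinary exponential decay: with the conventional 14C half-life of 5,730 years, the elapsed time follows from the measured fraction of 14C remaining. A minimal sketch, which deliberately ignores the production-rate corrections discussed above (in practice, calibration curves built from tree-ring data supply those):

```python
import math

HALF_LIFE_YEARS = 5730.0   # conventional 14C half-life

def radiocarbon_age(fraction_remaining):
    """Years elapsed, given the remaining 14C fraction relative to modern levels."""
    return HALF_LIFE_YEARS / math.log(2) * math.log(1.0 / fraction_remaining)

print(round(radiocarbon_age(0.5)))    # one half-life  -> 5730 years
print(round(radiocarbon_age(0.25)))   # two half-lives -> 11460 years
```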
Observation history
Solar activity and related events have been regularly recorded since the time of the Babylonians. Early records described solar eclipses, the corona and sunspots.
Soon after the invention of telescopes, in the early 1600s, astronomers began observing the Sun. Thomas Harriot was the first to observe sunspots, in 1610. Observers confirmed the less-frequent sunspots and aurorae during the Maunder minimum. One of these observers was the renowned astronomer Johannes Hevelius who recorded a number of sunspots from 1653 to 1679 in the early Maunder minimum, listed in the book Machina Coelestis (1679).
Solar spectrometry began in 1817. Rudolf Wolf gathered sunspot observations as far back as the 1755–1766 cycle. He established a relative sunspot number formulation (the Wolf or Zürich sunspot number) that became the standard measure. Around 1852, Sabine, Wolf, Gautier and von Lamont independently found a link between the solar cycle and geomagnetic activity.
On 2 April 1845, Fizeau and Foucault first photographed the Sun. Photography assisted in the study of solar prominences, granulation, spectroscopy and solar eclipses.
On 1 September 1859, Richard C. Carrington and separately R. Hodgson first observed a solar flare. Carrington and Gustav Spörer discovered that the Sun exhibits differential rotation, and that the outer layer must be fluid.
In 1907–08, George Ellery Hale uncovered the Sun's magnetic cycle and the magnetic nature of sunspots. Hale and his colleagues later deduced Hale's polarity laws that described its magnetic field.
Bernard Lyot's 1931 invention of the coronagraph allowed the corona to be studied in full daylight.
The Sun was, until the 1990s, the only star whose surface had been resolved. Other major achievements included understanding of:
X-ray-emitting loops (e.g., by Yohkoh)
Corona and solar wind (e.g., by SoHO)
Variance of solar brightness with level of activity, and verification of this effect in other solar-type stars (e.g., by ACRIM)
The intense fibril state of the magnetic fields at the visible surface of a star like the Sun (e.g., by Hinode)
The presence of magnetic fields of 0.5×10^5 to 1×10^5 gauss at the base of the convection zone, presumably in some fibril form, inferred from the dynamics of rising azimuthal flux bundles.
Low-level electron neutrino emission from the Sun's core.
In the later twentieth century, satellites began observing the Sun, providing many insights. For example, modulation of solar luminosity by magnetically active regions was confirmed by satellite measurements of total solar irradiance (TSI) by the ACRIM1 experiment on the Solar Maximum Mission (launched in 1980).
See also
List of articles related to the Sun
Outline of astronomy
Radiative levitation
Solar cycle
Notes
References
Further reading
Solar activity Hugh Hudson Scholarpedia, 3(3):3967. doi:10.4249/scholarpedia.3967
External links
NOAA / NESDIS / NGDC (2002) Solar Variability Affecting Earth NOAA CD-ROM NGDC-05/01. This CD-ROM contains over 100 solar-terrestrial and related global data bases covering the period through April 1990.
Recent Total Solar Irradiance data updated every Monday
Latest Space Weather Data – from the Solar Influences Data Analysis Center (Belgium)
Latest images from Big Bear Solar Observatory (California)
The Very Latest SOHO Images – from the ESA/NASA Solar & Heliospheric Observatory
Map of Solar Active Regions – from the Kislovodsk Mountain Astronomical Station
Space physics
Articles containing video clips | Solar phenomena | [
"Physics",
"Astronomy"
] | 4,404 | [
"Physical phenomena",
"Outer space",
"Solar phenomena",
"Stellar phenomena",
"Space physics"
] |
47,490,492 | https://en.wikipedia.org/wiki/Logitech%20Harmony | Logitech Harmony is a line of remote controls and home automation products produced by Logitech. The line includes universal remote products designed for controlling the components of home theater systems (including televisions, set-top boxes, DVD and Blu-ray players, video game consoles) and other devices that can be controlled via infrared, as well as newer smart home hub products that can be used to additionally control supported Internet of things (IoT) and Smart home products, and allow the use of mobile apps to control devices. On April 10, 2021, Logitech announced that they would discontinue Harmony Remote manufacturing.
History
The Harmony remote control was originally created in 2001 by Easy Zapper, a Canadian company, and first sold in November 2001. The company later changed its name to Intrigue Technologies and was located in Mississauga, Ontario, Canada. Computer peripheral manufacturer Logitech acquired it in May 2004 for US$29 million.
In April 2021, Logitech announced the decision to discontinue the manufacturing of Harmony remotes. Any remaining Harmony remote inventory will continue to be available through retailers for new customers, and support will continue to be offered.
Features
All Harmony remotes are set up online using an external configuration software. For all models this can be done using a computer running Microsoft Windows or MacOS to which they need to be connected via USB cable; the Elite and Ultimate models can also be configured wirelessly using a smartphone app for Android or iOS.
Each remote has infrared (IR) learning capability (some later models also include RF support), and can upload information about a new remote to an online device database. 5000+ brands of devices were supported.
All Harmony remotes support one-touch activity based control, which allows control of multiple devices at once. For example, a home theater setup might include a TV, a digital set-top box and a home theater sound system. Pressing the 'Watch TV' activity button on the remote will turn on the TV, turn on the digital set-top box, turn on the sound system, switch the input of the TV to the digital set-top box and switch the input of the sound system to the set-top box. In addition, the volume buttons would be mapped to the sound system, the channel buttons would be mapped to the digital set-top box, and other controls to the most appropriate system component for the activity. The remote would track which devices were powered on or off and which inputs devices had previously been switched to, allowing it to transition the devices from one activity to another without sending redundant or incorrect commands.
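A rough sketch of that stateful behavior (illustrative only, not Logitech's actual implementation; device and input names are made up): the controller remembers each device's power state and selected input, and on an activity change emits only the commands needed to reach the target state.

```python
class ActivityController:
    def __init__(self):
        self.power = {}    # device -> bool (currently powered on?)
        self.inputs = {}   # device -> currently selected input, if any
        self.sent = []     # log of emitted (device, command) pairs

    def start_activity(self, devices):
        """devices: dict device -> desired input (None = no input switch needed)."""
        # Power off devices that the new activity does not use
        for dev in [d for d, on in self.power.items() if on and d not in devices]:
            self.sent.append((dev, "power_off"))
            self.power[dev] = False
        for dev, inp in devices.items():
            if not self.power.get(dev, False):
                self.sent.append((dev, "power_on"))
                self.power[dev] = True
            if inp is not None and self.inputs.get(dev) != inp:
                self.sent.append((dev, f"input:{inp}"))
                self.inputs[dev] = inp

rc = ActivityController()
rc.start_activity({"tv": "hdmi1", "settop": None, "amp": "sat"})   # Watch TV
rc.start_activity({"tv": "hdmi2", "bluray": None, "amp": "bd"})    # Watch a Movie
# The second transition reuses the already-running TV and amp: no redundant
# power commands, only input switches plus power for the Blu-ray player.
print(rc.sent)
```

The diffing in start_activity is what avoids the redundant or incorrect commands mentioned above: a device already in the right state receives nothing.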
Harmony Remote software
The remote software allows users to update the remote configuration, learn IR commands, and upgrade the remote control's firmware.
Early versions of the remote Software 6 required a web browser; newer versions are Java-based. The software requires constant Internet connectivity while programming the remote, as remote control codes are downloaded from Logitech. This method allows updates to the product database, remote codes, and macro sequences to be easily distributed. This also allows Logitech to survey their market in order to determine products for investigation and research. Harmony control software is available for Microsoft Windows and Mac OS X. A group of developers was working on Harmony Remote software for the Linux operating system; the latest available release was dated August 2010.
On March 31, 2010, Logitech launched a new website called "My Harmony" for setting up several later Harmony remote controls.
Products
Harmony 650/665
The lowest-cost version of the Harmony remote that contains a display screen (a color one). It can be programmed with multiple activities and up to 8 devices.
Harmony Express
The Express uses Amazon Alexa for navigation, via a smaller, distinct remote. It is the only Harmony remote that supports voice-activated search.
Harmony Hub
This device is not a remote, but rather a hub that can control IR and Bluetooth devices, as well as certain smart home devices (e.g. Philips Hue, Nest thermostat). It is controlled by certain Harmony remotes as well as by iOS/Android apps, and more recently Alexa can control certain functions. By itself, it can control up to 8 home theater devices and a number of home automation devices. Many current products include this hub along with a remote. It replaces the older Harmony Ultimate Hub, Harmony Home Hub and Harmony Link devices.
Harmony Smart Control
Includes a Harmony Home Hub and a simple remote control that contains three activity buttons used to activate up to 6 different activities. The simple remote lacks a display screen, and can also be purchased separately for those who already own a Harmony Home Hub. Supports up to 8 devices.
Harmony Companion (formerly Harmony Home Control)
Like the Harmony Smart Control described above, but the included simple remote also contains home automation controls. Like the Smart Control remote, it lacks a display screen; unlike the Smart Control remote, it cannot be purchased separately by existing Home Hub owners.
Harmony Smart Keyboard
This includes the Harmony Hub along with a keyboard containing a built-in touchpad. The keyboard appears to be like Logitech's previous K400 keyboard and touchpad combo, except some of the keys and buttons have been replaced with others more useful to a home theater remote, and two numbered, Harmony-specific USB receivers are included. It lacks a display screen, supports three activities (Watch a Movie, Watch TV, and Listen to Music), and can also be purchased as an add-on accessory for Harmony Home Hub owners. It controls up to 8 devices.
Harmony Touch
The Harmony Touch remote control contains a full-color display screen with touch functionality. It is an IR remote that supports up to 15 devices and multiple activities. It lacks dedicated physical buttons for home automation control. This remote can be added to a Harmony Hub for additional functionality. No longer available from the manufacturer, but still available via retail.
Harmony 950/Harmony Elite/Harmony Pro
The current top-of-the-range Harmony available via retail. The Harmony 950 is a redesigned version of the Harmony Ultimate One with the addition of dedicated physical buttons for home automation control. Other changes include the media transport control buttons being relocated to a more ergonomic location, and the addition of a user-accessible battery compartment. This remote can be added to a Harmony Hub for additional functionality. The Harmony Elite is a bundle containing both the Harmony 950 remote control and the Harmony Hub. The Harmony Pro is the Harmony Elite bundle sold through professional installers.
Harmony Pro 2400
The Pro 2400 is the only Harmony product that includes a hub with an Ethernet port, as well as power over Ethernet (PoE) support. The hub is significantly wider, and comes with a detachable directional antenna. It also has six 3.5 mm jacks for IR sensors (versus two 2.5 mm jacks on other Hub products). It uses the Elite remote, and is only available through professional installers.
Harmony 350
The lowest-budget version of the Harmony remote, which can control up to 8 devices in particular categories, and supports only one activity: Watch TV. Unlike most current and former products in the Harmony line, this model lacks a display screen.
Harmony Ultimate One/Harmony Ultimate
The Harmony Ultimate One remote control is a revised version of the Harmony Touch, adding motion-activated back-lit keys, eyes-free gesture control, a tilt sensor and vibration feedback. This remote can be added to a Harmony Hub for additional functionality. The Harmony Ultimate is a bundle containing both the Harmony Ultimate One remote control and the Harmony Hub; this bundle is no longer available from the manufacturer, but is still available via retail.
Harmony Ultimate Home
Includes the Harmony Home Hub and a remote similar to the above described Harmony Ultimate One. The package includes four IR emitters, the remote, the hub, and two IR extenders that plug into the hub. Pressing a button on the included remote or any add-on remote will first communicate with the hub, and the hub will then tell one of the four IR emitters based on configuration (including the IR emitter on the remote) to transmit the command. The Harmony Ultimate Home also contains home automation controls, unlike the Ultimate One. The remote can't be purchased separately by Home Hub owners, unlike most of the other remotes that include the hub. It supports a maximum of 15 devices.
Harmony Link
A device which utilizes a mobile app as a remote to control devices within the room. It has since been succeeded by the newer Harmony Hub product, which also supports controlling Smart home products. On November 8, 2017, Logitech announced that it would end support for the Harmony Link and make the devices inoperable after March 18, 2018, citing an expired security certificate for a component in the platform. Following criticism of Logitech's originally-announced plan to do so for users whose devices were still under warranty, Logitech announced on November 10, 2017, that it would exchange all Harmony Links for Harmony Hubs free-of-charge, regardless of warranty status.
Harmony 500 series
The Harmony 500 remotes are mid-range remotes similar in functionality to the Harmony 659 and 670, but with different button arrangements and a squared-off physical design compared to the hourglass design of the 6xx series. Compared to today's offerings, these remotes offered control of up to 15 devices at an affordable price. The remotes have a back-lit monochrome LCD screen. The 500 series appears to have been discontinued entirely.
Harmony for Xbox 360
Although it is marketed for the Xbox 360 segment, this remote is effectively part of the 5xx series and runs the same software. The Harmony 360 is pre-configured for use with the Xbox 360 console, and has special buttons (X, Y, A, B and media center controls) corresponding to those found on native Xbox controllers. It has a back-lit LCD screen and uses four AAA batteries.
The hardware layout is mostly the same as the 550. The extra up/down arrows of the 550 are removed to make room for the colored X, Y, A and B buttons beneath the play and pause rows. This makes it the remote in the 500 series with the most hardware buttons: 54 (counting the four direction arrow keys). It can control up to 12 devices.
Harmony 510/515
The Harmony 510/515 is an entry-level remote that is essentially a replacement for the 500 series and the Xbox 360 version. It has the same number of buttons as the 525 and features colored buttons typical of most satellite boxes. It has a four-button monochrome LCD display. This remote is limited in software to controlling up to five devices. Like its mid-range cousins, the 520 and 550, it has no recharge pod and uses AAA batteries instead. Unlike previous 500 series models, these newer models are limited to 5 devices in software, yet sell for the same prices.
Difference between 510/515:
The 510 is black; the 515 is silver.
Harmony 520/525
The Harmony 520 is a mid-range remote that is similar in functionality to the Harmony 659 and 670, but with a different button arrangement and a squared-off physical design compared to the hourglass design of the 6xx series. It has a blue back-light and monochrome LCD screen. These 5xx models are equipped with an infrared learning port to learn IR signals of unsupported or unknown devices. By pointing an original remote control at the Harmony's learning port, it is able to copy and reproduce those codes and, in the case of supported devices, it is able to figure out what the remote is used to control and imports that device. They require 4 AAA batteries. A mini USB port is used to connect these to a computer for programming.
Difference between 520/525:
The 525 has 50 buttons, while the 520 has 46. It lacks the red, green, yellow and blue colour buttons commonly used for things like teletext and PVR control. Apparently, the 520 is the American model while the 525 is the European. The 520 and 525 can control up to 12 and 15 devices respectively.
Harmony 550/555
The Harmony 550/555 remotes are variants of the 525 remote. Compared to the model 525, the 550 and 555 have two extra buttons, and are made of higher-grade materials with different colors. The 550 and 555 models both have a Sound and a Picture button that changes the button mapping on the remote, allowing the same physical buttons to be reused for different sets of functionality. Both have 52 buttons.
Difference between 550/555:
The 550 and 555 have the same number and placement of buttons, just with different mappings. The 555 has the same color buttons as the 525. The 550 does not; instead it has the following extra functions: up arrow, down arrow, A and B buttons. The 555 has an orange back-light, the 550 a blue one.
Harman/Kardon TC 30
The Harman/Kardon TC 30 appears to be a redesigned, rebranded Harmony 52x with a cradle and a color LCD. The LCD shows eight items compared to the four of the rest of the Harmony 5xx series. Images exist of the TC 30 both with and without the teletext color buttons, which might mean that there is one version based on the 520 and one based on the 525. The key layout is identical to the 52x remotes. It seems to require different software from the Logitech-branded remotes; at present this software can be downloaded from Logitech via harmonyremote.com.
Harmony 610
The Harmony 610 is functionally identical to the Harmony 670 and Harmony 620, but comes in black with a silver face panel. The 610 can control a maximum of 5 devices.
Harmony 620
The Harmony 620 is functionally identical to the Harmony 670, but comes in black instead of silver/black. The 670 can control up to 15 devices, whereas the 620 can control only 12.
Harmony 659
The Harmony 659 is another mid-range universal remote that offers most of the functionality in the Harmony line. It has a monochrome LCD screen.
Harmony 670
The Harmony 670 is a mid-range universal remote that offers most of the functionality in the Harmony line. The 670 has a monochrome LCD screen and puts DVR functions in the middle of the remote. Logitech has discontinued this product.
Harmony 680
The Harmony 680 is a mid-range, computer-programmable universal remote. The 680 has a back-lit monochrome LCD screen and Media PC-specific buttons. Unlike many newer Harmony remotes, the 680 is able to control up to 15 devices.
Harmony 688
The Harmony 688 (no longer produced) was a mid-range, computer-programmable universal remote. The 688 has a monochrome LCD screen back-lit by an electroluminescent sheet (blue in color).
Harmony 720
The Harmony 720 was initially offered exclusively through Costco in 2006 and featured a color screen and backlit keys. It was designed as an inexpensive competitor to the earlier Harmony 880, with few differences, except for the ergonomic design and key layout. It is now available through other vendors, but remains unlisted on Logitech's product page.
The Harmony 720 remote is closely related to the 500 series, as it has a square shape and a layout akin to those remotes. Compared to the 525, the same buttons appear above the LCD, but the 720 has a colour LCD with six buttons/activities instead of four. The eight play/stop etc. buttons have been moved to the lower part. The Mute and Prev buttons have been moved, and in their place there are extra up and down buttons, the same as on the 550. Compared to the 500 series, the glow button has been removed. These remotes do not have the Sound and Picture buttons to change key mappings as the 550/555 remotes do. Lacking red, green, yellow and blue colour buttons, the 720 has 49 buttons. It can control up to 12 devices.
Harmony 768
The Harmony 768 is a capsule-shaped remote with a backlit LCD screen; it was available in silver, blue or red. It has 32 buttons, as well as a clickable thumb-wheel to scroll through and select activities.
Harmony 785
The Harmony 785 is nearly identical to the 720. While the 720 has 49 buttons, the 785 has 53; the extra buttons are the red, green, yellow and blue colour buttons commonly used for things like teletext and PVR control. These are located above the number buttons, which are placed further down compared to the 720. Another difference from the 720 is that the 785 can control up to 15 devices.
Harmony 880/885
The Harmony 880 was the first Harmony with a color LCD screen and a rechargeable battery. The Harmony 885 remote has extra buttons as mentioned below. The 885 replaces up and down keys with four color keys used for Teletext and, more recently, by some set-top boxes.
There was a short-lived 880Pro that had the picture and sound buttons. This remote did not feature multi-room/multi-controller support like the 890Pro.
Difference between 880/885:
The 885 has the red, green, yellow and blue colour buttons commonly used for things like teletext and PVR control. These four buttons occupy the same space where the 880 has two selection buttons (up arrow, down arrow).
Harmony 890/895
The Harmony 890/895 is the same as the 880/885, but adds radio frequency (RF) capability, enabling the remote to control devices without line-of-sight and from different rooms, up to a range of 30 meters. This remote control cannot control proprietary RF devices, but it can control special Z-Wave RF devices, as well as IR devices without line-of-sight via the RF extender.
The 890Pro adds multi-room and multi-controller support, as well as a different color scheme (primary and secondary remotes can be set up that work with the same wireless extender). It also adds two buttons, Picture and Sound, that allow for quick access to picture- and sound-related commands. It is not listed on the Logitech Web site and is sold through custom installation companies. The 890Pro is not shipped with the RF extender.
Harmony 1000
The Harmony 1000 has customizable touch screen commands, sounds and a rechargeable battery, and allows control of up to 15 devices. It is also compatible with the RF extender; a maximum of two extenders can be configured within the software.
Harmony 300
This universal remote supports one activity (Watch TV) and can control up to 4 devices. It supports customizable keys for remote features and favorite channels. The remote has no LCD and, like the discontinued 500 series mid-range models, no battery charge pod. It requires two AA batteries.
Harmony 300i
Similar to the Harmony 300, but has a glossy finish rather than a matte finish.
Harmony 600
Support for up to 5 devices. Monochrome display. Requires 2 AA batteries.
Harmony 700
Support for up to 6 devices. Color display. Rechargeable AA batteries via USB.
Harmony One
The Harmony One features a color touch screen and is rechargeable. It does not offer any RF capability. A CNET TV review stated that it is one of the best universal remotes on the market today.
Harmony 900
The Harmony 900 has the same ergonomic design as the Harmony One. Compared to the Harmony One it adds four color buttons and RF support. The RF technology used by the Harmony 900 is not compatible with the Harmony 890, 1000, and 1100. The Harmony 900 and 1100 models do not support "sequences" (Logitech parlance for macros).
Harmony 1100
Adds QVGA resolution to the touch screen and allows 15 devices to be controlled.
The user interface of the Harmony 1100 is Flash-based, versus the Java-based one found in the Harmony 1000.
Accessories
E-R0001
The Harmony E-R0001 is an IR to Bluetooth adapter for the PS3.
RF Wireless Extender
The Harmony RF Wireless Extender allows some Harmony remotes, e.g., models 890, 1000 and 1100, to control devices using radio frequencies instead of infrared, with longer range than infrared and no need for line-of-sight transmission. The Harmony 1000 can use two RF Extenders, while the 1100 can use multiple extenders.
IR Extender System
The Harmony IR Extender System has an IR blaster and a set of mini blasters, and does not require programming. It is manufactured by Philips and rebadged.
See also
Universal Remote Controls - General Article on Universal Remote Controls.
JP1 remote - Universal Electronics/One For All range of programmable remotes
References
External links
Harmony at Logitech.com
Assistive technology
Remote control
Smart home hubs
Harmony | Logitech Harmony | [
"Technology"
] | 4,296 | [
"Home automation",
"Smart home hubs"
] |
47,490,500 | https://en.wikipedia.org/wiki/Funneliformis | Funneliformis is a genus of fungi in the family Glomeraceae. All species are arbuscular mycorrhizal (AM) fungi that form symbiotic relationships (mycorrhizas) with plant roots. The genus was circumscribed in 2010 by Arthur Schüßler and Christopher Walker, with Funneliformis mosseae (named after the biologist Barbara Mosse and originally described in 1968 as a species of Endogone) as the type species. The generic name refers to the funnel-shaped spore base present in several species.
Species
Funneliformis africanum (Błaszk. & Kovács) C.Walker & A.Schüßler 2010
Funneliformis badium (Oehl, D.Redecker & Sieverd.) C.Walker & A.Schüßler 2010
Funneliformis caledonium (T.H.Nicolson & Gerd.) C.Walker & A.Schüßler 2010
Funneliformis constrictum (Trappe) C.Walker & A.Schüßler 2010
Funneliformis coronatum (Giovann.) C.Walker & A.Schüßler 2010
Funneliformis fragilistratum (Skou & I. Jakobsen) C.Walker & A.Schüßler 2010
Funneliformis geosporum (T.H.Nicolson & Gerd.) C.Walker & A.Schüßler 2010
Funneliformis mosseae (T.H.Nicolson & Gerd.) C.Walker & A.Schüßler 2010
Funneliformis verruculosum (Błaszk.) C.Walker & A. Schüßler 2010
Funneliformis xanthium (Błaszk., Blanke, Renker & Buscot) C.Walker & A.Schüßler 2010
References
External links
Fungus genera
Glomerales | Funneliformis | [
"Biology"
] | 410 | [
"Fungus stubs",
"Fungi"
] |
47,491,138 | https://en.wikipedia.org/wiki/Mohammad%20Hajiaghayi | Mohammad Taghi Hajiaghayi () is a computer scientist known for his work in algorithms, game theory, social networks, network design, graph theory, and big data. He has over 200 publications with over 185 collaborators and 10 issued patents.
He is the Jack and Rita G. Minker Professor at the University of Maryland Department of Computer Science.
Professional career
Hajiaghayi received his PhD in applied mathematics and computer science from Massachusetts Institute of Technology in 2005 advised by Erik Demaine and F. Thomson Leighton.
His thesis was The Bidimensionality Theory and Its Algorithmic Applications. It founded the theory of bidimensionality which later received the Nerode Prize and was the topic of workshops.
Hajiaghayi has been the coach of the University of Maryland ACM International Collegiate Programming Contest team at the World Finals.
Honors and awards
Hajiaghayi has received the National Science Foundation CAREER Award (2010), the Office of Naval Research Young Investigator Award (2011), the University of Maryland Graduate Faculty Mentor of the Year Award (2015), as well as Google Faculty Research Awards (2010 & 2014). Hajiaghayi has raised more than $4 million in grant award money from government and industry since joining the University of Maryland.
With his co-authors Erik Demaine, Fedor Fomin, and Dimitrios Thilikos, he received the 2015 European Association for Theoretical Computer Science Nerode Prize for his work (also the topic of his Ph.D. thesis) on bidimensionality, a general technique for developing both fixed-parameter tractable exact algorithms and approximation algorithms for a wide class of algorithmic problems on graphs.
Hajiaghayi has been elected as an ACM Fellow in 2018 "for contributions to the fields of algorithmic graph theory and algorithmic game theory."
Hajiaghayi has been elected as an IEEE Fellow in 2019 "for contributions to algorithmic graph theory and to algorithmic game theory," and as an EATCS Fellow in 2020 for "his contributions to the theory of algorithms, in particular algorithmic graph theory, game theory, and distributed computing."
In 2019, Hajiaghayi was awarded a fellowship by the John Simon Guggenheim Memorial Foundation. In 2020, he was selected as an honoree of Blavatnik Awards for Young Scientists.
References
External links
Hajiaghayi's homepage
List of publications
Citations of his work
Living people
Massachusetts Institute of Technology alumni
American computer scientists
Iranian computer scientists
Iranian emigrants to the United States
American theoretical computer scientists
Graph theorists
People from Qazvin
2018 fellows of the Association for Computing Machinery
Year of birth missing (living people)
University of Maryland, College Park faculty | Mohammad Hajiaghayi | [
"Mathematics"
] | 552 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
47,491,561 | https://en.wikipedia.org/wiki/Eremaea%20%C3%97%20codonocarpa | Eremaea × codonocarpa is a plant in the myrtle family, Myrtaceae and is endemic to the south-west of Western Australia. It is thought to be a stabilised hybrid between two subspecies of Eremaea. It is a small shrub with triangular leaves and flowers a shade of pink to purple on the ends of the branches.
Description
Eremaea × codonocarpa is a shrub, sometimes erect and sometimes prostrate, growing to a height of about . The leaves are long, wide, linear to narrow egg-shaped, tapering to a point and more or less triangular in cross section. They have a covering of fine hairs and one, sometimes three, veins on the lower surface.
The flowers are pink to deep pink and occur in small groups (usually pairs) on the end of short branches from longer ones formed the previous year. The outer surface of the flower cup (the hypanthium) is densely hairy. There are 5 petals long. The stamens, which give the flower its colour, are arranged in 5 bundles, each containing 19 to 26 stamens. Flowering occurs from October to November and is followed by fruits which are woody capsules. The capsules are more or less urn-shaped, long with a rough, flaky surface.
Taxonomy and naming
Eremaea × codonocarpa was first formally described in 1993 by Hnatiuk in the journal Nuytsia from a specimen found near Jurien Bay. Hnatiuk considers Eremaea × codonocarpa to be a stabilised hybrid between Eremaea asterocarpa subsp. asterocarpa and Eremaea violacea subsp. raphiophylla. That view is supported by isozyme studies. The name codonocarpa is derived from the Ancient Greek words κώδων (kódon) meaning "bell" and καρπός (karpós) meaning "fruit", alluding to the urn-shaped or bell-shaped fruits.
Distribution and habitat
Eremaea × codonocarpa occurs in the Irwin district in the Geraldton Sandplains and Swan Coastal Plain biogeographic regions. It grows in sandy laterite on sandplains.
Conservation
Eremaea × codonocarpa is classified as "not threatened" by the Western Australian Government Department of Parks and Wildlife.
References
codonocarpa
Hybrid plants
Myrtales of Australia
Plants described in 1993
Endemic flora of Western Australia | Eremaea × codonocarpa | [
"Biology"
] | 509 | [
"Hybrid plants",
"Plants",
"Hybrid organisms"
] |
47,493,145 | https://en.wikipedia.org/wiki/Baird%27s%20rule | In organic chemistry, Baird's rule estimates whether the lowest triplet state of planar, cyclic structures will have aromatic properties or not. The quantum mechanical basis for its formulation was first worked out by physical chemist N. Colin Baird at the University of Western Ontario in 1972.
The lowest triplet state of an annulene is, according to Baird's rule, aromatic when it has 4n π-electrons and antiaromatic when the π-electron count is 4n + 2, where n is any positive integer. This trend is opposite to that predicted by Hückel's rule for the ground state, which is usually the lowest singlet state (S0). Baird's rule has thus become known as the photochemical analogue of Hückel's rule.
Through various theoretical investigations, this rule has also been found to extend to the lowest lying singlet excited state (S1) of small annulenes.
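The electron-count parity behind the two rules can be sketched in a few lines. This is an illustrative helper, not from the article, and it applies only to planar, cyclic, fully conjugated systems:

```python
# Illustrative sketch: Hückel's rule for the ground state (S0) versus
# Baird's rule for the lowest triplet state (T1) of a planar annulene.

def huckel(pi_electrons: int) -> str:
    """Ground-state (S0) prediction: 4n+2 aromatic, 4n antiaromatic."""
    if pi_electrons % 4 == 2:
        return "aromatic"
    if pi_electrons % 4 == 0:
        return "antiaromatic"
    return "open-shell / rule not applicable"

def baird(pi_electrons: int) -> str:
    """Lowest-triplet (T1) prediction: the reverse of Hückel's rule."""
    if pi_electrons % 4 == 0:
        return "aromatic"
    if pi_electrons % 4 == 2:
        return "antiaromatic"
    return "open-shell / rule not applicable"

# Benzene (6 pi electrons): Hückel-aromatic in S0, Baird-antiaromatic in T1.
print(huckel(6), baird(6))  # aromatic antiaromatic
# Cyclobutadiene (4 pi electrons): the trend is reversed.
print(huckel(4), baird(4))  # antiaromatic aromatic
```

The reversal between the two functions is the whole content of Baird's rule as stated above: the same π-electron count that is aromatic in S0 becomes antiaromatic in T1, and vice versa.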
See also
Möbius–Hückel concept
Möbius aromaticity
References
Eponymous chemical rules
Physical organic chemistry
Rules of thumb | Baird's rule | [
"Chemistry"
] | 215 | [
"Physical organic chemistry"
] |
47,496,360 | https://en.wikipedia.org/wiki/Clearfield%2C%20Inc. | Clearfield, Inc. manufactures and distributes passive connectivity products. Their fiber management and enclosure platform consolidates, distributes, and protects fiber through inside plant facilities, to outside plant facilities, to the home, and to the drop-off points in between. Clearfield's products service the wireless, cable, and telephone service providers, municipal-owned utilities, and non-traditional providers. Clearfield was founded in 2008 and is headquartered in Minneapolis, Minnesota.
History
1981–2008
Clearfield's history began in the late 1980s with the merging of Americable and Computer System Products. Moving forward under the business name Americable, in 2003 the company was purchased by APA Enterprises. In 2007, APA Enterprises changed course after several consecutive years of profit losses. In June 2007, Cheri Beranek took the new role of CEO. Under the new leadership, APA Enterprises redefined itself with a new vision by rebranding the company as Clearfield at the start of 2008.
2008–Present
Clearfield reported its first profitable quarter on September 30, 2008. The company moved into its 60% larger Minneapolis headquarters in January 2015.
Organization
Clearfield, Inc. manufactures its fiber optic components at its corporate office in Minneapolis, Minnesota and at its satellite plant in Tijuana, Mexico.
Awards
Clearfield was rated #14 in the top 25 small businesses in America in 2013
Eureka! Award for ingenuity and innovation
External links
Clearfield's Official Website
References
Manufacturing companies based in Minnesota
American companies established in 2008
Networking hardware companies
Wire and cable manufacturers
Companies listed on the New York Stock Exchange
Computer companies of the United States
Computer hardware companies | Clearfield, Inc. | [
"Technology"
] | 333 | [
"Computer hardware companies",
"Computers"
] |
47,496,485 | https://en.wikipedia.org/wiki/C15H18O7 | {{DISPLAYTITLE:C15H18O7}}
The molecular formula C15H18O7 (molar mass: 310.30 g/mol, exact mass: 310.1053 u) may refer to:
Jiadifenolide
Picrotin
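The quoted molar mass can be checked by summing standard atomic weights over the formula. This is an illustrative sketch, not from the article; the atomic weight values are standard IUPAC figures:

```python
# Illustrative check of the molar mass of C15H18O7 from standard
# atomic weights (values in g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula: dict) -> float:
    """Sum atomic weight times element count over the formula."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())

mass = molar_mass({"C": 15, "H": 18, "O": 7})
print(round(mass, 2))  # 310.3, matching the 310.30 g/mol quoted above
```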
Molecular formulas | C15H18O7 | [
"Physics",
"Chemistry"
] | 61 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
56,007,094 | https://en.wikipedia.org/wiki/Aspergillus%20persii | Aspergillus persii is a species of fungus in the genus Aspergillus which can cause onychomycosis and otomycosis.
It is from the Circumdati section. Aspergillus persii produces xanthomegnin and ochratoxin A.
Growth and morphology
A. persii has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.
References
Further reading
persii
Fungi described in 2002
Fungus species | Aspergillus persii | [
"Biology"
] | 134 | [
"Fungi",
"Fungus species"
] |
56,007,376 | https://en.wikipedia.org/wiki/Aspergillus%20fresenii | Aspergillus fresenii is a species of fungus in the genus Aspergillus. Aspergillus fresenii produces ochratoxin A, ochratoxin B, ochratoxin C, aspochracins, mellamides, orthosporins, radarins, secopenitrems, sulphinines, xanthomegnins.
References
Further reading
fresenii
Fungi described in 1971
Fungus species | Aspergillus fresenii | [
"Biology"
] | 100 | [
"Fungi",
"Fungus species"
] |
56,007,469 | https://en.wikipedia.org/wiki/Virus%20nanotechnology | Virus nanotechnology is the use of viruses as a source of nanoparticles for biomedical purposes.
Viruses are made up of a genome and a capsid, and some viruses are enveloped. Most virus capsids measure between 20 and 500 nm in diameter. Because of their nanometer-scale dimensions, viruses have been considered naturally occurring nanoparticles. Virus nanoparticles have been the subject of the nanoscience and nanoengineering disciplines. Viruses can be regarded as prefabricated nanoparticles. Many different viruses have been studied for various applications in nanotechnology: for example, mammalian viruses are being developed as vectors for gene delivery, and bacteriophages and plant viruses have been used in drug delivery and imaging applications as well as in vaccines and immunotherapy intervention.
Overview
Virus nanotechnology is one of the most promising and emerging disciplines in nanotechnology. A highly interdisciplinary field, viral nanotechnology occupies the interface between virology, biotechnology, chemistry, and materials science. The field employs viral nanoparticles (VNPs) and their counterparts, virus-like nanoparticles (VLPs), for potential applications in the diverse fields of electronics, sensors, and, most significantly, the clinical field. VNPs and VLPs are attractive building blocks for several reasons. Both particles are on the nanometer-size scale; they are monodisperse with a high degree of symmetry and polyvalency; they can be produced with ease on a large scale; they are exceptionally stable and robust; they are biocompatible; and, in some cases, they are orally bioavailable. They are "programmable" units that can be modified by either genetic modification or chemical bioconjugation methods.
What is nanotechnology?
Nanotechnology is the manipulation or self-assembly of individual atoms, molecules, or molecular clusters into structures to create materials and devices with new or vastly different properties. Nanotechnology can work from the top down (which means reducing the size of the smallest structures to the nanoscale) or from the bottom up (which involves manipulating individual atoms and molecules into nanostructures). The definition of nanotechnology is based on the prefix "nano", which comes from the Greek word meaning "dwarf". In more technical terms, the word "nano" means 10⁻⁹, or one billionth of something. For a meaningful comparison, a virus is roughly 100 nanometers (nm) in size, so a virus can also be called a nanoparticle. The word nanotechnology is generally used when referring to materials with a size of 0.1 to 100 nanometres; however, it is also inherent that these materials should display different properties from bulk (micrometric and larger) materials as a result of their size. These differences include physical strength, chemical reactivity, electrical conductance, magnetism and optical effects.
Nanotechnology has an almost limitless string of applications in biology, biotechnology, and biomedicine. Nanotechnology has engendered a growing sense of excitement due to the ability to produce and utilize materials, devices, and systems through the control of matter on the nanometer scale (1 to 50 nm). This bottom-up approach requires less material and causes less pollution. Nanotechnology has had several commercial applications in advanced laser technology, hard coatings, photography, pharmaceuticals, printing, chemical-mechanical polishing, and cosmetics. Soon, there will be lighter cars using nanoparticle reinforced polymers, orally applicable insulin, artificial joints made from nanoparticulate materials, and low-calorie foods with nanoparticulate taste enhancers.
Viruses as building blocks in nanotechnology
Viruses have long been studied as deadly pathogens that cause disease in all living forms. By the 1950s, researchers had begun thinking of viruses as tools in addition to pathogens. Bacteriophage genomes and components of the protein expression machinery have been widely utilized as tools for understanding fundamental cellular processes. On the basis of these studies, several viruses have been exploited as expression systems in biotechnology. Later, in the 1970s, viruses began to be used as vectors for the benefit of humans. Since then, viruses have often been used as vectors for gene therapy, cancer control and the control of harmful or damaging organisms, in both agriculture and medicine.
Recently, a new approach to exploiting viruses and their capsids began to emerge, shifting from biotechnology toward nanotechnology applications. Researchers Douglas and Young (Montana State University, Bozeman, MT, USA) were the first to consider the utility of a virus capsid as a nanomaterial. They chose the plant virus Cowpea chlorotic mottle virus (CCMV) for their study. CCMV proved to be a highly dynamic platform with pH- and metal-ion-dependent structural transitions. Douglas and Young made use of these capsid dynamics and exchanged the natural cargo (nucleic acid) with synthetic materials. Since then, many materials have been encapsulated into CCMV and other VNPs. At about the same time, the research team led by Mann (University of Bristol, UK) pioneered a new area using the rod-shaped particles of TMV (Tobacco mosaic virus). The particles were used as templates for the fabrication of a range of metallized nanotube structures using mineralization techniques. TMV particles have also been utilized to generate various structures (nanotubes and nanowires) for use in batteries and data storage devices.
Viral capsids have attracted great interest in the field of nanobiology because of their nanoscale size, symmetrical structural organization, load capacity, controllable self-assembly, and ease of modification. Viruses are essentially naturally occurring nanomaterials capable of self-assembly with a high degree of precision. Viral capsid-nanoparticle hybrid structures, which combine the bio-activities of virus capsids with the functions of nanoparticles, are a new class of bionanomaterials that have many potential applications as therapeutic and diagnostic vectors, imaging agents, and advanced nanomaterial synthesis reactors.
Plant viruses in nanotechnology
Plant virus-based systems, in particular, are among the most advanced and exploited for their potential use as bioinspired structured nanomaterials and nano-vectors. Plant virus nanoparticles are non-infectious to mammalian cells, as shown by Raja Muthuramalingam et al. (2018). Plant viruses have a size particularly suitable for nanoscale applications and can offer several advantages: they are structurally uniform, robust, biodegradable and easy to produce. Moreover, there are many examples of the functionalization of plant virus-based nanoparticles by means of modification of their external surface and by loading cargo molecules into their internal cavity. This plasticity in terms of nanoparticle engineering is the ground on which multivalency, payload containment and targeted delivery can be fully exploited.
George P. Lomonossoff writing in "Recent Advances in Plant Virology",
The capsids of most plant viruses are simple and robust structures consisting of multiple copies of one or a few types of protein subunit arranged with either icosahedral or helical symmetry. The capsids can be produced in large quantities either by the infection of plants or by the expression of the subunit(s) in a variety of heterologous systems. In view of their relative simplicity and ease of production, plant virus particles or virus-like particles (VLPs) have attracted much interest over the past 20 years for applications in both bio- and nanotechnology [Lomonossoff, 2011]. As a result, plant virus particles have been subjected to both genetic and chemical modification, have been used to encapsulate foreign material, and have themselves been incorporated into supramolecular structures. Significantly, the plant viruses studied are not human pathogens and have no natural tendency to interact with human cell-surface receptors. Recently, a plant pathogenic virus was reportedly used to synthesize a noble hybrid metal nanomaterial used as a bio-semiconductor.
Plant viruses
Viruses cause several destructive plant diseases and are responsible for massive losses in crop production and quality in all parts of the world. Infected plants may show a range of symptoms depending on the disease, but often there is severe leaf curling, stunting (abnormalities in the whole plant) and leaf yellowing (either of the whole leaf or in a pattern of stripes or blotches). Most plant viruses are transmitted by a vector organism (insects, nematodes, plasmodiophorids and mites) that feeds on the plant, or in some diseases are introduced through wounds made, for example, during agricultural practices (e.g. pruning). Many plant viruses, for example Tobacco mosaic virus, have been used as model systems to provide a basic understanding of how viruses express genes and replicate. Others permitted the elucidation of the processes underlying RNA silencing, now recognised as a core epigenetic mechanism underpinning numerous areas of biology.
Some properties of viral nanoparticles
Plant viruses come in many shapes and sizes: for example, Tobacco mosaic virus (TMV) measures 300 × 18 nm and forms a hollow rod, while Potato virus X (PVX) forms flexible filaments of 515 × 13 nm. The following viruses have icosahedral symmetry and measure 25–30 nm: the plant virus Cowpea mosaic virus (CPMV), the bacteriophage Qbeta and the mammalian adeno-associated virus (AAV).
These are just some examples; many different viruses are being engineered and studied for their potential applications in medicine. Some examples of plant viruses include Cowpea chlorotic mottle virus, Red clover necrotic mottle virus, Physalis mosaic virus and Papaya mosaic virus.
Plant viruses and bacteriophages are not infectious toward mammals, so, in contrast to mammalian viruses, they pose no risk of viral infection to humans.
Virus-like particles (VLPs) can be produced that lack the viral genome; these VLPs are non-infectious also toward plants and thus considered safe also from an agricultural point of view.
Viruses and their non-infectious counterparts can be produced through molecular farming in plants or fermentation in cell culture.
The virus-based nanoparticles can be tailored for specific applications using a number of chemical biology approaches:
Genetic modification can be used to modify the amino acid sequence of the capsid protein (also known as coat protein).
Bioconjugate chemistry can be used to introduce non-biological or biological cargos.
Lastly, while often shown as rigid materials, the viruses are dynamic materials that undergo swelling and other conformational changes allowing for cargo to be infused or encapsulated into their viral capsids.
Manifold plant virus platform technologies are being developed and studied for many applications including:
Vaccines: VLPs or epitope display platforms
Immunotherapies: in situ vaccines
Molecular imaging contrast agents
Drug delivery: targeting both human health and plant health
Battery electrodes
Sensor applications
References
Nanotechnology | Virus nanotechnology | [
"Materials_science",
"Engineering"
] | 2,287 | [
"Nanomedicine",
"Nanotechnology",
"Materials science"
] |
56,007,515 | https://en.wikipedia.org/wiki/Polycomb%20repressive%20complex%201 | Polycomb repressive complex 1 (PRC1) is one of the two classes of Polycomb repressive complexes, the other being PRC2. Polycomb-group proteins play a major role in transcriptional regulation during development. The Polycomb repressive complexes PRC1 and PRC2 function in silencing the expression of the Hox gene network involved in development, as well as in the inactivation of the X chromosome. PRC1 inhibits the activated form of the RNA polymerase II preinitiation complex with the use of H3K27me. PRC1 binds to three nucleosomes, which is believed to limit access of transcription factors to the chromatin and therefore limit gene expression.
References
Proteins | Polycomb repressive complex 1 | [
"Chemistry"
] | 149 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
56,007,650 | https://en.wikipedia.org/wiki/NGC%20502 | NGC 502, also occasionally referred to as PGC 5034 or UGC 922, is a lenticular galaxy in the constellation Pisces. It is located approximately 113 million light-years from the Solar System and was discovered on 25 September 1862 by German astronomer Heinrich Louis d'Arrest.
When the Morphological Catalogue of Galaxies was published between 1962 and 1974, the identifications of NGC 502 and NGC 505 were reversed. In reality, NGC 502 is equal to MCG +01-04-041 and not MCG +01-04-043 as noted in the catalogue.
Observation history
D'Arrest discovered NGC 502 using the 11-inch refractor in Copenhagen. His position, which he measured on four separate nights, matches both UGC 922 and PGC 5034. John Louis Emil Dreyer, creator of the New General Catalogue, described the galaxy as "considerably bright, small, round, brighter middle and nucleus".
See also
Elliptical galaxy
List of NGC objects (1–1000)
Pisces (constellation)
References
External links
SEDS
Lenticular galaxies
Pisces (constellation)
0502
00922
+01-04-041
005034
J01225553+0902570
Astronomical objects discovered in 1862
Discoveries by Heinrich Louis d'Arrest | NGC 502 | [
"Astronomy"
] | 264 | [
"Pisces (constellation)",
"Constellations"
] |
56,008,035 | https://en.wikipedia.org/wiki/Bis%28benzonitrile%29palladium%20dichloride | Bis(benzonitrile)palladium dichloride is the coordination complex with the formula PdCl2(NCC6H5)2. It is the adduct of two benzonitrile (PhCN) ligands with palladium(II) chloride. It is a yellow-brown solid that is soluble in organic solvents. The compound is a reagent and a precatalyst for reactions that require soluble Pd(II). A closely related compound is bis(acetonitrile)palladium dichloride.
The complex is prepared by dissolving PdCl2 in warm benzonitrile. The PhCN ligands are labile, and the complex reverts to PdCl2 in noncoordinating solvents. According to X-ray crystallography, the two PhCN ligands are mutually trans.
References
Palladium compounds
Homogeneous catalysis
Coordination complexes
Chloro complexes
Benzonitriles | Bis(benzonitrile)palladium dichloride | [
"Chemistry"
] | 207 | [
"Catalysis",
"Homogeneous catalysis",
"Coordination chemistry",
"Coordination complexes"
] |
56,008,800 | https://en.wikipedia.org/wiki/Triazolite | Triazolite is an organic mineral with the chemical formula NaCu2(N3C2H2)2(NH3)2Cl3·4H2O, and is found in conjunction with chanabayite, another natural triazolate anion salt. Triazolite has only been found in Pabellón de Pica, Chanabaya, Iquique Province, Tarapacá Region, Chile, due to its specific formation requirements. The first specimens of triazolite were found in what is suspected to be the guano of the guanay cormorant. The guano reacted with chalcopyrite-bearing gabbro, allowing the formation of triazolite to take place. Triazolite was initially grouped together with chanabayite in 2015, and was not identified as a separate mineral until 2017.
References
Organic minerals
Copper(II) minerals
Orthorhombic minerals
Chloride minerals | Triazolite | [
"Chemistry"
] | 199 | [
"Organic compounds",
"Organic minerals"
] |
56,009,041 | https://en.wikipedia.org/wiki/Kit%20%28of%20components%29 | A kit is a set of components that has to be assembled by the buyer, or at the site of use, to obtain the final product.
Examples:
Electronic kit, a package of electrical components used to build an electronic device.
Kit car ("component car"), an automobile that the buyer assembles into a functioning car
Kit bike
Folding kayak
Tent
Prefabricated building of houses
Provisional military engineering constructions, such as
Mabey Logistic Support Bridge
Space station
Much of IKEA furniture
Many kits are sold for model building
Construction
Kit cars | Kit (of components) | [
"Engineering"
] | 109 | [
"Construction"
] |
56,009,419 | https://en.wikipedia.org/wiki/Grand%20Prix%20scientifique%20de%20la%20Fondation%20NRJ | Grand Prix scientifique de la Fondation NRJ (The Scientific Grand Prize of the Foundation NRJ) is an award conferred annually by the Foundation NRJ at the Institut de France. It is awarded in the areas of medical science, particularly neuroscience. Each year the prize has a different theme. The award has a €150,000 prize (€130,000 for laboratory research and €20,000 for the laureate).
Laureates
Winners of the prize are:
2017: Jean-Yves Delattre
2016: Ghislaine Dehaene-Lambertz
2015:
2014:
2013: Massimo Zeviani
2012: Isabelle Arnulf and Mehdi Tafti
2011: Séverine Boillée and Vincent Meininger
2010: José A. Esteban and Bruno Dubois
2009: Luis Garcia-Larrea and John N. Wood
2008: Catherine Lubetzki and Anne Baron-van Vercooren
2007: and Béatrice Desgranges
2006: José-Alain Sahel
2005: Patrice Tran Ba Huy and Guy Richardson
2004: Piotr Topilko
2003: Olivier Dulac
2002: Antoine Guedeney
2001: Catherine Billard-Davou
2000: Alain Fischer
See also
List of medicine awards
References
Institut de France
Medicine awards
French science and technology awards
Awards established in 2000 | Grand Prix scientifique de la Fondation NRJ | [
"Technology"
] | 269 | [
"Science and technology awards",
"Medicine awards"
] |
56,010,004 | https://en.wikipedia.org/wiki/NGC%201189 | NGC 1189 is a barred spiral galaxy approximately 105 million light-years away from Earth in the constellation of Eridanus. It was discovered by American astronomer Francis Leavenworth on December 2, 1885 with the 26" refractor at Leander McCormick Observatory.
NGC 1189 has extended clumpy star formation throughout its spiral arms with remarkably little associated stellar light, which is striking in the color images.
Together with NGC 1190, NGC 1191, NGC 1192 and NGC 1199 it forms Hickson Compact Group 22 (HCG 22) galaxy group. Although they are considered members of this group, NGC 1191 and NGC 1192 are in fact background objects, since they are much further away compared to the other members of this group.
Image gallery
See also
Barred spiral galaxy
Hickson Compact Group
List of NGC objects (1001–2000)
Eridanus (constellation)
References
External links
SEDS
Barred spiral galaxies
Eridanus (constellation)
1189
11503
Hickson Compact Groups
Astronomical objects discovered in 1885
Discoveries by Francis Leavenworth | NGC 1189 | [
"Astronomy"
] | 214 | [
"Eridanus (constellation)",
"Constellations"
] |
56,010,998 | https://en.wikipedia.org/wiki/Transition%20metal%20nitrile%20complexes | Transition metal nitrile complexes are coordination compounds containing nitrile ligands. Because nitriles are weakly basic, the nitrile ligands in these complexes are often labile.
Scope of nitriles
Typical nitrile ligands are acetonitrile, propionitrile, and benzonitrile. The structures of [Ru(NH3)5(NCPh)]n+ have been determined for the 2+ and 3+ oxidation states. Upon oxidation the Ru-NH3 distances contract and the Ru-NCPh distances elongate, consistent with amines serving as pure-sigma donor ligands and nitriles functioning as pi-acceptors.
Synthesis and reactions
Acetonitrile, propionitrile and benzonitrile are also popular solvents. Because nitrile solvents have high dielectric constants, cationic complexes containing a nitrile ligand are often soluble in the corresponding nitrile solvent.
Some complexes can be prepared by dissolving an anhydrous metal salt in the nitrile. In other cases, a suspension of the metal is oxidized with a solution of NOBF4 in the nitrile:
Ni + 6 MeCN + 2 NOBF4 → [Ni(MeCN)6](BF4)2 + 2 NO
Heteroleptic complexes of molybdenum and tungsten can be synthesized from their respective hexacarbonyl complexes.
M(CO)6 + 4 MeCN + 2 NOBF4 → [M(NO)2(MeCN)4](BF4)2
For the synthesis of some acetonitrile complexes, the nitrile serves as a reductant. This method is illustrated by the conversion of molybdenum pentachloride to the molybdenum(IV) complex:
2 MoCl5 + 5 CH3CN → 2 MoCl4(CH3CN)2 + ClCH2CN + HCl
Reactions
Transition metal nitrile complexes are usually employed because the nitrile ligand is labile and relatively chemically inert. Cationic nitrile complexes are, however, susceptible to nucleophilic attack at carbon. Consequently, some nitrile complexes catalyze the hydrolysis of nitriles to give amides.
Fe- and Co-nitrile complexes are intermediates in nitrile hydratase enzymes. N-coordination activates the sp-hybridized carbon center toward attack by nucleophiles, including water. Thus coordination of the nitrile to a cationic metal center is the basis for the catalytic hydration:
M-NCR + H2O → M-O=C(NH2)R
M-O=C(NH2)R + NCR → O=C(NH2)R + M-NCR
Nitrile ligands in electron-rich complexes are susceptible to oxidation, e.g. by iodosylbenzene. Nitriles undergo coupling with alkenes, also involving electron-rich complexes.
Examples
[M(NCMe)6]n+
Hexakis(acetonitrile)vanadium(II) tetrachlorozincate ([V(MeCN)6](ZnCl4)), green
Hexakis(acetonitrile)chromium(II) bis(tetraphenylborate) ([Cr(MeCN)6](B(C6H5)4)2, green
Hexakis(acetonitrile)chromium(III) tetrafluoroborate ([Cr(MeCN)6](BF4)3), white
Hexakis(acetonitrile)iron(II) bis(tetrakis(pentafluorophenyl)borate) ([Fe(MeCN)6](B(C6F5)4)2, orange
Hexakis(acetonitrile)cobalt(II) bis(tetrakis(pentafluorophenyl)borate) ([Co(MeCN)6](B(C6F5)4)2, purple
Hexakis(acetonitrile)nickel(II) tetrafluoroborate ([Ni(MeCN)6](BF4)2), blue
Hexakis(acetonitrile)copper(II) bis(tetrakis(pentafluorophenyl)borate) ([Cu(MeCN)6](B(C6F5)4)2, pale blue-green solid
Hexakis(acetonitrile)ruthenium(II) tetrafluoroborate ([Ru(MeCN)6](BF4)2), white, dRu-N = 202 pm.
Hexakis(acetonitrile)rhodium(III) tetrafluoroborate ([Rh(MeCN)6](BF4)3), a yellow solid.
Hexakis(acetonitrile)rhenium(II) tetrafluoroborate ([Re(MeCN)6](BF4)2), a yellow solid.
Hexakis(acetonitrile)rhenium(III) tetrafluoroborate ([Re(MeCN)6](BF4)3), a brown solid.
[M(NCMe)4]n+
[Cr(MeCN)4](BF4)2, blue
[Cu(MeCN)4]PF6, colorless
[Pd(MeCN)4](BF4)2, yellow
[M(NCMe)4 or 5]2n+
[Mo2(MeCN)8/10](BF4)4 blue d(Mo-Mo) = 218, d(Mo-N)axial = 260, d(Mo-N)equat = 214 pm
[Tc2(MeCN)10](BF4)4
[Re2(MeCN)10][B(C6H3(CF3)2)4]2, blue; d(Re-Re) = 226, d(Re-N)axial = 240, d(Re-N)equat = 205 pm
[Rh2(MeCN)10](BF4)4, orange; d(Rh-Rh) = 261, d(Re-N)axial = 219, d(Re-N)equat = 198 pm
[M(NCMe)2]+
[Ag(MeCN)2]B(C6H3(CF3)2)4
[Au(MeCN)2]SbF6
Mixed ligand examples
Bis(benzonitrile)palladium dichloride (PdCl2(PhCN)2), an orange solid that serves as a source of "PdCl2"
Tricarbonyltris(propionitrile)molybdenum(0) (Mo(CO)3(C2H5CN)3), a source of "Mo(CO)3". Related Cr and W complexes are known.
Complexes of η2-nitrile ligands
In some of its complexes, nitriles function as η2-ligands. This bonding mode is more common for complexes of low-valence metals, such as Ni(0). Complexes of η2-nitriles are expected to form as transient intermediates in certain metal-catalyzed reactions of nitriles, such as the Hoesch reaction and the hydrogenation of nitriles.
In some cases, η2-nitrile ligands are intermediates that preceded oxidative addition.
See also
Cyanometalate – coordination compounds containing cyanide ligands (coordinating via C).
References
Coordination complexes
Hexafluorophosphates
Tetrafluoroborates | Transition metal nitrile complexes | [
"Chemistry"
] | 1,725 | [
"Coordination chemistry",
"Coordination complexes"
] |
56,011,748 | https://en.wikipedia.org/wiki/Lake%20Chichoj | Lake Chichoj is located near the city of San Cristóbal Verapaz, in the department of Alta Verapaz, in Guatemala. It is long, wide, with an area of , an average water volume of , and a maximum depth of .
Location and catchment
Lake Chichoj is located in the municipality of San Cristóbal Verapaz, department of Alta Verapaz, in Guatemala. The catchment of the lake has been designated as a Protected Area, in an attempt to protect the lake from environmental degradation. Water routing through the catchment is made complex by karstic groundwater flow. It is estimated that the catchment of the lake drains . The lake in turn drains superficially to the Cahabón River, which flows to the Atlantic Ocean via Lake Izabal.
Legends surrounding the catastrophic formation of the lake
According to a few oral traditions from San Cristóbal Verapaz, the lake would have formed catastrophically by ground collapse during an earthquake in the early 16th Century, soon after the arrival of the Dominican friars (around 1525 CE), engulfing a church and its surrounding Maya settlement. The cataclysm has been explained as a divine punishment imparted to the inhabitants of San Cristóbal Caccoh, following the expulsion of a Dominican friar and the refusal of the inhabitants to submit to the Christian faith. This tradition is echoed in a book published in 1648 CE by the Irish Dominican friar Thomas Gage: "The English-American, or a New Survey of the West Indies". Gage's book contains many exaggerations, casting doubt on the validity of his testimony. However, an independent report by Spanish Dominicans also mentions the sudden formation of a lake near San Cristóbal, by cave collapse, during an earthquake in 1590 CE. The parochial church suffered little damage during that earthquake. The most credible report therefore suggests that the purported event took place far enough from the city for the church not to be damaged, but was still large enough to be deemed worth reporting. The event did not affect the western part of the lake, which had already been in existence since at least the 8th Century. Montero de Miranda wrote about the lake by 1575 (UNAM, 1982: 223–248) that it was a "very large, long and very deep lake".
Hydrology
Lake surface temperatures fluctuate between in winter, and in summer. From 1979 to 2011 annual rainfall was at the lake, and at Cerro La Laguna, the highest part of the catchment. The residence time of water in the lake is therefore 35 ± 6 days, assuming homogeneous water mixing. In actuality the lake is strongly stratified and dimictic, being composed of a highly turbid and poorly mineralized epilimnion over a 5 °C cooler, highly mineralized hypolimnion. Most of the water therefore circulates only within the epilimnion, with an average residence time of 18 ± 3 days, assuming a constant mean thermocline depth of . The lake usually homogenizes in January or February, sometimes very rapidly.
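The residence times quoted above follow from dividing the stored water volume by the throughflow rate. A minimal sketch of the calculation is shown below; the numeric inputs are hypothetical placeholders, since the actual volume and discharge figures are elided in the text.

```python
# Mean residence time of water in a lake: stored volume / throughflow.
# NOTE: the numeric inputs below are hypothetical placeholders, not
# measured values for Lake Chichoj (the source elides the real figures).

def residence_time_days(volume_m3: float, outflow_m3_per_s: float) -> float:
    """Mean residence time in days, assuming homogeneous mixing."""
    seconds_per_day = 86_400
    return volume_m3 / (outflow_m3_per_s * seconds_per_day)

# Hypothetical example: 3.0e6 m^3 of stored water, 1.0 m^3/s throughflow
print(round(residence_time_days(3.0e6, 1.0), 1))  # -> 34.7 (days)
```

For a stratified lake, the same formula applied to the epilimnion alone (the smaller volume above the thermocline) gives a correspondingly shorter residence time.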
The lake is fed by several streams, most noticeably by the Paná River in the west, which forms at the junction of the Chijuljá and Requenzal creeks. Other streams (Los Lavaderos, Chicojgual, Cerro Caj Coj) contribute very little to the lake water budget. Some springs feed the lake either directly near the shoreline (for example near Panconsul cave), or through the extensive marshlands that surround the lake. Lake Chichoj drains to Río El Desagüe, a tributary of the Cahabón River, which it joins after sinking into a cave for several hundred meters. Some of the sewage of San Cristóbal Verapaz is rerouted away from the lake and flows in a pipe through the marshlands before being emptied into Río El Desagüe, downstream of Lake Chichoj.
The lake results from the coalescence of at least three dolines, likely resulting from the dissolution of gypsum at depth. The occurrence of gypsum is evidenced by a cluster of sulfate-bearing springs that dot the active trace of the Chixoy-Polochic fault, 2 km south of the lake in the Chixoy River valley, where they form large travertine fans. Discharge at these springs is much larger than what their upslope catchments can provide. The catchment of Lake Chichoj is the closest catchment capable of providing water to these springs, and it lies above them.
Ecological succession, eutrophication and shrinking of lake Chichoj
Eutrophication can occur naturally, during the late stage of the natural ecological succession that accompanies the infilling of a lake. It then develops slowly, over thousands of years. It develops within only a few decades when triggered by cropland fertilization, industrial contamination, or urban development. Deforestation, land fertilization, urbanization and industrialization in the catchment of Lake Chichoj are all thought to have contributed in one way or another to the massive contamination and eutrophication of the lake that has taken place since the 1950s. Degradation of its ecosystem motivated environmental studies starting in the 1970s. These aimed at documenting the eutrophication process and at identifying its causes. Most concluded that the main source of eutrophication is the absence of treatment of the city's waste waters, rather than agriculture.
The most visible consequence of eutrophication is the massive development of large floating rafts of the water hyacinth (Eichhornia crassipes), which is untiringly harvested to prevent complete invasion of the lake open waters. The enormous amount of hyacinth removed from the lake is then composted to produce a horticultural fertilizer.
Many local witnesses have reported that the extensive marshes that surround the lake were open waters in the 1950s. The presence of a well marked shoreline 1.0 ± 0.1 to 1.4 ± 0.1 m above the average current lake level and surrounding the marshes supports these testimonies. Because eutrophication leads to rapid infilling of lakes by plant debris, and conversion of open waters to marshland, it has been hypothesized that eutrophication is responsible for the reduction in the lake surface.
Due to its location halfway between the Atlantic and Pacific Oceans, the lake ecosystem is normally influenced by the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation. Studies are under way to determine the sensitivity of the lake hydrology to these oscillations over the past millennium.
Chromium contamination
Contamination of the lake environment by chromium started in the 1950s, and has increased dramatically until at least 2005, reaching 20 times natural background levels. It originates in industrial activities that involve leather tanning in the shoe factory of Calzado Cobán. The chromium does not seem to accumulate along the food chain, as it is not found in fishes and crayfishes. However, it accumulates massively in water hyacinth roots and, from there, is transferred to lake sediments through shedding of roots to the lake floor. Most of the water hyacinth biomass is actually extracted from the lake to fight eutrophication and turned into horticultural fertilizer.
Forest cover reduction
Only 20.35% of the catchment of the lake is covered by forest.
Demographic growth and lack of employment are some of the factors that have promoted conversion of forested areas into subsistence agriculture, especially following the coffee price crisis.
The loss of forest cover is particularly critical on steep terrains, which are most susceptible to overland flow and erosion. Soil loss results in siltation farther downslope, as well as in streams and in the lake. It also decreases water recharge of the deep aquifers.
Seismic hazard
Lake Chichoj is located within 2 km of the Chixoy-Polochic fault, a major fault of the North America-Caribbean plate boundary, which constitutes the closest and largest seismic hazard for San Cristóbal Verapaz, but it lies within a broader array of large to intermediate seismogenic faults. The latest noticeable earthquakes include a M 4.1 quake in 2006 on the Polochic fault and a M 4.8 quake in June 2009 on a secondary fault, NW of the lake. The sediments of the lake host a rich record of disruptions produced by past earthquakes, most notably the M 7.5 earthquake of February 4, 1976, on the Motagua fault, as well as a series of older M 7 earthquakes along the Polochic fault between 850 CE and 1450 CE. The lake setting compounds the hazard posed by earthquake ground shaking. The low-lying marshlands that surround the lake are increasingly filled and urbanized. They are susceptible to seismic wave amplification, seismic wave refraction, and soil liquefaction during earthquakes, and also to flooding if the lake spills over during an earthquake. Large waves can be produced during earthquakes, either as a result of landslides affecting the lake inner slopes, or by seismic resonance (seiche waves).
Ground collapse
Various geologic data suggest that Lake Chichoj stretches above a body of gypsum well exposed in outcrops farther west. There, gypsum dissolution is responsible for repeated mountain-flank collapses in the valley of Los Chorros. The lake occupies at least three coalescing dolines likely formed by dissolution of gypsum at depth. The dolines are probably only a few tens of thousands of years old, and the marshlands that surround the lake are likely covering similar, sediment-filled dolines. They are therefore susceptible to a resumption of ground subsidence if gypsum keeps dissolving at depth. A new phase of subsidence could be slow and continuous, or pulsed and possibly fast, even instantaneous. It is even possible that the marshes are actively subsiding, since no monitoring of subsidence has ever been undertaken. Subsidence might also be occurring under the combined effect of slow sediment compaction and oxidation/decomposition of the organic matter trapped in the sediments. The accommodation space created by the subsidence would be filled by mineral and organic sediments over the marshlands.
Floods
The wetlands that surround the lake spread over an area of 0.63 km2 and are enclosed within an ancient shoreline. The origin of this larger open-water lake remains unclear. If the lake level has remained stable, then the reduction in lake size is solely due to the conversion of these open waters into wetlands. Alternatively, lake shrinking might reflect a slight lowering of the average lake level, possibly due to more efficient drainage at the lake outlet. In any case, it cannot be excluded that the lake level will revert in the future to its ancient stand, flooding areas that have now been filled and urbanized. Besides, the outlet drains into a cave that might become partially or completely obstructed by debris following large storms. Water ponding upstream of the cave could raise the lake level by 4.0 ± 0.3 m before surface overflow is achieved near the cave.
References
Chichoj
Eutrophication
Geography of the Alta Verapaz Department | Lake Chichoj | [
"Chemistry",
"Environmental_science"
] | 2,314 | [
"Eutrophication",
"Environmental chemistry",
"Water pollution"
] |
56,011,855 | https://en.wikipedia.org/wiki/Multispecies%20coalescent%20process | Multispecies Coalescent Process is a stochastic process model that describes the genealogical relationships for a sample of DNA sequences taken from several species. It represents the application of coalescent theory to the case of multiple species. The multispecies coalescent results in cases where the relationships among species for an individual gene (the gene tree) can differ from the broader history of the species (the species tree). It has important implications for the theory and practice of phylogenetics and for understanding genome evolution.
A gene tree is a binary graph that describes the evolutionary relationships between a sample of sequences for a non-recombining locus. A species tree describes the evolutionary relationships between a set of species, assuming tree-like evolution. However, several processes can lead to discordance between gene trees and species trees. The Multispecies Coalescent model provides a framework for inferring species phylogenies while accounting for ancestral polymorphism and gene tree-species tree conflict. The process is also called the Censored Coalescent.
Besides species tree estimation, the multispecies coalescent model also provides a framework for using genomic data to address a number of biological problems, such as estimation of species divergence times, population sizes of ancestral species, species delimitation, and inference of cross-species gene flow.
Gene tree-species tree congruence
If we consider a rooted three-taxon tree, the simplest non-trivial phylogenetic tree, there are three different tree topologies but four possible gene trees. The existence of four distinct gene trees despite the smaller number of topologies reflects the fact that there are topologically identical gene trees that differ in their coalescent times. In the type 1 tree the alleles in species A and B coalesce after the speciation event that separated the A-B lineage from the C lineage. In the type 2 tree the alleles in species A and B coalesce before the speciation event that separated the A-B lineage from the C lineage (in other words, the type 2 tree is a deep coalescence tree). The type 1 and type 2 gene trees are both congruent with the species tree. The other two gene trees differ from the species tree; the two discordant gene trees are also deep coalescence trees.
The distribution of times to coalescence is actually continuous for all of these trees. In other words, the exact coalescent time for any two loci with the same gene tree may differ. However, it is convenient to break up the trees based on whether the coalescence occurred before or after the earliest speciation event.
Given the internal branch length in coalescent units, it is straightforward to calculate the probability of each gene tree. For diploid organisms the branch length in coalescent units is the number of generations between the speciation events divided by twice the effective population size. Since all three of the deep coalescence trees are equiprobable and two of those deep coalescence trees are discordant, it is easy to see that the probability that a rooted three-taxon gene tree will be congruent with the species tree is 1 − (2/3)e^(−T).
Here T, the branch length in coalescent units, can also be written in an alternative form: the number of generations (t) divided by twice the effective population size (Ne), i.e. T = t/(2Ne). Pamilo and Nei also derived the probability of congruence for rooted trees of four and five taxa, as well as a general upper bound on the probability of congruence for larger trees. Rosenberg followed up with equations for the complete set of topologies (although the large number of distinct phylogenetic trees that becomes possible as the number of taxa increases makes these equations impractical unless the number of taxa is very limited).
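The congruence probability just described can be evaluated numerically. The sketch below (a minimal illustration with arbitrary example values for t and Ne) computes the branch length in coalescent units and the resulting probability that a rooted three-taxon gene tree matches the species tree, using the fact that deep coalescence occurs with probability exp(−T) and two of the three equiprobable deep-coalescence topologies are discordant.

```python
import math

def branch_length_coalescent_units(generations: float, effective_pop_size: float) -> float:
    """T = t / (2 * Ne) for a diploid population."""
    return generations / (2.0 * effective_pop_size)

def p_congruent(T: float) -> float:
    """Probability that a rooted three-taxon gene tree matches the species
    tree: deep coalescence occurs with probability exp(-T), and two of the
    three equiprobable deep-coalescence topologies are discordant."""
    return 1.0 - (2.0 / 3.0) * math.exp(-T)

# Arbitrary example values: t = 100,000 generations, Ne = 10,000
T = branch_length_coalescent_units(100_000, 10_000)
print(T, round(p_congruent(T), 4))  # -> 5.0 0.9955
```

Note how quickly congruence approaches certainty as the internal branch lengthens: for short branches (T near 0) the congruence probability falls toward its minimum of 1/3, the random-topology limit.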
The phenomenon of hemiplasy is a natural extension of the basic idea underlying gene tree-species tree discordance. If we consider the distribution of some character that disagrees with the species tree it might reflect homoplasy (multiple independent origins of the character or a single origin followed by multiple losses) or it could reflect hemiplasy (a single origin of the trait that is associated with a gene tree that disagrees with the species tree).
The phenomenon called incomplete lineage sorting (often abbreviated ILS in the scientific literature) is closely linked to hemiplasy. If we examine the illustration of hemiplasy using a rooted four-taxon tree (see image to the right), the lineage between the common ancestor of taxa A, B, and C and the common ancestor of taxa A and B must be polymorphic for the allele with the derived trait (e.g., a transposable element insertion) and the allele with the ancestral trait. The concept of incomplete lineage sorting ultimately reflects the persistence of polymorphisms across one or more speciation events.
Mathematical description of the multispecies coalescent
The probability density of the gene trees under the multispecies coalescent model is discussed along with its use for parameter estimation using multi-locus sequence data.
Assumptions
In the basic multispecies coalescent model, the species phylogeny is assumed to be known. Complete isolation after species divergence, with no migration, hybridization, or introgression, is also assumed. We assume no recombination, so that all the sites within the locus share the same gene tree (topology and coalescent times). However, the basic model can be extended in different ways to accommodate migration or introgression, population size changes, and recombination.
Data and model parameters
The model and implementation of this method can be applied to any species tree. As an example, the species tree of the great apes: humans (H), chimpanzees (C), gorillas (G) and orangutans (O) is considered. The topology of the species tree, (((HC)G)O), is assumed known and fixed in the analysis (Figure 1). Let X = {Xi} be the entire data set, where Xi represents the sequence alignment at locus i, with i = 1, …, L for a total of L loci.
The population size of a current species is considered only if more than one individual is sampled from that species at some loci.
The parameters in the model for the example of Figure 1 include the three divergence times τHC, τHCG and τHCGO and population size parameters θH for humans; θC for chimpanzees; and θHC, θHCG and θHCGO for the three ancestral species.
The divergence times (τ's) are measured by the expected number of mutations per site from the ancestral node in the species tree to the present time (Figure 1 of Rannala and Yang, 2003).
Therefore, the parameters are Θ = (τHC, τHCG, τHCGO, θH, θC, θHC, θHCG, θHCGO).
Distribution of gene genealogies
The joint distribution of the gene trees given the parameters is derived directly in this section. Two sequences from different species can coalesce only in a population that is ancestral to the two species. For example, sequences H and G can coalesce in populations HCG or HCGO, but not in populations H or HC. The coalescent processes in different populations are different.
For each population, the genealogy is traced backward in time until the end of the population at time τ, and the numbers of lineages entering the population and leaving it are recorded (the counts for population H are given in Table 1). This process is called a censored coalescent process because the coalescent process for one population may be terminated before all lineages that entered the population have coalesced. At the end of the process, the genealogy within the population may consist of multiple disconnected subtrees or lineages.
With one time unit defined as the time taken to accumulate one mutation per site, any two lineages coalesce at the rate 2/θ. The waiting time until the next coalescent event, which reduces the number of lineages from j to j − 1, thus has exponential density f(x) = (j(j − 1)/θ) e^(−j(j − 1)x/θ).
If m > 1 lineages leave the population, the probability that no coalescent event occurs between the last one and the end of the population at time τ — i.e., during the remaining time interval Δt — must also be included. This probability is e^(−m(m − 1)Δt/θ), and is 1 if m = 1.
(Note: One should recall that the probability of no events over a time interval t for a Poisson process with rate λ is e^(−λt). Here the coalescent rate when there are j lineages is λ = j(j − 1)/θ.)
In addition, to derive the probability of a particular gene tree topology in the population, if a coalescent event occurs in a sample of j lineages, the probability that a particular pair of lineages coalesce is 2/(j(j − 1)).
Multiplying these probabilities together gives the joint probability distribution of the gene tree topology in the population and its coalescent times.
The probability of the gene tree and coalescent times for a locus is the product of such probabilities across all the populations. Therefore, for the gene genealogy of Figure 1, the density is the product of the censored-coalescent terms for the populations H, C, G, O, HC, HCG, and HCGO through which its lineages pass.
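The censored-coalescent ingredients above — the exponential waiting-time density, the no-coalescence probability, and the choice of which pair coalesces — can be sketched as follows (Python; the rate j(j − 1)/θ follows the mutation-scaled time units of this section, and the function names are ours):

```python
import math

def coalescent_rate(j, theta):
    # j lineages form j*(j-1)/2 pairs, each coalescing at rate 2/theta,
    # for a total coalescent rate of j*(j-1)/theta.
    return j * (j - 1) / theta

def waiting_time_density(x, j, theta):
    # Exponential density of the waiting time until j lineages drop to j-1.
    rate = coalescent_rate(j, theta)
    return rate * math.exp(-rate * x)

def prob_no_coalescence(m, theta, remaining_time):
    # Probability that m lineages fail to coalesce before the population
    # ends; it equals 1 when only a single lineage remains.
    if m <= 1:
        return 1.0
    return math.exp(-coalescent_rate(m, theta) * remaining_time)

def prob_particular_pair(j):
    # Given that a coalescence occurs, each of the C(j, 2) pairs is
    # equally likely to be the one that coalesces.
    return 2.0 / (j * (j - 1))
```

Multiplying such factors across events and populations, as described above, yields the density of a gene genealogy given the species tree and parameters.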
Likelihood-based inference
The gene genealogy at each locus is represented by its tree topology and coalescent times, denoted together as Gi. Given the species tree and the parameters on it, the probability distribution of the gene trees G = {Gi} is specified by the coalescent process as
f(G | Θ) = ∏i f(Gi | Θ),
where f(Gi | Θ) is the probability density for the gene tree at locus i, and the product is taken because we assume that the gene trees are independent given the parameters.
The probability of the data given the gene tree and coalescent times (and thus branch lengths) at the locus, f(Xi | Gi), is Felsenstein's phylogenetic likelihood. Due to the assumption of independent evolution across the loci, f(X | G) = ∏i f(Xi | Gi).
The likelihood function, or the probability of the sequence data given the parameters, is then an average over the unobserved gene trees,
f(X | Θ) = ∏i ΣGi ∫ f(Xi | Gi) f(Gi | Θ) dti,
where the summation runs over all possible gene tree topologies Gi and, for each possible topology at each locus, the integration is over the coalescent times ti. This is in general intractable except for very small species trees.
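The idea of averaging the phylogenetic likelihood over gene trees drawn from the coalescent prior can be illustrated with a toy Monte Carlo estimator (a sketch only — real inference uses the MCMC methods described below; here the "gene tree" is reduced to a single coalescent time with an Exp(1) prior, and the "phylogenetic likelihood" is an arbitrary function of it):

```python
import math
import random

def mc_likelihood(lik_given_tree, sample_tree, n=20000, seed=1):
    # L(theta) = E[ P(D | G) ] with G ~ p(G | theta), estimated by
    # simple Monte Carlo: sample gene trees from the coalescent prior
    # and average their phylogenetic likelihoods.
    rng = random.Random(seed)
    return sum(lik_given_tree(sample_tree(rng)) for _ in range(n)) / n

# Toy model: coalescent time t ~ Exp(1), likelihood P(D | t) = exp(-t).
# The exact average is E[exp(-t)] = 1/2.
estimate = mc_likelihood(lambda t: math.exp(-t),
                         lambda rng: rng.expovariate(1.0))
```

The estimator converges to the true average as n grows, but its variance is exactly why naive averaging is impractical for real multilocus data.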
In Bayesian inference, we assign a prior on the parameters, f(Θ), and then the posterior is given as
f(Θ | X) ∝ f(Θ) f(X | Θ),
where again computing f(X | Θ) requires summation over all possible gene tree topologies and integration over the coalescent times. In practice this integration over the gene trees is achieved through a Markov chain Monte Carlo algorithm, which samples from the joint conditional distribution of the parameters and the gene trees,
f(Θ, G | X) ∝ f(Θ) f(G | Θ) f(X | G).
The above assumes that the species tree is fixed. In species-tree estimation, the species tree (S) changes as well, so that the joint conditional distribution (from which the MCMC samples) is
f(S, Θ, G | X) ∝ f(S) f(Θ | S) f(G | S, Θ) f(X | G),
where f(S) is the prior on species trees.
As a major departure from two-step summary methods, full-likelihood methods average over the gene trees. This means that they make use of information in the branch lengths (coalescent times) on the gene trees and accommodate their uncertainties (due to limited sequence length in the alignments) at the same time. It also explains why full-likelihood methods are computationally much more demanding than two-step summary methods.
Markov chain Monte Carlo under the multispecies coalescent
The integration or summation over the gene trees in the definition of the likelihood function above is virtually impossible to compute except for very small species trees with only two or three species. Full-likelihood or full-data methods, based on calculation of the likelihood function on sequence alignments, have thus mostly relied on Markov chain Monte Carlo algorithms. MCMC algorithms under the multispecies coalescent model are similar to those used in Bayesian phylogenetics but are distinctly more complex, mainly due to the fact that the gene trees at multiple loci and the species tree have to be compatible: sequence divergence has to be older than species divergence. As a result, changing the species tree while the gene trees are fixed (or changing a gene tree while the species tree is fixed) leads to inefficient algorithms with poor mixing properties. Considerable effort has been invested in designing smart algorithms that change the species tree and the gene trees in a coordinated manner, such as the rubber-band algorithm for changing species divergence times and the coordinated NNI, SPR, and NodeSlider moves.
Consider for example the case of two species (A and B) and two sequences at each locus, with a sequence divergence time ti at locus i. We have ti > τ for all i, where τ is the species divergence time. When we want to change τ within the constraint of the current ti, we may have very little room for change, as τ may be virtually identical to the smallest of the ti. The rubber-band algorithm changes τ without consideration of the ti, and then modifies the ti deterministically, in the same way that marks on a rubber band move when the rubber band is held at a fixed point and pulled towards one end. In general, the rubber-band move guarantees that the ages of nodes in the gene trees are modified so that they remain compatible with the modified species divergence time.
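The effect of the rubber-band move on node ages can be sketched as a deterministic remapping (a simplified one-dimensional version of the idea; the published algorithm applies such a remapping per affected population):

```python
def rubber_band(ages, tau_old, tau_new, upper):
    # Remap gene-tree node ages when a species divergence time moves
    # from tau_old to tau_new. The end points 0 and `upper` stay fixed,
    # like the held ends of a rubber band, and ages in between slide
    # proportionally, so their relative order (and hence compatibility
    # with the new divergence time) is preserved.
    out = []
    for t in ages:
        if t <= tau_old:
            out.append(t * tau_new / tau_old)
        else:
            out.append(upper - (upper - t) * (upper - tau_new) / (upper - tau_old))
    return out
```

An age exactly at tau_old maps to tau_new, ages at the fixed ends do not move, and the ordering of all ages is unchanged.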
Full likelihood methods tend to reach their limit when the data consist of a few hundred loci, even though more than 10,000 loci have been analyzed in a few published studies.
Extensions
The basic multispecies coalescent model can be extended in a number of ways to accommodate major factors of the biological process of reproduction and drift.
For example, incorporating continuous-time migration leads to the MSC+M (for MSC with migration) model, also known as the isolation-with-migration or IM models. Incorporating episodic hybridization/introgression leads to the MSC with introgression (MSci) or multispecies-network-coalescent (MSNC) model.
Impact on phylogenetic estimation
The multispecies coalescent has profound implications for the theory and practice of molecular phylogenetics. Since individual gene trees can differ from the species tree, one cannot estimate the tree for a single locus and assume that the gene tree corresponds to the species tree. In fact, one can be virtually certain that any individual gene tree will differ from the species tree for at least some relationships when any reasonable number of taxa are considered. However, gene tree-species tree discordance has an impact on the theory and practice of species tree estimation that goes beyond the simple observation that one cannot use a single gene tree to estimate the species tree, because there is a part of parameter space where the most frequent gene tree is incongruent with the species tree. This part of parameter space is called the anomaly zone, and any discordant gene trees that are expected to arise more often than the gene tree that matches the species tree are called anomalous gene trees.
The existence of the anomaly zone implies that one cannot simply estimate a large number of gene trees and assume the gene tree recovered the largest number of times is the species tree. Of course, estimating the species tree by a "democratic vote" of gene trees would only work for a limited number of taxa outside of the anomaly zone given the extremely large number of phylogenetic trees that are possible. However, the existence of the anomalous gene trees also means that simple methods for combining gene trees, like the majority rule extended ("greedy") consensus method or the matrix representation with parsimony (MRP) supertree approach, will not be consistent estimators of the species tree (i.e., they will be misleading). Simply generating the majority-rule consensus tree for the gene trees, where groups that are present in at least 50% of gene trees are retained, will not be misleading as long as a sufficient number of gene trees are used. However, this ability of the majority-rule consensus tree for a set of gene trees to avoid incorrect clades comes at the cost of having unresolved groups.
Simulations have shown that there are parts of species tree parameter space where maximum likelihood estimates of phylogeny are incorrect trees with increasing probability as the amount of data analyzed increases. This is important because the "concatenation approach," where multiple sequence alignments from different loci are concatenated to form a single large supermatrix alignment that is then used for maximum likelihood (or Bayesian MCMC) analysis, is both easy to implement and commonly used in empirical studies. This represents a case of model misspecification because the concatenation approach implicitly assumes that all gene trees have the same topology. Indeed, it has now been proven that analyses of data generated under the multispecies coalescent using maximum likelihood analysis of a concatenated data are not guaranteed to converge on the true species tree as the number of loci used for the analysis increases (i.e., maximum likelihood concatenation is statistically inconsistent).
Software for inference under the multispecies coalescent
There are two basic approaches for phylogenetic estimation in the multispecies coalescent framework: 1) full-likelihood or full-data methods which operate on multilocus sequence alignments directly, including both maximum likelihood and Bayesian methods, and 2) summary methods, which use a summary of the original sequence data, including the two-step methods that use estimated gene trees as summary input and SVDQuartets, which use site pattern counts pooled over loci as summary input.
References
Statistical genetics
Statistical inference
Population genetics
Phylogenetics | Multispecies coalescent process | [
"Biology"
] | 3,450 | [
"Bioinformatics",
"Phylogenetics",
"Taxonomy (biology)"
] |
56,012,467 | https://en.wikipedia.org/wiki/Pour%20point%20depressant | Pour point depressants are used to allow the use of petroleum based mineral oils at lower temperatures. The lowest temperature at which a fuel or oil will pour is called a pour point. Wax crystals, which form at lower temperatures, may interfere with lubrication of mechanical equipment. High-quality pour point depressants can lower a pour point of an oil additive by as much as 40°C.
Methods
Pour point depressants do not lower the temperature at which wax crystals begin to form, called the cloud point, or the amount of wax that is formed—pour point depressants work by altering the crystal shape and size, which inhibits lateral crystal growth. There are two known methods by which this may be achieved: surface adsorption and co-crystallization.
Types
Any reduction in an oil's pour point depends on both the composition and properties of the oil, as well as the type of pour point depressant used. Other factors are the substance's relative molecular weight, its chemical composition, and the substance's concentration in the oil. If the concentration of pour point depressant is too high, there may be a visible effect on viscosity at higher temperatures.
Pour point depressants are only effective on refined oils. Non-refined oils contain polyaromatic hydrocarbons and resins which act as antagonists against synthetic pour point depressants. Pour point depressants are also ineffective for engine oils with a viscosity above SAE 30. Generally they are most effective on thinner oils like SAE 10, SAE 20 or SAE 30 grade oils.
Alkylaromatics and aliphatic polymers are two types of pour point depressants that are commercially available. Most commercially available pour point depressants are organic polymers, but nonpolymeric substances such as phenyltristearyloxysilane and pentaerythritol tetrastearate may also be effective.
Winter 1980-1981
In 1981 there was a problem with lubricating oil pumpability. The same thing happened the following winter, along with reports that oil would not flow out of containers. The issue seemed to be caused by olefin copolymers which caused the oil to gel in cold temperatures.
References
Petrochemicals | Pour point depressant | [
"Chemistry"
] | 465 | [
"Petrochemicals",
"Products of chemical industry"
] |
56,012,715 | https://en.wikipedia.org/wiki/Nonblocker | In graph theory, a nonblocker is a subset of vertices in an undirected graph, all of which are adjacent to vertices outside of the subset. Equivalently, a nonblocker is the complement of a dominating set.
The computational problem of finding the largest nonblocker in a graph was formulated by , who observed that it belongs to MaxSNP.
Although computing a dominating set is not fixed-parameter tractable under standard assumptions, the complementary problem of finding a nonblocker of a given size is fixed-parameter tractable.
In graphs with no isolated vertices, every maximal nonblocker (one to which no more vertices can be added) is itself a dominating set.
Kernelization
One way to construct a fixed-parameter tractable algorithm for the nonblocker problem is to use kernelization, an algorithmic design principle in which a polynomial-time algorithm is used to reduce a larger problem instance to an equivalent instance whose size is bounded by a function of the parameter.
For the nonblocker problem, an input to the problem consists of a graph G and a parameter k, and the goal is to determine whether G has a nonblocker with k or more vertices.
This problem has an easy kernelization that reduces it to an equivalent problem with at most 2k vertices. First, remove all isolated vertices from G, as they cannot be part of any nonblocker. Once this has been done, the remaining graph must have a nonblocker that includes at least half of its vertices; for instance, if one 2-colors any spanning tree of the graph, each color class is a nonblocker and one of the two color classes includes at least half the vertices. Therefore, if the graph with isolated vertices removed still has 2k or more vertices, the problem can be solved immediately. Otherwise, the remaining graph is a kernel with at most 2k vertices.
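The spanning-tree argument above is constructive. A sketch in Python (`adj` maps each vertex to its neighbor list; the graph is assumed to have no isolated vertices, and the function name is ours):

```python
from collections import deque

def half_nonblocker(adj):
    # 2-color a spanning forest by breadth-first search. Every vertex
    # then has a tree neighbor of the opposite color, so each color
    # class is a nonblocker; return the larger class, which contains
    # at least half of the vertices.
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
    classes = [{v for v in adj if color[v] == c} for c in (0, 1)]
    return max(classes, key=len)
```

On the four-vertex path 0–1–2–3, for instance, this returns one of the color classes {0, 2} or {1, 3}, each of which is a nonblocker of half the vertices.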
Dehne et al. improved this to a kernel of size at most 5k/3. Their method involves merging pairs of neighbors of degree-one vertices until all such vertices have a single neighbor, and removing all but one of the degree-one vertices, leaving an equivalent instance with only one degree-one vertex. Then, they show that (except for small values of k, which can be handled separately) this instance must either be smaller than the kernel size bound or contain a k-vertex nonblocker.
Once a small kernel has been obtained, an instance of the nonblocker problem may be solved in fixed-parameter tractable time by applying a brute-force search algorithm to the kernel. Applying faster (but still exponential) time bounds leads to a time bound for the nonblocker problem of the form O(c^k + n) for a constant c. Even faster algorithms are possible for certain special classes of graphs.
See also
Dominating set - the complement of a nonblocker.
References
Graph theory objects
Computational problems in graph theory | Nonblocker | [
"Mathematics"
] | 566 | [
"Computational problems in graph theory",
"Graph theory objects",
"Computational mathematics",
"Graph theory",
"Computational problems",
"Mathematical relations",
"Mathematical problems"
] |
56,012,962 | https://en.wikipedia.org/wiki/Graph%20polynomial | In mathematics, a graph polynomial is a graph invariant whose value is a polynomial. Invariants of this type are studied in algebraic graph theory.
Important graph polynomials include:
The characteristic polynomial, based on the graph's adjacency matrix.
The chromatic polynomial, a polynomial whose values at integer arguments give the number of colorings of the graph with that many colors.
The dichromatic polynomial, a 2-variable generalization of the chromatic polynomial
The flow polynomial, a polynomial whose values at integer arguments give the number of nowhere-zero flows with integer flow amounts modulo the argument.
The (inverse of the) Ihara zeta function, defined as a product of binomial terms corresponding to certain closed walks in a graph.
The Martin polynomial, used by Pierre Martin to study Euler tours
The matching polynomials, several different polynomials defined as the generating function of the matchings of a graph.
The reliability polynomial, a polynomial that describes the probability of remaining connected after independent edge failures
The Tutte polynomial, a polynomial in two variables that can be defined (after a small change of variables) as the generating function of the numbers of connected components of induced subgraphs of the given graph, parameterized by the number of vertices in the subgraph.
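Several of these polynomials satisfy a deletion–contraction recurrence. As an illustrative sketch (our own toy implementation, not from any reference), the chromatic polynomial can be evaluated at an integer k via P(G, k) = P(G − e, k) − P(G / e, k):

```python
def chromatic(edges, n, k):
    # Evaluate the chromatic polynomial of a (multi)graph with vertices
    # 0..n-1 at the integer k. Exponential time: suitable only for very
    # small graphs.
    if not edges:
        return k ** n            # edgeless graph: k choices per vertex
    (u, v), rest = edges[0], edges[1:]
    if u == v:
        return 0                 # a self-loop admits no proper coloring
    if u > v:
        u, v = v, u
    # Contract e = (u, v): merge v into u and shift labels above v down.
    def merged(w):
        if w == v:
            return u
        return w - 1 if w > v else w
    contracted = [(merged(a), merged(b)) for a, b in rest]
    return chromatic(rest, n, k) - chromatic(contracted, n - 1, k)
```

For a triangle this reproduces k(k − 1)(k − 2), and for a three-vertex path k(k − 1)².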
See also
Knot polynomial
References
Polynomials
Graph invariants | Graph polynomial | [
"Mathematics"
] | 263 | [
"Polynomials",
"Graph theory",
"Graph invariants",
"Mathematical relations",
"Algebra"
] |
56,013,204 | https://en.wikipedia.org/wiki/Polycarboxylates | Polycarboxylates are organic compounds with several carboxylic acid groups. Butane-1,2,3,4-tetracarboxylate is one example. Often, polycarboxylate refers to linear polymers with a high molecular mass (Mr ≤ 100 000) and with many carboxylate groups. They are polymers of acrylic acid or copolymers of acrylic acid and maleic acid. The polymer is used as the sodium salt (see: sodium polyacrylate).
Use
Polycarboxylates are used as builders in detergents. Their high chelating power, even at low concentrations, reduces deposits on the laundry and inhibits the crystal growth of calcite.
Polycarboxylate ethers (PCE) are used as superplasticizers in concrete production.
Safety
Polycarboxylates are poorly biodegradable but have a low ecotoxicity. In the sewage treatment plant, the polymer remains largely in the sludge and is separated from the wastewater.
Polyamino acids like polyaspartic acid and polyglutamic acid have better biodegradability but lower chelating performance than polyacrylates. They are also less stable towards heat and alkali. Since they contain nitrogen, they contribute to eutrophication.
See also
tricarboxylic acids
References
Polymers
Salts and esters of carboxylic acids | Polycarboxylates | [
"Chemistry",
"Materials_science"
] | 295 | [
"Polymers",
"Polymer chemistry"
] |
56,013,267 | https://en.wikipedia.org/wiki/Jo%20Ellis-Monaghan | Joanna Anthony Ellis-Monaghan is an American mathematician and mathematics educator whose research interests include graph polynomials and topological graph theory. She is a professor of mathematics at the Korteweg-de Vries Institute for Mathematics of the University of Amsterdam.
Education and career
Ellis-Monaghan grew up in Alaska. She graduated from Bennington College in 1984 with a double major in mathematics and studio art, and earned a master's degree in mathematics from the University of Vermont in 1986. After beginning a doctoral program at Dartmouth College, she transferred to the University of North Carolina at Chapel Hill, where she completed her Ph.D. in 1995. Her dissertation, supervised by Jim Stasheff, was A unique, universal graph polynomial and its Hopf algebraic properties, with applications to the Martin polynomial.
She joined the Saint Michael's College faculty in 1992, chaired the department there, and has also held positions at the University of Vermont. In 2020, she became professor of Discrete Mathematics at the University of Amsterdam. From 2010 to 2020, she served as a subject editor of PRIMUS, a journal on the teaching of undergraduate mathematics.
Bibliography
With Iain Moffatt, Ellis-Monaghan is the author of the book Graphs on Surfaces.
References
External links
Home page
Year of birth missing (living people)
Living people
20th-century American mathematicians
21st-century American mathematicians
Graph theorists
American mathematics educators
Bennington College alumni
University of Vermont alumni
University of North Carolina at Chapel Hill alumni
Saint Michael's College faculty
University of Vermont faculty
Academic staff of the University of Amsterdam
20th-century American women mathematicians
21st-century American women mathematicians | Jo Ellis-Monaghan | [
"Mathematics"
] | 323 | [
"Mathematical relations",
"Graph theory",
"Graph theorists"
] |
56,013,705 | https://en.wikipedia.org/wiki/Fatigue%20of%20welded%20joints | Fatigue of welded joints can occur when poorly made or highly stressed welded joints are subjected to cyclic loading. Welding is a manufacturing method used to join various materials in order to form an assembly. During welding, joints are formed between two or more separate pieces of material which can introduce defects or residual stresses. Under cyclic loading these defects can grow a fatigue crack, causing the assembly to fail even if these cyclic stresses are low and smaller than the base material and weld filler material yield stress. Hence, the fatigue strength of a welded joint does not correlate to the fatigue strength of the base material. Incorporating design considerations in the development phase can reduce failures due to fatigue in welded joints.
Stress-Life method
Similar to high cycle fatigue analysis, the stress life method utilizing stress-cycle curves (also known as Wöhler curves) can be used to determine the strength of a welded joint under fatigue loading. Welded sample specimens undergo repeated loading at a specified stress amplitude, or fatigue strength, until the material fails. This same test is then repeated with various stress amplitudes in order to determine its corresponding cycles, N, to failure. With the data collected, fatigue strength can be plotted against the corresponding number of cycles for a specific material, welded joint and loading. From these curves, the endurance limit, finite-life and infinite-life region can then be determined.
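A common way to summarize stress-life data in the finite-life region is a Basquin-type power law, S = A·N^b. The sketch below is illustrative only — the data are invented and real weld design relies on standardized S-N curves — but it shows how the two constants are fitted from test data and how the law is inverted to estimate cycles to failure at a given stress amplitude:

```python
import math

def fit_basquin(points):
    # Least-squares line through (log N, log S): log S = log A + b log N.
    xs = [math.log(n_cycles) for n_cycles, _ in points]
    ys = [math.log(stress) for _, stress in points]
    m = len(points)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    A = math.exp(ybar - b * xbar)
    return A, b

def cycles_to_failure(stress, A, b):
    # Invert S = A * N**b (with b < 0): N = (S / A)**(1 / b).
    return (stress / A) ** (1.0 / b)
```

Fitting exact synthetic data recovers the generating constants, and the inversion then predicts the number of cycles sustained at any stress amplitude on the curve.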
Factors affecting fatigue
Welding residual stresses
During the welding process, residual stresses can arise in the area of the weld, either in the heat affected zone or in the fusion zone. The mean stress a welded joint sees in application can be altered by these welding-induced residual stresses, changing the fatigue life and making S-N laboratory testing results unrepresentative of the joint in service. Welded assemblies with geometrical imperfections can also carry residual stresses. Although there are stress and strain relief methods to reduce residual stresses, the complete removal of residual stress is not possible.
Member thickness
An increase in thickness of a base material decreases the fatigue strength when a crack propagates from the toe of a welded joint. This is due to an increase in residual stress concentrations in thick material cross sections.
Material type
In welded joints, an increase in the base material's ultimate tensile strength does not necessarily lead to an increase in fatigue strength of the welded joint.
Welding process
Many welding processes are available for various applications and environments. Stress-cycle curves are not available for all of these processes and still need to be developed so proper fatigue analysis can be performed. The most abundant process found in stress-cycle curves is developed from specimens prepared by arc welding.
Surrounding environment
The surrounding environment of a welded assembly can affect the fatigue life of the welded joints, often lowering them. Variables such as temperature, moisture, and geographical location are considered part of the surrounding environment. Environments which contain sea water may see decreased fatigue life due to the increase in crack growth rates. Little information is available in this area, but it is known that if a base material is subjected to corrosion, the fatigue strength can decrease compared to similar welded joints.
Avoiding fatigue failures of a welded joint
Since the presence of cracks reduces fatigue life and accelerates failure, it is important to avoid all cracking mechanisms in order to prolong the fatigue life of a welded joint. Other weld defects, such as inclusions and lack of penetration, should also be avoided due to these defects being the source of where cracks can initiate. Detailed review of the welded joint during the design is another way to reduce failures. Ensuring that the design is able to handle the cyclic loading profile will prevent premature failures. Additional resources through design handbooks are also available to aid in designing the welded joint to optimize fatigue life. Finite element analysis can also be used to successfully predict fatigue failure.
References
Welding
Mechanical failure
Fracture mechanics | Fatigue of welded joints | [
"Materials_science",
"Engineering"
] | 758 | [
"Structural engineering",
"Welding",
"Fracture mechanics",
"Materials science",
"Mechanical engineering",
"Materials degradation",
"Mechanical failure"
] |
56,013,737 | https://en.wikipedia.org/wiki/Bailey%20peptide%20synthesis | The Bailey peptide synthesis is a name reaction in organic chemistry developed 1949 by J. L. Bailey. It is a method for the synthesis of a peptide from α-amino acid-N-carboxylic acid anhydrides (NCAs) and amino acids or peptide esters. The reaction is characterized by short reaction times and a high yield of the target peptide.
The reaction can be carried out at low temperatures in organic solvents. The residues R1 and R2 can be organic groups or hydrogen atoms, and R3 is the amino acid or peptide ester used:
Reaction mechanism
The reaction mechanism is not known in detail. Supposedly, the reaction begins with a nucleophilic attack of the amino group on the carbonyl carbon of the anhydride group of the N-carboxylic acid anhydride (1). After an intramolecular proton migration, a 1,4-proton shift and the cleavage of carbon dioxide follows, resulting in the peptide bond in the final product (2):
Atom economy
The advantage in atom economy of using NCAs for peptide formation is that there is no need for a protecting group on the functional group reacted with the amino acid. For example, the Merrifield synthesis depends on the use of Boc and Bzl protecting groups, which need be removed after the reaction. In the case of Bailey peptide synthesis, the free peptide is directly obtained after the reaction. However, unwanted and difficult to remove by-products may be formed. An N-substitution of the NCA (for example, by an o-nitrophenylsulfenyl group) can simplify the subsequent purification process, but on the other hand deteriorates the atom economy of the reaction. The synthesis of NCAs can be carried out by the Leuchs reaction or by the reaction of N-(benzyloxycarbonyl)-amino acids with oxalyl chloride. In the latter case, again the procedure is less efficient in the sense of atom economy.
Synthesized peptides
The following peptides were synthesized using this method by 1949:
DL-Ala-Gly
L-Tyr-Gly
DL-Tyr-Tyr
DL-Ala-DL-Ala-Gly
DL-di-Ala-L-cystinyl-di-Gly
DL-Ala-L-Tyr-Gly
DL-Ala-L-Tyr-Gly-Gly
DL-Ala-DL-Ala-L-Tyr-Gly-Gly
L-Tyr-L-Tyr-L-Tyr
L-Cystinyl-di-Gly
Literature
P. Katsoyannis: The Chemistry of Polypeptides. Springer Science & Business Media, 2012, p. 129.
References
Biochemistry
Name reactions | Bailey peptide synthesis | [
"Chemistry",
"Biology"
] | 578 | [
"Name reactions",
"Biochemistry",
"nan"
] |
56,013,977 | https://en.wikipedia.org/wiki/G%C3%B6ran%20Lindblad%20%28physicist%29 | Göran Lindblad (9 July 1940–30 November 2022) was a Swedish theoretical physicist and a professor at the KTH Royal Institute of Technology, Stockholm. He made major foundational contributions in mathematical physics and quantum information theory, having to do with open quantum systems, entropy inequalities, and quantum measurements.
Personal life
Lindblad was born in Boden, Sweden on July 9, 1940, and grew up in Örebro, Sweden. Besides physics, he took an interest in history. He died on November 30, 2022, near his home in Johanneshov, Sweden.
Career
Lindblad spent his entire career, starting from his undergraduate days, at the KTH Royal Institute of Technology. He defended his PhD thesis, entitled "The concepts of information and entropy applied to the measurement process in quantum theory and statistical mechanics," on May 29, 1974. His PhD thesis summarized the contents of some important contributions to quantum information theory, including his proof of the data-processing inequality for quantum relative entropy, communicated in a series of three research publications.
Shortly after his PhD thesis work, he derived his most well known scientific contribution, what is known as the Lindblad equation. As the Schrödinger equation describes the evolution of a closed quantum system, the Lindblad equation is a generalization, describing the evolution of an open quantum system, in which a system of interest is interacting with an uncontrollable environment. The Lindblad equation is a significant theoretical contribution and is widely used in many fields of physics, including quantum optics and condensed matter. It is also now the most common method for describing noise that affects various quantum technologies, in the domains of quantum communication and computation.
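As an illustration of what the equation describes (a minimal numerical sketch, not from Lindblad's paper), the snippet below evolves a single qubit under amplitude damping with a forward-Euler discretization of dρ/dt = −i[H, ρ] + Σk(Lk ρ Lk† − ½{Lk†Lk, ρ}); the excited-state population decays as e^(−γt) while the trace of ρ is preserved:

```python
import numpy as np

def lindblad_rhs(rho, H, jump_ops):
    # dρ/dt = -i[H, ρ] + Σ_k ( L_k ρ L_k† - ½ {L_k† L_k, ρ} )
    d = -1j * (H @ rho - rho @ H)
    for L in jump_ops:
        Ld = L.conj().T
        LdL = Ld @ L
        d += L @ rho @ Ld - 0.5 * (LdL @ rho + rho @ LdL)
    return d

def evolve(rho, H, jump_ops, t, steps=20000):
    # Crude forward-Euler integration; adequate for this illustration.
    dt = t / steps
    for _ in range(steps):
        rho = rho + dt * lindblad_rhs(rho, H, jump_ops)
    return rho

# Amplitude damping: H = 0, one jump operator L = sqrt(gamma) |0><1|.
gamma = 1.0
H = np.zeros((2, 2), dtype=complex)
L = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)
rho0 = np.array([[0, 0], [0, 1]], dtype=complex)   # start in |1><1|
rho1 = evolve(rho0, H, [L], t=1.0)
```

After t = 1/γ the excited-state population ρ₁₁ is approximately e⁻¹ ≈ 0.368, and the trace of ρ remains 1 throughout, which is the defining property of the completely positive, trace-preserving dynamics Lindblad characterized.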
He published a monograph on two related conceptual problems in the foundations of statistical mechanics, concerning the derivation of the irreversibility of the observed macroscopic behavior from the reversible microscopic laws of motion and the definition of an entropy function on non-equilibrium quantum states.
He retired on July 1, 2005, and was a professor emeritus since that time.
References
Swedish physicists
Quantum physicists
1940 births
2022 deaths
People from Boden Municipality | Göran Lindblad (physicist) | [
"Physics"
] | 437 | [
"Quantum physicists",
"Quantum mechanics"
] |
56,015,038 | https://en.wikipedia.org/wiki/Dokdonia | Dokdonia is a genus of bacteria in the family Flavobacteriaceae and phylum Bacteroidota.
The genus Dokdonia was first described in 2005 by Yoon et al. near Liancourt Rocks in the Sea of Japan. Dokdonia is named after Dokdo, the Korean name for the Liancourt Rocks, whose sovereignty is disputed between Japan and South Korea. Yoon et al. isolated the bacterium from seawater and identified the first species as Dokdonia donghaensis.
There are 10 classified species (D. aurantiaca, D. diaphoros, D. donghaensis, D. eikasta, D. flava, D. genika, D. lutea, D. pacifica, D. ponticola, and D. sinensis) and many unclassified strains under the Dokdonia genus based on the NCBI taxonomy database. The International Committee on Systematics of Prokaryotes (ICSP) currently recognizes nine groups of Dokdonia described to species level with D. ponticola considered not validly published.
The general characteristics of Dokdonia species include gram-negative, non-motile, aerobic, catalase- and oxidase-positive, non-spore-forming rods or elongated rods. Species are usually considered relatively halophilic as they are cultivated optimally with 2% w/v sea salts (NaCl).
Ecology and significance
Dokdonia species have a relatively wide distribution in the water column. They have been isolated from surface seawater, marine sediment, and seaweed. Dokdonia have been found across a wide range of marine environments around Korea, China, and Japan, but have also been found in Baltic and Mediterranean waters. They tend to live a planktonic lifestyle, drifting in the water column, but they can occupy a wide range of ecological niches.
Dokdonia cells are primarily heterotrophic, subsisting on dissolved organic carbon in the water column. It has also been shown that phototrophy can be induced in some strains under laboratory conditions, implying that bacteria in the genus Dokdonia are not obligate heterotrophs but are mixotrophic. This shift in carbon source is induced by lower levels and quality of carbon source as well as lower light levels. As planktonic mixotrophic microbes, Dokdonia cells can provide a source of organic matter and carbon for higher-trophic-level organisms, contribute to the ocean's primary productivity, and play an important role in transforming elements and nutrient cycling.
Bacteria in the genus Dokdonia have been seen to congregate in biofilms with other bacterial species, collectively improving their resistance to bacterial predation and producing antimicrobial agents.
Among the Flavobacteriaceae, Dokdonia have a high requirement for copper in order to maintain regular growth and metabolism. Most information known about the genus comes from strains of the type species, Dokdonia donghaensis. As research on and cultivation of Dokdonia spp. continue, insights into their diversity and adaptations contribute greatly to public knowledge of marine bacteria as a whole.
Described species
Dokdonia donghaensis
Discovered by Yoon et al. in 2005 from seawater near Liancourt Rocks.
For isolation from seawater, the standard dilution plating technique was used, and the isolate was cultured on marine agar at 25 °C.
Gram-negative, non-motile, non-spore-forming, slightly halophilic, rod or elongated rod-shaped with carotenoid pigments but no flexirubin-type pigments
Cells are 1.5–25.0 μm in length and 0.3–0.6 μm in width. Form circular, smooth, yellow colonies with a diameter of 1–2 mm on marine agar after 3 days.
Optimum growth occurs at 30 °C, pH 7.0–8.0 and 2% (w/v) NaCl; No anaerobic growth on marine agar and no growth on marine agar supplemented with nitrate
Catalase-positive and oxidase-positive; DNA G+C content is 38 mol%
Dokdonia aurantiaca
Discovered by Choi et al. in 2018 from seaweed sample, Zostera marina, collected from East China Sea, Republic of Korea
For isolation from the seaweed sample, the sample was cleaned and immersed in saline. The saline supernatant was collected and inoculated onto marine agar for 5 days at 25 °C to culture the isolated strain.
Gram-negative, non-motile, aerobic, orange-coloured and rod-shaped with carotenoid pigments but no flexirubin-type pigments
Cells are 0.68–0.76 μm in diameter and 1.76–3.04 μm in length. Form circular, convex, smooth colonies with a diameter of 1.5–2 mm on marine agar after 3 days
Optimum growth occurs with 4% (w/v) NaCl, at pH 7 and at 25 °C
Catalase-positive and oxidase-negative; DNA G+C content is 38 mol%
Dokdonia diaphoros
Discovered by Khan et al. in 2006 from marine sediment at Kisarazu, Japan, and originally classified as Krokinobacter diaphorus. In 2012, Yoon et al. reclassified this species as Dokdonia diaphoros, as 16S rRNA gene sequence analysis had shown that the genera Dokdonia and Krokinobacter within the family Flavobacteriaceae are phylogenetically closely related.
Gram-negative, aerobic rods with carotenoid pigments but no flexirubin-type pigments
Cells are 0.5–0.7 μm by 2.5–4.0 μm. Form slightly convex and yellowish colonies
Optimum growth occurs with 3% (w/v) NaCl at 20 °C
Catalase-positive and oxidase-positive; DNA G+C content is 33 mol%
Dokdonia eikasta
Discovered by Khan et al. in 2006 from marine sediment of Kisarazu, Japan. It was initially classified as Krokinobacter eikastus. In 2012, Yoon et al. reclassified this species as Dokdonia eikasta.
Gram-negative, aerobic rods with carotenoid pigments but no flexirubin-type pigments
Cells are 0.5–0.7 μm by 2.5–4.0 μm. Form slightly convex and yellowish colonies
Optimum growth occurs with 3% (w/v) NaCl at 20 °C
Catalase-positive and oxidase-positive; DNA G+C content is 38 mol%
Dokdonia flava
Discovered by Choi et al. from the seaweed Zostera marina from the Yellow Sea, Republic of Korea in 2018.
Gram-negative, aerobic, non-motile, rod-shape cells with carotenoid pigments but no flexirubin-type pigments
Cell size of 0.80–0.89 μm in diameter and 2.24–3.84 μm in length. Form circular, convex, smooth, yellowish colonies with 1–2mm in diameter on marine agar after 3 days.
Optimum growth occurs with 4% (w/v) NaCl at pH = 7 and 25 °C
Catalase-positive and oxidase-positive; DNA G+C content is 36 mol%.
Dokdonia genika
Discovered by Khan et al. in 2006 from the marine sediment at Odawara, Japan and described as Krokinobacter genikus. In 2012, Yoon et al. reclassified this species as Dokdonia genika.
Gram-negative, aerobic rods with carotenoid pigments but no flexirubin-type pigments
Cells are 0.5–0.7 μm by 2.5–4.0 μm. Form slightly convex and yellowish colonies
Optimum growth occurs with 3% (w/v) NaCl at 20 °C
Catalase-positive and oxidase-positive; DNA G+C content is 37-39 mol%
Dokdonia lutea
Discovered by Choi et al. in 2017 from the brown alga Sargassum fulvellum collected from the East China Sea, Republic of Korea.
Gram-negative, aerobic, non-motile, rod-shaped with no flexirubin-type pigments
Cells are 0.5 μm in diameter and 2.0–3.2 μm in length. Form circular, convex, smooth, and yellowish colonies that are 1.5–2 mm in diameter on marine agar after 3 days
Optimum growth occurs with 5% (w/v) NaCl at pH = 8 and at 25-30 °C
Catalase-positive and oxidase-negative; DNA G+C content is 35 mol%
Dokdonia pacifica
Discovered by Zhang et al. in 2015 from surface seawater collected from the South Pacific Gyre.
Gram-negative, aerobic, non-flagellated, non-gliding, rod-shaped with carotenoid pigments but no flexirubin-type pigments
Cells are 1.2–1.5 μm in length and 0.3–0.4 μm in width. Form circular, convex, transparent, yellow colonies that are 0.8–1.0 mm in diameter on marine agar after 3 days
Optimum growth occurs with 2-3% (w/v) NaCl at pH = 8 and at 28 °C
Catalase-positive and oxidase-positive; DNA G+C content is 36 mol%
Dokdonia sinensis
Discovered by Zhou et al. in 2020 from seawater collected around Xiaoshi Island, PR China.
Gram-negative, aerobic, non-motile, rod-shaped with no flexirubin-type pigments
Cells are 1.0–3.0 μm in length and 0.5–0.8 μm in width. Form circular, convex, smooth, orange-pigmented colonies that are 1.0–1.5mm in diameter on marine agar after 3 days
Optimum growth occurs with 3% (w/v) NaCl at pH = 7 and at 28 °C
Catalase-positive and oxidase-negative; DNA G+C content is 39.5 mol%
Dokdonia ponticola
Discovered by Park et al. in 2018 from seawater collected around Oido, an island of South Korea in the Yellow Sea. This species is not recognized by the ICSP as a species within the Dokdonia genus. However, 16S rRNA gene sequencing showed that this strain has high similarity with the type strains of Dokdonia species, suggesting it belongs to the Dokdonia genus.
Gram-negative, aerobic, non-motile, carrageenan-degrading, rod-shaped with no flexirubin-type pigments
Cells are 0.2–0.5 μm in diameter and 0.5 to over 10.0 μm in length. Form circular, convex, smooth colonies with intense yellow colour. Colonies are 0.5–1.0mm in diameter on marine agar after 5 days
Optimum growth occurs with 2% (w/v) NaCl at 20-25 °C.
Catalase-positive and oxidase-positive; DNA G+C content is 39.7 mol%
Genome
Nine species of Dokdonia have been added to core genomic databases such as UniProt and GenBank, but not all have undergone formal review.
Genome properties of D. donghaensis DSW-1T
The complete genome sequence of D. donghaensis DSW-1T can be accessed from GenBank under the accession number CP015125. The complete circular genome contains 3,923,666 base pairs, 55 RNA genes, and 2,881 protein genes. The sequencing was established using the PacBio sequencing platform and funded by the National Research Foundation of Korea.
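The "DNA G+C content" figures quoted throughout this article are simple base-composition percentages: the fraction of guanine and cytosine bases in the genome. A minimal sketch of the calculation (the sequence fragment below is hypothetical, for illustration only, and is not taken from the DSW-1T genome):

```python
def gc_content(seq: str) -> float:
    """Return the G+C content of a DNA sequence as a percentage (mol%).

    Assumes a non-empty sequence over the alphabet A/C/G/T.
    """
    seq = seq.upper()
    gc = sum(base in "GC" for base in seq)
    return 100.0 * gc / len(seq)

# Hypothetical 18-bp fragment, for illustration only:
fragment = "ATGGCGTTAACCGGATCC"
print(round(gc_content(fragment), 1))  # 10 of 18 bases are G or C -> 55.6
```

Applied over a complete genome (e.g. the 3,923,666 bp of DSW-1T), the same arithmetic yields the genome-wide mol% values reported for each species.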
Genome properties of Dokdonia sp. strain MED134
Whole-genome sequencing of Dokdonia sp. strain MED134 was done by the J. Craig Venter Institute using the Sanger method. The genome size of Dokdonia sp. strain MED134 is 3,301,953 bp, which is relatively small compared to other Bacteroidota. The number of conserved genes is similar to other Bacteroidota, and it contains 170 core genes for bacteria. Genome analysis shows that proteorhodopsin (PR)-containing marine Bacteroidota contain significantly fewer paralogous genes, which form through intragenomic duplication events. This result suggests that phototrophic Bacteroidota have a similar number of gene families but fewer genes in each paralogous gene family, resulting in a reduced genome size. This finding is consistent among phototrophic bacteria, suggesting that Bacteroidota have evolved to retain only select genes from each paralogous gene family while also establishing other genetic features common among PR-containing marine Bacteroidota.
Several key genes involved in PR-phototrophy have been identified through further genomic analysis of Dokdonia sp. strain MED134 and comparison to heterotrophic marine bacteria. Dokdonia have a relatively high abundance of peptidases and greater proportion of peptidases to total proteins compared to other bacteria. This property may indicate that the degradation of peptides may be the main carbon and nitrogen source instead of polysaccharides. Phylogenetic analysis of features like conjugative transposons genes highlight horizontal gene transfer events with other marine flavobacteria. Successful incorporation of PR genes into the genome has been seen to contribute to the overall fitness and survival of bacteria in the marine environment.
Metabolism
Metabolic processes of Dokdonia
Dokdonia species are known to be chemoheterotrophs, acquiring energy from organic molecules. They show maximal growth in conditions with a high concentration of dissolved organic matter. Dokdonia and other members of the Bacteroidota phylum are important in the degradation of organic materials, especially during sporadic nutrient increase. They have a mechanism that allows them to attach and degrade polymeric substances efficiently.
Some strains of Dokdonia are facultative mixotrophs, as they can utilize both heterotrophic and phototrophic metabolism. These strains contain proteorhodopsins (PRs), which act as light-dependent proton pumps. PR has a simple structure compared to other light-harvesting molecules like chlorophyll: it is a single membrane protein with retinal as its prosthetic group. Retinal is a polyene chromophore (a light-sensitive pigment) which acts as the light-absorbing molecule in Dokdonia species. PRs allow cells to harvest energy from sunlight. When exposed to light, PRs pump protons across the membrane and build a proton gradient that can be used to generate ATP, which powers various cellular activities. The energy generated by PRs can support cell growth and the degradation of complex and recalcitrant organic matter, and allows cells to take up amino acids and peptides at lower concentrations.
Dokdonia sp. strain MED134 (Dokdonia donghaensis MED134)
This strain uses aerobic heterotrophic metabolism; it primarily uses amino acids as carbon and nitrogen sources via expression of peptidases which break peptides down into amino acids. Dokdonia sp. strain MED134 has a Na+ ion pump on its membrane that can generate Na+ gradient to produce ATP for other cellular activities.
Dokdonia sp. strain MED134 has a complete Embden–Meyerhof–Parnas pathway (glycolysis), gluconeogenesis pathway, and tricarboxylic acid (TCA) cycle. The anaplerotic reactions which connect glycolysis with the TCA cycle utilize several enzymes, including PEP carboxykinase and PEP carboxylase.
The PRs in Dokdonia sp. strain MED134 are predicted to be heptahelical integral membrane proteins that pump H+ across membranes to build a proton gradient which generates ATP. Depending on the organism's depth in the water column, the PR's maximum absorbance wavelength changes. In near-surface waters, the maximum absorption wavelength of PR is around 530 nm (green light). In deeper water, it is 490 nm (blue light).
Dokdonia sp. PRO95
Dokdonia sp. PRO95 carries two types of rhodopsins: PR as well as a rhodopsin that is related to xanthorhodopsins (XRs). XRs are light-harvesting proton pumps that are more efficient than PRs. The PRs of this strain are light-driven sodium-motive pumps (Na+-rhodopsins or NaRs) which pump Na+ from the cytoplasm to the external medium. They can also pump H+ when Na+ is absent. Unlike in Dokdonia sp. strain MED134, no light-induced growth is observed in Dokdonia sp. PRO95, despite their high genome similarity.
Ecological significance of proteorhodopsin-containing Dokdonia
The presence of PRs allows Dokdonia to grow better in light than in dark conditions, especially when there are low or intermediate levels of nutrients available. Cells with PRs are less dependent on the amount and quality of organic carbon sources. Being able to harvest energy from light gives them an advantage during nutrient-deficient periods, making the retention of PRs favoured by selection.
Alpha- and gammaproteobacteria that contain PRs show enhanced cell growth, as light promotes cellular functions such as survival in nutrient-deficient environments. The correlation between PR phototrophy and increased cell growth was first observed in Dokdonia sp. strain MED134.
Expression of proteorhodopsin encoding gene
In Dokdonia, light induces the expression of PR genes. There is a significant increase in the expression levels of PR genes in light compared to dark conditions as light can increase the strength of the PR gene promoter. The light-dark cycle can induce the upregulation of PR genes in Flavobacteriia that lead to population growth.
Behavior
Biofilm
In the Tyrrhenian Sea off the coast of Naples, species of Dokdonia are the most abundant in biofilms on plastic debris. 4.76 ± 7.1% of genera forming biofilms belong to Dokdonia.
Given that Dokdonia was found on plastics but not in the sediment or water column, it is possible that the habitat in biofilms on plastics is preferable to planktonic growth for species of Dokdonia in the Mediterranean Sea. As microbial mortality is reduced in the more stable microenvironment of a biofilm, population maintenance is less dependent on environmental factors such as nutrient concentration and grazers.
Response to varying nutrient availability
Due to variable expression of PR, the growth rate of Dokdonia sp. strain MED134 is more positively influenced by light exposure when metabolizing an energy-poor carbon source such as alanine than when metabolizing glucose.
Growth rates of Dokdonia sp. Dokd-P16 are significantly affected by variations in dissolved copper concentration. The strain exhibited an 80% reduction in growth rates in 0.6 nM copper compared to those observed in 50 nM copper. This suggests that the presence of copper is crucial to Dokdonia metabolism.
Iron is a common limiting nutrient in the ocean. Dokdonia sp. strain MED134 has various mechanisms to make use of different forms of iron. It contains an ATP-binding cassette-type transport system that can transport iron into the cell. It can also store iron and recycle iron from heme.
References
Further reading
Anashkin, Viktor A.; Bertsova, Yulia V.; Mamedov, Adalyat M.; Mamedov, Mahir D.; Arutyunyan, Alexander M.; Baykov, Alexander A.; Bogachev, Alexander V. (2017-10-05). "Engineering a carotenoid-binding site in Dokdonia sp. PRO95 Na+-translocating rhodopsin by a single amino acid substitution". Photosynthesis Research. 136 (2): 161–169. doi:10.1007/s11120-017-0453-0. ISSN 0166-8595.
Bunse, Carina; Israelsson, Stina; Baltar, Federico; Bertos-Fortis, Mireia; Fridolfsson, Emil; Legrand, Catherine; Lindehoff, Elin; Lindh, Markus V.; Martínez-García, Sandra; Pinhassi, Jarone (2019-01-17). "High Frequency Multi-Year Variability in Baltic Sea Microbial Plankton Stocks and Activities". Frontiers in Microbiology. 9. doi:10.3389/fmicb.2018.03296. ISSN 1664-302X.
Flavobacteria
Bacteria genera
Marine microorganisms | Dokdonia | [
"Biology"
] | 4,498 | [
"Marine microorganisms",
"Microorganisms"
] |
56,015,648 | https://en.wikipedia.org/wiki/Rheotrauma | Rheotrauma is a medical term for the harm caused to a patient's lungs by high gas flows as delivered by mechanical ventilation. Although mechanical ventilation may prevent death of a patient from the hypoxia or hypercarbia which may be caused by respiratory failure, it can also damage the lungs, leading to ventilator-associated lung injury. Rheotrauma is one of the ways in which mechanical ventilation may do this, alongside volutrauma, barotrauma, atelectotrauma and biotrauma. Attempts have been made to combine all of the mechanical forces exerted by the ventilator on the patient's lungs into an all-encompassing term: mechanical power.
References
Respiratory therapy
Pulmonology
Emergency medicine
Medical equipment
Intensive care medicine
Lung disorders | Rheotrauma | [
"Biology"
] | 165 | [
"Medical equipment",
"Medical technology"
] |
56,016,045 | https://en.wikipedia.org/wiki/List%20of%20lyrate%20plants | The following plants have leaves that are lyrate:
Arabidopsis lyrata
Berlandiera lyrata
Ficus lyrata
Leibnitzia lyrata
Paysonia lyrata
Quercus lyrata
Salvia lyrata
Saussurea costus
Lyrate
Leaves | List of lyrate plants | [
"Biology"
] | 63 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
56,016,417 | https://en.wikipedia.org/wiki/Atelectotrauma | Atelectotrauma, atelectrauma, cyclic atelectasis or repeated alveolar collapse and expansion (RACE) are medical terms for the damage caused to the lung by mechanical ventilation under certain conditions. When parts of the lung collapse at the end of expiration, due to a combination of a diseased lung state and a low functional residual capacity, then reopen again on inspiration, this repeated collapsing and reopening causes shear stress which damages the alveoli. Clinicians attempt to reduce atelectotrauma by ensuring adequate positive end-expiratory pressure (PEEP) to keep the alveoli open during expiration. This is known as open lung ventilation. High-frequency oscillatory ventilation (HFOV), with its use of 'super CPAP', is especially effective in preventing atelectotrauma since it maintains a very high mean airway pressure (MAP), equivalent to a very high PEEP. Atelectotrauma is one of several means by which mechanical ventilation may damage the lungs, leading to ventilator-associated lung injury. The others are volutrauma, barotrauma, rheotrauma and biotrauma. Attempts have been made to combine these factors into an all-encompassing term: mechanical power.
References
Respiratory therapy
Pulmonology
Lung disorders
Emergency medicine
Intensive care medicine
Trauma types
Medical equipment | Atelectotrauma | [
"Biology"
] | 294 | [
"Medical equipment",
"Medical technology"
] |
56,016,868 | https://en.wikipedia.org/wiki/NGC%201191 | NGC 1191 is a lenticular galaxy approximately 406 million light-years away from Earth in the constellation of Eridanus. It was discovered by American astronomer Francis Leavenworth on December 2, 1885 with the 26" refractor at Leander McCormick Observatory.
Together with NGC 1189, NGC 1190, NGC 1192 and NGC 1199 it forms Hickson Compact Group 22 (HCG 22) galaxy group. Although they are considered members of this group, NGC 1191 and NGC 1192 are in fact background objects, since they are much further away compared to the other members of this group.
Image gallery
See also
Lenticular galaxy
Hickson Compact Group
List of NGC objects (1001–2000)
Eridanus (constellation)
References
External links
SEDS
Lenticular galaxies
Eridanus (constellation)
1191
11514
Hickson Compact Groups
Astronomical objects discovered in 1885
Discoveries by Francis Leavenworth | NGC 1191 | [
"Astronomy"
] | 184 | [
"Eridanus (constellation)",
"Constellations"
] |
56,017,139 | https://en.wikipedia.org/wiki/NGC%204528 | NGC 4528 is a barred lenticular galaxy located about 50 million light-years away in the constellation Virgo. It was discovered by astronomer William Herschel on March 15, 1784. The galaxy is a member of the Virgo Cluster.
See also
List of NGC objects (4001–5000)
NGC 4340
References
External links
Virgo (constellation)
Barred lenticular galaxies
4528
41781
7722
Astronomical objects discovered in 1784
Virgo Cluster
Discoveries by William Herschel | NGC 4528 | [
"Astronomy"
] | 97 | [
"Virgo (constellation)",
"Constellations"
] |
56,017,178 | https://en.wikipedia.org/wiki/NGC%201199 | NGC 1199 is an elliptical galaxy approximately 107 million light-years away from Earth in the constellation of Eridanus. It was discovered by William Herschel on December 30, 1785.
NGC 1199 is dominated by stellar light with little long wavelength emission.
Together with NGC 1189, NGC 1190, NGC 1191 and NGC 1192 it forms Hickson Compact Group 22 (HCG 22) galaxy group. Although they are considered members of this group, NGC 1191 and NGC 1192 are in fact background objects, since they are much further away compared to the other members of this group.
Image gallery
See also
Lenticular galaxy
Hickson Compact Group
List of NGC objects (1001–2000)
Eridanus (constellation)
References
External links
SEDS
Elliptical galaxies
Eridanus (constellation)
1199
11527
Hickson Compact Groups
Astronomical objects discovered in 1785
Discoveries by William Herschel | NGC 1199 | [
"Astronomy"
] | 180 | [
"Eridanus (constellation)",
"Constellations"
] |
56,017,282 | https://en.wikipedia.org/wiki/NGC%201192 | NGC 1192 is a lenticular galaxy approximately 417 million light-years away from Earth in the constellation of Eridanus. It was discovered by American astronomer Francis Leavenworth on December 2, 1885 with the 26" refractor at Leander McCormick Observatory.
Together with NGC 1189, NGC 1190, NGC 1191 and NGC 1199 it forms Hickson Compact Group 22 (HCG 22) galaxy group. Although they are considered members of this group, NGC 1191 and NGC 1192 are in fact background objects, since they are much further away compared to the other members of this group.
Image gallery
See also
Lenticular galaxy
Hickson Compact Group
List of NGC objects (1001–2000)
Eridanus (constellation)
References
External links
SEDS
Lenticular galaxies
Eridanus (constellation)
1192
11519
Hickson Compact Groups
Astronomical objects discovered in 1885
Discoveries by Francis Leavenworth | NGC 1192 | [
"Astronomy"
] | 184 | [
"Eridanus (constellation)",
"Constellations"
] |
56,017,325 | https://en.wikipedia.org/wiki/NGC%20503 | NGC 503, also occasionally referred to as PGC 5086 or GC 5169, is an elliptical galaxy in the constellation Pisces. It is located approximately 265 million light-years from the Solar System and was discovered on 13 August 1863 by German astronomer Heinrich Louis d'Arrest.
Observation history
d'Arrest discovered NGC 503 using the 11" refractor in Copenhagen. His position matches PGC 5086 perfectly. At the time of discovery, he considered the possibility of having observed one of William Herschel's discoveries (NGC 495, 496, 499). John Louis Emil Dreyer, creator of the New General Catalogue, described the galaxy as "extremely faint, extremely small" with a "double star 4 arcmin to southwest".
See also
Elliptical galaxy
List of NGC objects (1–1000)
Pisces (constellation)
References
External links
SEDS
Lenticular galaxies
Pisces (constellation)
0503
5086
Astronomical objects discovered in 1863
Discoveries by Heinrich Louis d'Arrest | NGC 503 | [
"Astronomy"
] | 204 | [
"Pisces (constellation)",
"Constellations"
] |
56,017,860 | https://en.wikipedia.org/wiki/Natural%20rope | A natural rope is a rope that is made from natural fibers. These fibers are obtained from organic material, such as that produced by plants. Natural ropes suffer from many problems, including susceptibility to rot, degradation and mildew, and they wear out quickly.
Materials
Cotton, sisal, manila, coir, and papyrus are materials that can be used to create a natural rope.
Disadvantages compared to synthetic ropes
Natural ropes suffer from many problems when compared to synthetic ropes. They are susceptible to rot, degradation, and mildew. They also wear out quickly and lose much of their strength when placed in water.
See also
Synthetic rope
Fiber rope
Wire rope
References
Materials | Natural rope | [
"Physics"
] | 144 | [
"Materials stubs",
"Materials",
"Matter"
] |
56,018,307 | https://en.wikipedia.org/wiki/List%20of%20carsharing%20organizations | This is an incomplete list of currently operating carsharing organizations. Carsharing is a model of car rental in which people rent cars for short periods of time, often by the hour or minute.
Defunct services
Autolib'
car2go
City CarShare
DriveNow
Flexcar
I-GO
JustShareIt
Maven
PhillyCarShare
ReachNow
Streetcar
Tilden Rent-a-Car
Whizzgo
See also
Sharing economy
Alternatives to car use
Car rental
Carpool
Fleet vehicle
References
Carsharing
Sustainable transport | List of carsharing organizations | [
"Physics"
] | 104 | [
"Physical systems",
"Transport",
"Sustainable transport"
] |
56,018,407 | https://en.wikipedia.org/wiki/NOYB | NOYB – European Center for Digital Rights (styled as "noyb", from "none of your business") is a non-profit organization based in Vienna, Austria established in 2017 with a pan-European focus. Co-founded by Austrian lawyer and privacy activist Max Schrems, NOYB aims to launch strategic court cases and media initiatives in support of the General Data Protection Regulation (GDPR), the proposed ePrivacy Regulation, and information privacy in general. The organisation was established after a funding period during which it has raised annual donations of €250,000 by supporting members. Currently, NOYB is financed by more than 4,400 supporting members.
While many privacy organisations focus attention on governments, NOYB puts its focus on privacy issues and privacy violations in the private sector. Under Article 80, the GDPR foresees that non-profit organizations can take action or represent users. NOYB is also recognized as a "qualified entity" to bring consumer class actions in Belgium.
Notable actions
EU–US data transfers/"Schrems I" (2016)
The Irish Data Protection Commission (DPC) filed a lawsuit against Schrems and Facebook in 2016, based on a complaint from 2013, which had led to the so-called "Safe Harbor Decision". Back then, the Court of Justice of the European Union (CJEU) had invalidated the Safe Harbor data transfer system with its decision. When the case was referred back to the DPC, the Irish regulator found that Facebook had in fact relied on Standard Contractual Clauses, not on the invalidated Safe Harbor. The DPC then found that there were "well-founded" concerns by Schrems under these instruments too, but instead of taking action against Facebook, initiated proceedings against Facebook and Schrems before the Irish High Court. The case was ultimately referred to the CJEU in C-311/18 (called "Schrems II"; see Max Schrems#Schrems II). NOYB supported this private case of Schrems.
"Forced consent" complaints (2018)
Within hours after General Data Protection Regulation rules went into effect on 25 May 2018, NOYB filed complaints against Facebook and subsidiaries WhatsApp and Instagram, as well as Google (targeting Android), for allegedly violating Article 7(4) by attempting to completely block use of their services if users decline to accept all data processing consents, in a bundled grant which also includes consents deemed unnecessary to use the service. Based on the complaint, the French data protection authority CNIL has issued a €50 million fine against Google. The other cases are still pending.
Spotify case (2019)
Since Spotify is based in Sweden, the Swedish data protection authority (IMY) was responsible. However, the authority delayed: for over four years, no decision was made on the complaint against the streaming service. In 2022, NOYB therefore filed a complaint for inaction in Sweden. The case was decided in favor of the privacy activists, and the IMY then imposed a GDPR fine of 58 million Swedish kronor (about EUR 5 million) on Spotify.
Apple tracking case (2020)
In mid-November 2020, NOYB announced that complaints were filed with both the German and Spanish Data Protection Authorities, claiming "IDFA (Apple's Identifier for Advertisers) allows Apple and all apps on the phone to track a user and combine information about online and mobile behaviour". In a slight change from their previous legal strategy in other similar cases, NOYB notes that, because the complaint is based on Article 5(3) of the ePrivacy Directive and not the GDPR, the Spanish and German authorities can directly fine Apple, without appealing to EU Data Protection Authorities under the GDPR.
Open letter on GDPR cooperation mechanism (2020)
NOYB also focuses on putting pressure on regulators to enforce privacy laws on the books. In an open letter, the NGO has accused the Irish Data Protection Commission of acting too slowly and having 10 meetings with Facebook before the coming into application of the GDPR.
Schrems II – Court of Justice Judgment on Privacy Shield (2020)
On July 16, 2020, the Court of Justice of the European Union (CJEU) invalidated Privacy Shield and decided that Facebook and other companies that fall under US surveillance laws cannot rely on "Standard Contractual Clauses" (SCCs), since US surveillance laws were found to conflict with EU fundamental rights. This judgment was based on a long-running case brought by Max Schrems and NOYB. The data of US companies' foreign customers are not protected from the U.S. intelligence services. The CJEU found that this violates the "essence" of certain EU fundamental rights.
The Court has also clarified that EU data protection authorities (DPAs) have a duty to take action. The Court highlighted that a DPA is "required to execute its responsibility for ensuring that the GDPR is fully enforced with all due diligence".
Despite the invalidations made by the judgment, absolutely "necessary" data flows can continue to flow under Article 49 of the GDPR. Any situation where users want their data to flow abroad is still legal, as this can be based on the informed consent of the user, which can be withdrawn at any time. Equally the law allows data flows for what is "necessary" to fulfil a contract.
Mass complaints on EU–US data transfers (2020)
After the Schrems II judgment, NOYB filed 101 complaints against EU/EEA controllers using Google Analytics or Facebook Connect and thereby transferring data to the US, despite the Court's finding that US surveillance laws violate the essence of EU fundamental rights. The organization thereby wanted to point out the lack of enforcement of Schrems II. These model complaints led to the creation of a special taskforce by the European Data Protection Board (EDPB), tasked with coordinating the complaints and preparing recommendations for controllers and processors. On January 12, 2022, the Austrian Data Protection Authority (DSB) reached a partial decision in favour of NOYB, stating that the continued use of Google Analytics violates the GDPR. This decision affects most websites in the European Union, since Google Analytics is the most common traffic-analysis tool.
Google Advertising ID tracking (2021)
On April 7, 2021, NOYB filed a complaint in France charging that Android users were being tracked by Google without giving consent.
"Google's software creates the AAID without the user's knowledge or consent. The identification number functions like a license plate that uniquely identifies the phone of a user and can be shared among companies. After its creation, Google and third parties (e.g. applications providers and advertisers) can access the AAID to track users' behaviour, elaborate consumption preferences and provide personalised advertising. Such tracking is strictly regulated by the EU "Cookie Law" (Article 5(3) of the e-Privacy Directive) and requires the users' informed and unambiguous consent."
Facebook and DPC complaint (2021)
In 2021, NOYB filed a complaint under Austrian law against the Irish Data Protection Commissioner (DPC), alleging corruption and possible bribery in an affair concerning Facebook.
Administrative fine for Grindr over illegal sharing of user data (2021)
Together with the Norwegian Consumer Council, NOYB filed three strategic complaints against the dating app Grindr and several adtech companies over illegal sharing of users' data in January 2020. The data shared was GPS location, IP address, Advertising ID, age, gender and the fact that the user in question was on Grindr. Users could be identified through the data shared, and the recipients could potentially further share the data. These complaints are based on the report "Out of Control" by the Norwegian Consumer Council.
One year after the complaint was filed, the Norwegian Data Protection Authority upheld the complaint against Grindr, confirming in an advance notification that Grindr had not obtained valid consent from users. The Authority imposed a fine of 100 million NOK (€9.63 million) on Grindr, later reduced to 65 million NOK (€6.5 million) in the final decision, since Grindr's actual revenue was lower than previously assumed and the company had taken measures to remedy deficiencies in its previous consent-management platform.
Action against the use of dark patterns in cookie banners (2021)
On August 10, 2021, NOYB filed 422 complaints against companies using deceptive cookie banners on their websites. This wave of complaints was the outcome of a "Legal Tech" initiative by the organization, in the course of which thousands of websites in Europe had been automatically checked for violations with a tool developed specifically for this purpose. In response to those complaints, an EDPB taskforce was set up to exchange views on legal analysis and possible infringements and to streamline communication. In its effort to overcome the necessity of cookie banners, NOYB has also co-developed Advanced Data Protection Control (ADPC) together with the Sustainable Computing Lab of the Vienna University of Economics. The ADPC browser signal poses a feasible alternative to cookie banners through its automated mechanism for communicating users' privacy decisions and data controllers' responses.
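The general idea behind such an automated browser signal can be illustrated with a minimal server-side sketch. The header name "ADPC" and the "consent=<purpose,...>" value format below are illustrative assumptions for this sketch, not the published ADPC specification:

```python
# Hypothetical sketch: a server honoring an automated privacy signal
# instead of rendering a cookie banner. The "ADPC" header name and the
# "consent=<purpose,...>" value format are illustrative assumptions.

def consent_given(headers: dict, purpose: str) -> bool:
    """Return True only if the browser signal explicitly consents to `purpose`."""
    signal = headers.get("ADPC", "")
    for token in signal.split():
        if token.startswith("consent="):
            granted = token[len("consent="):].split(",")
            return purpose in granted
    return False  # no signal means no consent (privacy by default)

print(consent_given({"ADPC": "consent=analytics,marketing"}, "analytics"))  # True
print(consent_given({}, "analytics"))                                       # False
```

The key design point is the default: absent any signal, the sketch treats consent as not given, mirroring the opt-in logic of Article 5(3) of the ePrivacy Directive.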
Austrian DPA: Google Analytics illegal in Europe (2022)
In early 2022, the Austrian data protection authority ruled that the use of Google Analytics on European websites was illegal. The case in question was filed in August 2020 by a Google user who had accessed an Austrian website for health-related issues. The website used Google Analytics, and data about the user was transmitted to Google. The Google user complained to the Austrian data protection authority alongside NOYB. The issue directly concerns Article 44 of the GDPR, since the user cannot be afforded the required level of protection, making the transfer a clear violation of the GDPR. France's data watchdog CNIL concurred with the Austrian ruling in mid-February 2022.
Furthermore, in mid-2022, the Austrian DPA also ruled that Google's anonymization was insufficient to protect user privacy, and that Article 44 of the GDPR does not allow for the risk-based approach that Google had argued for.
References
External links
2017 establishments in Austria
Information privacy
Information technology organisations based in Austria
Internet privacy organizations
Data protection
Cross-European advocacy groups
Privacy organizations | NOYB | ["Engineering"] | 2,122 | ["Cybersecurity engineering", "Information privacy"] |
62,036,830 | https://en.wikipedia.org/wiki/Helen%20Byrne | Helen M. Byrne is a mathematician based at the University of Oxford. She is Professor of Mathematical Biology in the university's Mathematical Institute and a Professorial Fellow in Mathematics at Keble College. Her work involves developing mathematical models to describe biomedical systems including tumours. She was awarded the 2019 Society for Mathematical Biology Leah Edelstein-Keshet Prize for exceptional scientific achievements and for mentoring other scientists and was appointed a Fellow of the Society in 2021.
Early life and education
Byrne attended Manchester High School for Girls. She went on to study mathematics at Newnham College, Cambridge, where she became interested in the applications of mathematics to real-world problems. She moved to Wadham College, Oxford for her graduate studies, where she earned a master's degree in Mathematical Modelling and Numerical Analysis. She remained at Oxford for her doctoral degree in applied mathematics. She was appointed as a postdoctoral fellow at the cyclotron unit at Hammersmith Hospital. There, she started working in mathematical and theoretical biology. The biomedical questions she worked on included fitting mathematical models to positron emission tomography scans to evaluate oxygen and glucose transport and consumption within solid tumours. After hearing Mark Chaplain talk about tumours at a conference, she realised she could use her mathematical skills to study tumour growth.
Research and career
Byrne worked with Mark Chaplain at the University of Bath from 1993. She joined the University of Manchester Institute of Science and Technology as a lecturer in 1996. In 1998 Byrne joined the University of Nottingham, where she was promoted to Professor of Applied Mathematics in 2003. She was involved with the development of the Nottingham Centre for Mathematical Medicine and Biology, which she directed from 1999 to 2011.
She joined the faculty at the University of Oxford in 2011 where she was made Professor of Mathematical Biology based in the Mathematical Institute. Her research has considered mathematical models to describe biological tissue. She has explored how oxygen levels impact biological function, developing complex models that can describe disease progression. She was part of a team who demonstrated that cell cannibalism is involved in the development of inflammatory diseases.
Byrne was appointed Director of Equality and Diversity in the Mathematical, Physical and Life Sciences (MPLS) Division from 2016 to 2020. In 2018 she was awarded the Society for Mathematical Biology Leah Edelstein-Keshet Prize, being appointed a fellow of the society in 2021. Byrne is co-director of the University of Liverpool 3D BioNet (an interdisciplinary network looking at how cells grow in three dimensions) and was on the management group of the Engineering and Physical Sciences Research Council Cyclops Healthcare Network which ran from 2016 to 2019. She is a member of the IBS Biomedical Mathematics Group.
Selected publications
Personal life
Whilst a graduate student at Oxford, she competed for OUWLRC in the Henley Boat Races in 1990 and 1991, earning a half blue each time.
References
External links
21st-century British women mathematicians
British mathematicians
Academics of the University of Manchester
Academics of the University of Nottingham
Alumni of Newnham College, Cambridge
Alumni of Wadham College, Oxford
Fellows of Keble College, Oxford
Living people
Year of birth missing (living people)
British women scientists
People educated at Manchester High School for Girls
Theoretical biologists | Helen Byrne | ["Biology"] | 636 | ["Bioinformatics", "Theoretical biologists"] |
62,036,856 | https://en.wikipedia.org/wiki/Hausdorff%20Medal | The Hausdorff medal is a mathematical prize awarded every two years by the European Set Theory Society. The award recognises the work considered to have had the most impact within set theory among all articles published in the previous five years. The award is named after the German mathematician Felix Hausdorff (1868–1942).
Winners
2013: Hugh Woodin for his articles "Suitable extender models I" (J. Math. Log. 10 (2010), no. 1–2, pp. 101–339) and "Suitable extender models II: beyond ω-huge" (J. Math. Log. 11 (2011), no. 2, pp. 115–436).
2015: Ronald Jensen and John R. Steel for their article " without the measurable" (The Journal of Symbolic Logic, Volume 78, Issue 3 (2013), pp. 708–734).
2017: Maryanthe Malliaris and Saharon Shelah for their article "General topology meets model theory, on 𝔭 and 𝔱" (Proc. Natl. Acad. Sci. USA 110 (2013), no. 33, 13300–13305).
2019: Itay Neeman for his work on "the new method of iterating forcing using side conditions and the tree property".
2022: David Asperó and Ralf Schindler for their positive solution to the long-standing conjecture that MM++, a strong form of Martin's Maximum, implies Woodin's Axiom (*).
2024: Gabriel Goldberg for his work on the Ultrapower Axiom, isolating abstract properties of inner models with supercompact cardinals.
See also
List of mathematics awards
References
Mathematics awards
Set theory | Hausdorff Medal | ["Mathematics", "Technology"] | 360 | ["Science and technology awards", "Mathematical logic", "Mathematics awards", "Set theory"] |
62,038,243 | https://en.wikipedia.org/wiki/Legal%20status%20of%20ibogaine%20by%20country | This is an overview of the legality of ibogaine by country. Ibogaine is not included on the UN International Narcotics Control Board's Green List, or List of Psychoactive Substances under International Control. However, since 1989, it has been on the list of doping substances banned by the International Olympic Committee and the International Union of Cyclists because of its stimulant properties.
References
Drug control law
Drug policy by country
Entheogens
Ibogaine
Ibogaine | Legal status of ibogaine by country | ["Chemistry"] | 98 | ["Drug control law", "Regulation of chemicals"] |
62,038,467 | https://en.wikipedia.org/wiki/Cyclopentadienylchromium%20tricarbonyl%20dimer | Cyclopentadienylchromium tricarbonyl dimer is the organochromium compound with the formula Cp2Cr2(CO)6, where Cp is C5H5. A dark green crystalline solid. It is the subject of research it exists in measureable equilibrium quantities with the monometallic radical CpCr(CO)3.
Structure and synthesis
The six CO ligands are terminal, and the Cr-Cr bond distance is 3.281 Å, 0.06 Å longer than the related dimolybdenum compound. The compound is prepared by treatment of chromium hexacarbonyl with sodium cyclopentadienide followed by oxidation of the resulting NaCr(CO)3(C5H5).
Related compounds
Cyclopentadienylmolybdenum tricarbonyl dimer
Cyclopentadienyltungsten tricarbonyl dimer
References
Organochromium compounds
Carbonyl complexes
Dimers (chemistry)
Half sandwich compounds
Cyclopentadienyl complexes
Chemical compounds containing metal–metal bonds | Cyclopentadienylchromium tricarbonyl dimer | ["Chemistry", "Materials_science"] | 219 | ["Half sandwich compounds", "Cyclopentadienyl complexes", "Dimers (chemistry)", "Polymer chemistry", "Organometallic chemistry"] |